Science.gov

Sample records for randomness stochastic algorithms

  1. Tuning of an optimal fuzzy PID controller with stochastic algorithms for networked control systems with random time delay.

    PubMed

    Pan, Indranil; Das, Saptarshi; Gupta, Amitava

    2011-01-01

    An optimal PID controller and an optimal fuzzy PID controller have been tuned by minimizing the Integral of Time multiplied Absolute Error (ITAE) and the squared controller output for a networked control system (NCS). The tuning is carried out for a higher-order, time-delay system using two stochastic algorithms, viz. the Genetic Algorithm (GA) and two variants of Particle Swarm Optimization (PSO), and the closed-loop performances are compared. The paper shows that random variation in network delay can be handled more efficiently by fuzzy-logic-based PID controllers than by conventional PID controllers.
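
    The stochastic tuning loop can be sketched generically. In the sketch below, the quadratic objective is only a stand-in for the ITAE-plus-controller-output cost (which would require simulating the closed loop), and all coefficients and parameter values are illustrative, not taken from the paper:

```python
import random

def pso(cost, dim, n=20, iters=100, rng=None):
    # minimal particle swarm sketch: velocities are pulled toward each
    # particle's personal best and the swarm's global best
    rng = rng or random.Random(0)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    g = min(range(n), key=pcost.__getitem__)
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

# toy quadratic surrogate for an ITAE-type objective over gains (Kp, Ki, Kd)
best, val = pso(lambda k: sum((x - 1.0) ** 2 for x in k), 3)
```

    The GA variant differs only in how candidate gain vectors are proposed (crossover and mutation instead of velocity updates); the cost evaluation is the same.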

  2. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a ``manageable`` number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of ``real-world`` multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
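
    The sampling-based stopping rules mentioned above hinge on confidence-interval statements built from Monte Carlo estimates of bounds on the objective. A minimal sketch of that ingredient (the recourse cost below is a made-up placeholder, not one of the hydroelectric models):

```python
import random
import statistics

def recourse_samples(x, n, rng):
    # hypothetical second-stage cost: shortfall below a random demand N(10, 2)
    return [max(0.0, rng.gauss(10.0, 2.0) - x) for _ in range(n)]

def mean_and_halfwidth(samples, z=1.96):
    # point estimate and 95% confidence-interval half-width for the bound
    m = statistics.mean(samples)
    hw = z * statistics.stdev(samples) / len(samples) ** 0.5
    return m, hw

rng = random.Random(0)
est, hw = mean_and_halfwidth(recourse_samples(8.0, 10_000, rng))
# a stopping rule terminates once the half-width of the interval on the
# optimality gap falls below a prescribed tolerance
```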

  3. Segmentation of stochastic images with a stochastic random walker method.

    PubMed

    Pätz, Torben; Preusser, Tobias

    2012-05-01

    We present an extension of the random walker segmentation to images with uncertain gray values. Such gray-value uncertainty may result from noise or other imaging artifacts or more general from measurement errors in the image acquisition process. The purpose is to quantify the influence of the gray-value uncertainty onto the result when using random walker segmentation. In random walker segmentation, a weighted graph is built from the image, where the edge weights depend on the image gradient between the pixels. For given seed regions, the probability is evaluated for a random walk on this graph starting at a pixel to end in one of the seed regions. Here, we extend this method to images with uncertain gray values. To this end, we consider the pixel values to be random variables (RVs), thus introducing the notion of stochastic images. We end up with stochastic weights for the graph in random walker segmentation and a stochastic partial differential equation (PDE) that has to be solved. We discretize the RVs and the stochastic PDE by the method of generalized polynomial chaos, combining the recent developments in numerical methods for the discretization of stochastic PDEs and an interactive segmentation algorithm. The resulting algorithm allows for the detection of regions where the segmentation result is highly influenced by the uncertain pixel values. Thus, it gives a reliability estimate for the resulting segmentation, and it furthermore allows determining the probability density function of the segmented object volume.
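
    For reference, the deterministic edge weights that the stochastic extension lifts to random variables are typically Gaussian functions of the intensity difference between adjacent pixels (the pixel values and beta below are made up):

```python
import math

def edge_weights(pixels, beta=10.0):
    # random walker edge weight between adjacent pixels: near 1 where the
    # image is flat, small across strong gradients, so walks rarely cross edges
    return [math.exp(-beta * (a - b) ** 2) for a, b in zip(pixels, pixels[1:])]

w = edge_weights([0.10, 0.12, 0.90, 0.95])
# the 0.12 -> 0.90 jump yields the smallest weight
```

    In the stochastic setting each pixel value, and hence each weight, becomes a random variable expanded in a polynomial chaos basis.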

  4. A retrodictive stochastic simulation algorithm

    SciTech Connect

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-05-20

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
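
    For a discrete-state Markov model, retrodiction reduces to Bayes' rule over initial states given the final state. A minimal sketch with a made-up two-state substitution model (not the paper's master-equation formulation):

```python
def retrodict(prior, T, steps, final_state):
    # posterior over initial states given the final state:
    # P(x0 | xN) is proportional to P(x0) * (T^N)[x0][xN]
    n = len(prior)
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(steps):  # P <- P @ T
        P = [[sum(P[i][k] * T[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    post = [prior[i] * P[i][final_state] for i in range(n)]
    z = sum(post)
    return [p / z for p in post]

# toy substitution model in which state 1 is "sticky"; observe final state 1
T = [[0.8, 0.2], [0.05, 0.95]]
post = retrodict([0.5, 0.5], T, 10, 1)
```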

  5. Enhanced algorithms for stochastic programming

    SciTech Connect

    Krishna, A.S.

    1993-09-01

    In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance-reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved first, both to provide a starting point for the stochastic solution and to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.

  6. Algorithm refinement for the stochastic Burgers' equation

    SciTech Connect

    Bell, John B.; Foo, Jasmine; Garcia, Alejandro L. (E-mail: algarcia@algarcia.org)

    2007-04-10

    In this paper, we develop an algorithm refinement (AR) scheme for an excluded random walk model whose mean field behavior is given by the viscous Burgers' equation. AR hybrids use the adaptive mesh refinement framework to model a system using a molecular algorithm where desired while allowing a computationally faster continuum representation to be used in the remainder of the domain. The focus in this paper is the role of fluctuations on the dynamics. In particular, we demonstrate that it is necessary to include a stochastic forcing term in Burgers' equation to accurately capture the correct behavior of the system. The conclusion we draw from this study is that the fidelity of multiscale methods that couple disparate algorithms depends on the consistent modeling of fluctuations in each algorithm and on a coupling, such as algorithm refinement, that preserves this consistency.

  7. FITTING NONLINEAR ORDINARY DIFFERENTIAL EQUATION MODELS WITH RANDOM EFFECTS AND UNKNOWN INITIAL CONDITIONS USING THE STOCHASTIC APPROXIMATION EXPECTATION–MAXIMIZATION (SAEM) ALGORITHM

    PubMed Central

    Chow, Sy-Miin; Lu, Zhaohua; Zhu, Hongtu; Sherwood, Andrew

    2014-01-01

    The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation–maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed. PMID:25416456

  8. Fitting Nonlinear Ordinary Differential Equation Models with Random Effects and Unknown Initial Conditions Using the Stochastic Approximation Expectation-Maximization (SAEM) Algorithm.

    PubMed

    Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu

    2016-03-01

    The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.

  9. A Stochastic Collocation Algorithm for Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)

    2003-01-01

    This report describes a stochastic collocation method to adequately handle physically intrinsic uncertainty in the variables of a numerical simulation. For instance, while the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, the stochastic collocation method makes it possible to collapse those summations into a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and provides, as a numerical example, the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
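
    The collapse to a one-dimensional summation can be illustrated for a single Gaussian parameter with a three-point Gauss-Hermite rule (a generic sketch of collocation, not the report's Riemann-problem setup):

```python
import math

# 3-point Gauss-Hermite rule for a standard normal parameter
# (probabilists' form: nodes -sqrt(3), 0, sqrt(3); weights 1/6, 2/3, 1/6)
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def collocation_mean(f):
    # run the deterministic solver f once per node and sum: a one-dimensional
    # summation in place of multi-dimensional Galerkin projections
    return sum(w * f(x) for w, x in zip(weights, nodes))

mean = collocation_mean(lambda x: x ** 2)  # E[x^2] = 1 for x ~ N(0, 1)
```

    The rule is exact for polynomial integrands up to degree 5, which is why low node counts often suffice.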

  10. Stochastic structure formation in random media

    NASA Astrophysics Data System (ADS)

    Klyatskin, V. I.

    2016-01-01

    Stochastic structure formation in random media is considered using examples of elementary dynamical systems related to the two-dimensional geophysical fluid dynamics (Gaussian random fields) and to stochastically excited dynamical systems described by partial differential equations (lognormal random fields). In the latter case, spatial structures (clusters) may form with a probability of one in almost every system realization due to rare events happening with vanishing probability. Problems involving stochastic parametric excitation occur in fluid dynamics, magnetohydrodynamics, plasma physics, astrophysics, and radiophysics. A more complicated stochastic problem dealing with anomalous structures on the sea surface (rogue waves) is also considered, where the random Gaussian generation of sea surface roughness is accompanied by parametric excitation.

  11. Bootstrap performance profiles in stochastic algorithms assessment

    SciTech Connect

    Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro

    2015-03-10

    Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
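
    The bootstrap machinery itself is simple: resample the observed runs with replacement and recompute the statistic of interest each time. A sketch with made-up run results:

```python
import random
import statistics

def bootstrap_dist(stat, sample, n_boot=2000, rng=None):
    # resample with replacement and recompute the statistic each time,
    # approximating its sampling distribution even for small samples
    rng = rng or random.Random(0)
    return [stat([rng.choice(sample) for _ in sample]) for _ in range(n_boot)]

runs = [0.12, 0.10, 0.15, 0.11, 0.30, 0.13, 0.12, 0.14, 0.10, 0.16]
dist = sorted(bootstrap_dist(statistics.median, runs))
lo, hi = dist[50], dist[-51]  # approximate 95% percentile interval
```

    Repeating this per algorithm and per statistic (mean, median, quartiles) yields the profiles compared in the paper.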

  12. An exact accelerated stochastic simulation algorithm.

    PubMed

    Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros

    2009-04-14

    An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present "ER-leap" algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2/3 power of the number of reaction events in a Galton-Watson process.
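
    For context, the baseline SSA that ER-leap accelerates is Gillespie's direct method: draw an exponential waiting time from the total propensity, then pick a reaction in proportion to its propensity. A minimal sketch with a toy decay reaction (not one of the paper's test networks):

```python
import math
import random

def ssa_direct(x, propensities, stoich, t_end, rng):
    # Gillespie's direct method: exact sample paths of a reaction network
    t = 0.0
    while True:
        a = [f(x) for f in propensities]
        a0 = sum(a)
        if a0 == 0.0:
            return x  # no reaction can fire
        t += -math.log(rng.random()) / a0  # exponential waiting time
        if t > t_end:
            return x
        u = rng.random() * a0  # pick reaction j with probability a[j] / a0
        j = 0
        while u >= a[j]:
            u -= a[j]
            j += 1
        x = [xi + s for xi, s in zip(x, stoich[j])]

rng = random.Random(1)
# pure decay A -> 0 with propensity 0.5 * A, starting from 50 molecules
final = ssa_direct([50], [lambda x: 0.5 * x[0]], [[-1]], 10.0, rng)
```

    ER-leap keeps this exactness but advances many reaction events per step, accepting or rejecting leaps via the analytic bounds.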

  13. Linear-scaling and parallelisable algorithms for stochastic quantum chemistry

    NASA Astrophysics Data System (ADS)

    Booth, George H.; Smart, Simon D.; Alavi, Ali

    2014-07-01

    For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimised paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelisation which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the methods introduce new concepts required for algorithmic efficiency. In this paper, we explore these concepts and detail an algorithm used for Full Configuration Interaction Quantum Monte Carlo (FCIQMC), which is implemented and available in MOLPRO and as a standalone code, and is designed for high-level parallelism and linear-scaling with walker number. Many of the algorithms are also in use in, or can be transferred to, other stochastic quantum chemical methods and implementations. We apply these algorithms to the strongly correlated chromium dimer to demonstrate their efficiency and parallelism.

  14. Algorithm Refinement for Stochastic Partial Differential Equations. I. Linear Diffusion

    NASA Astrophysics Data System (ADS)

    Alexander, Francis J.; Garcia, Alejandro L.; Tartakovsky, Daniel M.

    2002-10-01

    A hybrid particle/continuum algorithm is formulated for Fickian diffusion in the fluctuating hydrodynamic limit. The particles are taken as independent random walkers; the fluctuating diffusion equation is solved by finite differences with deterministic and white-noise fluxes. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass conservation. This methodology is an extension of Adaptive Mesh and Algorithm Refinement to stochastic partial differential equations. Results from a variety of numerical experiments are presented for both steady and time-dependent scenarios. In all cases the mean and variance of density are captured correctly by the stochastic hybrid algorithm. For a nonstochastic version (i.e., using only deterministic continuum fluxes) the mean density is correct, but the variance is reduced except in particle regions away from the interface. Extensions of the methodology to fluid mechanics applications are discussed.

  15. An exact accelerated stochastic simulation algorithm

    NASA Astrophysics Data System (ADS)

    Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros

    2009-04-01

    An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present "ER-leap" algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2/3 power of the number of reaction events in a Galton-Watson process.

  16. An exact accelerated stochastic simulation algorithm

    PubMed Central

    Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros

    2009-01-01

    An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present “ER-leap” algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA with a simulation time proportional to the 2∕3 power of the number of reaction events in a Galton–Watson process. PMID:19368432

  17. Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique

    PubMed Central

    Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep

    2015-01-01

    In this paper, stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, thus it is more prone to converge at suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performances to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032

  18. Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Williams, Colin P.

    1999-01-01

    Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long-term behavior of such processes is only tractable for very simple types of stochastic processes such as Markovian processes. However, in real-world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(square root of N) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup in comparison to the current state of the art.

  19. Stochastic Evolutionary Algorithms for Planning Robot Paths

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard

    2006-01-01

    A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
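
    The simulated-annealing core described above can be sketched on a one-dimensional toy objective. The quartic below is a stand-in for the energy-like error measure over joint angles, and every constant (temperature, cooling rate, step size) is illustrative, not taken from the software:

```python
import math
import random

def anneal(energy, state, neighbor, t0=1.0, cooling=0.995, steps=2000, rng=None):
    # simulated annealing: accept an uphill move with probability exp(-dE/T),
    # which lets the search escape local minima of the energy-like error measure
    rng = rng or random.Random(0)
    temp = t0
    e = energy(state)
    best, best_e = state, e
    for _ in range(steps):
        cand = neighbor(state, rng)
        ce = energy(cand)
        if ce < e or rng.random() < math.exp((e - ce) / temp):
            state, e = cand, ce
            if e < best_e:
                best, best_e = state, e
        temp *= cooling  # geometric cooling schedule
    return best, best_e

# toy one-joint objective with a shallow minimum near -1 and the global one near +1
f = lambda th: (th * th - 1.0) ** 2 + 0.3 * (th - 2.0) ** 2
sol, val = anneal(f, 3.0, lambda s, r: s + r.gauss(0.0, 0.3))
```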

  20. A hierarchical exact accelerated stochastic simulation algorithm

    NASA Astrophysics Data System (ADS)

    Orendorff, David; Mjolsness, Eric

    2012-12-01

    A new algorithm, "HiER-leap" (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled "blocks" and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms.

  1. Perspective: Stochastic algorithms for chemical kinetics

    PubMed Central

    Gillespie, Daniel T.; Hellander, Andreas; Petzold, Linda R.

    2013-01-01

    We outline our perspective on stochastic chemical kinetics, paying particular attention to numerical simulation algorithms. We first focus on dilute, well-mixed systems, whose description using ordinary differential equations has served as the basis for traditional chemical kinetics for the past 150 years. For such systems, we review the physical and mathematical rationale for a discrete-stochastic approach, and for the approximations that need to be made in order to regain the traditional continuous-deterministic description. We next take note of some of the more promising strategies for dealing stochastically with stiff systems, rare events, and sensitivity analysis. Finally, we review some recent efforts to adapt and extend the discrete-stochastic approach to systems that are not well-mixed. In that currently developing area, we focus mainly on the strategy of subdividing the system into well-mixed subvolumes, and then simulating diffusional transfers of reactant molecules between adjacent subvolumes together with chemical reactions inside the subvolumes. PMID:23656106

  2. Stochastic algorithms for Markov models estimation with intermittent missing data.

    PubMed

    Deltour, I; Richardson, S; Le Hesran, J Y

    1999-06-01

    Multistate Markov models are frequently used to characterize disease processes, but their estimation from longitudinal data is often hampered by complex patterns of incompleteness. Two algorithms for estimating Markov chain models in the case of intermittent missing data in longitudinal studies, a stochastic EM algorithm and the Gibbs sampler, are described. The first can be viewed as a random perturbation of the EM algorithm and is appropriate when the M step is straightforward but the E step is computationally burdensome. It leads to a good approximation of the maximum likelihood estimates. The Gibbs sampler is used for a full Bayesian inference. The performances of the two algorithms are illustrated on two simulated data sets. A motivating example concerned with the modelling of the evolution of parasitemia by Plasmodium falciparum (malaria) in a cohort of 105 young children in Cameroon is described and briefly analyzed.
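
    The stochastic E step of such an algorithm imputes each missing state from its conditional distribution given the observed neighbors. A minimal sketch for a single gap in a two-state chain (the transition matrix is made up, not the malaria model):

```python
import random

def impute_middle(prev, nxt, T, rng):
    # stochastic E step for one intermittently missing observation:
    # P(s | prev, next) is proportional to T[prev][s] * T[s][next]
    w = [T[prev][s] * T[s][nxt] for s in range(len(T))]
    u = rng.random() * sum(w)
    for s, ws in enumerate(w):
        if u < ws:
            return s
        u -= ws
    return len(T) - 1  # floating-point guard

# hypothetical two-state transition matrix; impute a gap between two 0s
T = [[0.9, 0.1], [0.3, 0.7]]
rng = random.Random(0)
draws = [impute_middle(0, 0, T, rng) for _ in range(1000)]
```

    The M step then re-estimates the transition matrix from the completed transition counts, and the two steps alternate.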

  3. Random-order fractional bistable system and its stochastic resonance

    NASA Astrophysics Data System (ADS)

    Gao, Shilong; Zhang, Li; Liu, Hui; Kan, Bixia

    2017-01-01

    In this paper, the diffusion of Brownian particles in a viscous liquid subject to stochastic fluctuations of the external environment is modeled as a random-order fractional bistable equation, and the stochastic resonance phenomena in this system are investigated as a typical nonlinear dynamic behavior. First, the derivation of the random-order fractional bistable system is given. In particular, the random-power-law memory is discussed in depth to obtain a physical interpretation of the random-order fractional derivative. Second, the stochastic resonance evoked by the random order and an external periodic force is studied by numerical simulation. In particular, frequency-shifting phenomena of the periodic output are observed in the stochastic resonance induced by the excitation of the random order. Finally, the stochastic resonance of the system under the double stochastic excitations of the random order and internal color noise is also investigated.

  4. Constant-complexity stochastic simulation algorithm with optimal binning

    SciTech Connect

    Sanft, Kevin R.; Othmer, Hans G.

    2015-08-21

    At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.

  5. Constant-complexity stochastic simulation algorithm with optimal binning.

    PubMed

    Sanft, Kevin R; Othmer, Hans G

    2015-08-21

    At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie's Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
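
    One common route to constant per-step cost is composition-rejection sampling over power-of-two propensity bins. The sketch below illustrates that general idea only; it rebuilds the bins on every call, whereas a constant-complexity SSA (and this paper's table-with-binning structure) maintains them incrementally:

```python
import math
import random

def pick_reaction(props, rng):
    # composition-rejection sketch: group propensities into power-of-two bins,
    # choose a bin by its total, then rejection-sample a reaction inside it
    bins = {}
    for j, a in enumerate(props):
        if a > 0.0:
            bins.setdefault(math.frexp(a)[1], []).append(j)
    totals = {e: sum(props[j] for j in members) for e, members in bins.items()}
    u = rng.random() * sum(totals.values())
    for e, tot in totals.items():
        if u < tot:
            upper = 2.0 ** e  # every propensity in this bin is <= upper
            while True:  # accept with probability props[j] / upper >= 1/2
                j = bins[e][rng.randrange(len(bins[e]))]
                if rng.random() * upper < props[j]:
                    return j
        u -= tot
    return bins[e][0]  # floating-point guard

rng = random.Random(0)
picks = [pick_reaction([0.1, 0.0, 2.5, 0.4], rng) for _ in range(1000)]
```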

  6. The multinomial simulation algorithm for discrete stochastic simulation of reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Lampoudi, Sotiria; Gillespie, Dan T.; Petzold, Linda R.

    2009-03-01

    The Inhomogeneous Stochastic Simulation Algorithm (ISSA) is a variant of the stochastic simulation algorithm in which the spatially inhomogeneous volume of the system is divided into homogeneous subvolumes, and the chemical reactions in those subvolumes are augmented by diffusive transfers of molecules between adjacent subvolumes. The ISSA can be prohibitively slow when the system is such that diffusive transfers occur much more frequently than chemical reactions. In this paper we present the Multinomial Simulation Algorithm (MSA), which is designed to, on the one hand, outperform the ISSA when diffusive transfer events outnumber reaction events, and on the other, to handle small reactant populations with greater accuracy than deterministic-stochastic hybrid algorithms. The MSA treats reactions in the usual ISSA fashion, but uses appropriately conditioned binomial random variables for representing the net numbers of molecules diffusing from any given subvolume to a neighbor within a prescribed distance. Simulation results illustrate the benefits of the algorithm.
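
    The MSA draws appropriately conditioned binomial random variables for the net diffusive transfers; the sketch below conveys the underlying idea with naive per-molecule Bernoulli draws on a three-subvolume chain (all values made up, and far slower than the binomial formulation for large populations):

```python
import random

def diffuse_step(counts, p, rng):
    # one diffusion step on a 1-D chain of subvolumes: each molecule moves
    # with probability p, splitting evenly between the two neighbours;
    # moves that would leave the domain are discarded (reflecting boundary)
    n = len(counts)
    new = list(counts)
    for i, c in enumerate(counts):
        movers = sum(1 for _ in range(c) if rng.random() < p)
        to_left = sum(1 for _ in range(movers) if rng.random() < 0.5)
        for dst, k in ((i - 1, to_left), (i + 1, movers - to_left)):
            if 0 <= dst < n:
                new[i] -= k
                new[dst] += k
    return new

rng = random.Random(0)
grid = diffuse_step([0, 100, 0], 0.3, rng)
```

    Because transfers only shuffle molecules between subvolumes, total mass is conserved exactly, which the binomial formulation preserves as well.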

  7. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework for bounding the probability that the accumulated error of a numerical algorithm stays below a given threshold. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit our framework and cover the common practices of systems that evolve for a long time. For the first two applications, we compute the number of bits that remain continuously significant with a probability of failure of around one in a billion, whereas worst-case analysis concludes that no significant bit remains. We use PVS because such formal tools force the explicit statement of all hypotheses and prevent incorrect uses of theorems.
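    Markov's inequality, one of the two tools the report builds on, is easy to check numerically. The sketch below assumes a toy error model (a per-operation error uniform in [0, u], summed over n operations) that is not taken from the report; it only illustrates how the inequality gives a probabilistic bound where worst-case analysis would give n*u:

```python
import random

rng = random.Random(42)
n, u = 1000, 2 ** -24        # n operations, unit-roundoff-sized error u (toy model)
a = 1e-4                     # accumulated-error threshold

# Markov's inequality: P(S >= a) <= E[S] / a, with E[S] = n * u / 2
# for S the sum of n independent Uniform(0, u) errors.
bound = n * u / 2 / a

# Monte Carlo check that the empirical exceedance frequency obeys the bound.
trials = 500
exceed = sum(
    sum(rng.uniform(0, u) for _ in range(n)) >= a
    for _ in range(trials)
)
empirical = exceed / trials
```

    The worst-case accumulated error here is n*u ≈ 6e-5... times larger than typical runs ever reach, which is exactly the gap between worst-case and probabilistic analyses that the report exploits.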

  8. Multiscale stochastic simulation algorithm with stochastic partial equilibrium assumption for chemically reacting systems

    SciTech Connect

    Cao Yang . E-mail: ycao@cs.ucsb.edu; Gillespie, Dan . E-mail: GillespieDT@mailaps.org; Petzold, Linda . E-mail: petzold@engineering.ucsb.edu

    2005-07-01

    In this paper, we introduce a multiscale stochastic simulation algorithm (MSSA) which makes use of Gillespie's stochastic simulation algorithm (SSA) together with a new stochastic formulation of the partial equilibrium assumption (PEA). This method is much more efficient than SSA alone. It works even with a very small population of fast species. Implementation details are discussed, and an application to the modeling of the heat shock response of E. coli is presented, demonstrating the excellent efficiency and accuracy obtained with the new method.

  9. Random Walk Analysis in Antagonistic Stochastic Games

    DTIC Science & Technology

    2010-07-01

    Dshalalow, J.H. and Ke, H-J., Multilayers in a Modulated Stochastic Game, Journal of Mathematical Analysis and Applications, 353 (2009), 553-565.

  10. Stochastic reaction-diffusion algorithms for macromolecular crowding

    NASA Astrophysics Data System (ADS)

    Sturrock, Marc

    2016-06-01

    Compartment-based (lattice-based) reaction-diffusion algorithms are often used for studying complex stochastic spatio-temporal processes inside cells. In this paper the influence of macromolecular crowding on stochastic reaction-diffusion simulations is investigated. Reaction-diffusion processes are considered on two different kinds of compartmental lattice, a cubic lattice and a hexagonal close packed lattice, and solved using two different algorithms, the stochastic simulation algorithm and the spatiocyte algorithm (Arjunan and Tomita 2010 Syst. Synth. Biol. 4, 35-53). Obstacles (modelling macromolecular crowding) are shown to have substantial effects on the mean squared displacement and average number of molecules in the domain but the nature of these effects is dependent on the choice of lattice, with the cubic lattice being more susceptible to the effects of the obstacles. Finally, improvements for both algorithms are presented.

  11. Stochastic Management of the Open Large Water Reservoir with Storage Function with Using a Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Kozel, Tomas; Stary, Milos

    2016-10-01

    The described models use randomly generated inflow forecast series of various lengths, each shorter than one year. Each forecast series is transformed into a series of release (discharge) decisions of the same length; in adaptive management, only the first value of the release series is applied. Stochastic management works with the dispersion of the release value, and its main advantage is that it yields a fan of possible outcomes. This article describes the construction and evaluation of an adaptive stochastic model, based on a genetic algorithm, for the management of a large open water reservoir with a storage function; the genetic algorithm serves as the optimization method. The forecasted inflow is supplied to the model, which computes the release for a chosen probability of the release value. The model was tested and validated on a synthetic large open water reservoir. The results of the stochastic model were evaluated for a given probability and compared with the results of the same model driven by a perfect (100%) forecast, in which the forecasted values equal the real values. The management of the reservoir behaved logically, and increasing the number of forecasts from 300 to 500 improved the results, while further increases to 750 and 1000 did not bring the expected improvement. The influence of the forecast length and the number of forecasts on the course of management was also tested. Because a classical optimization model requires too much computation time, the stochastic model based on a genetic algorithm used parallel computation on a cluster.

  12. Stochastic inequality probabilities for adaptively randomized clinical trials.

    PubMed

    Cook, John D; Nadarajah, Saralees

    2006-06-01

    We examine stochastic inequality probabilities of the form P (X > Y) and P (X > max (Y, Z)) where X, Y, and Z are random variables with beta, gamma, or inverse gamma distributions. We discuss the applications of such inequality probabilities to adaptively randomized clinical trials as well as methods for calculating their values.
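    Such stochastic inequality probabilities are straightforward to estimate by Monte Carlo when closed forms are inconvenient. A sketch for the beta case follows; the parameter values are illustrative, not taken from the paper:

```python
import random

def prob_x_beats_y(a1, b1, a2, b2, n=50_000, seed=0):
    """Monte Carlo estimate of P(X > Y) for X ~ Beta(a1, b1), Y ~ Beta(a2, b2)."""
    rng = random.Random(seed)
    wins = sum(
        rng.betavariate(a1, b1) > rng.betavariate(a2, b2) for _ in range(n)
    )
    return wins / n

# Symmetric case: X and Y identically distributed, so P(X > Y) is near 1/2.
p_sym = prob_x_beats_y(2, 3, 2, 3)
# X concentrated well above Y: P(X > Y) is close to 1.
p_dom = prob_x_beats_y(8, 2, 2, 8)
```

    In an adaptively randomized trial, an estimate like `p_dom` would drive the allocation probability toward the apparently better arm.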

  13. Random attractor of non-autonomous stochastic Boussinesq lattice system

    SciTech Connect

    Zhao, Min Zhou, Shengfan

    2015-09-15

    In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations affected by time-dependent coupling coefficients, deterministic forces, and multiplicative white noise. Then, we establish the upper semicontinuity of random attractors as the intensity of the noise approaches zero.

  14. Fast stochastic algorithm for simulating evolutionary population dynamics

    NASA Astrophysics Data System (ADS)

    Tsimring, Lev; Hasty, Jeff; Mather, William

    2012-02-01

    Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.

  15. Stochastic Semidefinite Programming: Applications and Algorithms

    DTIC Science & Technology

    2012-03-03

    Baha M. Alzalg and K. A. Ariyawansa, Stochastic symmetric programming over integers. International Conference on Scientific Computing, Las Vegas, Nevada, July 18-21, 2011. Baha M. Alzalg and K. A. Ariyawansa, Stochastic mixed integer second-order cone programming.

  16. Adaptive reference update (ARU) algorithm. A stochastic search algorithm for efficient optimization of multi-drug cocktails

    PubMed Central

    2012-01-01

    Background Multi-target therapeutics has been shown to be effective for treating complex diseases, and currently, it is a common practice to combine multiple drugs to treat such diseases to optimize the therapeutic outcomes. However, considering the huge number of possible ways to mix multiple drugs at different concentrations, it is practically difficult to identify the optimal drug combination through exhaustive testing. Results In this paper, we propose a novel stochastic search algorithm, called the adaptive reference update (ARU) algorithm, that can provide an efficient and systematic way for optimizing multi-drug cocktails. The ARU algorithm iteratively updates the drug combination to improve its response, where the update is made by comparing the response of the current combination with that of a reference combination, based on which the beneficial update direction is predicted. The reference combination is continuously updated based on the drug response values observed in the past, thereby adapting to the underlying drug response function. To demonstrate the effectiveness of the proposed algorithm, we evaluated its performance based on various multi-dimensional drug functions and compared it with existing algorithms. Conclusions Simulation results show that the ARU algorithm significantly outperforms existing stochastic search algorithms, including the Gur Game algorithm. In fact, the ARU algorithm can more effectively identify potent drug combinations and it typically spends fewer iterations for finding effective combinations. Furthermore, the ARU algorithm is robust to random fluctuations and noise in the measured drug response, which makes the algorithm well-suited for practical drug optimization applications. PMID:23134742

  17. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    SciTech Connect

    Xiu, Dongbin

    2016-06-21

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations at extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high-dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  18. A Heuristic Initialized Stochastic Memetic Algorithm for MDPVRP With Interdependent Depot Operations.

    PubMed

    Azad, Abdus Salam; Islam, Md Monirul; Chakraborty, Saikat

    2017-01-27

    The vehicle routing problem (VRP) is a widely studied combinatorial optimization problem. We introduce a variant of the multidepot and periodic VRP (MDPVRP) and propose a heuristic initialized stochastic memetic algorithm to solve it. The main challenge in designing such an algorithm for a large combinatorial optimization problem is to avoid premature convergence by maintaining a balance between exploration and exploitation of the search space. We employ intelligent initialization and stochastic learning to address this challenge. The intelligent initialization technique constructs a population by a mix of random and heuristic generated solutions. The stochastic learning enhances the solutions' quality selectively using simulated annealing with a set of random and heuristic operators. The hybridization of randomness and greediness in the initialization and learning process helps to maintain the balance between exploration and exploitation. Our proposed algorithm has been tested extensively on the existing benchmark problems and outperformed the baseline algorithms by a large margin. We further compared our results with that of the state-of-the-art algorithms working under MDPVRP formulation and found a significant improvement over their results.

  19. On stochastic approximation algorithms for classes of PAC learning problems

    SciTech Connect

    Rao, N.S.V.; Uppuluri, V.R.R.; Oblow, E.M.

    1994-03-01

    The classical stochastic approximation methods are shown to yield algorithms to solve several formulations of the PAC learning problem defined on the domain [0,1]^d. Under some assumptions on the differentiability of the probability measure functions, simple algorithms to solve some PAC learning problems are proposed based on networks of non-polynomial units (e.g. artificial neural networks). Conditions on the sample sizes required to ensure the error bounds are derived using martingale inequalities.
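    For context, the classical stochastic approximation scheme referred to here is the Robbins-Monro iteration, which finds a root of a function observed only through noisy evaluations. A minimal sketch on a toy problem follows; the step-size schedule and noise model are illustrative assumptions, not from the paper:

```python
import random

def robbins_monro(noisy_f, theta0, target=0.0, n=5000, seed=0):
    """Robbins-Monro iteration: theta_{k+1} = theta_k - a_k (f(theta_k) - target),
    with diminishing step sizes a_k = 1 / (k + 1) and noisy evaluations of f."""
    rng = random.Random(seed)
    theta = theta0
    for k in range(n):
        a_k = 1.0 / (k + 1)
        theta -= a_k * (noisy_f(theta, rng) - target)
    return theta

# Find the root of f(x) = 2x - 1 (true root 0.5) observed with Gaussian noise.
root = robbins_monro(lambda x, rng: 2 * x - 1 + rng.gauss(0, 0.5), theta0=5.0)
```

    The diminishing step sizes average out the noise, which is the same mechanism that yields the sample-size conditions derived via martingale inequalities in the abstract above.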

  20. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.

  1. Stochastic Simulation of Microseisms Using Theory of Conditional Random Fields

    NASA Astrophysics Data System (ADS)

    Morikawa, H.; Akamatsu, J.; Nishimura, K.; Onoue, K.; Kameda, H.

    We examine the applicability of conditional stochastic simulation to the interpretation of microseisms observed on soft soil sediments at Kushiro, Hokkaido, Japan. The theory of conditional random fields developed by Kameda and Morikawa (1994) is used, which allows one to interpolate a Gaussian stochastic time-space field conditioned by realized values of time functions specified at discrete locations. The applicability is examined by a blind test, that is, by comparing a set of simulated seismograms with recorded ones obtained from three-point array observations. A test of fitness was performed by means of the sign test. It is concluded that the method is applicable to the interpretation of microseisms, and that the wave field of microseisms can be treated as a Gaussian random field in both time and space.

  2. Moderate Deviations for Recursive Stochastic Algorithms

    DTIC Science & Technology

    2014-08-02

    The (random) Radon-Nikodym derivatives are well defined and can be selected in a measurable way. The magnitude of the noise is controlled, when the Radon-Nikodym derivative is large, by a bounding argument.

  3. Stochastic calculus for uncoupled continuous-time random walks.

    PubMed

    Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L

    2009-06-01

    The continuous-time random walk (CTRW) is a pure-jump stochastic process with several applications not only in physics but also in insurance, finance, and economics. A definition is given for a class of stochastic integrals driven by a CTRW, which includes the Itō and Stratonovich cases. An uncoupled CTRW with zero-mean jumps is a martingale. It is proved that, as a consequence of the martingale transform theorem, if the CTRW is a martingale, the Itō integral is a martingale too. It is shown how the definition of the stochastic integrals can be used to easily compute them by Monte Carlo simulation. The relations between a CTRW, its quadratic variation, its Stratonovich integral, and its Itō integral are highlighted by numerical calculations when the jumps in space of the CTRW have a symmetric Lévy α-stable distribution and its waiting times have a one-parameter Mittag-Leffler distribution. Remarkably, these distributions have fat tails and an unbounded quadratic variation. In the diffusive limit of vanishing scale parameters, the probability density of this kind of CTRW satisfies the space-time fractional diffusion equation (FDE) or, more generally, the fractional Fokker-Planck equation, which generalizes the standard diffusion equation, solved by the probability density of the Wiener process, and thus provides a phenomenological model of anomalous diffusion. We also provide an analytic expression for the quadratic variation of the stochastic process described by the FDE and check it by Monte Carlo simulation.
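    A CTRW path and its quadratic variation can be generated with a few lines of Monte Carlo. To keep the example self-contained, the sketch below substitutes exponential waiting times and zero-mean Gaussian jumps for the paper's Mittag-Leffler and Lévy α-stable distributions; it illustrates the simulation pattern, not the paper's specific process:

```python
import random

def simulate_ctrw(t_end, rate=1.0, jump_sigma=1.0, seed=0):
    """Simulate one CTRW path with exponential waiting times and zero-mean
    Gaussian jumps (a simplified stand-in for the Mittag-Leffler / Lévy case).
    Returns the lists of event times and positions."""
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    times, xs = [0.0], [0.0]
    while True:
        t += rng.expovariate(rate)   # waiting time until the next jump
        if t > t_end:
            break
        x += rng.gauss(0.0, jump_sigma)
        times.append(t)
        xs.append(x)
    return times, xs

times, xs = simulate_ctrw(t_end=100.0)
# Zero-mean jumps make the walk a martingale; its quadratic variation is
# simply the sum of squared jumps along the path.
qv = sum((b - a) ** 2 for a, b in zip(xs, xs[1:]))
```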

  5. Random search optimization based on genetic algorithm and discriminant function

    NASA Technical Reports Server (NTRS)

    Kiciman, M. O.; Akgul, M.; Erarslanoglu, G.

    1990-01-01

    The general problem of optimization with arbitrary merit and constraint functions, which could be convex, concave, monotonic, or non-monotonic, is treated using stochastic methods. To improve the efficiency of the random search methods, a genetic algorithm for the search phase and a discriminant function for the constraint-control phase were utilized. The validity of the technique is demonstrated by comparing the results to published test problem results. Numerical experimentation indicated that for cases where a quick near-optimum solution is desired, a general, user-friendly optimization code can be developed without serious penalties in either total computer time or accuracy.
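    A genetic-algorithm search phase of the kind described can be sketched as follows. The tournament size, blend crossover, mutation scale, and sphere test function are illustrative choices, not taken from the paper:

```python
import random

def genetic_search(fitness, dim, pop_size=40, gens=60, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection,
    blend (midpoint) crossover, and Gaussian mutation. Minimizes `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        new_pop = [best[:]]                            # elitism: keep the best
        while len(new_pop) < pop_size:
            # Tournament selection of two parents (tournament size 3).
            p1 = min(rng.sample(pop, 3), key=fitness)
            p2 = min(rng.sample(pop, 3), key=fitness)
            # Midpoint crossover plus Gaussian mutation per coordinate.
            child = [(a + b) / 2 + rng.gauss(0, 0.2) for a, b in zip(p1, p2)]
            new_pop.append(child)
        pop = new_pop
        best = min(pop + [best], key=fitness)
    return best

# Minimize the sphere function sum(x_i^2); the optimum is the origin.
best = genetic_search(lambda v: sum(x * x for x in v), dim=3)
```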

  6. A Global Optimization Algorithm Using Stochastic Differential Equations.

    DTIC Science & Technology

    1985-02-01

    Dipartimento di Matematica, Università di Bari, 70125 Bari (Italy); Istituto di Fisica, Università di Roma "Tor Vergata", Via Orazio Raimondo, 00173 (La Romanina) Roma (Italy). Keywords: Global Optimization, Stochastic Differential Equations.

  7. Random attractors for the stochastic coupled fractional Ginzburg-Landau equation with additive noise

    SciTech Connect

    Shu, Ji E-mail: 530282863@qq.com; Li, Ping E-mail: 530282863@qq.com; Zhang, Jia; Liao, Ou

    2015-10-15

    This paper is concerned with the stochastic coupled fractional Ginzburg-Landau equation with additive noise. We first transform the stochastic coupled fractional Ginzburg-Landau equation into random equations whose solutions generate a random dynamical system. Then we prove the existence of a random attractor for the random dynamical system.

  8. Decomposition algorithms for stochastic programming on a computational grid.

    SciTech Connect

    Linderoth, J.; Wright, S.; Mathematics and Computer Science; Axioma Inc.

    2003-01-01

    We describe algorithms for two-stage stochastic linear programming with recourse and their implementation on a grid computing platform. In particular, we examine serial and asynchronous versions of the L-shaped method and a trust-region method. The parallel platform of choice is the dynamic, heterogeneous, opportunistic platform provided by the Condor system. The algorithms are of master-worker type (with the workers being used to solve second-stage problems), and the MW runtime support library (which supports master-worker computations) is key to the implementation. Computational results are presented on large sample-average approximations of problems from the literature.
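    The sample-average approximations mentioned above replace the true expectation over scenarios with an average over a finite sample. A minimal sketch on a hypothetical two-stage newsvendor problem follows (solved here by enumeration of the first-stage decision rather than the L-shaped method, purely for brevity; the price, cost, and demand distribution are illustrative assumptions):

```python
import random

def saa_newsvendor(price, cost, n_scenarios=2000, seed=0):
    """Sample-average approximation of a two-stage newsvendor problem:
    choose order quantity q (first stage), observe demand d (scenario),
    collect price * min(q, d) as the second-stage recourse value.
    Demand is Uniform on the integers 50..150 (illustrative)."""
    rng = random.Random(seed)
    demands = [rng.randint(50, 150) for _ in range(n_scenarios)]

    def expected_profit(q):
        # Average the recourse value over the sampled scenarios.
        return sum(price * min(q, d) - cost * q for d in demands) / len(demands)

    # First stage: enumerate candidate integer order quantities.
    best_q = max(range(50, 151), key=expected_profit)
    return best_q, expected_profit(best_q)

q_star, profit = saa_newsvendor(price=4.0, cost=1.0)
```

    For this demand model the critical-fractile solution is near the 75th percentile of demand (about 125), so the SAA solution should land close to that as the sample grows.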

  9. Validation of a stochastic digital packing algorithm for porosity prediction in fluvial gravel deposits

    NASA Astrophysics Data System (ADS)

    Liang, Rui; Schruff, Tobias; Jia, Xiaodong; Schüttrumpf, Holger; Frings, Roy M.

    2015-11-01

    Porosity as one of the key properties of sediment mixtures is poorly understood. Most of the existing porosity predictors based upon grain size characteristics have been unable to produce satisfying results for fluvial sediment porosity, due to the lack of consideration of other porosity-controlling factors like grain shape and depositional condition. Considering this, a stochastic digital packing algorithm was applied in this work, which provides an innovative way to pack particles of arbitrary shapes and sizes based on digitization of both particles and packing space. The purpose was to test the applicability of this packing algorithm in predicting fluvial sediment porosity by comparing its predictions with outcomes obtained from laboratory measurements. Laboratory samples examined were two natural fluvial sediments from the Rhine River and Kall River (Germany), and commercial glass beads (spheres). All samples were artificially combined into seven grain size distributions: four unimodal distributions and three bimodal distributions. Our study demonstrates that apart from grain size, grain shape also has a clear impact on porosity. The stochastic digital packing algorithm successfully reproduced the measured variations in porosity for the three different particle sources. However, the packing algorithm systematically overpredicted the porosity measured in random dense packing conditions, mainly because the random motion of particles during settling introduced unwanted kinematic sorting and shape effects. The results suggest that the packing algorithm produces loose packing structures, and is useful for trend analysis of packing porosity.

  10. Stochastic deletion-insertion algorithm to construct dense linkage maps.

    PubMed

    Wu, Jixiang; Lou, Xiang-Yang; Gonda, Michael

    2011-01-01

    In this study, we proposed a stochastic deletion-insertion (SDI) algorithm for constructing large-scale linkage maps. This SDI algorithm was compared with three published approximation approaches, the seriation (SER), neighbor mapping (NM), and unidirectional growth (UG) approaches, on the basis of simulated F(2) data with different population sizes, missing genotype rates, and numbers of markers. Simulation results showed that the SDI method had a similar or higher percentage of correct linkage orders than the other three methods. This SDI algorithm was also applied to a real dataset and compared with the other three methods. The total linkage map distance (cM) obtained by the SDI method (148.08 cM) was smaller than the distance obtained by SER (225.52 cM) and two published distances (150.11 cM and 150.38 cM). Since this SDI algorithm is stochastic, a more accurate linkage order can be quickly obtained by repeating this algorithm. Thus, this SDI method, which combines the advantages of accuracy and speed, is an important addition to the current linkage mapping toolkit for constructing improved linkage maps.

  11. Randomized Algorithms for Matrices and Data

    NASA Astrophysics Data System (ADS)

    Mahoney, Michael W.

    2012-03-01

    This chapter reviews recent work on randomized matrix algorithms. By “randomized matrix algorithms,” we refer to a class of recently developed random sampling and random projection algorithms for ubiquitous linear algebra problems such as least-squares (LS) regression and low-rank matrix approximation. These developments have been driven by applications in large-scale data analysis—applications which place very different demands on matrices than traditional scientific computing applications. Thus, in this review, we will focus on highlighting the simplicity and generality of several core ideas that underlie the usefulness of these randomized algorithms in scientific applications such as genetics (where these algorithms have already been applied) and astronomy (where, hopefully, in part due to this review they will soon be applied). The work we will review here had its origins within theoretical computer science (TCS). An important feature in the use of randomized algorithms in TCS more generally is that one must identify and then algorithmically deal with relevant “nonuniformity structure” in the data. For the randomized matrix algorithms to be reviewed here and that have proven useful recently in numerical linear algebra (NLA) and large-scale data analysis applications, the relevant nonuniformity structure is defined by the so-called statistical leverage scores. Defined more precisely below, these leverage scores are basically the diagonal elements of the projection matrix onto the dominant part of the spectrum of the input matrix. As such, they have a long history in statistical data analysis, where they have been used for outlier detection in regression diagnostics. More generally, these scores often have a very natural interpretation in terms of the data and processes generating the data. For example, they can be interpreted in terms of the leverage or influence that a given data point has on, say, the best low-rank matrix approximation; and this
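    The statistical leverage scores described above are simply the diagonal entries of the hat matrix H = A (A^T A)^{-1} A^T. A small self-contained sketch for an n x 2 matrix follows (the 2x2 inverse is done by hand to avoid dependencies; the data points are illustrative):

```python
def leverage_scores(A):
    """Leverage scores: diagonal of H = A (A^T A)^{-1} A^T
    for an n x 2 matrix A, with the 2x2 Gram inverse computed directly."""
    g11 = sum(r[0] * r[0] for r in A)
    g12 = sum(r[0] * r[1] for r in A)
    g22 = sum(r[1] * r[1] for r in A)
    det = g11 * g22 - g12 * g12
    inv = ((g22 / det, -g12 / det), (-g12 / det, g11 / det))
    scores = []
    for x, y in A:
        hx = x * inv[0][0] + y * inv[1][0]
        hy = x * inv[0][1] + y * inv[1][1]
        scores.append(hx * x + hy * y)   # row_i (A^T A)^{-1} row_i^T
    return scores

# An intercept column plus one coordinate; the last row is an outlier.
A = [(1.0, 1.0), (1.0, 2.0), (1.0, 3.0), (1.0, 4.0), (1.0, 100.0)]
scores = leverage_scores(A)
```

    The scores sum to the rank of A (here 2), and the outlier row carries leverage close to 1, which is exactly the nonuniformity structure that randomized sampling algorithms key on.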

  12. Stochastic simulation for imaging spatial uncertainty: Comparison and evaluation of available algorithms

    SciTech Connect

    Gotway, C.A.; Rutherford, B.M.

    1993-09-01

    Stochastic simulation has been suggested as a viable method for characterizing the uncertainty associated with the prediction of a nonlinear function of a spatially-varying parameter. Geostatistical simulation algorithms generate realizations of a random field with specified statistical and geostatistical properties. A nonlinear function is evaluated over each realization to obtain an uncertainty distribution of a system response that reflects the spatial variability and uncertainty in the parameter. Crucial management decisions, such as potential regulatory compliance of proposed nuclear waste facilities and optimal allocation of resources in environmental remediation, are based on the resulting system response uncertainty distribution. Many geostatistical simulation algorithms have been developed to generate the random fields, and each algorithm will produce fields with different statistical properties. These different properties will result in different distributions for system response, and potentially, different managerial decisions. The statistical properties of the resulting system response distributions are not completely understood, nor is the ability of the various algorithms to generate response distributions that adequately reflect the associated uncertainty. This paper reviews several of the algorithms available for generating random fields. Algorithms are compared in a designed experiment using seven exhaustive data sets with different statistical and geostatistical properties. For each exhaustive data set, a number of realizations are generated using each simulation algorithm. The realizations are used with each of several deterministic transfer functions to produce a cumulative uncertainty distribution function of a system response. The uncertainty distributions are then compared to the single value obtained from the corresponding exhaustive data set.
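    One standard way to generate such realizations of a random field with a specified covariance is Cholesky factorization of the covariance matrix: if C = L L^T, then z = L w with w ~ N(0, I) has covariance C. A minimal 1-D sketch follows; the exponential covariance and correlation length are illustrative choices, not tied to any particular algorithm compared in the paper:

```python
import math
import random

def cholesky(C):
    """Cholesky factor L with C = L L^T, for a small SPD matrix."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(C[i][i] - s)
            else:
                L[i][j] = (C[i][j] - s) / L[j][j]
    return L

def simulate_field(xs, corr_len, seed=0):
    """Unconditional Gaussian field on 1-D locations xs with exponential
    covariance C(h) = exp(-|h| / corr_len), generated as z = L w."""
    rng = random.Random(seed)
    C = [[math.exp(-abs(a - b) / corr_len) for b in xs] for a in xs]
    L = cholesky(C)
    w = [rng.gauss(0.0, 1.0) for _ in xs]
    return [sum(L[i][k] * w[k] for k in range(i + 1)) for i in range(len(xs))]

field = simulate_field([0.1 * i for i in range(50)], corr_len=1.0)
```

    Each realization can then be passed through a transfer function, and the ensemble of outputs forms the uncertainty distribution discussed above.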

  13. Application of stochastic processes in random growth and evolutionary dynamics

    NASA Astrophysics Data System (ADS)

    Oikonomou, Panagiotis

    We study the effect of power-law distributed randomness on the dynamical behavior of processes such as stochastic growth patterns and evolution. First, we examine the geometrical properties of random shapes produced by a generalized stochastic Loewner Evolution driven by a superposition of a Brownian motion and a stable Levy process. The situation is defined by the usual stochastic Loewner Evolution parameter, kappa, as well as alpha, which defines the power-law tail of the stable Levy distribution. We show that the properties of these patterns change qualitatively and singularly at critical values of kappa and alpha. It is reasonable to call such changes "phase transitions". These transitions occur as kappa passes through four and as alpha passes through one. Numerical simulations are used to explore the global scaling behavior of these patterns in each "phase". We show both analytically and numerically that the growth continues indefinitely in the vertical direction for alpha greater than 1, grows logarithmically with time for alpha equal to 1, and saturates for alpha smaller than 1. The probability density has two different scales corresponding to directions along and perpendicular to the boundary. Scaling functions for the probability density are given for various limiting cases. Second, we study the effect of the architecture of biological networks on their evolutionary dynamics. In recent years, studies of the architecture of large networks have unveiled a common topology, called scale-free, in which a majority of the elements are poorly connected except for a small fraction of highly connected components. We ask how networks with distinct topologies can evolve towards a pre-established target phenotype through a process of random mutations and selection. We use networks of Boolean components as a framework to model a large class of phenotypes. Within this approach, we find that homogeneous random networks and scale-free networks exhibit drastically

  14. Stochastic Kinetic Monte Carlo algorithms for long-range Hamiltonians

    SciTech Connect

    Mason, D R; Rudd, R E; Sutton, A P

    2003-10-13

    We present a higher order kinetic Monte Carlo methodology suitable to model the evolution of systems in which the transition rates are non-trivial to calculate or in which Monte Carlo moves are likely to be non-productive flicker events. The second order residence time algorithm first introduced by Athenes et al. [1] is rederived from the n-fold way algorithm of Bortz et al. [2] as a fully stochastic algorithm. The second order algorithm can be dynamically called when necessary to eliminate unproductive flickering between a metastable state and its neighbors. An algorithm combining elements of the first order and second order methods is shown to be more efficient, in terms of the number of rate calculations, than the first order or second order methods alone while remaining statistically identical. This efficiency is of prime importance when dealing with computationally expensive rate functions such as those arising from long-range Hamiltonians. Our algorithm has been developed for use when considering simulations of vacancy diffusion under the influence of elastic stress fields. We demonstrate the improved efficiency of the method over that of the n-fold way in simulations of vacancy diffusion in alloys. Our algorithm is seen to be an order of magnitude more efficient than the n-fold way in these simulations. We show that when magnesium is added to an Al-2at.%Cu alloy, this has the effect of trapping vacancies. When trapping occurs, we see that our algorithm performs thousands of events for each rate calculation performed.
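    The first-order, rejection-free (n-fold way) scheme that the residence-time algorithm extends can be sketched briefly: pick an event with probability proportional to its rate, then advance the clock by an exponential increment with the total rate. The ring lattice and trap rates below are illustrative assumptions, not the paper's alloy model:

```python
import random

def kmc_nfold(rates_of, state, steps, seed=0):
    """Rejection-free (n-fold way) kinetic Monte Carlo: at each step choose
    an event with probability proportional to its rate and advance time by
    an Exp(total rate) increment."""
    rng = random.Random(seed)
    t = 0.0
    for _ in range(steps):
        events = rates_of(state)               # list of (rate, next_state)
        total = sum(r for r, _ in events)
        t += rng.expovariate(total)            # residence time in this state
        pick = rng.random() * total
        acc = 0.0
        for r, nxt in events:
            acc += r
            if pick < acc:
                state = nxt
                break
    return t, state

# Vacancy hopping on a ring of 10 sites; hops out of site 0 are slow,
# so site 0 acts as a trap (the flicker-prone situation described above).
def hop_rates(site):
    rate = 0.01 if site == 0 else 1.0
    return [(rate, (site - 1) % 10), (rate, (site + 1) % 10)]

t_final, site = kmc_nfold(hop_rates, state=3, steps=200)
```

    In the trapped state the residence time is long but the algorithm still spends one rate evaluation per event, which is the cost the second-order method amortizes.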

  15. Implementing Quality Control on a Random Number Stream to Improve a Stochastic Weather Generator

    Technology Transfer Automated Retrieval System (TEKTRAN)

    For decades stochastic modelers have used computerized random number generators to produce random numeric sequences fitting a specified statistical distribution. Unfortunately, none of the random number generators we tested satisfactorily produced the target distribution. The result is generated d...

  16. An adaptive multi-level simulation algorithm for stochastic biological systems.

    PubMed

    Lester, C; Yates, C A; Giles, M B; Baker, R E

    2015-01-14

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Though potentially more efficient computationally, such approximate methods generate system statistics that suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so is more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient, as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the
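
    For reference, the exact Gillespie direct method that serves as the highest-accuracy level in such multi-level schemes can be sketched for the single decay reaction A -> 0. The network, rate constant and molecule counts below are toy choices for illustration only.

```python
import math
import random

def gillespie_decay(n0, k, t_end, rng):
    """Gillespie direct method for the reaction A -> 0 with rate constant k.
    With a single channel the propensity is k * n, and each firing removes
    one molecule of A."""
    n, t = n0, 0.0
    while n > 0:
        a = k * n                                   # total propensity
        t += -math.log(1.0 - rng.random()) / a      # time to next reaction
        if t > t_end:
            break
        n -= 1                                      # the only channel fires
    return n

rng = random.Random(0)
samples = [gillespie_decay(100, 1.0, 1.0, rng) for _ in range(2000)]
mean_n = sum(samples) / len(samples)
# Exact mean at t = 1: E[n(1)] = 100 * exp(-1), roughly 36.8
```

    The cost of the loop scales with the number of reaction firings, which is precisely the expense the tau-leap and multi-level approaches are designed to avoid.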

  17. State-dependent doubly weighted stochastic simulation algorithm for automatic characterization of stochastic biochemical rare events

    NASA Astrophysics Data System (ADS)

    Roh, Min K.; Daigle, Bernie J.; Gillespie, Dan T.; Petzold, Linda R.

    2011-12-01

    In recent years there has been substantial growth in the development of algorithms for characterizing rare events in stochastic biochemical systems. Two such algorithms, the state-dependent weighted stochastic simulation algorithm (swSSA) and the doubly weighted SSA (dwSSA) are extensions of the weighted SSA (wSSA) by H. Kuwahara and I. Mura [J. Chem. Phys. 129, 165101 (2008)], 10.1063/1.2987701. The swSSA substantially reduces estimator variance by implementing system state-dependent importance sampling (IS) parameters, but lacks an automatic parameter identification strategy. In contrast, the dwSSA provides for the automatic determination of state-independent IS parameters, and is thus inefficient for systems whose states vary widely in time. We present a novel modification of the dwSSA—the state-dependent doubly weighted SSA (sdwSSA)—that combines the strengths of the swSSA and the dwSSA without inheriting their weaknesses. The sdwSSA automatically computes state-dependent IS parameters via the multilevel cross-entropy method. We apply the method to three examples: a reversible isomerization process, a yeast polarization model, and a lac operon model. Our results demonstrate that the sdwSSA offers substantial improvements over previous methods in terms of both accuracy and efficiency.

  18. A stochastic maximum principle for backward control systems with random default time

    NASA Astrophysics Data System (ADS)

    Shen, Yang; Kuen Siu, Tak

    2013-05-01

    This paper establishes a necessary and sufficient stochastic maximum principle for backward systems, where the state processes are governed by jump-diffusion backward stochastic differential equations with random default time. An application of the sufficient stochastic maximum principle to an optimal investment and capital injection problem in the presence of default risk is discussed.

  19. An integrated optimal control algorithm for discrete-time nonlinear stochastic system

    NASA Astrophysics Data System (ADS)

    Kek, Sie Long; Lay Teo, Kok; Mohd Ismail, A. A.

    2010-12-01

    Consider a discrete-time nonlinear system with random disturbances appearing in the real plant and the output channel, where the randomly perturbed output is measurable. An iterative procedure based on the linear quadratic Gaussian optimal control model is developed for solving the optimal control of this stochastic system. The optimal state estimate provided by Kalman filtering theory and the optimal control law obtained from the linear quadratic regulator problem are then integrated into the dynamic integrated system optimisation and parameter estimation algorithm. Upon convergence, the iterative solutions of the optimal control problem for the model match the solution of the original optimal control problem of the discrete-time nonlinear system, despite model-reality differences. An illustrative example is solved using the proposed method, and the results demonstrate the effectiveness of the algorithm.
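
    A minimal scalar sketch of the certainty-equivalence structure described above, a steady-state Kalman filter feeding an LQR feedback law, is given below. The plant coefficients, noise variances and cost weights are hypothetical, and the paper's iterative model-reality correction loop is not reproduced.

```python
import random

# Hypothetical scalar plant: x_{k+1} = a x_k + b u_k + w_k,  y_k = x_k + v_k.
a, b = 1.2, 1.0
q_w, r_v = 0.01, 0.04        # process / measurement noise variances
Q, R = 1.0, 0.1              # LQR state / control cost weights

# Control Riccati equation (scalar DARE), solved by fixed-point iteration.
P = 1.0
for _ in range(200):
    P = Q + a * a * P - (a * b * P) ** 2 / (R + b * b * P)
L = a * b * P / (R + b * b * P)      # LQR feedback gain

# Filter Riccati equation for the a priori error variance.
S = 1.0
for _ in range(200):
    S = a * a * S * r_v / (S + r_v) + q_w
K = S / (S + r_v)                    # steady-state Kalman gain

rng = random.Random(1)
x, xh = 5.0, 0.0                     # true state, state estimate
for _ in range(300):
    y = x + rng.gauss(0.0, r_v ** 0.5)   # noisy measurement of current state
    xh = xh + K * (y - xh)               # measurement update
    u = -L * xh                          # certainty-equivalence control
    x = a * x + b * u + rng.gauss(0.0, q_w ** 0.5)
    xh = a * xh + b * u                  # time update (prediction)
# The open-loop-unstable plant (|a| > 1) is regulated near the origin.
```

    The control gain uses only the estimate, never the true state, which is the separation structure that the full algorithm builds on.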

  20. A stochastic approximation algorithm with Markov chain Monte Carlo method for incomplete data estimation problems.

    PubMed

    Gu, M G; Kong, F H

    1998-06-23

    We propose a general procedure for solving incomplete data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
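
    The stochastic approximation principle the procedure rests on is the Robbins-Monro recursion, sketched below on a toy estimating equation. The function g and its root are invented for illustration, and the MCMC sampling layer of the paper is omitted.

```python
import random

def robbins_monro(noisy_g, theta0, n_iter, rng):
    """Robbins-Monro recursion theta_{k+1} = theta_k - a_k * Y_k, where Y_k
    is a noisy observation of g(theta_k) and the gains a_k = 1/(k+1)
    satisfy sum a_k = infinity, sum a_k^2 < infinity."""
    theta = theta0
    for k in range(n_iter):
        theta -= noisy_g(theta, rng) / (k + 1)
    return theta

# Invented estimating equation: g(theta) = theta - 3, observed with noise.
rng = random.Random(7)
g = lambda th, r: (th - 3.0) + r.gauss(0.0, 1.0)
root = robbins_monro(g, 0.0, 20000, rng)
# The iterate converges to the root theta* = 3 despite unit-variance noise.
```

    In the paper's setting the noisy observation of g would itself come from an MCMC sample rather than an independent draw, which is what the adaptive-algorithm convergence conditions account for.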

  1. Hybrid solution of stochastic optimal control problems using Gauss pseudospectral method and generalized polynomial chaos algorithms

    NASA Astrophysics Data System (ADS)

    Cottrill, Gerald C.

    A hybrid numerical algorithm combining the Gauss Pseudospectral Method (GPM) with a Generalized Polynomial Chaos (gPC) method to solve nonlinear stochastic optimal control problems with constraint uncertainties is presented. The GPM and gPC have been shown to be spectrally accurate numerical methods for solving deterministic optimal control problems and stochastic differential equations, respectively. The gPC uses collocation nodes to sample the random space, which are then inserted into the differential equations and solved by applying standard differential equation methods. The resulting set of deterministic solutions is used to characterize the distribution of the solution by constructing a polynomial representation of the output as a function of uncertain parameters. Optimal control problems are especially challenging to solve since they often include path constraints, bounded controls, boundary conditions, and require solutions that minimize a cost functional. Adding random parameters can make these problems even more challenging. The hybrid algorithm presented in this dissertation is the first time the GPM and gPC algorithms have been combined to solve optimal control problems with random parameters. Using the GPM in the gPC construct provides minimum cost deterministic solutions used in stochastic computations that meet path, control, and boundary constraints, thus extending current gPC methods to be applicable to stochastic optimal control problems. The hybrid GPM-gPC algorithm was applied to two concept demonstration problems: a nonlinear optimal control problem with multiplicative uncertain elements and a trajectory optimization problem simulating an aircraft flying through a threat field where exact locations of the threats are unknown. The results show that the expected value, variance, and covariance statistics of the polynomial output function approximations of the state, control, cost, and terminal time variables agree with Monte Carlo simulation

  2. Weighted Flow Algorithms (WFA) for stochastic particle coagulation

    SciTech Connect

    DeVille, R.E.L.; Riemer, N.; West, M.

    2011-09-20

    Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently-developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number and mass-based quantities of relevance for atmospheric sciences applications.

  3. Selecting materialized views using random algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi

    2007-04-01

    The data warehouse is a repository of information collected from multiple, possibly heterogeneous, autonomous distributed databases. The information stored at the data warehouse is in the form of views, referred to as materialized views, which are stored for the purpose of efficiently implementing on-line analytical processing queries. The selection of materialized views is one of the most important decisions in designing a data warehouse, and the first issue for the user to consider is query response time. In this paper, we therefore develop algorithms to select a set of views to materialize in a data warehouse in order to minimize the total view maintenance cost under the constraint of a given query response time; we call this the query-cost view-selection problem. First, a cost graph and cost model for the query-cost view-selection problem are presented. Second, methods for selecting materialized views using randomized algorithms are presented. A genetic algorithm is applied to the materialized view selection problem, but as the genetic process evolves, legal solutions become increasingly difficult to produce, so many solutions are eliminated and the time needed to generate valid solutions grows. We therefore present an improved algorithm that combines simulated annealing with the genetic algorithm to solve the query-cost view-selection problem. Finally, simulation experiments are used to test the function and efficiency of our algorithms. The experiments show that the given methods can provide near-optimal solutions in limited time and work well in practical cases. Randomized algorithms will become invaluable tools for data warehouse evolution.
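
    The simulated annealing component can be sketched on a toy instance of the view-selection problem. All maintenance costs, query-time savings and the response-time limit below are invented numbers, and infeasible selections are handled with a penalty term rather than outright rejection, which is one common design choice.

```python
import math
import random

# Invented instance: per-view maintenance costs and query-time savings.
maint = [4.0, 2.0, 5.0, 1.0, 3.0]
saving = [3.0, 1.0, 6.0, 0.5, 2.5]
base_qtime, q_limit = 20.0, 14.0     # query response time constraint

def cost(sel):
    """Total maintenance cost of the selected views, with a large penalty
    when the query response time constraint is violated."""
    qtime = base_qtime - sum(s for s, pick in zip(saving, sel) if pick)
    c = sum(m for m, pick in zip(maint, sel) if pick)
    return c + max(0.0, qtime - q_limit) * 100.0

rng = random.Random(3)
sel = [False] * 5
best, best_cost = sel[:], cost(sel)
T = 10.0
for _ in range(2000):
    cand = sel[:]
    i = rng.randrange(5)
    cand[i] = not cand[i]                           # flip one view in/out
    d = cost(cand) - cost(sel)
    if d <= 0 or rng.random() < math.exp(-d / T):   # Metropolis acceptance
        sel = cand
    if cost(sel) < best_cost:
        best, best_cost = sel[:], cost(sel)
    T *= 0.995                                      # geometric cooling
# Optimum here: materialize only view 2 (cost 5.0, constraint met exactly).
```

    A full hybrid would seed or repair a genetic population with such annealing moves; the sketch shows only the annealing half.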

  4. Stochastic optimization with randomized smoothing for image registration.

    PubMed

    Sun, Wei; Poot, Dirk H J; Smal, Ihor; Yang, Xuan; Niessen, Wiro J; Klein, Stefan

    2017-01-01

    Image registration is typically formulated as an optimization process, which aims to find the optimal transformation parameters of a given transformation model by minimizing a cost function. Local minima may exist in the optimization landscape, which could hamper the optimization process. To eliminate local minima, smoothing the cost function would be desirable. In this paper, we investigate the use of a randomized smoothing (RS) technique for stochastic gradient descent (SGD) optimization, to effectively smooth the cost function. In this approach, Gaussian noise is added to the transformation parameters prior to computing the cost function gradient in each iteration of the SGD optimizer. The approach is suitable for both rigid and nonrigid registrations. Experiments on synthetic images, cell images, public CT lung data, and public MR brain data demonstrate the effectiveness of the novel RS technique in terms of registration accuracy and robustness.
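
    The randomized smoothing idea, adding Gaussian noise to the transformation parameters before each gradient evaluation, can be sketched on a toy quadratic cost standing in for a registration cost function. The learning rate and noise scale below are arbitrary choices.

```python
import random

def rs_sgd(grad, theta, sigma, lr, n_iter, rng):
    """SGD with randomized smoothing: Gaussian noise is added to the
    parameters before each gradient evaluation, so the optimizer follows
    a Gaussian-smoothed version of the cost function."""
    for _ in range(n_iter):
        perturbed = [t + rng.gauss(0.0, sigma) for t in theta]
        g = grad(perturbed)
        theta = [t - lr * gi for t, gi in zip(theta, g)]
    return theta

# Toy cost f = (t0 - 2)^2 + (t1 + 1)^2; smoothing a quadratic leaves its
# minimizer unchanged, so the iterates home in on (2, -1).
grad = lambda p: [2.0 * (p[0] - 2.0), 2.0 * (p[1] + 1.0)]
rng = random.Random(5)
theta = rs_sgd(grad, [0.0, 0.0], sigma=0.1, lr=0.05, n_iter=500, rng=rng)
```

    On a non-convex registration cost, the same perturbation flattens shallow local minima, which is the effect the paper exploits.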

  5. Stochastic optimization algorithm for inverse modeling of air pollution

    NASA Astrophysics Data System (ADS)

    Yeo, Kyongmin; Hwang, Youngdeok; Liu, Xiao; Kalagnanam, Jayant

    2016-11-01

    A stochastic optimization algorithm to estimate a smooth source function from a limited number of observations is proposed in the context of air pollution, where the source-receptor relation is given by an advection-diffusion equation. First, a smooth source function is approximated by a set of Gaussian kernels on a rectangular mesh system. Then, the generalized polynomial chaos (gPC) expansion is used to represent the model uncertainty due to the choice of the mesh system. It is shown that the convolution of the gPC basis and the Gaussian kernel provides hierarchical basis functions for a spectral function estimation. The spectral inverse model is formulated as a stochastic optimization problem. We propose a regularization strategy based on the hierarchical nature of the basis polynomials. It is shown that the spectral inverse model is capable of providing a good estimate of the source function even when the number of unknown parameters (m) is much larger than the number of data (n), m/n > 50.

  6. Modeling Langmuir isotherms with the Gillespie stochastic algorithm.

    PubMed

    Epstein, J; Michael, J; Mandona, C; Marques, F; Dias-Cabral, A C; Thrash, M

    2015-02-06

    The overall goal of this work is to develop a robust modeling approach that is capable of simulating single and multicomponent isotherms for biological molecules interacting with a variety of adsorbents. Provided the ratio between the forward and reverse adsorption/desorption constants is known, the Gillespie stochastic algorithm has been shown to be effective in modeling isotherms consistent with the Langmuir theory, as well as uptake curves that fall outside this traditional approach. We have used this method to model protein adsorption on ion-exchange adsorbents, hydrophobic interaction adsorbents and ice crystals. In our latest efforts we have applied the Gillespie approach to simulate binary and ternary isotherms from the literature involving gas-solid adsorption applications. In each case the model is consistent with the experimental results presented.
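
    A minimal sketch of a Gillespie simulation that reproduces the Langmuir isotherm: adsorption fires with propensity proportional to the free sites and desorption with propensity proportional to the occupied sites, so the equilibrium coverage approaches KC/(1 + KC). The site count and rate constants below are arbitrary.

```python
import math
import random

def langmuir_gillespie(n_sites, ka_c, kd, t_end, rng):
    """Gillespie simulation of adsorption/desorption on n_sites lattice
    sites: adsorption propensity ka_c * (free sites), desorption
    propensity kd * (occupied sites)."""
    n, t = 0, 0.0
    while True:
        a_ads = ka_c * (n_sites - n)
        a_des = kd * n
        a0 = a_ads + a_des
        t += -math.log(1.0 - rng.random()) / a0     # time to next event
        if t > t_end:
            return n
        if rng.random() * a0 < a_ads:
            n += 1                                   # one molecule adsorbs
        else:
            n -= 1                                   # one molecule desorbs

rng = random.Random(11)
N, ka_c, kd = 200, 3.0, 1.0
runs = 50
cov = sum(langmuir_gillespie(N, ka_c, kd, 20.0, rng)
          for _ in range(runs)) / (runs * N)
# Langmuir theory: coverage = KC / (1 + KC) with KC = ka_c / kd = 3, i.e. 0.75
```

    Cooperative or multicomponent uptake would be modeled by making the propensities depend on the current coverage or on additional species, without changing the simulation loop.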

  7. Stochastic optimization algorithm selection in hydrological model calibration based on fitness landscape characterization

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Brissette, François P.; Poulin, Annie; Côté, Pascal; Martel, Jean-Luc

    2014-05-01

    The process of hydrological model parameter calibration is routinely performed with the help of stochastic optimization algorithms. Many such algorithms have been created and they sometimes provide varying levels of performance (as measured by an efficiency metric such as Nash-Sutcliffe). This is because each algorithm is better suited for one type of optimization problem rather than another. This research project's aim was twofold. First, we sought to identify features of the calibration problem fitness landscapes that map the encountered problem types to the best possible optimization algorithm. Second, we investigated the optimal number of model evaluations needed to minimize resource usage while maximizing overall model quality. A total of five stochastic optimization algorithms (SCE-UA, CMAES, DDS, PSO and ASA) were used to calibrate four lumped hydrological models (GR4J, HSAMI, HMETS and MOHYSE) on 421 basins from the US MOPEX database. Each of these combinations was performed using three objective functions (Log(RMSE), NSE, and a metric combining NSE, RMSE and BIAS) to add sufficient diversity to the fitness landscapes. Each run was performed 30 times for statistical analysis. For every parameter set tested during the calibration process, the validation value was computed on a separate period. It was then possible to outline the calibration skill versus the validation skill for the different algorithms. Fitness landscapes were characterized by various metrics, such as the dispersion metric, the mean distance between random points and their respective local minima (found through simple hill-climbing algorithms) and the mean distance between the local minima and the best local optimum found. These metrics were then compared to the calibration score of the various optimization algorithms. Preliminary results tend to show that fitness landscapes presenting a globally convergent structure are more prevalent than other types of landscapes in this

  8. Implementation and performance of stochastic parallel gradient descent algorithm for atmospheric turbulence compensation

    NASA Astrophysics Data System (ADS)

    Finney, Greg A.; Persons, Christopher M.; Henning, Stephan; Hazen, Jessie; Whitley, Daniel

    2014-06-01

    IERUS Technologies, Inc. and the University of Alabama in Huntsville have partnered to perform characterization and development of algorithms and hardware for adaptive optics. To date the algorithm work has focused on implementation of the stochastic parallel gradient descent (SPGD) algorithm. SPGD is a metric-based approach in which a scalar metric is optimized by taking random perturbative steps for many actuators simultaneously. This approach scales to systems with a large number of actuators while maintaining bandwidth, whereas conventional methods are negatively impacted by the very large matrix multiplications that are required. The metric approach enables the use of higher speed sensors with fewer (or even a single) sensing element(s), enabling a higher control bandwidth. Furthermore, the SPGD algorithm is model-free, and thus is not strongly impacted by the presence of nonlinearities which degrade the performance of conventional phase reconstruction methods. Finally, for high energy laser applications, SPGD can be performed using the primary laser beam without the need for an additional beacon laser. The conventional SPGD algorithm was modified to use an adaptive gain to improve convergence while maintaining low steady state error. Results from laboratory experiments using phase plates as atmosphere surrogates will be presented, demonstrating areas in which the adaptive gain yields better performance and areas which require further investigation.
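
    The basic SPGD iteration (without the adaptive gain discussed in this record) perturbs all actuators simultaneously with random +/-delta, measures the change in the scalar metric, and steps along the correlated estimate of the gradient. The quadratic metric below is a hypothetical stand-in for a measured beam-quality metric.

```python
import random

def spgd_step(theta, metric, gain, delta, rng):
    """One SPGD iteration: apply the same random +/-delta perturbation to
    all parameters at once, measure the resulting metric change, and step
    each parameter along the correlated gradient estimate (maximizing J)."""
    pert = [delta if rng.random() < 0.5 else -delta for _ in theta]
    j_plus = metric([t + p for t, p in zip(theta, pert)])
    j_minus = metric([t - p for t, p in zip(theta, pert)])
    dj = j_plus - j_minus
    return [t + gain * dj * p for t, p in zip(theta, pert)]

# Hypothetical metric, maximal when the actuator vector equals `target`.
target = [0.3, -0.7, 1.1, 0.0]
metric = lambda phi: -sum((p - q) ** 2 for p, q in zip(phi, target))

rng = random.Random(2)
theta = [0.0] * 4
for _ in range(3000):
    theta = spgd_step(theta, metric, gain=0.5, delta=0.05, rng=rng)
```

    Note that only two metric evaluations are needed per step regardless of the number of actuators, which is the scaling property the record emphasizes.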

  9. Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems

    NASA Astrophysics Data System (ADS)

    Mahdi Alavi, S. M.; Saif, Mehrdad

    2013-12-01

    This paper focuses on the design of the standard observer in discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed fabricated for performance evaluation of estimation techniques over wireless networks under realistic radio channel conditions.

  10. A genetic algorithm for the arrival probability in the stochastic networks.

    PubMed

    Shirdel, Gholam H; Abdolhosseinzadeh, Mohsen

    2016-01-01

    A genetic algorithm is presented to find the arrival probability in a directed acyclic network with stochastic parameters, which improves the reliability of transmission flow in delay-sensitive networks. Some sub-networks are extracted from the original network, and a connection is established between the original source node and the original destination node by randomly selecting some local source and local destination nodes. The connections are sorted according to their arrival probabilities, and the best established connection is the one with the maximum arrival probability. A discrete time Markov chain is established in the network, and the arrival probability to a given destination node from a given source node is defined as the multi-step transition probability of absorption in the final state of the established Markov chain. The proposed method is applicable to large stochastic networks, where previous methods were not. The effectiveness of the proposed method is illustrated by numerical results with perfect fitness values of the proposed genetic algorithm.
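
    A generic genetic algorithm loop of the kind described, with tournament selection, one-point crossover and bit-flip mutation, can be sketched on a toy bitstring fitness. The paper's encoding of sub-network connections is not reproduced; the link probabilities and the selection rule below are invented for illustration.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size, n_gen, rng):
    """Plain GA: tournament selection, one-point crossover, bit-flip
    mutation, and tracking of the best individual ever seen."""
    pop = [[rng.random() < 0.5 for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(n_gen):
        def pick():
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]                      # crossover
            child = [(not g) if rng.random() < 0.02 else g for g in child]
            nxt.append(child)
        pop = nxt
        cand = max(pop, key=fitness)
        if fitness(cand) > fitness(best):
            best = cand
    return best

# Invented fitness: reward including a link only if its success
# probability exceeds 0.6; the optimum selects links 0, 2 and 4.
probs = [0.9, 0.5, 0.95, 0.4, 0.85]
fitness = lambda bits: sum(p - 0.6 for b, p in zip(bits, probs) if b)
rng = random.Random(6)
best = genetic_algorithm(fitness, n_bits=5, pop_size=30, n_gen=40, rng=rng)
```

    In the paper, each individual would instead encode a choice of local source and destination nodes, with the Markov-chain arrival probability serving as the fitness.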

  11. Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs

    DOE PAGES

    Gade, Dinakar; Hackebeil, Gabriel; Ryan, Sarah M.; ...

    2016-04-02

    We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm contemporaneously. The lower bounds can be computed in any iteration of the algorithm by using dual prices that are calculated during execution of the standard PHA. In conclusion, we report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.

  12. Non-divergence of stochastic discrete time algorithms for PCA neural networks.

    PubMed

    Lv, Jian Cheng; Yi, Zhang; Li, Yunxia

    2015-02-01

    Learning algorithms play an important role in the practical application of neural networks based on principal component analysis, often determining the success, or otherwise, of these applications. These algorithms must not diverge, but it is very difficult to study their convergence properties directly, because they are described by stochastic discrete time (SDT) algorithms. This brief analyzes the original SDT algorithms directly, and derives invariant sets that guarantee the nondivergence of these algorithms in a stochastic environment, provided the learning parameters are properly selected. Our theoretical results are verified by a series of simulation examples.
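
    A concrete example of such an SDT learning rule is Oja's rule for the first principal component, where the -y^2 w term is what keeps the weight vector bounded for suitable learning rates. The 2-D synthetic data below is invented for illustration.

```python
import random

def oja_pca(data, eta, n_epochs, rng):
    """Oja's stochastic discrete-time rule for the first principal
    component: w <- w + eta * y * (x - y * w) with y = w . x. The
    -eta * y^2 * w term keeps ||w|| bounded for suitable eta."""
    dim = len(data[0])
    w = [rng.gauss(0.0, 0.1) for _ in range(dim)]
    for _ in range(n_epochs):
        for x in data:
            y = sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w

# Synthetic 2-D data with dominant variance along (1, 1)/sqrt(2).
rng = random.Random(9)
data = []
for _ in range(500):
    s = rng.gauss(0.0, 2.0)          # strong component along (1, 1)
    t = rng.gauss(0.0, 0.3)          # weak component along (1, -1)
    data.append(((s + t) / 2 ** 0.5, (s - t) / 2 ** 0.5))
w = oja_pca(data, eta=0.01, n_epochs=20, rng=rng)
# w converges to a unit vector along +/-(1, 1)/sqrt(2).
```

    Dropping the -y^2 w term recovers plain Hebbian learning, whose weight norm grows without bound; that contrast is exactly the divergence question the record studies.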

  13. A parallel algorithm for random searches

    NASA Astrophysics Data System (ADS)

    Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

    2015-11-01

    We discuss a parallelization procedure for a two-dimensional random search of a single individual, a typical sequential process. To assure the same features of the sequential random search in the parallel version, we analyze the former's spatial patterns of the encountered targets for different search strategies and densities of homogeneously distributed targets. We identify a lognormal tendency for the distribution of distances between consecutively detected targets. Then, by assigning the distinct mean and standard deviation of this distribution for each corresponding configuration in the parallel simulations (constituted by parallel random walkers), we are able to recover important statistical properties, e.g., the target detection efficiency, of the original problem. The proposed parallel approach presents a speedup of nearly one order of magnitude compared with the sequential implementation. This algorithm can be easily adapted to different instances, such as searches in three dimensions. Its possible range of applicability covers problems in areas as diverse as automated computer searches in high-capacity databases and animal foraging.

  14. Stochastic sensitivity and variability of glycolytic oscillations in the randomly forced Sel'kov model

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina; Ryashko, Lev

    2017-01-01

    In the present paper, we study the underlying mechanisms of stochastic excitability in glycolysis, using the model proposed by Sel'kov as an example. A stochastic variant of this model with randomly forced influx of the substrate is considered. Our analysis is based on the stochastic sensitivity function technique. A detailed parametric analysis of the stochastic sensitivity of attractors is carried out. A range of parameters where the stochastic model is highly sensitive to noise is determined, and a supersensitive Canard cycle is found. Phenomena of stochastic excitability and variability of forced equilibria and cycles are demonstrated and studied. It is shown that noise-induced chaos is observed in the zone of Canard cycles.

  15. Pattern Search Ranking and Selection Algorithms for Mixed-Variable Optimization of Stochastic Systems

    DTIC Science & Technology

    2004-09-01

    ...optimization problems with stochastic objective functions and a mixture of design variable types. The generalized pattern search (GPS) class of algorithms is... provide computational enhancements to the basic algorithm. Implementation alternatives include the use of modern R&S procedures designed to provide...

  16. On Wiener-Masani's algorithm for finding the generating function of multivariate stochastic processes

    NASA Technical Reports Server (NTRS)

    Miamee, A. G.

    1988-01-01

    It is shown that the algorithms for determining the generating function and prediction error matrix of multivariate stationary stochastic processes, developed by Wiener and Masani (1957) and later by Masani (1960), work in a more general setting.

  17. Convergence rates of finite difference stochastic approximation algorithms part I: general sampling

    NASA Astrophysics Data System (ADS)

    Dai, Liyi

    2016-05-01

    Stochastic optimization is a fundamental problem that finds applications in many areas including biological and cognitive sciences. The classical stochastic approximation algorithm for iterative stochastic optimization requires gradient information of the sample objective function, which is typically difficult to obtain in practice. Recently there has been renewed interest in derivative-free approaches to stochastic optimization. In this paper, we examine the rates of convergence for the Kiefer-Wolfowitz algorithm and the mirror descent algorithm, under various updating schemes using finite differences as gradient approximations. The analysis is carried out under a general framework covering a wide range of updating scenarios. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences.
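
    The Kiefer-Wolfowitz recursion analyzed here can be sketched in one dimension: the unavailable gradient is replaced by a central finite difference with shrinking spacing c_k, while the gain a_k also shrinks. The gain sequences and noisy objective below are illustrative choices, not the paper's.

```python
import random

def kiefer_wolfowitz(noisy_f, theta, n_iter, rng, a0=1.0, c0=1.0):
    """Kiefer-Wolfowitz stochastic approximation: the gradient of the
    noisy objective is estimated by a central finite difference with
    spacing c_k = c0 / k**0.25, and the iterate moves with gain
    a_k = a0 / k."""
    for k in range(1, n_iter + 1):
        c_k = c0 / k ** 0.25
        g = (noisy_f(theta + c_k, rng)
             - noisy_f(theta - c_k, rng)) / (2 * c_k)
        theta -= (a0 / k) * g
    return theta

# Illustrative noisy objective with minimizer x* = 2.
f = lambda x, r: (x - 2.0) ** 2 + r.gauss(0.0, 0.5)
rng = random.Random(4)
x = kiefer_wolfowitz(f, 0.0, 20000, rng)
```

    The choice of how fast c_k shrinks trades finite-difference bias against the variance of the gradient estimate, which is the tension the convergence-rate analysis makes precise.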

  18. Fluorescence microscopy image noise reduction using a stochastically-connected random field model

    PubMed Central

    Haider, S. A.; Cameron, A.; Siva, P.; Lui, D.; Shafiee, M. J.; Boroomand, A.; Haider, N.; Wong, A.

    2016-01-01

    Fluorescence microscopy is an essential part of a biologist’s toolkit, allowing assaying of many parameters like subcellular localization of proteins, changes in cytoskeletal dynamics, protein-protein interactions, and the concentration of specific cellular ions. A fundamental challenge with using fluorescence microscopy is the presence of noise. This study introduces a novel approach to reducing noise in fluorescence microscopy images. The noise reduction problem is posed as a Maximum A Posteriori estimation problem, and solved using a novel random field model called stochastically-connected random field (SRF), which combines random graph and field theory. Experimental results using synthetic and real fluorescence microscopy data show the proposed approach achieving strong noise reduction performance when compared to several other noise reduction algorithms, using quantitative metrics. The proposed SRF approach was able to achieve strong performance in terms of signal-to-noise ratio in the synthetic results, high signal to noise ratio and contrast to noise ratio in the real fluorescence microscopy data results, and was able to maintain cell structure and subtle details while reducing background and intra-cellular noise. PMID:26884148

  19. Some Randomized Algorithms for Convex Quadratic Programming

    SciTech Connect

    Goldbach, R.

    1999-01-15

    We adapt some randomized algorithms of Clarkson [3] for linear programming to the framework of so-called LP-type problems, which was introduced by Sharir and Welzl [10]. This framework is quite general and allows a unified and elegant presentation and analysis. We also show that LP-type problems include minimization of a convex quadratic function subject to convex quadratic constraints as a special case, for which the algorithms can be implemented efficiently, if only linear constraints are present. We show that the expected running times depend only linearly on the number of constraints, and illustrate this by some numerical results. Even though the framework of LP-type problems may appear rather abstract at first, application of the methods considered in this paper to a given problem of that type is easy and efficient. Moreover, our proofs are in fact rather simple, since many technical details of more explicit problem representations are handled in a uniform manner by our approach. In particular, we do not assume boundedness of the feasible set as required in related methods.

  20. Algorithms for integration of stochastic differential equations using parallel optimized sampling in the Stratonovich calculus

    NASA Astrophysics Data System (ADS)

    Kiesewetter, Simon; Drummond, Peter D.

    2017-03-01

    A variance reduction method for stochastic integration of Fokker-Planck equations is derived. This unifies the cumulant hierarchy and stochastic equation approaches to obtaining moments, giving a performance superior to either. We show that the brute force method of reducing sampling error by just using more trajectories in a sampled stochastic equation is not the best approach. The alternative of using a hierarchy of moment equations is also not optimal, as it may converge to erroneous answers. Instead, through Bayesian conditioning of the stochastic noise on the requirement that moment equations are satisfied, we obtain improved results with reduced sampling errors for a given number of stochastic trajectories. The method used here converges faster in time-step than Ito-Euler algorithms. This parallel optimized sampling (POS) algorithm is illustrated by several examples, including a bistable nonlinear oscillator case where moment hierarchies fail to converge.

  1. Stochastic bifurcations in the nonlinear vibroimpact system with fractional derivative under random excitation

    NASA Astrophysics Data System (ADS)

    Yang, Yongge; Xu, Wei; Sun, Yahui; Xiao, Yanwen

    2017-01-01

This paper investigates the stochastic bifurcations of a nonlinear vibroimpact system with a fractional derivative under random excitation. First, the original stochastic vibroimpact system with fractional derivative is transformed into an equivalent stochastic vibroimpact system without the fractional derivative. Then, the non-smooth transformation and stochastic averaging method are used to obtain analytical solutions of the equivalent stochastic system. Finally, to verify the effectiveness of this approach, the van der Pol vibroimpact system with fractional derivative is worked out in detail; a very satisfactory agreement is found between the analytical and numerical results. An interesting finding is that the fractional order and fractional coefficient of the stochastic van der Pol vibroimpact system can induce stochastic P-bifurcation. To the best of the authors' knowledge, stochastic P-bifurcation phenomena induced by fractional order and fractional coefficient have not previously been reported in the literature on the dynamical behavior of stochastic systems with fractional derivative under Gaussian white noise excitation.

  2. A new stochastic algorithm for inversion of dust aerosol size distribution

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Yang, Ma-ying

    2015-08-01

Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to retrieve the dust aerosol size distribution by the light extinction method. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. Then, the parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which provide the most useful representations of aerosol size distributions, are retrieved by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can successfully recover the aerosol size distribution with high feasibility and reliability, even in the presence of random noise.

  3. Stochastic Mechanisms of Cell Fate Specification that Yield Random or Robust Outcomes

    PubMed Central

    Johnston, Robert J.; Desplan, Claude

    2011-01-01

    Although cell fate specification is tightly controlled to yield highly reproducible results and avoid extreme variation, developmental programs often incorporate stochastic mechanisms to diversify cell types. Stochastic specification phenomena are observed in a wide range of species and an assorted set of developmental contexts. In bacteria, stochastic mechanisms are utilized to generate transient subpopulations capable of surviving adverse environmental conditions. In vertebrate, insect, and worm nervous systems, stochastic fate choices are used to increase the repertoire of sensory and motor neuron subtypes. Random fate choices are also integrated into developmental programs controlling organogenesis. Although stochastic decisions can be maintained to produce a mosaic of fates within a population of cells, they can also be compensated for or directed to yield robust and reproducible outcomes. PMID:20590453

  4. Identification of causal effects using instrumental variables in randomized trials with stochastic compliance.

    PubMed

    Scosyrev, Emil

    2013-01-01

    In randomized trials with imperfect compliance, it is sometimes recommended to supplement the intention-to-treat estimate with an instrumental variable (IV) estimate, which is consistent for the effect of treatment administration in those subjects who would get treated if randomized to treatment and would not get treated if randomized to control. The IV estimation however has been criticized for its reliance on simultaneous existence of complementary "fatalistic" compliance states. The objective of the present paper is to identify some sufficient conditions for consistent estimation of treatment effects in randomized trials with stochastic compliance. It is shown that in the stochastic framework, the classical IV estimator is generally inconsistent for the population-averaged treatment effect. However, even under stochastic compliance, with certain common experimental designs the IV estimator and a simple alternative estimator can be used for consistent estimation of the effect of treatment administration in well-defined and identifiable subsets of the study population.

  5. Mathematical analysis and algorithms for efficiently and accurately implementing stochastic simulations of short-term synaptic depression and facilitation.

    PubMed

    McDonnell, Mark D; Mohan, Ashutosh; Stricker, Christian

    2013-01-01

    The release of neurotransmitter vesicles after arrival of a pre-synaptic action potential (AP) at cortical synapses is known to be a stochastic process, as is the availability of vesicles for release. These processes are known to also depend on the recent history of AP arrivals, and this can be described in terms of time-varying probabilities of vesicle release. Mathematical models of such synaptic dynamics frequently are based only on the mean number of vesicles released by each pre-synaptic AP, since if it is assumed there are sufficiently many vesicle sites, then variance is small. However, it has been shown recently that variance across sites can be significant for neuron and network dynamics, and this suggests the potential importance of studying short-term plasticity using simulations that do generate trial-to-trial variability. Therefore, in this paper we study several well-known conceptual models for stochastic availability and release. We state explicitly the random variables that these models describe and propose efficient algorithms for accurately implementing stochastic simulations of these random variables in software or hardware. Our results are complemented by mathematical analysis and statement of pseudo-code algorithms.
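A minimal sketch of the kind of trial-to-trial-variable release process discussed above, hedged as a generic binomial depletion-and-recovery model rather than the authors' specific algorithms; the site count, release probability and recovery time constant are illustrative assumptions:

```python
import numpy as np

def simulate_release(ap_times, n_sites=10, p_release=0.5,
                     tau_rec=0.2, seed=0):
    """Binomial vesicle-release model with depletion and exponential
    recovery: each AP releases each available vesicle w.p. p_release;
    each empty site refills independently with time constant tau_rec."""
    rng = np.random.default_rng(seed)
    available = n_sites
    last_t = ap_times[0]
    released = []
    for t in ap_times:
        # each empty site recovers within (t - last_t) w.p. 1 - exp(-dt/tau)
        p_rec = 1.0 - np.exp(-(t - last_t) / tau_rec)
        available += rng.binomial(n_sites - available, p_rec)
        r = rng.binomial(available, p_release)
        available -= r
        released.append(r)
        last_t = t
    return released

# a 20 Hz train: later APs tend to release fewer vesicles (depression)
train = [i * 0.05 for i in range(10)]
counts = simulate_release(train)
```

Unlike a mean-field model, each call produces a different stochastic release sequence, which is the trial-to-trial variability the abstract emphasizes.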

  6. Emergence of patterns in random processes. II. Stochastic structure in random events

    NASA Astrophysics Data System (ADS)

    Newman, William I.

    2014-06-01

    Random events can present what appears to be a pattern in the length of peak-to-peak sequences in time series and other point processes. Previously, we showed that this was the case in both individual and independently distributed processes as well as for Brownian walks. In addition, we introduced the use of the discrete form of the Langevin equation of statistical mechanics as a device for connecting the two limiting sets of behaviors, which we then compared with a variety of observations from the physical and social sciences. Here, we establish a probabilistic framework via the Smoluchowski equation for exploring the Langevin equation and its expected peak-to-peak sequence lengths, and we introduce a concept we call "stochastic structure in random events," or SSRE. We extend the Brownian model to include antipersistent processes via autoregressive (AR) models. We relate the latter to describe the behavior of Old Faithful Geyser in Yellowstone National Park, and we devise a further test for the validity of the Langevin and AR models. Given our analytic results, we show how the Langevin equation can be adapted to describe population cycles of three to four years observed among many mammalian species in biology.
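The discrete Langevin recursion mentioned above can be illustrated with a short simulation that measures peak-to-peak sequence lengths; the damping parameter `lam` and all other settings are our own illustrative choices, not the paper's notation. In the two limiting cases the mean peak spacing takes the classical values of 3 (i.i.d. sequences, since an interior point is a local maximum with probability 1/3) and 4 (Brownian walks, probability 1/4):

```python
import numpy as np

def peak_to_peak_lengths(lam, n=100000, seed=1):
    """Simulate the discrete Langevin recursion x[k+1] = lam*x[k] + noise
    (lam=0: i.i.d. sequence, lam=1: Brownian walk) and return the gaps
    between successive local maxima (peaks)."""
    rng = np.random.default_rng(seed)
    xi = rng.normal(size=n)
    x = np.empty(n)
    x[0] = 0.0
    for k in range(n - 1):
        x[k + 1] = lam * x[k] + xi[k]
    peaks = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    return np.diff(peaks)

iid_gaps = peak_to_peak_lengths(0.0)   # mean spacing near 3
bm_gaps = peak_to_peak_lengths(1.0)    # mean spacing near 4
```

Intermediate values of `lam` interpolate between the two limits, which is the role the Langevin equation plays in the paper.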

  8. The stochastic link equilibrium strategy and algorithm for flow assignment in communication networks

    NASA Astrophysics Data System (ADS)

    Tao, Yang; Zhou, Xia

    2005-11-01

Based on the mature user equilibrium (UE) theory in the transportation field and the similarity of network flows between transportation and communication, this paper applies user equilibrium theory to communication networks and further studies how stochastic user equilibrium (SUE) can be applied to flow assignment in generalized communication networks. A stochastic link equilibrium (SLE) flow assignment strategy is proposed, and an algorithm for SLE flow assignment is provided. Both analysis and simulation based on the given algorithm show that the optimal flow assignment in networks can be achieved by using this algorithm.

  9. A new model for realistic random perturbations of stochastic oscillators

    NASA Astrophysics Data System (ADS)

    Dieci, Luca; Li, Wuchen; Zhou, Haomin

    2016-08-01

Classical theories predict that solutions of differential equations will leave any neighborhood of a stable limit cycle, if white noise is added to the system. In reality, many engineering systems modeled by second order differential equations, like the van der Pol oscillator, show incredible robustness against noise perturbations, and the perturbed trajectories remain in the neighborhood of a stable limit cycle for all times of practical interest. In this paper, we propose a new model of noise to bridge this apparent discrepancy between theory and practice. Restricting to perturbations from within this new class of noise, we consider stochastic perturbations of second order differential systems that, in the unperturbed case, admit asymptotically stable limit cycles. We show that the perturbed solutions are globally bounded and remain in a tubular neighborhood of the underlying deterministic periodic orbit. We also define stochastic Poincaré maps, and further derive partial differential equations for the transition density function.

  10. Comparison of several stochastic parallel optimization algorithms for adaptive optics system without a wavefront sensor

    NASA Astrophysics Data System (ADS)

    Yang, Huizhen; Li, Xinyang

    2011-04-01

Optimizing a system performance metric directly is an important method for correcting wavefront aberrations in an adaptive optics (AO) system where wavefront sensing methods are unavailable or ineffective. An appropriate deformable mirror control algorithm is the key to successful wavefront correction. Based on several stochastic parallel optimization control algorithms, an adaptive optics system with a 61-element deformable mirror (DM) is simulated. The Genetic Algorithm (GA), Stochastic Parallel Gradient Descent (SPGD), Simulated Annealing (SA) and the Algorithm Of Pattern Extraction (Alopex) are compared in convergence speed and correction capability. The results show that all of these algorithms can correct for atmospheric turbulence and, compared with least-squares fitting, almost achieve the best correction attainable with the 61-element DM. SA is the fastest and GA the slowest of these algorithms: the number of perturbations required by GA is almost 20 times that of SA, 15 times that of SPGD, and 9 times that of Alopex.
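Of the compared algorithms, SPGD is simple enough to sketch in a few lines. The following uses a toy quadratic metric as a stand-in for the AO performance metric (a 61-element DM simulation is far more involved); the gain, perturbation amplitude, iteration count and target vector are all illustrative assumptions:

```python
import numpy as np

def spgd_maximize(metric, x0, gain=0.5, perturb=0.05, iters=2000, seed=0):
    """Stochastic Parallel Gradient Descent: apply a random +/- perturbation
    to all actuators in parallel, measure the metric change from the two
    perturbed states, and step along the estimated gradient."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        delta = perturb * rng.choice([-1.0, 1.0], size=x.size)
        dj = metric(x + delta) - metric(x - delta)
        x += gain * dj * delta   # ascend the metric
    return x

# toy "Strehl-like" metric, peaked at a hypothetical ideal actuator vector
target = np.array([0.3, -0.1, 0.7, 0.0, -0.5])
metric = lambda v: -np.sum((v - target) ** 2)
best = spgd_maximize(metric, np.zeros(5))
```

The appeal in AO is that all actuators are perturbed simultaneously, so each iteration needs only two metric measurements regardless of the number of actuators.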

  11. D-leaping: Accelerating stochastic simulation algorithms for reactions with delays

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2009-09-01

    We propose a novel, accelerated algorithm for the approximate stochastic simulation of biochemical systems with delays. The present work extends existing accelerated algorithms by distributing, in a time adaptive fashion, the delayed reactions so as to minimize the computational effort while preserving their accuracy. The accuracy of the present algorithm is assessed by comparing its results to those of the corresponding delay differential equations for a representative biochemical system. In addition, the fluctuations produced from the present algorithm are comparable to those from an exact stochastic simulation with delays. The algorithm is used to simulate biochemical systems that model oscillatory gene expression. The results indicate that the present algorithm is competitive with existing works for several benchmark problems while it is orders of magnitude faster for certain systems of biochemical reactions.
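D-leaping accelerates exact stochastic simulation of reaction systems with delays. As background only, a minimal exact SSA (Gillespie) for a delay-free birth-death process, whose stationary mean k_birth/k_death is known, might look like this; the rate constants are illustrative assumptions:

```python
import numpy as np

def gillespie_birth_death(k_birth=10.0, k_death=1.0, x0=0,
                          t_end=20.0, seed=0):
    """Exact stochastic simulation (Gillespie) of a birth-death process:
    X -> X+1 at rate k_birth, X -> X-1 at rate k_death*X."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    while True:
        a1, a2 = k_birth, k_death * x       # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)      # time to next reaction
        if t > t_end:
            return x
        if rng.random() * a0 < a1:          # choose which reaction fires
            x += 1
        else:
            x -= 1

# the stationary distribution is Poisson with mean k_birth/k_death = 10
samples = [gillespie_birth_death(seed=s) for s in range(200)]
```

Every reaction event is simulated individually here, which is exactly the cost that leaping schemes such as D-leaping reduce.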

  12. Investigation of stochastic radiation transport methods in random heterogeneous mixtures

    NASA Astrophysics Data System (ADS)

    Reinert, Dustin Ray

Among the most formidable challenges facing our world is the need for safe, clean, affordable energy sources. Growing concerns over global warming induced climate change and the rising costs of fossil fuels threaten conventional means of electricity production and are driving the current nuclear renaissance. One concept at the forefront of international development efforts is the High Temperature Gas-Cooled Reactor (HTGR). With numerous passive safety features and a meltdown-proof design capable of attaining high thermodynamic efficiencies for electricity generation as well as high temperatures useful for the burgeoning hydrogen economy, the HTGR is an extremely promising technology. Unfortunately, the fundamental understanding of neutron behavior within HTGR fuels lags far behind that of more conventional water-cooled reactors. HTGRs utilize a unique heterogeneous fuel element design consisting of thousands of tiny fissile fuel kernels randomly mixed with a non-fissile graphite matrix. Monte Carlo neutron transport simulations of the HTGR fuel element geometry in its full complexity are infeasible and this has motivated the development of more approximate computational techniques. A series of MATLAB codes was written to perform Monte Carlo simulations within HTGR fuel pebbles to establish a comprehensive understanding of the parameters under which the accuracy of the approximate techniques diminishes. This research identified the accuracy of the chord length sampling method to be a function of the matrix scattering optical thickness, the kernel optical thickness, and the kernel packing density. Two new Monte Carlo methods designed to focus the computational effort upon the parameter conditions shown to contribute most strongly to the overall computational error were implemented and evaluated. An extended memory chord length sampling routine that recalls a neutron's prior material traversals was demonstrated to be effective in fixed source calculations containing

  13. Limited variance control in statistical low thrust guidance analysis. [stochastic algorithm for SEP comet Encke flyby mission

    NASA Technical Reports Server (NTRS)

    Jacobson, R. A.

    1975-01-01

    Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.

  14. A Simple Genetic Algorithm for Calibration of Stochastic Rock Discontinuity Networks

    NASA Astrophysics Data System (ADS)

    Jimenez, R.; Jurado-Piña, R.

    2012-07-01

    We present a novel approach for calibration of stochastic discontinuity network parameters based on genetic algorithms (GAs). To validate the approach, examples of application of the method to cases with known parameters of the original Poisson discontinuity network are presented. Parameters of the model are encoded as chromosomes using a binary representation, and such chromosomes evolve as successive generations of a randomly generated initial population, subjected to GA operations of selection, crossover and mutation. Such back-calculated parameters are employed to make assessments about the inference capabilities of the model using different objective functions with different probabilities of crossover and mutation. Results show that the predictive capabilities of GAs significantly depend on the type of objective function considered; and they also show that the calibration capabilities of the genetic algorithm can be acceptable for practical engineering applications, since in most cases they can be expected to provide parameter estimates with relatively small errors for those parameters of the network (such as intensity and mean size of discontinuities) that have the strongest influence on many engineering applications.
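A minimal sketch of the GA machinery described above (binary chromosomes, selection, crossover, bit-flip mutation), applied to a one-parameter toy calibration rather than a discontinuity-network model; every numeric setting here is an illustrative assumption:

```python
import random

def decode(bits, lo, hi):
    """Map a binary chromosome to a real parameter in [lo, hi]."""
    n = int("".join(map(str, bits)), 2)
    return lo + (hi - lo) * n / (2 ** len(bits) - 1)

def calibrate(target=4.2, n_bits=16, pop_size=40, p_cross=0.8,
              p_mut=0.02, generations=60, seed=7):
    """Minimal binary GA: tournament selection, one-point crossover and
    bit-flip mutation, minimizing |decoded - target| as the misfit."""
    rng = random.Random(seed)
    fitness = lambda c: -abs(decode(c, 0.0, 10.0) - target)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)   # tournament selection
            p2 = max(rng.sample(pop, 3), key=fitness)
            if rng.random() < p_cross:                  # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                          # bit-flip mutation
                nxt.append([b ^ 1 if rng.random() < p_mut else b for b in c])
        pop = nxt[:pop_size]
        best = max(pop + [best], key=fitness)           # track best-so-far
    return decode(best, 0.0, 10.0)

recovered = calibrate()
```

In the paper the objective function compares statistics of the simulated discontinuity network against field data; here it is a simple misfit so the sketch stays self-contained.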

  15. Image estimation using doubly stochastic gaussian random field models.

    PubMed

    Woods, J W; Dravida, S; Mediavilla, R

    1987-02-01

    The two-dimensional (2-D) doubly stochastic Gaussian (DSG) model was introduced by one of the authors to provide a complete model for spatial filters which adapt to the local structure in an image signal. Here we present the optimal estimator and 2-D fixed-lag smoother for this DSG model extending earlier work of Ackerson and Fu. As the optimal estimator has an exponentially growing state space, we investigate suboptimal estimators using both a tree and a decision-directed method. Experimental results are presented.

  16. Representation of nonlinear random transformations by non-gaussian stochastic neural networks.

    PubMed

    Turchetti, Claudio; Crippa, Paolo; Pirani, Massimiliano; Biagetti, Giorgio

    2008-06-01

    The learning capability of neural networks is equivalent to modeling physical events that occur in the real environment. Several early works have demonstrated that neural networks belonging to some classes are universal approximators of input-output deterministic functions. Recent works extend the ability of neural networks in approximating random functions using a class of networks named stochastic neural networks (SNN). In the language of system theory, the approximation of both deterministic and stochastic functions falls within the identification of nonlinear no-memory systems. However, all the results presented so far are restricted to the case of Gaussian stochastic processes (SPs) only, or to linear transformations that guarantee this property. This paper aims at investigating the ability of stochastic neural networks to approximate nonlinear input-output random transformations, thus widening the range of applicability of these networks to nonlinear systems with memory. In particular, this study shows that networks belonging to a class named non-Gaussian stochastic approximate identity neural networks (SAINNs) are capable of approximating the solutions of large classes of nonlinear random ordinary differential transformations. The effectiveness of this approach is demonstrated and discussed by some application examples.

  17. Multi-Parent Clustering Algorithms from Stochastic Grammar Data Models

    NASA Technical Reports Server (NTRS)

Mjolsness, Eric; Castano, Rebecca; Gray, Alexander

    1999-01-01

    We introduce a statistical data model and an associated optimization-based clustering algorithm which allows data vectors to belong to zero, one or several "parent" clusters. For each data vector the algorithm makes a discrete decision among these alternatives. Thus, a recursive version of this algorithm would place data clusters in a Directed Acyclic Graph rather than a tree. We test the algorithm with synthetic data generated according to the statistical data model. We also illustrate the algorithm using real data from large-scale gene expression assays.

  18. Benchmarking Stochastic Algorithms for Global Optimization Problems by Visualizing Confidence Intervals.

    PubMed

    Liu, Qunfeng; Chen, Wei-Neng; Deng, Jeremiah D; Gu, Tianlong; Zhang, Huaxiang; Yu, Zhengtao; Zhang, Jun

    2017-02-07

The popular performance profiles and data profiles for benchmarking deterministic optimization algorithms are extended to benchmark stochastic algorithms for global optimization problems. A general confidence interval is employed to replace the significance test, which is popular in traditional benchmarking methods but has drawn increasing criticism. By computing confidence bounds of the general confidence interval and visualizing them with performance profiles and/or data profiles, our benchmarking method can compare stochastic optimization algorithms graphically. Compared with traditional benchmarking methods, our method aggregates results statistically and is therefore suitable for large sets of benchmark problems. Compared with some sample-mean-based benchmarking methods, e.g., the method adopted in the black-box optimization benchmarking workshop/competition, our method considers not only sample means but also sample variances. The most important property of our method is that it is distribution-free, i.e., it does not depend on any distribution assumption about the population. This makes it a promising benchmarking method for stochastic optimization algorithms. Examples are provided to illustrate how to use the method to compare stochastic optimization algorithms.
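The following is not the paper's "general confidence interval" construction, but a sketch of the underlying idea: compare stochastic optimizers by confidence bounds over repeated independent runs instead of a significance test. The normal approximation and all numbers are our assumptions:

```python
import math, random

def run_confidence_interval(results, z=1.96):
    """Normal-approximation confidence interval for the mean final
    objective value achieved by a stochastic optimizer over runs."""
    n = len(results)
    mean = sum(results) / n
    var = sum((r - mean) ** 2 for r in results) / (n - 1)
    half = z * math.sqrt(var / n)
    return mean - half, mean + half

# toy comparison: two "optimizers" returning noisy final objective values
rng = random.Random(0)
algo_a = [rng.gauss(1.0, 0.2) for _ in range(50)]   # hypothetical runs
algo_b = [rng.gauss(1.5, 0.2) for _ in range(50)]
lo_a, hi_a = run_confidence_interval(algo_a)
lo_b, hi_b = run_confidence_interval(algo_b)
```

When the two intervals are disjoint, as here, one algorithm reliably reaches lower objective values; the interval widths also expose the run-to-run variance that sample-mean-only comparisons hide.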

  19. Genetic algorithms as global random search methods

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.

    1995-01-01

    Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.

  1. Modeling of stochastic dynamics of time-dependent flows under high-dimensional random forcing

    NASA Astrophysics Data System (ADS)

    Babaee, Hessam; Karniadakis, George

    2016-11-01

In this numerical study the effect of high-dimensional stochastic forcing on time-dependent flows is investigated. To efficiently quantify the evolution of stochasticity in such a system, the dynamically orthogonal method is used. In this methodology, the solution is approximated by a generalized Karhunen-Loeve (KL) expansion of the form u(x, t; ω) = ū(x, t) + Σ_{i=1}^{N} y_i(t; ω) u_i(x, t), in which ū(x, t) is the stochastic mean, the set of u_i(x, t)'s is a deterministic orthogonal basis, and the y_i(t; ω)'s are the stochastic coefficients. Explicit evolution equations for ū, u_i and y_i are formulated. The elements of the basis u_i(x, t) remain orthogonal for all times and evolve according to the system dynamics to capture the energetically dominant stochastic subspace. We consider two classical fluid dynamics problems: (1) flow over a cylinder, and (2) flow over an airfoil, under up to one-hundred-dimensional random forcing. We explore the interaction of intrinsic with extrinsic stochasticity in these flows. Supported by DARPA N66001-15-2-4055 and Office of Naval Research N00014-14-1-0166.
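The KL expansion above is dynamic; a static analogue can be computed from solution snapshots with a singular value decomposition (proper orthogonal decomposition). This sketch shows only the decomposition, not the dynamically orthogonal evolution equations, and the synthetic two-mode random field is an assumption:

```python
import numpy as np

def kl_expansion(snapshots, n_modes):
    """Discrete Karhunen-Loeve (POD) decomposition of random-field samples:
    rows are realizations u(x; omega_j) on a spatial grid. Returns the
    mean, orthonormal modes u_i(x), and coefficients y_i per realization."""
    mean = snapshots.mean(axis=0)
    fluct = snapshots - mean
    # SVD of the fluctuations: fluct = Y @ diag(s) @ modes
    u_svd, s, vt = np.linalg.svd(fluct, full_matrices=False)
    modes = vt[:n_modes]                       # u_i(x), orthonormal rows
    coeffs = u_svd[:, :n_modes] * s[:n_modes]  # y_i for each realization
    return mean, modes, coeffs

# synthetic random field spanned by exactly two spatial modes
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 64)
samples = (1.0 + rng.normal(size=(200, 1)) * np.sin(np.pi * x)
           + 0.3 * rng.normal(size=(200, 1)) * np.sin(2 * np.pi * x))
mean, modes, coeffs = kl_expansion(samples, n_modes=2)
recon = mean + coeffs @ modes
```

Because the synthetic field has exactly two fluctuation modes, a two-term truncation reconstructs the samples to machine precision; in the DO method the basis additionally evolves in time with the flow.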

  2. Markov Random Fields, Stochastic Quantization and Image Analysis

    DTIC Science & Technology

    1990-01-01

Markov random fields based on the lattice Z2 have been extensively used in image analysis in a Bayesian framework as a priori models for the ... of Image Analysis can be given some fundamental justification, then there is a remarkable connection between Probabilistic Image Analysis, Statistical Mechanics and Lattice-based Euclidean Quantum Field Theory.

  3. Dynamic response analysis of linear stochastic truss structures under stationary random excitation

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Chen, Jianjun; Cui, Mingtao; Cheng, Yi

    2005-03-01

This paper presents a new method for the dynamic response analysis of linear stochastic truss structures under stationary random excitation. Considering the randomness of the structural physical parameters and geometric dimensions, computational expressions for the mean value, variance and coefficient of variation of the mean-square value of the structural displacement and stress response under stationary random excitation are developed by means of the random variable functional moment method and the algebra synthesis method, starting from the frequency-domain expressions for the structural stationary random response. The influence of the randomness of the structural physical parameters and geometric dimensions on the randomness of the mean-square value of the displacement and stress response is examined through engineering examples.

  4. Measuring edge importance: a quantitative analysis of the stochastic shielding approximation for random processes on graphs.

    PubMed

    Schmidt, Deena R; Thomas, Peter J

    2014-04-17

    Mathematical models of cellular physiological mechanisms often involve random walks on graphs representing transitions within networks of functional states. Schmandt and Galán recently introduced a novel stochastic shielding approximation as a fast, accurate method for generating approximate sample paths from a finite state Markov process in which only a subset of states are observable. For example, in ion-channel models, such as the Hodgkin-Huxley or other conductance-based neural models, a nerve cell has a population of ion channels whose states comprise the nodes of a graph, only some of which allow a transmembrane current to pass. The stochastic shielding approximation consists of neglecting fluctuations in the dynamics associated with edges in the graph not directly affecting the observable states. We consider the problem of finding the optimal complexity reducing mapping from a stochastic process on a graph to an approximate process on a smaller sample space, as determined by the choice of a particular linear measurement functional on the graph. The partitioning of ion-channel states into conducting versus nonconducting states provides a case in point. In addition to establishing that Schmandt and Galán's approximation is in fact optimal in a specific sense, we use recent results from random matrix theory to provide heuristic error estimates for the accuracy of the stochastic shielding approximation for an ensemble of random graphs. Moreover, we provide a novel quantitative measure of the contribution of individual transitions within the reaction graph to the accuracy of the approximate process.

  5. A stochastic model of randomly accelerated walkers for human mobility

    PubMed Central

    Gallotti, Riccardo; Bazzani, Armando; Rambaldi, Sandro; Barthelemy, Marc

    2016-01-01

Recent studies of human mobility largely focus on displacement patterns, and power-law fits of empirical long-tailed distributions of distances are usually associated with scale-free superdiffusive random walks called Lévy flights. However, drawing conclusions about a complex system from a fit, without any further knowledge of the underlying dynamics, might lead to erroneous interpretations. Here we show, on the basis of a data set describing the trajectories of 780,000 private vehicles in Italy, that the Lévy flight model cannot explain the behaviour of travel times and speeds. We therefore introduce a class of accelerated random walks, validated by empirical observations, where the velocity changes due to acceleration kicks at random times. Combining this mechanism with an exponentially decaying distribution of travel times leads to a short-tailed distribution of distances, which could indeed be mistaken for a truncated power law. These results illustrate the limits of purely descriptive models and provide a mechanistic view of mobility. PMID:27573984
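The proposed mechanism (velocity kicks at random times combined with exponentially distributed travel times) can be sketched directly; all parameter values below are illustrative assumptions, not values fitted to the vehicle data:

```python
import random

def trip_distance(rng, kick_rate=1.0, kick_sigma=5.0, mean_trip=0.5):
    """One trip of an accelerated random walker: the trip duration is
    exponentially distributed; the velocity receives Gaussian kicks at
    Poisson-distributed times and distance accumulates between kicks."""
    duration = rng.expovariate(1.0 / mean_trip)
    t, v, dist = 0.0, 10.0, 0.0
    while True:
        dt = rng.expovariate(kick_rate)       # time to the next kick
        if t + dt >= duration:
            dist += abs(v) * (duration - t)   # finish the trip
            return dist
        dist += abs(v) * dt
        v += rng.gauss(0.0, kick_sigma)       # acceleration kick
        t += dt

rng = random.Random(42)
dists = [trip_distance(rng) for _ in range(2000)]
```

A histogram of `dists` is short-tailed (exponential-like) rather than power-law, which is the point of the paper's critique of naive Lévy-flight fits.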

  6. Recursive state estimation for discrete time-varying stochastic nonlinear systems with randomly occurring deception attacks

    NASA Astrophysics Data System (ADS)

    Ding, Derui; Shen, Yuxuan; Song, Yan; Wang, Yongxiong

    2016-07-01

This paper is concerned with the state estimation problem for a class of discrete time-varying stochastic nonlinear systems with randomly occurring deception attacks. The stochastic nonlinearity, described by statistical means and covering several classes of well-studied nonlinearities as special cases, is taken into the discussion. The randomly occurring deception attacks are modelled by a set of random variables obeying Bernoulli distributions with given probabilities. The aim of the addressed state estimation problem is to design an estimator that minimizes an upper bound on the estimation error covariance at each sampling instant; such an upper bound is minimized by properly designing the estimator gain. The proposed estimation scheme, in the form of two Riccati-like difference equations, is recursive. Finally, a simulation example is exploited to demonstrate the effectiveness of the proposed scheme.

  7. Stochastic modeling and vibration analysis of rotating beams considering geometric random fields

    NASA Astrophysics Data System (ADS)

    Choi, Chan Kyu; Yoo, Hong Hee

    2017-02-01

    Geometric parameters such as the thickness and width of a beam are random for various reasons including manufacturing tolerance and operation wear. Due to these random parameter properties, the vibration characteristics of the structure are also random. In this paper, we derive equations of motion to conduct stochastic vibration analysis of a rotating beam using the assumed mode method and stochastic spectral method. The accuracy of the proposed method is first verified by comparing analysis results to those obtained with Monte-Carlo simulation (MCS). The efficiency of the proposed method is then compared to that of MCS. Finally, probability densities of various modal and transient response characteristics of rotating beams are obtained with the proposed method.
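    As a minimal illustration of the MCS reference solution mentioned above (not the authors' assumed-mode/spectral formulation), the sketch below propagates random thickness and width through the closed-form first natural frequency of a non-rotating cantilever; material and geometry values are illustrative assumptions:

```python
import math, random, statistics

# First cantilever bending frequency: f1 = (lam^2 / 2*pi) * sqrt(E*I/(rho*A*L^4)),
# lam = 1.875, with rectangular section I = w*t^3/12 and A = w*t, so f1 is
# linear in the thickness t (the width w cancels exactly).

def first_frequency(E, rho, L, w, t):
    lam = 1.875
    I = w * t**3 / 12.0
    A = w * t
    return (lam**2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))

def mcs_frequency(n=20000, seed=0):
    rng = random.Random(seed)
    E, rho, L = 210e9, 7850.0, 1.0           # steel beam, 1 m (illustrative)
    samples = []
    for _ in range(n):
        t = rng.gauss(0.010, 0.0005)         # thickness 10 mm, 5% scatter
        w = rng.gauss(0.050, 0.001)          # width (no effect on f1)
        samples.append(first_frequency(E, rho, L, w, t))
    return statistics.mean(samples), statistics.stdev(samples)

mean_f, std_f = mcs_frequency()
```

    The resulting histogram of `samples` is the kind of probability density of a modal characteristic that the proposed spectral method is validated against.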

  8. Random-walk-based stochastic modeling of three-dimensional fiber systems.

    PubMed

    Altendorf, Hellen; Jeulin, Dominique

    2011-04-01

    For the simulation of fiber systems, there exist several stochastic models: systems of straight nonoverlapping fibers, systems of overlapping bending fibers, or fiber systems created by sedimentation. However, there is a lack of models providing dense, nonoverlapping fiber systems with a given random orientation distribution and a controllable level of bending. We introduce a new stochastic model in this paper that generalizes the force-biased packing approach to fibers represented as chains of balls. The starting configuration is modeled using random walks, where two parameters in the multivariate von Mises-Fisher orientation distribution control the bending. The points of the random walk are associated with a radius and the current orientation. The resulting chains of balls are interpreted as fibers. The final fiber configuration is obtained as an equilibrium between repulsion forces avoiding crossing fibers and restoring forces preserving the fiber structure. This approach provides high volume fractions up to 72.0075%.

  9. Collision-Resolution Algorithms and Random-Access Communications.

    DTIC Science & Technology

    1980-04-01

    ...performance of random-access algorithms that incorporate these algorithms. The first and most important of these is the conditional mean CRI

  10. A stochastic disaggregation algorithm for analysis of change in the sub-daily extreme rainfall

    NASA Astrophysics Data System (ADS)

    Nazemi, Ali; Elshorbagy, Amin

    2014-05-01

    The statistical characteristics of local extreme rainfall, particularly at shorter durations, are among the key design parameters for urban storm water collection systems. Recent observations have provided sufficient evidence that the ongoing climate change alters the form, pattern, intensity and frequency of precipitation across various temporal and spatial scales. Quantifying and predicting the resulting changes in the extremes, however, remains a challenging problem, especially for local and shorter-duration events. Most importantly, climate models are still unable to produce extreme rainfall events at global and regional scales. In addition, current climate model simulations are at much coarser temporal and spatial resolutions than can be readily used in local design applications. Spatial and temporal downscaling methods, therefore, are necessary to bring the climate model simulations to finer scales. To tackle the temporal downscaling problem, we propose a stochastic algorithm, based on the novel notion of Rainfall Distribution Functions (RDFs), to disaggregate daily rainfall into hourly estimates. In brief, RDFs describe how the historical daily rainfall totals are distributed into hourly segments. From a set of RDFs, an empirical probability distribution function can be constructed to describe the proportion of daily cumulative rainfall at each hourly time step. These hour-by-hour empirical distribution functions can be used for random generation of hourly rainfall given total daily values. We used this algorithm to disaggregate the daily spring and summer rainfalls in the city of Saskatoon, Saskatchewan, Canada and tested the performance of the disaggregation with respect to the reproduction of extremes. In particular, the Intensity-Duration-Frequency (IDF) curves generated based on both historical and reconstructed extremes are compared. The proposed disaggregation scheme is further plugged into an existing daily rainfall generator to
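    The RDF mechanism can be sketched roughly as follows (toy data and function names are ours, not the authors'): each historical wet day contributes a 24-vector of hourly fractions of its daily total, and a daily value is disaggregated by sampling one such profile and scaling it:

```python
import random

def hourly_fractions(hourly_record):
    """Convert a day's 24 hourly depths into fractions of the daily total."""
    total = sum(hourly_record)
    return [h / total for h in hourly_record] if total > 0 else None

def disaggregate(daily_total, rdf_library, rng):
    """Sample one historical fraction profile and scale it to the daily total."""
    profile = rng.choice(rdf_library)
    return [daily_total * f for f in profile]

rng = random.Random(42)
# Toy "historical" hourly data: three wet days with different diurnal shapes.
history = [
    [0] * 6 + [1, 3, 5, 3, 1] + [0] * 13,   # morning burst
    [0] * 14 + [2, 4, 2] + [0] * 7,         # afternoon storm
    [0.5] * 24,                             # uniform drizzle
]
library = [p for p in (hourly_fractions(d) for d in history) if p]
hourly = disaggregate(10.0, library, rng)   # split a 10 mm day into hours
```

    Repeating the sampling many times per daily value yields an ensemble of hourly series from which sub-daily extremes and IDF curves can be estimated.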

  11. A Combined Criterion for Existence and Continuity of Random Attractors for Stochastic Lattice Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Gu, Anhui; Li, Yangrong

    The paper is devoted to establishing a combined sufficient criterion for the existence and upper semi-continuity of random attractors for stochastic lattice dynamical systems. Relying on the family of random systems itself, we first establish the abstract result for the case where the family is convergent, uniformly absorbing and uniformly asymptotically null in the phase space. We then apply the results to a second-order lattice dynamical system driven by multiplicative white noise. It is indicated that a criterion depending on the dynamical system itself seems more applicable than the existing ones to lattice differential models.

  12. A New Continuous Rotation IMU Alignment Algorithm Based on Stochastic Modeling for Cost Effective North-Finding Applications.

    PubMed

    Li, Yun; Wu, Wenqi; Jiang, Qingan; Wang, Jinling

    2016-12-13

    Based on stochastic modeling of Coriolis vibration gyros by the Allan variance technique, this paper discusses Angle Random Walk (ARW), Rate Random Walk (RRW) and Markov process gyroscope noises which have significant impacts on the North-finding accuracy. A new continuous rotation alignment algorithm for a Coriolis vibration gyroscope Inertial Measurement Unit (IMU) is proposed in this paper, in which the extended observation equations are used for the Kalman filter to enhance the estimation of gyro drift errors, thus improving the north-finding accuracy. Theoretical and numerical comparisons between the proposed algorithm and the traditional ones are presented. The experimental results show that the new continuous rotation alignment algorithm using the extended observation equations in the Kalman filter is more efficient than the traditional two-position alignment method. Using Coriolis vibration gyros with bias instability of 0.1°/h, a north-finding accuracy of 0.1° (1σ) is achieved by the new continuous rotation alignment algorithm, compared with 0.6° (1σ) north-finding accuracy for the two-position alignment and 1° (1σ) for the fixed-position alignment.
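    A minimal sketch of the Allan variance technique underlying the stochastic modelling (simple non-overlapped estimator, synthetic white rate noise, illustrative parameters):

```python
import math, random

# Simple (non-overlapped) Allan deviation: average the squared differences
# of successive cluster means of the rate record. On a log-log sigma(tau)
# plot, Angle Random Walk appears as a -1/2 slope and Rate Random Walk
# as a +1/2 slope.

def allan_deviation(rates, m):
    """Allan deviation at cluster size m (averaging time tau = m*dt)."""
    n = len(rates) // m
    means = [sum(rates[i * m:(i + 1) * m]) / m for i in range(n)]
    diffs = [(means[k + 1] - means[k]) ** 2 for k in range(n - 1)]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))

rng = random.Random(7)
white = [rng.gauss(0.0, 0.1) for _ in range(200000)]   # pure white rate noise
# For white noise sigma(tau) scales as 1/sqrt(tau): quadrupling the
# cluster size should roughly halve the Allan deviation.
s1 = allan_deviation(white, 10)
s2 = allan_deviation(white, 40)
```

    Fitting the measured sigma(tau) curve in these slope regions is how the ARW, RRW and Markov noise coefficients entering the Kalman filter model are identified.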

  13. A New Continuous Rotation IMU Alignment Algorithm Based on Stochastic Modeling for Cost Effective North-Finding Applications

    PubMed Central

    Li, Yun; Wu, Wenqi; Jiang, Qingan; Wang, Jinling

    2016-01-01

    Based on stochastic modeling of Coriolis vibration gyros by the Allan variance technique, this paper discusses Angle Random Walk (ARW), Rate Random Walk (RRW) and Markov process gyroscope noises which have significant impacts on the North-finding accuracy. A new continuous rotation alignment algorithm for a Coriolis vibration gyroscope Inertial Measurement Unit (IMU) is proposed in this paper, in which the extended observation equations are used for the Kalman filter to enhance the estimation of gyro drift errors, thus improving the north-finding accuracy. Theoretical and numerical comparisons between the proposed algorithm and the traditional ones are presented. The experimental results show that the new continuous rotation alignment algorithm using the extended observation equations in the Kalman filter is more efficient than the traditional two-position alignment method. Using Coriolis vibration gyros with bias instability of 0.1°/h, a north-finding accuracy of 0.1° (1σ) is achieved by the new continuous rotation alignment algorithm, compared with 0.6° (1σ) north-finding accuracy for the two-position alignment and 1° (1σ) for the fixed-position alignment. PMID:27983585

  14. Validation of an algorithm for delay stochastic simulation of transcription and translation in prokaryotic gene expression

    NASA Astrophysics Data System (ADS)

    Roussel, Marc R.; Zhu, Rui

    2006-12-01

    The quantitative modeling of gene transcription and translation requires a treatment of two key features: stochastic fluctuations due to the limited copy numbers of key molecules (genes, RNA polymerases, ribosomes), and delayed output due to the time required for biopolymer synthesis. Recently proposed algorithms allow for efficient simulations of such systems. However, it is critical to know whether the results of delay stochastic simulations agree with those from more detailed models of the transcription and translation processes. We present a generalization of previous delay stochastic simulation algorithms which allows both for multiple delays and for distributions of delay times. We show that delay stochastic simulations closely approximate simulations of a detailed transcription model except when two-body effects (e.g. collisions between polymerases on a template strand) are important. Finally, we study a delay stochastic model of prokaryotic transcription and translation which reproduces observations from a recent experimental study in which a single gene was expressed under the control of a repressed lac promoter in E. coli cells. This demonstrates our ability to quantitatively model gene expression using these new methods.
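    A minimal delay-SSA sketch in the spirit of the generalized algorithm (single species, one fixed delay, made-up rate constants; not the authors' multi-delay implementation):

```python
import heapq, random

# Transcript initiation fires at rate k_init, but the finished mRNA appears
# only after a fixed elongation delay; degradation is first-order (rate g).
# Queued delayed completions are interleaved with Gillespie steps via a heap;
# by memorylessness the exponential waiting time may simply be redrawn.

def delay_ssa(t_end, k_init=2.0, g=0.1, delay=5.0, seed=3):
    rng = random.Random(seed)
    t, mrna = 0.0, 0
    pending = []                      # min-heap of scheduled completion times
    while t < t_end:
        a_total = k_init + g * mrna   # total propensity
        dt = rng.expovariate(a_total)
        if pending and pending[0] <= t + dt:
            t = heapq.heappop(pending)        # delayed completion fires first
            if t >= t_end:
                break
            mrna += 1
            continue
        t += dt
        if t >= t_end:
            break
        if rng.random() < k_init / a_total:
            heapq.heappush(pending, t + delay)   # initiation: product appears later
        else:
            mrna -= 1                            # degradation

    return mrna

# The delay shifts transients but not the stationary mean k_init/g = 20.
final_counts = [delay_ssa(200.0, seed=s) for s in range(30)]
avg = sum(final_counts) / len(final_counts)
```

    Distributed delays are obtained by pushing `t + rng_draw()` instead of a fixed `t + delay`, which is essentially the generalization the paper validates.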

  15. Stochastic responses of a viscoelastic-impact system under additive and multiplicative random excitations

    NASA Astrophysics Data System (ADS)

    Zhao, Xiangrong; Xu, Wei; Yang, Yongge; Wang, Xiying

    2016-06-01

    This paper deals with the stochastic responses of a viscoelastic-impact system under additive and multiplicative random excitations. The viscoelastic force is replaced by a combination of stiffness and damping terms. A non-smooth transformation of the state variables is utilized to transform the original system into a new system without the impact term. The stochastic averaging method is applied to yield the stationary probability density functions. The validity of the analytical method is verified by comparing the analytical results with the numerical results. It is worth noting that the restitution coefficient, the viscoelastic parameters and the damping coefficients can induce the occurrence of stochastic P-bifurcation. Furthermore, joint stationary probability density functions with three peaks are explored.

  16. Hybrid discrete/continuum algorithms for stochastic reaction networks

    DOE PAGES

    Safta, Cosmin; Sargsyan, Khachik; Debusschere, Bert; ...

    2014-10-22

    Direct solutions of the Chemical Master Equation (CME) governing Stochastic Reaction Networks (SRNs) are generally prohibitively expensive due to excessive numbers of possible discrete states in such systems. To enhance computational efficiency we develop a hybrid approach where the evolution of states with low molecule counts is treated with the discrete CME model while that of states with large molecule counts is modeled by the continuum Fokker-Planck equation. The Fokker-Planck equation is discretized using a 2nd order finite volume approach with appropriate treatment of flux components to avoid negative probability values. The numerical construction at the interface between the discrete and continuum regions implements the transfer of probability reaction by reaction according to the stoichiometry of the system. As a result, the performance of this novel hybrid approach is explored for a two-species circadian model with computational efficiency gains of about one order of magnitude.

  17. Hybrid discrete/continuum algorithms for stochastic reaction networks

    SciTech Connect

    Safta, Cosmin; Sargsyan, Khachik; Debusschere, Bert; Najm, Habib N.

    2015-01-15

    Direct solutions of the Chemical Master Equation (CME) governing Stochastic Reaction Networks (SRNs) are generally prohibitively expensive due to excessive numbers of possible discrete states in such systems. To enhance computational efficiency we develop a hybrid approach where the evolution of states with low molecule counts is treated with the discrete CME model while that of states with large molecule counts is modeled by the continuum Fokker–Planck equation. The Fokker–Planck equation is discretized using a 2nd order finite volume approach with appropriate treatment of flux components. The numerical construction at the interface between the discrete and continuum regions implements the transfer of probability reaction by reaction according to the stoichiometry of the system. The performance of this novel hybrid approach is explored for a two-species circadian model with computational efficiency gains of about one order of magnitude.

  18. Hybrid discrete/continuum algorithms for stochastic reaction networks

    SciTech Connect

    Safta, Cosmin; Sargsyan, Khachik; Debusschere, Bert; Najm, Habib N.

    2014-10-22

    Direct solutions of the Chemical Master Equation (CME) governing Stochastic Reaction Networks (SRNs) are generally prohibitively expensive due to excessive numbers of possible discrete states in such systems. To enhance computational efficiency we develop a hybrid approach where the evolution of states with low molecule counts is treated with the discrete CME model while that of states with large molecule counts is modeled by the continuum Fokker-Planck equation. The Fokker-Planck equation is discretized using a 2nd order finite volume approach with appropriate treatment of flux components to avoid negative probability values. The numerical construction at the interface between the discrete and continuum regions implements the transfer of probability reaction by reaction according to the stoichiometry of the system. As a result, the performance of this novel hybrid approach is explored for a two-species circadian model with computational efficiency gains of about one order of magnitude.

  19. The analyses of dynamic response and reliability of fuzzy-random truss under stationary stochastic excitation

    NASA Astrophysics Data System (ADS)

    Ma, Juan; Gao, Wei; Wriggers, Peter; Wu, Tao; Sahraee, Shahab

    2010-04-01

    A new two-factor method based on the probability and the fuzzy sets theory is used for the analyses of the dynamic response and reliability of fuzzy-random truss systems under the stationary stochastic excitation. Considering the fuzzy-randomness of the structural physical parameters and geometric dimensions simultaneously, the fuzzy-random correlation function matrix of structural displacement response in time domain and the fuzzy-random mean square values of structural dynamic response in frequency domain are developed by using the two-factor method, and the fuzzy numerical characteristics of dynamic responses are then derived. Based on numerical characteristics of structural fuzzy-random dynamic responses, the structural fuzzy-random dynamic reliability and its fuzzy numerical characteristic are obtained from the Poisson equation. The effects of the uncertainty of the structural parameters on structural dynamic response and reliability are illustrated via two engineering examples and some important conclusions are obtained.

  20. A stochastic model and Monte Carlo algorithm for fluctuation-induced H2 formation on the surface of interstellar dust grains

    NASA Astrophysics Data System (ADS)

    Sabelfeld, K. K.

    2015-09-01

    A stochastic algorithm for simulation of fluctuation-induced kinetics of H2 formation on grain surfaces is suggested as a generalization of the technique developed in our recent studies [1] where this method was developed to describe the annihilation of spatially separate electrons and holes in a disordered semiconductor. The stochastic model is based on the spatially inhomogeneous, nonlinear integro-differential Smoluchowski equations with random source term. In this paper we derive the general system of Smoluchowski type equations for the formation of H2 from two hydrogen atoms on the surface of interstellar dust grains with physisorption and chemisorption sites. We focus in this study on the spatial distribution, and numerically investigate the segregation in the case of a source with a continuous generation in time and randomly distributed in space. The stochastic particle method presented is based on a probabilistic interpretation of the underlying process as a stochastic Markov process of interacting particle system in discrete but randomly progressed time instances. The segregation is analyzed through the correlation analysis of the vector random field of concentrations which appears to be isotropic in space and stationary in time.

  1. A Randomized Approximate Nearest Neighbors Algorithm

    DTIC Science & Technology

    2010-09-14

    Introduction to Harmonic Analysis, Second edition, Dover Publications (1976). [12] D. Knuth, Seminumerical Algorithms, vol. 2 of The Art of Computer ... Yale University, Department of Computer Science, New Haven, CT, 06520. ... We may further assume that t > a² and evaluate the cdf of D−a at t by computing the probability of D−a being smaller than t, to obtain F_{D−a}(t) = ∫_{a²}^{t} ...

  2. On efficient randomized algorithms for finding the PageRank vector

    NASA Astrophysics Data System (ADS)

    Gasnikov, A. V.; Dmitriev, D. Yu.

    2015-03-01

    Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ε: ε ≫ n^{-1}. Thus, the possibility of brute-force multiplication of P by the column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is a unit simplex in ℝ^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
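    The first, Markov chain Monte Carlo idea can be sketched roughly as follows (toy graph and a restart-based walk of our choosing; not the exact method or running-time bounds of the paper):

```python
import random
from collections import Counter

# Restart-style random walks: with probability `damping` follow a random
# out-link, otherwise stop; visit frequencies estimate the ranking vector.

def mc_pagerank(adj, walks=20000, damping=0.85, seed=5):
    rng = random.Random(seed)
    nodes = list(adj)
    visits = Counter()
    for _ in range(walks):
        v = rng.choice(nodes)          # uniform restart point
        while True:
            visits[v] += 1
            if not adj[v] or rng.random() > damping:
                break
            v = rng.choice(adj[v])
    total = sum(visits.values())
    return {u: visits[u] / total for u in nodes}

# Toy star graph: every node links to "hub", hub links back to "a".
adj = {"hub": ["a"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
pr = mc_pagerank(adj)
```

    No matrix-vector product over all n entries is ever formed, which is the point of such randomized methods when n is 10^7 or larger.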

  3. A stochastic learning algorithm for layered neural networks

    SciTech Connect

    Bartlett, E.B.; Uhrig, R.E.

    1992-12-31

    The random optimization method typically uses a Gaussian probability density function (PDF) to generate a random search vector. In this paper the random search technique is applied to the neural network training problem and is modified to dynamically seek out the optimal probability density function (OPDF) from which to select the search vector. The dynamic OPDF search process, combined with an auto-adaptive stratified sampling technique and a dynamic node architecture (DNA) learning scheme, completes the modifications of the basic method. The DNA technique determines the appropriate number of hidden nodes needed for a given training problem. By using DNA, researchers do not have to set the neural network architecture before training is initiated. The approach is applied to networks of generalized, fully interconnected, continuous perceptrons. Computer simulation results are given.
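    A rough sketch of the underlying random optimization idea (adaptive Gaussian search on a quadratic stand-in objective; the OPDF and DNA machinery of the paper is not reproduced):

```python
import random

# Gaussian random search whose step size adapts: widen the search PDF on
# success, narrow it on failure. The quadratic `loss` stands in for a
# network's training error over weight vector w.

def loss(w):
    return sum((wi - 1.0) ** 2 for wi in w)

def random_search(dim=5, iters=2000, seed=11):
    rng = random.Random(seed)
    w = [0.0] * dim
    sigma = 1.0
    best = loss(w)
    for _ in range(iters):
        cand = [wi + rng.gauss(0.0, sigma) for wi in w]
        c = loss(cand)
        if c < best:
            w, best = cand, c
            sigma *= 1.1           # success: widen the search PDF
        else:
            sigma *= 0.98          # failure: narrow it
        sigma = max(sigma, 1e-6)
    return best

final = random_search()
```

    Adapting sigma is a one-parameter caricature of seeking the "optimal PDF"; the paper additionally adapts the sampling strata and the network architecture itself.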

  4. Stochastic diffusion and Kolmogorov entropy in regular and random Hamiltonians

    SciTech Connect

    Isichenko, M.B. (Inst. for Fusion Studies; Kurchatov Inst. of Atomic Energy, Moscow); Horton, W. (Inst. for Fusion Studies); Kim, D.E.; Heo, E.G.; Choi, D.I.

    1992-05-01

    The scalings of the E × B turbulent diffusion coefficient D and the Kolmogorov entropy K with the potential amplitude φ̃ of the fluctuation are studied using the geometrical analysis of closed and extended particle orbits for several types of drift Hamiltonians. The high-amplitude scalings D ∝ φ̃² or φ̃⁰ and K ∝ log φ̃ are shown to arise from different forms of a periodic (four-wave) Hamiltonian φ̃(x, y, t), thereby explaining the controversy in earlier numerical results. For a quasi-random (six-wave) Hamiltonian, numerical data for the diffusion D ∝ φ̃^(0.92 ± 0.04) and the Kolmogorov entropy K ∝ φ̃^(0.56 ± 0.17) are presented and compared with the percolation theory predictions D_p ∝ φ̃^0.7, K_p ∝ φ̃^0.5. To study the turbulent diffusion in a general form of Hamiltonian, a new approach based on a series expansion of the Lagrangian velocity correlation function is proposed and discussed.

  5. Stochastic diffusion and Kolmogorov entropy in regular and random Hamiltonians

    SciTech Connect

    Isichenko, M.B.; Horton, W.; Kim, D.E.; Heo, E.G.; Choi, D.I.

    1992-05-01

    The scalings of the E × B turbulent diffusion coefficient D and the Kolmogorov entropy K with the potential amplitude φ̃ of the fluctuation are studied using the geometrical analysis of closed and extended particle orbits for several types of drift Hamiltonians. The high-amplitude scalings D ∝ φ̃² or φ̃⁰ and K ∝ log φ̃ are shown to arise from different forms of a periodic (four-wave) Hamiltonian φ̃(x, y, t), thereby explaining the controversy in earlier numerical results. For a quasi-random (six-wave) Hamiltonian, numerical data for the diffusion D ∝ φ̃^(0.92 ± 0.04) and the Kolmogorov entropy K ∝ φ̃^(0.56 ± 0.17) are presented and compared with the percolation theory predictions D_p ∝ φ̃^0.7, K_p ∝ φ̃^0.5. To study the turbulent diffusion in a general form of Hamiltonian, a new approach based on a series expansion of the Lagrangian velocity correlation function is proposed and discussed.

  6. Stochastic Perturbations and Invariant Measures of Position Dependent Random Maps via Fourier Approximations

    NASA Astrophysics Data System (ADS)

    Islam, Md Shafiqul

    Let T = {τ₁(x), τ₂(x), …, τ_K(x); p₁(x), p₂(x), …, p_K(x)} be a position dependent random map which possesses a unique absolutely continuous invariant measure μ̂ with probability density function f̂. We consider a family {T_N}_{N≥1} of stochastic perturbations T_N of the random map T. Each T_N is a Markov process with the transition density ∑_{k=1}^{K} p_k(x) q_N(τ_k(x), ·), where q_N(x, ·) is a doubly stochastic, periodic and separable kernel. Using Fourier approximation, we construct a finite dimensional approximation P_N to a perturbed Perron-Frobenius operator. Let f*_N be a fixed point of P_N. We show that {f*_N} converges in L¹ to f̂.

  7. Random vibration of nonlinear beams by the new stochastic linearization technique

    NASA Technical Reports Server (NTRS)

    Fang, J.

    1994-01-01

    In this paper, the beam under general time-dependent stationary random excitation is investigated for the case when an exact solution is unavailable. Numerical simulations are carried out to compare the results with those yielded by conventional linearization techniques. It is found that the modified version of the stochastic linearization technique yields considerably more accurate results for the mean square displacement of the beam than the conventional equivalent linearization technique, especially in the case of large nonlinearity.

  8. Stochastic interference of fluorescence radiation in random media with large inhomogeneities

    NASA Astrophysics Data System (ADS)

    Zimnyakov, D. A.; Asharchuk, I. A.; Yuvchenko, S. A.; Sviridov, A. P.

    2017-03-01

    Stochastic interference of fluorescence light outgoing from a dye-doped coarse-grained random medium, which was pumped by the continuous-wave laser radiation, was experimentally studied. It was found that the contrast of random interference patterns highly correlates with the wavelength-dependent fluorescence intensity and reaches its minimum in the vicinity of the cusp of emission spectrum. The decay in the contrast of spectrally selected speckle patterns was interpreted in terms of the pathlength distribution broadening for fluorescence radiation propagating in the medium. This broadening is presumably caused by the wavelength-dependent negative absorption of the medium.

  9. Research on machine learning framework based on random forest algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Qiong; Cheng, Hui; Han, Hai

    2017-03-01

    With the continuous development of machine learning, industry and academia have released many machine learning frameworks based on distributed computing platforms, and these have been widely used. However, existing frameworks are constrained by the limitations of the machine learning algorithms themselves, such as parameter selection, the interference of noise, and a high barrier to use. This paper introduces the research background of machine learning frameworks and, in combination with the random forest algorithm, a commonly used classification algorithm in machine learning, sets out the research objectives and content, proposes an improved adaptive random forest algorithm (referred to as ARF), and, on the basis of ARF, designs and implements a machine learning framework.
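    The random forest principle the framework builds on can be sketched minimally (bootstrap sampling plus random feature choice, here with single-split stumps on toy data; this is not the paper's ARF):

```python
import random
from collections import Counter

# Each stump trains on a bootstrap resample and a randomly chosen feature,
# splitting at the bootstrap mean; the forest predicts by majority vote.

def train_stump(data, rng):
    feat = rng.randrange(len(data[0][0]))             # random feature
    boot = [rng.choice(data) for _ in data]           # bootstrap sample
    thresh = sum(x[feat] for x, _ in boot) / len(boot)
    left = Counter(y for x, y in boot if x[feat] <= thresh)
    right = Counter(y for x, y in boot if x[feat] > thresh)
    lbl_l = left.most_common(1)[0][0] if left else 0
    lbl_r = right.most_common(1)[0][0] if right else lbl_l
    return feat, thresh, lbl_l, lbl_r

def forest_predict(stumps, x):
    votes = Counter()
    for feat, thresh, lbl_l, lbl_r in stumps:
        votes[lbl_l if x[feat] <= thresh else lbl_r] += 1
    return votes.most_common(1)[0][0]

rng = random.Random(0)
# Toy set: the class is simply the sign of the first feature.
pts = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(50)]
data = [(p, int(p[0] > 0)) for p in pts]
stumps = [train_stump(data, rng) for _ in range(25)]
acc = sum(forest_predict(stumps, x) == y for x, y in data) / len(data)
```

    Randomizing both the sample and the feature per tree is what decorrelates the ensemble members and makes the vote robust to noise.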

  10. System and Method for Tracking Vehicles Using Random Search Algorithms.

    DTIC Science & Technology

    1997-01-31

    A patent application is available for licensing. Requests for information should be addressed to: OFFICE OF NAVAL RESEARCH, DEPARTMENT OF THE NAVY. ... relates to a system and a method for tracking vehicles using random search algorithm methodologies. (2) Description of the Prior Art: ... algorithm methodologies for finding peaks in non-linear functions. U.S. Patent No. 5,148,513 to Koza et al., for example, relates to a non-linear

  11. The diffusive finite state projection algorithm for efficient simulation of the stochastic reaction-diffusion master equation

    PubMed Central

    Drawert, Brian; Lawson, Michael J.; Petzold, Linda; Khammash, Mustafa

    2010-01-01

    We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm. PMID:20170209

  12. On the relationship between Gaussian stochastic blockmodels and label propagation algorithms

    NASA Astrophysics Data System (ADS)

    Zhang, Junhao; Chen, Tongfei; Hu, Junfeng

    2015-03-01

    The problem of community detection has received great attention in recent years. Many methods have been proposed to discover communities in networks. In this paper, we propose a Gaussian stochastic blockmodel that uses Gaussian distributions to fit weight of edges in networks for non-overlapping community detection. The maximum likelihood estimation of this model has the same objective function as general label propagation with node preference. The node preference of a specific vertex turns out to be a value proportional to the intra-community eigenvector centrality (the corresponding entry in principal eigenvector of the adjacency matrix of the subgraph inside that vertex's community) under maximum likelihood estimation. Additionally, the maximum likelihood estimation of a constrained version of our model is highly related to another extension of the label propagation algorithm, namely, the label propagation algorithm under constraint. Experiments show that the proposed Gaussian stochastic blockmodel performs well on various benchmark networks.
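    Plain label propagation, the baseline that the blockmodel above is related to, can be sketched as follows (keep-own-label tie-breaking and the toy two-triangle graph are our choices, without the node-preference weighting the paper derives):

```python
import random
from collections import Counter

# Each node repeatedly adopts the most frequent label among its neighbours,
# keeping its current label on ties, otherwise taking the smallest top label.
# At a fixed point every node's label is a majority label in its neighbourhood.

def label_propagation(adj, sweeps=20, seed=2):
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    order = list(adj)
    for _ in range(sweeps):
        rng.shuffle(order)                 # asynchronous, random order
        changed = False
        for v in order:
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            winners = {l for l, c in counts.items() if c == top}
            if labels[v] not in winners:
                labels[v] = min(winners)
                changed = True
        if not changed:
            break                          # fixed point reached
    return labels

# Two triangles joined by a single bridge edge (2-3).
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
labels = label_propagation(adj)
```

    Weighting each neighbour's vote by a node preference, as in the paper's maximum likelihood view, changes only the `counts` line.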

  13. Measuring Edge Importance: A Quantitative Analysis of the Stochastic Shielding Approximation for Random Processes on Graphs

    PubMed Central

    2014-01-01

    Mathematical models of cellular physiological mechanisms often involve random walks on graphs representing transitions within networks of functional states. Schmandt and Galán recently introduced a novel stochastic shielding approximation as a fast, accurate method for generating approximate sample paths from a finite state Markov process in which only a subset of states are observable. For example, in ion-channel models, such as the Hodgkin–Huxley or other conductance-based neural models, a nerve cell has a population of ion channels whose states comprise the nodes of a graph, only some of which allow a transmembrane current to pass. The stochastic shielding approximation consists of neglecting fluctuations in the dynamics associated with edges in the graph not directly affecting the observable states. We consider the problem of finding the optimal complexity reducing mapping from a stochastic process on a graph to an approximate process on a smaller sample space, as determined by the choice of a particular linear measurement functional on the graph. The partitioning of ion-channel states into conducting versus nonconducting states provides a case in point. In addition to establishing that Schmandt and Galán’s approximation is in fact optimal in a specific sense, we use recent results from random matrix theory to provide heuristic error estimates for the accuracy of the stochastic shielding approximation for an ensemble of random graphs. Moreover, we provide a novel quantitative measure of the contribution of individual transitions within the reaction graph to the accuracy of the approximate process. PMID:24742077

  14. On stochastic FEM based computational homogenization of magneto-active heterogeneous materials with random microstructure

    NASA Astrophysics Data System (ADS)

    Pivovarov, Dmytro; Steinmann, Paul

    2016-12-01

    In the current work we apply the stochastic version of the FEM to the homogenization of magneto-elastic heterogeneous materials with random microstructure. The main aim of this study is to capture accurately the discontinuities appearing at matrix-inclusion interfaces. We demonstrate and compare three different techniques proposed in the literature for the purely mechanical problem, i.e. global, local and enriched stochastic basis functions. Moreover, we demonstrate the implementation of the isoparametric concept in the enlarged physical-stochastic product space. The Gauss integration rule in this multidimensional space is discussed. In order to design a realistic stochastic Representative Volume Element we analyze actual scans obtained by electron microscopy and provide numerical studies of the micro particle distribution. The SFEM framework described in our previous work (Pivovarov and Steinmann in Comput Mech 57(1): 123-147, 2016) is extended to the case of the magneto-elastic materials. To this end, the magneto-elastic energy function is used, and the corresponding hyper-tensors of the magneto-elastic problem are introduced. In order to estimate the methods' accuracy we performed a set of simulations for elastic and magneto-elastic problems using three different SFEM modifications. All results are compared with "brute-force" Monte-Carlo simulations used as reference solution.

  15. Quadruped Robot Locomotion using a Global Optimization Stochastic Algorithm

    NASA Astrophysics Data System (ADS)

    Oliveira, Miguel; Santos, Cristina; Costa, Lino; Ferreira, Manuel

    2011-09-01

Tuning the parameters of nonlinear dynamical systems so that the attained results are good ones is a relevant problem. This article describes the development of a gait optimization system that allows a fast but stable robot quadruped crawl gait. We combine bio-inspired Central Pattern Generators (CPGs) and Genetic Algorithms (GAs). CPGs are modelled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. The GA finds parameterizations of the CPG parameters which attain good gaits in terms of speed, vibration and stability. Moreover, two constraint handling techniques based on tournament selection and a repairing mechanism are embedded in the GA to solve the proposed constrained optimization problem and make the search more efficient. The experimental results, performed on a simulated Aibo robot, demonstrate that our approach allows low vibration with a high velocity and a wide stability margin for a quadruped slow crawl gait.

  16. Universal stochastic series expansion algorithm for Heisenberg model and Bose-Hubbard model with interaction.

    PubMed

    Zyubin, M V; Kashurnikov, V A

    2004-03-01

    We propose a universal stochastic series expansion (SSE) method for the simulation of the Heisenberg model with arbitrary spin and the Bose-Hubbard model with interaction. We report the calculations involving soft-core bosons with interaction by the SSE method. Moreover, we develop a simple procedure for increased efficiency of the algorithm. From calculation of integrated autocorrelation times we conclude that the method is efficient for both models and essentially eliminates the critical slowing down problem.

  17. Adaptive and Distributed Algorithms for Vehicle Routing in a Stochastic and Dynamic Environment

    DTIC Science & Technology

    2010-11-18

stochastic and dynamic vehicle routing problems," PhD Thesis, Dept. of Civil and Environmental Engineering, Massachusetts Institute of Technology ... Technology (MIT), Cambridge, in 2001. From 2001 to 2004, he was an Assistant Professor of aerospace engineering at the University of Illinois at Urbana... system. The general problem is known as the m-vehicle Dynamic Traveling Repairman Problem (m-DTRP). The best previously known control algorithms rely on

  18. Analytic and Algorithmic Solution of Random Satisfiability Problems

    NASA Astrophysics Data System (ADS)

    Mézard, M.; Parisi, G.; Zecchina, R.

    2002-08-01

    We study the satisfiability of random Boolean expressions built from many clauses with K variables per clause (K-satisfiability). Expressions with a ratio α of clauses to variables less than a threshold αc are almost always satisfiable, whereas those with a ratio above this threshold are almost always unsatisfiable. We show the existence of an intermediate phase below αc, where the proliferation of metastable states is responsible for the onset of complexity in search algorithms. We introduce a class of optimization algorithms that can deal with these metastable states; one such algorithm has been tested successfully on the largest existing benchmark of K-satisfiability.
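The sharp clause-to-variable threshold described in this abstract can be observed empirically even at toy scales. The following sketch (a brute-force illustration, not the authors' message-passing-style optimization algorithm; variable counts and trial numbers are chosen arbitrarily) estimates the satisfiability probability of random 3-SAT instances well below and well above the K=3 threshold αc ≈ 4.27:

```python
import random
from itertools import product

def random_3sat(n_vars, n_clauses, rng):
    # Each clause: 3 distinct variables, each negated with probability 1/2.
    return [[(v + 1) * rng.choice((-1, 1))
             for v in rng.sample(range(n_vars), 3)]
            for _ in range(n_clauses)]

def is_satisfiable(n_vars, clauses):
    # Brute force over all 2^n assignments (only viable for small n).
    for bits in product((False, True), repeat=n_vars):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return True
    return False

def sat_fraction(alpha, n_vars=10, trials=10, seed=0):
    rng = random.Random(seed)
    hits = sum(is_satisfiable(n_vars, random_3sat(n_vars, int(alpha * n_vars), rng))
               for _ in range(trials))
    return hits / trials

low, high = sat_fraction(2.0), sat_fraction(7.0)
print(low, high)  # mostly SAT below the threshold, mostly UNSAT above it
```

Even at n = 10 the finite-size transition is visible; the intermediate clustered phase the paper analyzes only shows up in the running time of smarter search algorithms, not in this exhaustive check.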

  19. Simple-random-sampling-based multiclass text classification algorithm.

    PubMed

    Liu, Wuying; Wang, Lin; Yi, Mianzhu

    2014-01-01

Multiclass text classification (MTC) is a challenging issue and the corresponding MTC algorithms can be used in many applications. The space-time overhead of such algorithms is a major concern in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements.

  20. Genetic Algorithm and Tabu Search for Vehicle Routing Problems with Stochastic Demand

    NASA Astrophysics Data System (ADS)

    Ismail, Zuhaimy; Irhamah

    2010-11-01

This paper presents a problem of designing solid waste collection routes, involving the scheduling of vehicles where each vehicle begins at the depot, visits customers and ends at the depot. It is modeled as a Vehicle Routing Problem with Stochastic Demands (VRPSD). A data set from a real-world waste collection case is used in this research. We developed Genetic Algorithm (GA) and Tabu Search (TS) procedures, and these have produced the best possible results. Results from the experiments show the advantages of the proposed algorithms, namely their robustness and better solution quality.

  1. SOS! An algorithm and software for the stochastic optimization of stimuli.

    PubMed

    Armstrong, Blair C; Watson, Christine E; Plaut, David C

    2012-09-01

    The characteristics of the stimuli used in an experiment critically determine the theoretical questions the experiment can address. Yet there is relatively little methodological support for selecting optimal sets of items, and most researchers still carry out this process by hand. In this research, we present SOS, an algorithm and software package for the stochastic optimization of stimuli. SOS takes its inspiration from a simple manual stimulus selection heuristic that has been formalized and refined as a stochastic relaxation search. The algorithm rapidly and reliably selects a subset of possible stimuli that optimally satisfy the constraints imposed by an experimenter. This allows the experimenter to focus on selecting an optimization problem that suits his or her theoretical question and to avoid the tedious task of manually selecting stimuli. We detail how this optimization algorithm, combined with a vocabulary of constraints that define optimal sets, allows for the quick and rigorous assessment and maximization of the internal and external validity of experimental items. In doing so, the algorithm facilitates research using factorial, multiple/mixed-effects regression, and other experimental designs. We demonstrate the use of SOS with a case study and discuss other research situations that could benefit from this tool. Support for the generality of the algorithm is demonstrated through Monte Carlo simulations on a range of optimization problems faced by psychologists. The software implementation of SOS and a user manual are provided free of charge for academic purposes as precompiled binaries and MATLAB source files at http://sos.cnbc.cmu.edu.
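The core idea, formalizing the manual swap-items-until-matched heuristic as a stochastic relaxation, can be sketched in a few lines. This is a simplified greedy variant rather than the published SOS package (which uses an annealed acceptance rule and a richer constraint vocabulary); the item pool and the single mean-matching constraint are invented for illustration:

```python
import random

def optimize_subset(pool, k, target_mean, iters=2000, seed=1):
    """Stochastic relaxation: swap items in/out of the selected set so that
    its mean matches a target (one toy constraint; SOS supports many)."""
    rng = random.Random(seed)
    chosen = rng.sample(range(len(pool)), k)
    rest = [i for i in range(len(pool)) if i not in chosen]
    cost = lambda idx: abs(sum(pool[i] for i in idx) / k - target_mean)
    best = cost(chosen)
    for _ in range(iters):
        a, b = rng.randrange(k), rng.randrange(len(rest))
        chosen[a], rest[b] = rest[b], chosen[a]  # propose swapping one item
        c = cost(chosen)
        if c <= best:
            best = c                              # accept: violation decreased
        else:
            chosen[a], rest[b] = rest[b], chosen[a]  # reject: undo the swap
    return [pool[i] for i in chosen], best

rng = random.Random(0)
pool = [rng.gauss(5.0, 2.0) for _ in range(200)]   # e.g., log word frequencies
items, err = optimize_subset(pool, k=20, target_mean=5.5)
print(err)  # residual constraint violation, typically near zero
```

Replacing the greedy acceptance with a temperature-dependent probabilistic one turns this into the stochastic relaxation search the abstract describes.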

  2. Relationship between clustering and algorithmic phase transitions in the random k-XORSAT model and its NP-complete extensions

    NASA Astrophysics Data System (ADS)

    Altarelli, F.; Monasson, R.; Zamponi, F.

    2008-01-01

We study the performance of stochastic heuristic search algorithms on Uniquely Extendible Constraint Satisfaction Problems with random inputs. We show that, for any heuristic preserving the Poissonian nature of the underlying instance, the (heuristic-dependent) largest ratio αa of constraints per variable for which a search algorithm is likely to find solutions is smaller than the critical ratio αd above which solutions are clustered and highly correlated. In addition, we show that the clustering ratio can be reached, when the number k of variables per constraint goes to infinity, by the so-called Generalized Unit Clause heuristic.

  3. Minimal representation of matrix valued white stochastic processes and U-D factorisation of algorithms for optimal control

    NASA Astrophysics Data System (ADS)

    Van Willigenburg, L. Gerard; De Koning, Willem L.

    2013-02-01

Two different descriptions are used in the literature to formulate the optimal dynamic output feedback control problem for linear dynamical systems with white stochastic parameters and quadratic criteria, called the optimal compensation problem. One describes the matrix-valued white stochastic processes involved using a sum of deterministic matrices, each one multiplied by a scalar stochastic process that is independent of the others. Another, which is more general and concise, uses Kronecker products instead. This article relates the statistics of both descriptions and shows their advantages and disadvantages. As to the first description, an important result is the minimum number of matrices multiplied by scalar, independent stochastic processes needed to represent a certain matrix-valued white stochastic process, together with an associated minimal representation. As to the second description, an important result concerns the generation of all Kronecker products that represent relevant statistics. Both results facilitate the specification of statistics of systems with white stochastic parameters. The second part of this article further exploits these results to perform a U-D factorisation of an algorithm to compute optimal dynamic output feedback controllers (optimal compensators) for linear discrete-time systems with white stochastic parameters and quadratic sum criteria. U-D factorisation of this type of algorithm is new. By solving several numerical examples, the U-D factored algorithm is compared with a conventional algorithm.

  4. Monotonic continuous-time random walks with drift and stochastic reset events

    NASA Astrophysics Data System (ADS)

    Montero, Miquel; Villarroel, Javier

    2013-01-01

In this paper we consider a stochastic process that may experience random reset events which suddenly bring the system back to the starting value, and we analyze the relevant statistical magnitudes. We focus our attention on monotonic continuous-time random walks with a constant drift: the process increases between the reset events, either by the effect of the random jumps or by the action of the deterministic drift. As a result of all these combined factors, interesting properties emerge, like the existence (for any drift strength) of a stationary transition probability density function, or the ability of the model to reproduce power-law-like behavior. General formulas for two extreme statistics, the survival probability and the mean exit time, are also derived. To corroborate the results of the paper in an independent way, Monte Carlo methods were used. These numerical estimations are in full agreement with the analytical predictions.
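The existence of a stationary density under Poissonian resets is easy to check numerically. In the sketch below (parameter values and the exponential jump law are illustrative choices, not taken from the paper), the walker drifts at speed v, makes jumps of mean size μ at rate λ, and resets to the origin at rate r; since the age of the current reset cycle is itself Exp(r)-distributed, stationary draws can be sampled directly, and their mean should approach (v + λμ)/r:

```python
import math
import random

def poisson(mean, rng):
    # Knuth's product method; adequate for the small means used here (mean > 0).
    L, k, p = math.exp(-mean), 0, 1.0
    while p > L:
        p *= rng.random()
        k += 1
    return k - 1

def stationary_sample(v, lam, mu, r, rng):
    """Draw from the stationary law of a drift + jump process with
    Poissonian resets by conditioning on the age tau ~ Exp(r) of the cycle."""
    tau = rng.expovariate(r)                       # time since the last reset
    n_jumps = poisson(lam * tau, rng)              # jumps accumulated since then
    jumps = sum(rng.expovariate(1.0 / mu) for _ in range(n_jumps))
    return v * tau + jumps

rng = random.Random(42)
v, lam, mu, r = 1.0, 2.0, 0.5, 1.0
samples = [stationary_sample(v, lam, mu, r, rng) for _ in range(20000)]
est = sum(samples) / len(samples)
print(est)  # theory: (v + lam * mu) / r = 2.0
```

The same trick (sampling the backward recurrence time instead of simulating whole trajectories) is what makes the stationary density tractable analytically in the paper.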

  5. Bridges for Pedestrians with Random Parameters using the Stochastic Finite Elements Analysis

    NASA Astrophysics Data System (ADS)

    Szafran, J.; Kamiński, M.

    2017-02-01

The main aim of this paper is to present a Stochastic Finite Element Method analysis of the principal design parameters of bridges for pedestrians: the eigenfrequency and the deflection of the bridge span. They are considered with respect to the random thickness of plates in the boxed-section bridge platform, the Young's modulus of structural steel, and the static load resulting from a crowd of pedestrians. The influence of the quality of the numerical model in the context of traditional FEM is also shown on the example of a simple steel shield. Steel structures with random parameters are discretized in exactly the same way as for the needs of the traditional Finite Element Method. Its probabilistic version is provided thanks to the Response Function Method, where several numerical tests with random parameter values varying around their mean value enable the determination of the structural response and, thanks to the Least Squares Method, its final probabilistic moments.

  6. A non-stochastic Coulomb collision algorithm for particle-in-cell methods

    NASA Astrophysics Data System (ADS)

    Chen, Guangye; Chacon, Luis

    2016-10-01

Coulomb collision modules in PIC simulations are typically Monte-Carlo-based. Monte Carlo is attractive for its simplicity, efficiency in high dimensions, and conservation properties. However, it is noisy, of low temporal order (typically O(√Δt)), and has to resolve the collision frequency for accuracy. In this study, we explore a non-stochastic, multiscale alternative to Monte Carlo for PIC. The approach is based on a Green-function-based reformulation of the Vlasov-Fokker-Planck equation, which can be readily incorporated in modern multiscale collisionless PIC algorithms. An asymptotic-preserving operator splitting approach allows the collisional step to be treated independently from the particles while preserving the multiscale character of the method. A significant element of novelty in our algorithm is the use of a machine learning algorithm that avoids a velocity space mesh for the collision step. The resulting algorithm is non-stochastic and first-order accurate in time. We will demonstrate the method with several relaxation examples.

  7. The Separatrix Algorithm for synthesis and analysis of stochastic simulations with applications in disease modeling.

    PubMed

    Klein, Daniel J; Baym, Michael; Eckhoff, Philip

    2014-01-01

    Decision makers in epidemiology and other disciplines are faced with the daunting challenge of designing interventions that will be successful with high probability and robust against a multitude of uncertainties. To facilitate the decision making process in the context of a goal-oriented objective (e.g., eradicate polio by [Formula: see text]), stochastic models can be used to map the probability of achieving the goal as a function of parameters. Each run of a stochastic model can be viewed as a Bernoulli trial in which "success" is returned if and only if the goal is achieved in simulation. However, each run can take a significant amount of time to complete, and many replicates are required to characterize each point in parameter space, so specialized algorithms are required to locate desirable interventions. To address this need, we present the Separatrix Algorithm, which strategically locates parameter combinations that are expected to achieve the goal with a user-specified probability of success (e.g. 95%). Technically, the algorithm iteratively combines density-corrected binary kernel regression with a novel information-gathering experiment design to produce results that are asymptotically correct and work well in practice. The Separatrix Algorithm is demonstrated on several test problems, and on a detailed individual-based simulation of malaria.

  8. Single realization stochastic FDTD for weak scattering waves in biological random media.

    PubMed

    Tan, Tengmeng; Taflove, Allen; Backman, Vadim

    2013-02-01

This paper introduces an iterative scheme to overcome the unresolved issues in S-FDTD (stochastic finite-difference time-domain) for obtaining ensemble average field values, recently reported by Smith and Furse in an attempt to replace the brute-force multiple-realization (Monte Carlo) approach with a single-realization scheme. Our formulation is particularly useful for studying light interactions with biological cells and tissues having sub-wavelength scale features. Numerical results demonstrate that such small-scale variation can be effectively modeled as a random medium problem which, when simulated with the proposed S-FDTD, indeed produces a very accurate result.

  9. Dynamics of the stochastic Leslie-Gower predator-prey system with randomized intrinsic growth rate

    NASA Astrophysics Data System (ADS)

    Zhao, Dianli; Yuan, Sanling

    2016-11-01

This paper investigates the stochastic Leslie-Gower predator-prey system with randomized intrinsic growth rate. Existence of a unique global positive solution is proved first. Then we obtain sufficient conditions for permanence in mean and almost sure extinction of the system. Furthermore, the stationary distribution is derived based on the positive equilibrium of the deterministic model, which shows that the population is not only persistent but also convergent in time average under some assumptions. Finally, we illustrate our conclusions through two examples.
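A positivity-preserving Euler-Maruyama simulation of a stochastic Leslie-Gower model with randomized growth rates might look like the sketch below. The drift terms follow a generic modified Leslie-Gower form and all parameter values are invented for illustration, so this is not the exact system analyzed in the paper; integrating in log coordinates keeps both populations strictly positive, mirroring the global positive solution result:

```python
import math
import random

def simulate(T=50.0, dt=0.001, seed=7):
    """Euler-Maruyama in log coordinates for a modified Leslie-Gower system.
    s1, s2 randomize the intrinsic growth rates (illustrative parameters)."""
    a, b, c, k1 = 1.0, 0.1, 0.5, 1.0      # prey: growth, crowding, predation
    r2, a2, k2 = 0.5, 0.8, 1.0            # predator: growth, Leslie-Gower ratio
    s1, s2 = 0.2, 0.2                     # noise intensities on the growth rates
    rng = random.Random(seed)
    lx, ly = math.log(2.0), math.log(1.0)  # log populations
    sq = math.sqrt(dt)
    for _ in range(int(T / dt)):
        x, y = math.exp(lx), math.exp(ly)
        # Ito correction -s**2/2 appears because we integrate log variables.
        lx += (a - b * x - c * y / (x + k1) - 0.5 * s1 ** 2) * dt \
              + s1 * sq * rng.gauss(0.0, 1.0)
        ly += (r2 - a2 * y / (x + k2) - 0.5 * s2 ** 2) * dt \
              + s2 * sq * rng.gauss(0.0, 1.0)
    return math.exp(lx), math.exp(ly)

x_T, y_T = simulate()
print(x_T, y_T)  # both strictly positive by construction
```

With these (assumed) parameters the deterministic equilibrium is interior, so sample paths fluctuate around it, which is the regime where a stationary distribution is expected.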

  10. Stochastic resonance in a fractional harmonic oscillator subject to random mass and signal-modulated noise

    NASA Astrophysics Data System (ADS)

    Guo, Feng; Zhu, Cheng-Yin; Cheng, Xiao-Feng; Li, Heng

    2016-10-01

Stochastic resonance in a fractional harmonic oscillator with random mass and signal-modulated noise is investigated. Applying linear system theory and the characteristics of the noises, the analytical expression of the mean output-amplitude-gain (OAG) is obtained. It is shown that the OAG varies non-monotonically with the intensity of the multiplicative dichotomous noise, with the frequency of the driving force, and with the system frequency. In addition, the OAG is a non-monotonic function of the system friction coefficient, of the viscous damping coefficient, and of the fractional exponent.

  11. MOQA min-max heapify: A randomness preserving algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Ang; Hennessy, Aoife; Schellekens, Michel

    2012-09-01

MOQA is a high-level data structuring language, designed to allow for modular static timing analysis [1, 2, 3]. In essence, MOQA allows the programmer to determine the average running time of a broad class of programs directly from the code in a (semi-)automated way. The modularity property brings a strong advantage for the programmer. The capacity to combine parts of code, where the average time is simply the sum of the times of the parts, is a very helpful advantage in static analysis, something which is not available in current languages. Modularity also improves the precision of average-case analysis, supporting the determination of accurate estimates on the average number of basic operations of MOQA programs. The mathematical theory underpinning this approach is that of random structures and their preservation. Applying any MOQA operation to all elements of a random structure results in an output isomorphic to one or more random structures, which is the key to systematic timing. Here we introduce the approach in a self-contained way and provide a MOQA version of the well-known min-max heapify algorithm, constructed with the MOQA product operation. We demonstrate the "randomness preservation" property of the algorithm and illustrate the applicability of our method by deriving its exact average time.

  12. Variational mean-field algorithm for efficient inference in large systems of stochastic differential equations.

    PubMed

    Vrettas, Michail D; Opper, Manfred; Cornford, Dan

    2015-01-01

This work introduces a Gaussian variational mean-field approximation for inference in dynamical systems which can be modeled by ordinary stochastic differential equations. This new approach allows one to express the variational free energy as a functional of the marginal moments of the approximating Gaussian process. A restriction of the moment equations to piecewise polynomial functions, over time, dramatically reduces the complexity of approximate inference for stochastic differential equation models and makes it comparable to that of discrete time hidden Markov models. The algorithm is demonstrated on state and parameter estimation for nonlinear problems with up to 1000-dimensional state vectors and compares the results empirically with various well-known inference methodologies.

  13. Stochastic simulation for the propagation of high-frequency acoustic waves through a random velocity field

    NASA Astrophysics Data System (ADS)

    Lu, B.; Darmon, M.; Leymarie, N.; Chatillon, S.; Potel, C.

    2012-05-01

In-service inspection of Sodium-Cooled Fast Reactors (SFR) requires the development of non-destructive techniques adapted to the harsh environment conditions and the complexity of the examination. From past experience, ultrasonic techniques are considered suitable candidates. Ultrasonic telemetry is a technique used to constantly ensure the safe functioning of reactor inner components by determining their exact position: it consists in measuring the time of flight of the ultrasonic response obtained after propagation of a pulse emitted by a transducer and its interaction with the targets. While in service, the sodium flow creates turbulence that leads to temperature inhomogeneities, which translate into ultrasonic velocity inhomogeneities. These velocity variations could directly impact the accuracy of target location by introducing time-of-flight variations. A stochastic simulation model has been developed to calculate the propagation of ultrasonic waves in such an inhomogeneous medium. Using this approach, the travel time is randomly generated by a stochastic process whose inputs are the statistical moments of travel times, known analytically. The stochastic model predicts beam deviations due to velocity inhomogeneities which are similar to those provided by a deterministic method, such as the ray method.

  14. Deterministic and stochastic algorithms for resolving the flow fields in ducts and networks using energy minimization

    NASA Astrophysics Data System (ADS)

    Sochi, Taha

    2016-09-01

Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with the energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all these algorithms for all these types of fluid agree very well with the analytically derived solutions obtained from the traditional methods, which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.

  15. R-leaping: accelerating the stochastic simulation algorithm by reaction leaps.

    PubMed

    Auger, Anne; Chatelain, Philippe; Koumoutsakos, Petros

    2006-08-28

    A novel algorithm is proposed for the acceleration of the exact stochastic simulation algorithm by a predefined number of reaction firings (R-leaping) that may occur across several reaction channels. In the present approach, the numbers of reaction firings are correlated binomial distributions and the sampling procedure is independent of any permutation of the reaction channels. This enables the algorithm to efficiently handle large systems with disparate rates, providing substantial computational savings in certain cases. Several mechanisms for controlling the accuracy and the appearance of negative species are described. The advantages and drawbacks of R-leaping are assessed by simulations on a number of benchmark problems and the results are discussed in comparison with established methods.
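The essence of R-leaping, firing a predefined number L of reactions per step with a Gamma-distributed time increment and splitting the firings across channels by conditional binomials, can be sketched on a toy A → B → ∅ network. This is a simplified illustration with invented rates; the paper's permutation-independent correlated-binomial sampling and its negative-population controls are reduced here to a crude clamp:

```python
import random

def binomial(n, p, rng):
    return sum(rng.random() < p for _ in range(n))  # fine for small n

def r_leap(x_a, x_b, c1, c2, L, T, rng):
    """Advance A -> B (propensity c1*A) and B -> 0 (propensity c2*B) to
    time T, firing L reactions per leap."""
    t = 0.0
    while t < T:
        a1, a2 = c1 * x_a, c2 * x_b
        a0 = a1 + a2
        if a0 == 0.0:
            break
        steps = min(L, x_a + x_b)               # crude negative-population guard
        t += rng.gammavariate(steps, 1.0 / a0)  # time to fire `steps` reactions
        k1 = min(binomial(steps, a1 / a0, rng), x_a)  # channel-1 firings
        k2 = min(steps - k1, x_b)                     # channel-2 firings
        x_a -= k1
        x_b += k1 - k2
    return x_a, x_b

rng = random.Random(3)
runs = [r_leap(1000, 0, c1=1.0, c2=0.5, L=10, T=0.5, rng=rng) for _ in range(200)]
mean_a = sum(a for a, _ in runs) / len(runs)
print(mean_a)  # ODE mean: 1000 * exp(-0.5) ~ 607
```

Each leap advances time by roughly L/a0, so one leap replaces L exact-SSA events; choosing L adaptively is where the accuracy-control mechanisms of the paper come in.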

  16. R-leaping: Accelerating the stochastic simulation algorithm by reaction leaps

    NASA Astrophysics Data System (ADS)

    Auger, Anne; Chatelain, Philippe; Koumoutsakos, Petros

    2006-08-01

    A novel algorithm is proposed for the acceleration of the exact stochastic simulation algorithm by a predefined number of reaction firings (R-leaping) that may occur across several reaction channels. In the present approach, the numbers of reaction firings are correlated binomial distributions and the sampling procedure is independent of any permutation of the reaction channels. This enables the algorithm to efficiently handle large systems with disparate rates, providing substantial computational savings in certain cases. Several mechanisms for controlling the accuracy and the appearance of negative species are described. The advantages and drawbacks of R-leaping are assessed by simulations on a number of benchmark problems and the results are discussed in comparison with established methods.

  17. Application of stochastic weighted algorithms to a multidimensional silica particle model

    SciTech Connect

    Menz, William J.; Patterson, Robert I.A.; Wagner, Wolfgang; Kraft, Markus

    2013-09-01

Highlights: •Stochastic weighted algorithms (SWAs) are developed for a detailed silica model. •An implementation of SWAs with the transition kernel is presented. •The SWAs’ solutions converge to the direct simulation algorithm’s (DSA) solution. •The efficiency of SWAs is evaluated for this multidimensional particle model. •It is shown that SWAs can be used for coagulation problems in industrial systems. -- Abstract: This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83–98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using a large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high dimensional particle models to simulate real-world systems.

  18. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

    NASA Astrophysics Data System (ADS)

    Alfonso, Lester; Zamora, Jose; Cruz, Pedro

    2015-04-01

The stochastic approach to coagulation considers the coalescence process in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial condition is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with an excellent correspondence between the analytical and numerical solutions. In order to increase the speedup of the algorithm, software parallelization techniques with the OpenMP standard were used, along with an implementation that takes advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.

  19. Accelerating the Gillespie Exact Stochastic Simulation Algorithm using hybrid parallel execution on graphics processing units.

    PubMed

    Komarov, Ivan; D'Souza, Roshan M

    2012-01-01

    The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques to simulate reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even very efficient variants of GSSA are prohibitively expensive to compute and perform parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data-structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
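For reference, the exact (serial) direct method that such GPU variants accelerate fits in a few lines. The sketch below is a toy example, not the authors' GPU code: a birth-death process ∅ → A at rate k and A → ∅ at rate g·A, whose stationary distribution is Poisson with mean k/g:

```python
import random

def gillespie_birth_death(k, g, T, seed=0):
    """Exact SSA (direct method) for 0 -> A at rate k, A -> 0 at rate g*A.
    Returns the time-average of A over [0, T]."""
    rng = random.Random(seed)
    t, x, area = 0.0, 0, 0.0
    while t < T:
        a1, a2 = k, g * x              # channel propensities
        a0 = a1 + a2
        tau = min(rng.expovariate(a0), T - t)  # time to next reaction
        area += x * tau                # accumulate for the time average
        t += tau
        if t >= T:
            break
        if rng.random() * a0 < a1:     # pick a channel proportionally
            x += 1
        else:
            x -= 1
    return area / T

avg = gillespie_birth_death(k=50.0, g=1.0, T=200.0)
print(avg)  # stationary mean k/g = 50
```

Every iteration of this loop is sequential, which is exactly the fine-grained dependency the warp-level parallelization in the paper has to work around.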

  20. Expeditious Stochastic Calculation of Random-Phase Approximation Energies for Thousands of Electrons in Three Dimensions.

    PubMed

    Neuhauser, Daniel; Rabani, Eran; Baer, Roi

    2013-04-04

    A fast method is developed for calculating the random phase approximation (RPA) correlation energy for density functional theory. The correlation energy is given by a trace over a projected RPA response matrix, and the trace is taken by a stochastic approach using random perturbation vectors. For a fixed statistical error in the total energy per electron, the method scales, at most, quadratically with the system size; however, in practice, due to self-averaging, it requires less statistical sampling as the system grows, and the performance is close to linear scaling. We demonstrate the method by calculating the RPA correlation energy for cadmium selenide and silicon nanocrystals with over 1500 electrons. We find that the RPA correlation energies per electron are largely independent of the nanocrystal size. In addition, we show that a correlated sampling technique enables calculation of the energy difference between two slightly distorted configurations with scaling and a statistical error similar to that of the total energy per electron.
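The stochastic-trace idea at the heart of this approach can be illustrated generically: for random ±1 probe vectors v with E[vvᵀ] = I, E[vᵀMv] = tr(M), so the trace can be estimated from matrix-vector products alone. The sketch below is a plain Hutchinson estimator on a small dense matrix, not the authors' RPA response-matrix code:

```python
import random

def hutchinson_trace(matvec, n, n_probes, seed=0):
    """Estimate tr(M) from matrix-vector products with random +/-1 probes."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_probes):
        v = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        mv = matvec(v)
        total += sum(vi * mvi for vi, mvi in zip(v, mv))  # v^T M v
    return total / n_probes

# Toy symmetric matrix with known trace (all diagonal entries are 1).
n = 50
M = [[1.0 / (1 + abs(i - j)) for j in range(n)] for i in range(n)]
exact = sum(M[i][i] for i in range(n))  # = 50.0
estimate = hutchinson_trace(lambda v: [sum(M[i][j] * v[j] for j in range(n))
                                       for i in range(n)], n, n_probes=400)
print(exact, estimate)
```

The statistical error scales with the off-diagonal mass of M, not with n, which is the source of the near-linear scaling and the self-averaging behavior the abstract reports.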

  1. A Bloch decomposition-based stochastic Galerkin method for quantum dynamics with a random external potential

    SciTech Connect

    Wu, Zhizhang Huang, Zhongyi

    2016-07-15

    In this paper, we consider the numerical solution of the one-dimensional Schrödinger equation with a periodic lattice potential and a random external potential. This is an important model in solid state physics where the randomness results from complicated phenomena that are not exactly known. Here we generalize the Bloch decomposition-based time-splitting pseudospectral method to the stochastic setting using the generalized polynomial chaos with a Galerkin procedure so that the main effects of dispersion and periodic potential are still computed together. We prove that our method is unconditionally stable and numerical examples show that it has other nice properties and is more efficient than the traditional method. Finally, we give some numerical evidence for the well-known phenomenon of Anderson localization.

  2. A method to dynamic stochastic multicriteria decision making with log-normally distributed random variables.

    PubMed

    Wang, Xin-Fan; Wang, Jian-Qiang; Deng, Sheng-Yue

    2013-01-01

We investigate dynamic stochastic multicriteria decision making (SMCDM) problems in which the criterion values take the form of log-normally distributed random variables and the argument information is collected from different periods. We propose two new geometric aggregation operators, namely the log-normal distribution weighted geometric (LNDWG) operator and the dynamic log-normal distribution weighted geometric (DLNDWG) operator, and develop a method for dynamic SMCDM with log-normally distributed random variables. This method uses the DLNDWG and LNDWG operators to aggregate the log-normally distributed criterion values, uses Shannon's entropy model to generate the time weight vector, and uses the expectation values and variances of the log-normal distributions to rank the alternatives and select the best one. Finally, an example is given to illustrate the feasibility and effectiveness of the developed method.
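Weighted geometric aggregation of independent log-normal variables has a convenient closed form, which is the general fact that LNDWG-style operators rest on (an illustrative sketch; the function names are not from the paper):

```python
import math

def lognormal_weighted_geometric(params, weights):
    """Weighted geometric aggregation of independent log-normal variables:
    if X_i ~ LN(mu_i, sigma_i^2), then prod_i X_i**w_i is log-normal with
    mu = sum_i w_i * mu_i and sigma^2 = sum_i (w_i * sigma_i)**2."""
    mu = sum(w * m for (m, s), w in zip(params, weights))
    var = sum((w * s) ** 2 for (m, s), w in zip(params, weights))
    return mu, var

def ln_mean(mu, var):
    """Expectation of LN(mu, var), usable for ranking alternatives."""
    return math.exp(mu + var / 2.0)

# two criteria given as (mu, sigma) pairs, with weights 0.6 and 0.4
mu, var = lognormal_weighted_geometric([(1.0, 0.2), (2.0, 0.4)], [0.6, 0.4])
```

Ranking by `ln_mean` (and breaking ties by variance) mirrors the expectation/variance ranking step described in the abstract.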

  3. Fractal and stochastic geometry inference for breast cancer: a case study with random fractal models and Quermass-interaction process.

    PubMed

    Hermann, Philipp; Mrkvička, Tomáš; Mattfeldt, Torsten; Minárová, Mária; Helisová, Kateřina; Nicolis, Orietta; Wartner, Fabian; Stehlík, Milan

    2015-08-15

Fractals are models of natural processes with many applications in medicine. Recent studies in medicine show that fractals can be applied to cancer detection and to describing the pathological architecture of tumors. This is not surprising, as cancerous cells, owing to their irregular structure, can be interpreted as fractals. Inspired by the Sierpinski carpet, we introduce a flexible parametric model of random carpets, where randomization is introduced through binomial random variables. We provide an algorithm for estimating the model parameters and illustrate theoretical and practical issues in the generation of Sierpinski gaskets and in Hausdorff measure calculations. Stochastic geometry models can also serve as models for binary cancer images. Recently, a Boolean model was applied to 200 images of mammary cancer tissue and 200 images of mastopathic tissue. Here, we describe the Quermass-interaction process, which can handle much more variation in the cancer data, and apply it to the images. We found that mastopathic tissue deviates significantly more strongly than mammary cancer tissue from the Quermass-interaction process, which describes interactions among particles. The Quermass-interaction process serves as a model for tissue whose structure is broken down to a certain level, whereas the random fractal model fits mastopathic tissue well. We also provide a novel method for discriminating between mastopathic and mammary cancer tissue, based on a complex wavelet-based self-similarity measure related to the Hurst exponent and fractional Brownian motion, with classification rates above 80%. The R package FractalParameterEstimation is developed and introduced in the paper.
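A minimal sketch of a binomially randomized Sierpinski carpet in the spirit described above (illustrative only; the paper's parametric model and its estimation algorithm are not reproduced here):

```python
import random

def random_carpet(level, p=0.9, seed=0):
    """Randomized Sierpinski carpet: recursively split each square into a
    3x3 grid, always drop the centre, and keep each of the 8 remaining
    sub-squares independently with probability p (a Bernoulli/binomial
    randomization of the classical construction)."""
    rng = random.Random(seed)
    squares = [(0.0, 0.0, 1.0)]          # (x, y, side length)
    for _ in range(level):
        nxt = []
        for (x, y, s) in squares:
            s3 = s / 3.0
            for i in range(3):
                for j in range(3):
                    if i == 1 and j == 1:
                        continue         # centre square always removed
                    if rng.random() < p:
                        nxt.append((x + i * s3, y + j * s3, s3))
        squares = nxt
    return squares

carpet = random_carpet(3, p=1.0)   # p = 1 recovers the deterministic carpet
sparse = random_carpet(2, p=0.5)   # a random sub-carpet
```

At each level the number of retained sub-squares per parent square is Binomial(8, p), which is the sense in which binomial random variables drive the randomization.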

  4. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    PubMed Central

    Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-01-01

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions. PMID:25615734

  5. A multi-sensor RSS spatial sensing-based robust stochastic optimization algorithm for enhanced wireless tethering.

    PubMed

    Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-12-12

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the "server-relay-client" framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.
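The core positioning idea, stochastic gradient ascent on a smoothed, noisy RSS signal, can be sketched as follows (a toy 2D stand-in with a hypothetical path-loss field; this is not the authors' RSO algorithm or CERN setup):

```python
import math
import random

def rss_at(pos):
    """Hypothetical noisy RSS field (dBm): log-distance path loss from a
    server at (0, 0) and a client at (10, 0); the relay's effective RSS is
    the weaker of the two links, so the optimum lies in between."""
    def pl(src):
        d = max(math.dist(pos, src), 0.1)
        return -40.0 - 20.0 * math.log10(d)
    return min(pl((0.0, 0.0)), pl((10.0, 0.0))) + random.gauss(0.0, 0.5)

def relay_position(steps=400, lr=0.3, eps=0.5, alpha=0.2, seed=1):
    """Stochastic gradient ascent on the sampled RSS, with finite-difference
    gradients and exponential-moving-average smoothing of the samples."""
    random.seed(seed)
    pos = [2.0, 3.0]
    ema = rss_at(pos)
    for _ in range(steps):
        grad = []
        for k in range(2):
            hi, lo = pos[:], pos[:]
            hi[k] += eps
            lo[k] -= eps
            grad.append((rss_at(hi) - rss_at(lo)) / (2.0 * eps))
        pos = [p + lr * g for p, g in zip(pos, grad)]
        ema = alpha * rss_at(pos) + (1.0 - alpha) * ema
    return pos, ema

pos, ema = relay_position()   # should settle near the midpoint (5, 0)
```

Maximizing the weaker of the two link strengths is one way to capture the RSS-balancing objective between server and client described in the abstract.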

  6. A stochastic neuronal model predicts random search behaviors at multiple spatial scales in C. elegans

    PubMed Central

    Roberts, William M; Augustine, Steven B; Lawton, Kristy J; Lindsay, Theodore H; Thiele, Tod R; Izquierdo, Eduardo J; Faumont, Serge; Lindsay, Rebecca A; Britton, Matthew Cale; Pokala, Navin; Bargmann, Cornelia I; Lockery, Shawn R

    2016-01-01

    Random search is a behavioral strategy used by organisms from bacteria to humans to locate food that is randomly distributed and undetectable at a distance. We investigated this behavior in the nematode Caenorhabditis elegans, an organism with a small, well-described nervous system. Here we formulate a mathematical model of random search abstracted from the C. elegans connectome and fit to a large-scale kinematic analysis of C. elegans behavior at submicron resolution. The model predicts behavioral effects of neuronal ablations and genetic perturbations, as well as unexpected aspects of wild type behavior. The predictive success of the model indicates that random search in C. elegans can be understood in terms of a neuronal flip-flop circuit involving reciprocal inhibition between two populations of stochastic neurons. Our findings establish a unified theoretical framework for understanding C. elegans locomotion and a testable neuronal model of random search that can be applied to other organisms. DOI: http://dx.doi.org/10.7554/eLife.12572.001 PMID:26824391

  7. A biased random-key genetic algorithm for data clustering.

    PubMed

    Festa, P

    2013-09-01

Cluster analysis aims at finding subsets (clusters) of a given set of entities that are homogeneous and/or well separated. Since the 1990s, cluster analysis has been applied to several domains with numerous applications. It has emerged as one of the most exciting interdisciplinary fields, having benefited from concepts and theoretical results obtained by different scientific research communities, including genetics, biology, biochemistry, mathematics, and computer science. The last decade has brought several new algorithms capable of solving larger, real-world instances. We give an overview of the main types of clustering and of criteria for homogeneity or separation. Solution techniques are discussed, with special emphasis on the combinatorial optimization perspective, with the goal of providing conceptual insights and literature references to the broad community of clustering practitioners. A new biased random-key genetic algorithm is also described and compared with several efficient hybrid GRASP algorithms recently proposed to cluster biological data.
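A biased random-key GA can be sketched generically, here applied to a toy one-dimensional clustering task with a within-cluster sum-of-squares objective (parameters and decoder are illustrative, not those of the paper):

```python
import random

def decode(keys, k):
    """Decoder: random key in [0, 1) -> cluster label in {0, ..., k-1}."""
    return [min(int(x * k), k - 1) for x in keys]

def wcss(labels, points, k):
    """Within-cluster sum of squares (lower is better)."""
    total = 0.0
    for c in range(k):
        members = [p for p, l in zip(points, labels) if l == c]
        if members:
            mu = sum(members) / len(members)
            total += sum((p - mu) ** 2 for p in members)
    return total

def brkga(points, k=2, pop=60, n_elite=15, n_mutants=10, rho=0.7,
          gens=150, seed=0):
    """Biased random-key GA: elitist copy, random mutants, and a biased
    crossover that takes each gene from the elite parent w.p. rho."""
    rng = random.Random(seed)
    n = len(points)
    fitness = lambda ind: wcss(decode(ind, k), points, k)
    P = [[rng.random() for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)
        nxt = P[:n_elite]                      # elitist copy
        nxt += [[rng.random() for _ in range(n)] for _ in range(n_mutants)]
        while len(nxt) < pop:
            e = rng.choice(P[:n_elite])        # elite parent
            o = rng.choice(P[n_elite:])        # non-elite parent
            nxt.append([e[j] if rng.random() < rho else o[j]
                        for j in range(n)])
        P = nxt
    return decode(min(P, key=fitness), k)

pts = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]
labels = brkga(pts)   # the two tight groups should get distinct labels
```

The defining feature of a BRKGA is that all problem knowledge lives in the decoder, so the same evolutionary machinery can be reused across problems.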

  8. Runtime analysis of an evolutionary algorithm for stochastic multi-objective combinatorial optimization.

    PubMed

    Gutjahr, Walter J

    2012-01-01

    For stochastic multi-objective combinatorial optimization (SMOCO) problems, the adaptive Pareto sampling (APS) framework has been proposed, which is based on sampling and on the solution of deterministic multi-objective subproblems. We show that when plugging in the well-known simple evolutionary multi-objective optimizer (SEMO) as a subprocedure into APS, ε-dominance has to be used to achieve fast convergence to the Pareto front. Two general theorems are presented indicating how runtime complexity results for APS can be derived from corresponding results for SEMO. This may be a starting point for the runtime analysis of evolutionary SMOCO algorithms.

  9. Stochastic models: theory and simulation.

    SciTech Connect

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
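One simple, general way to generate samples of a stationary Gaussian process, in the spirit of the algorithms surveyed here, is factorization of the covariance matrix (a sketch with an assumed squared-exponential covariance; not a specific algorithm from the report):

```python
import numpy as np

def sample_gaussian_process(t, corr_length=1.0, n_samples=3, seed=0):
    """Draw independent samples of a zero-mean, unit-variance stationary
    Gaussian process with squared-exponential covariance by factorizing
    the covariance matrix: x = L z with C = L L^T and z ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    d = np.subtract.outer(t, t)
    C = np.exp(-0.5 * (d / corr_length) ** 2)
    L = np.linalg.cholesky(C + 1e-6 * np.eye(len(t)))  # jitter for stability
    return L @ rng.standard_normal((len(t), n_samples))

t = np.linspace(0.0, 10.0, 200)
paths = sample_gaussian_process(t)   # one column per independent sample
```

Each column is an independent realization; these samples could then serve as inputs or boundary conditions to a deterministic simulation code, as the abstract describes.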

  10. Using genetic algorithm to solve a new multi-period stochastic optimization model

    NASA Astrophysics Data System (ADS)

    Zhang, Xin-Li; Zhang, Ke-Cun

    2009-09-01

This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi, (Ed.) The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], called a hybrid model; however, transaction costs were not considered in that work. In this paper, we improve Hibiki's model in the following aspects: (1) the CVaR risk measure is introduced to control the wealth-loss risk while maximizing the expected utility; (2) typical market imperfections, such as short-sale constraints and proportional transaction costs, are considered simultaneously; and (3) the application of a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
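The empirical CVaR used as the risk measure can be computed from simulated scenario losses as the average of the worst (1 − β) fraction (a generic sketch, not the paper's full optimization model):

```python
def cvar(losses, beta=0.95):
    """Empirical CVaR (expected shortfall): the average of the worst
    (1 - beta) fraction of scenario losses."""
    xs = sorted(losses, reverse=True)
    k = max(1, int(round(len(xs) * (1.0 - beta))))
    return sum(xs[:k]) / k

scenario_losses = list(range(1, 101))     # scenario losses 1..100
tail = cvar(scenario_losses, beta=0.95)   # mean of {96..100} -> 98.0
```

Unlike VaR (the 95th-percentile loss alone), CVaR averages over the entire tail, which is why it is attractive for controlling wealth-loss risk in scenario-based models.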

  11. Application of stochastic weighted algorithms to a multidimensional silica particle model

    NASA Astrophysics Data System (ADS)

    Menz, William J.; Patterson, Robert I. A.; Wagner, Wolfgang; Kraft, Markus

    2013-09-01

This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83-98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using a large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high-dimensional particle models to simulate real-world systems.

  12. 2D stochastic-integral models for characterizing random grain noise in titanium alloys

    SciTech Connect

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Cherry, Matthew; Pilchak, Adam; Knopp, Jeremy S.; Blodgett, Mark P.

    2014-02-18

We extend our previous work, in which we applied high-dimensional model representation (HDMR) and analysis of variance (ANOVA) concepts to the characterization of a metallic surface that has undergone a shot-peening treatment to reduce residual stresses, and has, therefore, become a random conductivity field. That example was treated as a one-dimensional problem, because those were the only data available. In this study, we develop a more rigorous two-dimensional model for characterizing random, anisotropic grain noise in titanium alloys. Such a model is necessary if we are to accurately capture the 'clumping' of crystallites into long chains that appear during the processing of the metal into a finished product. The mathematical model starts with an application of the Karhunen-Loève (K-L) expansion for the random Euler angles, θ and φ, that characterize the orientation of each crystallite in the sample. The random orientation of each crystallite then defines the stochastic nature of the electrical conductivity tensor of the metal. We study two possible covariances, Gaussian and double-exponential, which form the kernel of the K-L integral equation, and find that, of the two, the double-exponential appears to fit the measurements more closely. Results based on data from a Ti-7Al sample will be given, and further applications of HDMR and ANOVA will be discussed.
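A discrete Karhunen-Loève sample with a double-exponential covariance can be sketched via an eigendecomposition of the covariance matrix (an illustrative one-dimensional version; the paper's 2D Euler-angle model is not reproduced):

```python
import numpy as np

def kl_sample(x, corr_length=1.0, n_terms=10, seed=0):
    """Truncated Karhunen-Loeve sample of a zero-mean random field with
    double-exponential covariance C(r) = exp(-|r| / L), built from the
    leading eigenpairs of the discretized covariance matrix."""
    rng = np.random.default_rng(seed)
    C = np.exp(-np.abs(np.subtract.outer(x, x)) / corr_length)
    w, V = np.linalg.eigh(C)                   # eigenvalues in ascending order
    w = w[::-1][:n_terms]                      # keep the largest n_terms
    V = V[:, ::-1][:, :n_terms]
    xi = rng.standard_normal(n_terms)          # i.i.d. N(0, 1) coefficients
    return V @ (np.sqrt(np.clip(w, 0.0, None)) * xi)

x = np.linspace(0.0, 5.0, 128)
field = kl_sample(x)
```

Truncating the expansion at the leading eigenpairs keeps the dominant correlation structure while reducing the number of random variables, which is the usual motivation for K-L representations of random fields.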

  13. Random Matrix Approach to Quantum Adiabatic Evolution Algorithms

    NASA Technical Reports Server (NTRS)

    Boulatov, Alexei; Smelyanskiy, Vadier N.

    2004-01-01

We analyze the power of quantum adiabatic evolution algorithms (QAEA) for solving random NP-hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in the matrix space. We use the Brownian motion model to obtain a description of multiple avoided-crossing phenomena. We show that the failure mechanism of the QAEA is due to the interaction of the ground state with the "cloud" formed by all the excited states, confirming that, in the driven RMT models, the Landau-Zener mechanism of dissipation is not important. We show that the QAEA has a finite probability of success in a certain range of parameters, implying polynomial complexity of the algorithm. The second model corresponds to the standard QAEA with the problem Hamiltonian taken from the Gaussian Unitary Ensemble (GUE) of RMT. We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. However, this driven RMT model always leads to exponential complexity of the algorithm due to the presence of long-range intertemporal correlations of the eigenvalues. Our results indicate that the weakness of effective transitions is the leading effect that can make the Markovian-type QAEA successful.

  14. Combinatorial approximation algorithms for MAXCUT using random walks.

    SciTech Connect

    Seshadhri, Comandur; Kale, Satyen

    2010-11-01

We give the first combinatorial approximation algorithm for MaxCut that beats the trivial 0.5 factor by a constant. The main partitioning procedure is very intuitive, natural, and easily described. It essentially performs a number of random walks and aggregates the information to provide the partition. We can control the running time to get an approximation-factor/running-time tradeoff. We show that for any constant b > 1.5, there is an Õ(n^b) algorithm that outputs a (0.5 + δ)-approximation for MaxCut, where δ = δ(b) is some positive constant. One of the components of our algorithm is a weak local graph partitioning procedure that may be of independent interest. Given a starting vertex i and a conductance parameter φ, unless a random walk of length ℓ = O(log n) starting from i mixes rapidly (in terms of φ and ℓ), we can find a cut of conductance at most φ close to the vertex. The work done per vertex found in the cut is sublinear in n.

  15. A stochastic model for adhesion-mediated cell random motility and haptotaxis.

    PubMed

    Dickinson, R B; Tranquillo, R T

    1993-01-01

    The active migration of blood and tissue cells is important in a number of physiological processes including inflammation, wound healing, embryogenesis, and tumor cell metastasis. These cells move by transmitting cytoplasmic force through membrane receptors which are bound specifically to adhesion ligands in the surrounding substratum. Recently, much research has focused on the influence of the composition of extracellular matrix and the distribution of its components on the speed and direction of cell migration. It is commonly believed that the magnitude of the adhesion influences cell speed and/or random turning behavior, whereas a gradient of adhesion may bias the net direction of the cell movement, a phenomenon known as haptotaxis. The mechanisms underlying these responses are presently not understood. A stochastic model is presented to provide a mechanistic understanding of how the magnitude and distribution of adhesion ligands in the substratum influence cell movement. The receptor-mediated cell migration is modeled as an interrelation of random processes on distinct time scales. Adhesion receptors undergo rapid binding and transport, resulting in a stochastic spatial distribution of bound receptors fluctuating about some mean distribution. This results in a fluctuating spatio-temporal pattern of forces on the cell, which in turn affects the speed and turning behavior on a longer time scale. The model equations are a system of nonlinear stochastic differential equations (SDE's) which govern the time evolution of the spatial distribution of bound and free receptors, and the orientation and position of the cell. These SDE's are integrated numerically to simulate the behavior of the model cell on both a uniform substratum, and on a gradient of adhesion ligand concentration. Furthermore, analysis of the governing SDE system and corresponding Fokker-Planck equation (FPE) yields analytical expressions for indices which characterize cell movement on multiple time

  16. Robust H∞ filtering for discrete nonlinear delayed stochastic systems with missing measurements and randomly occurring nonlinearities

    NASA Astrophysics Data System (ADS)

    Liu, Yurong; Alsaadi, Fuad E.; Yin, Xiaozhou; Wang, Yamin

    2015-02-01

In this paper, we are concerned with the robust H∞ filtering problem for a class of nonlinear discrete time-delay stochastic systems. The system under consideration involves parameter uncertainties, stochastic disturbances, time-varying delays and sector nonlinearities. Both missing measurements and randomly occurring nonlinearities are described via the binary switching sequences satisfying a conditional probability distribution, and the nonlinearities are assumed to be sector bounded. The problem addressed is the design of a full-order filter such that, for all admissible uncertainties, nonlinearities and time-delays, the dynamics of the filtering error is constrained to be robustly exponentially stable in the mean square, and a prescribed H∞ disturbance rejection attenuation level is also guaranteed. By using the Lyapunov stability theory and some new techniques, sufficient conditions are first established to ensure the existence of the desired filtering parameters. Then, the explicit expression of the desired filter gains is described in terms of the solution to a linear matrix inequality. Finally, a numerical example is exploited to show the usefulness of the results derived.

  17. Dynamic Reconfiguration and Routing Algorithms for IP-Over-WDM Networks With Stochastic Traffic

    NASA Astrophysics Data System (ADS)

    Brzezinski, Andrew; Modiano, Eytan

    2005-10-01

We develop algorithms for joint IP-layer routing and WDM logical topology reconfiguration in IP-over-WDM networks experiencing stochastic traffic. At the wavelength division multiplexing (WDM) layer, we associate a nonnegligible overhead with WDM reconfiguration, during which time tuned transceivers cannot service backlogged data. The Internet Protocol (IP) layer is modeled as a queueing system. We demonstrate that the proposed algorithms achieve asymptotic throughput optimality by using frame-based maximum weight scheduling decisions. We study both fixed and variable frame durations. In addition to dynamically triggering WDM reconfiguration, our algorithms specify precisely how to route packets over the IP layer during the phases in which the WDM layer remains fixed. We demonstrate that optical-layer constraints do not affect the results, and provide an analysis of the specific case of WDM networks with multiple ports per node. In order to gauge the delay properties of our algorithms, we conduct a simulation study and demonstrate an important tradeoff between WDM reconfiguration and IP-layer routing. We find that multihop routing is extremely beneficial at low-throughput levels, while single-hop routing achieves improved delay at high-throughput levels. For a simple access network, we demonstrate through simulation the benefit of employing multihop IP-layer routes.

  18. A stochastic control approach to Slotted-ALOHA random access protocol

    NASA Astrophysics Data System (ADS)

    Pietrabissa, Antonio

    2013-12-01

    ALOHA random access protocols are distributed protocols based on transmission probabilities, that is, each node decides upon packet transmissions according to a transmission probability value. In the literature, ALOHA protocols are analysed by giving necessary and sufficient conditions for the stability of the queues of the node buffers under a control vector (whose elements are the transmission probabilities assigned to the nodes), given an arrival rate vector (whose elements represent the rates of the packets arriving in the node buffers). The innovation of this work is that, given an arrival rate vector, it computes the optimal control vector by defining and solving a stochastic control problem aimed at maximising the overall transmission efficiency, while keeping a grade of fairness among the nodes. Furthermore, a more general case in which the arrival rate vector changes in time is considered. The increased efficiency of the proposed solution with respect to the standard ALOHA approach is evaluated by means of numerical simulations.
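The role of the per-node transmission probability is easy to see in a small simulation: with n backlogged nodes each transmitting with probability p, a slot succeeds when exactly one node transmits, and p = 1/n maximizes the success rate (a generic slotted-ALOHA sketch, not the paper's stochastic control formulation):

```python
import random

def aloha_success_rate(n_nodes, p, slots=50_000, seed=0):
    """Fraction of slots in which exactly one backlogged node transmits,
    when each node transmits independently with probability p per slot."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(slots):
        tx = sum(rng.random() < p for _ in range(n_nodes))
        ok += (tx == 1)   # exactly one transmission = success; >1 = collision
    return ok / slots

rate = aloha_success_rate(n_nodes=10, p=0.1)
# exact success probability: 10 * 0.1 * 0.9**9 ≈ 0.387 (near the 1/e limit)
```

The control problem described in the abstract generalizes this picture by optimizing heterogeneous per-node probabilities against given arrival rates rather than assuming a symmetric p.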

  19. Randomized Algorithms for Systems and Control: Theory and Applications

    DTIC Science & Technology

    2008-05-01

[DTIC report documentation page; report date: May 2008; performing organization: IEIIT-CNR. No abstract available.]

  20. A stochastic basis to the spatially uniform distribution of randomly generated Ionian paterae

    NASA Astrophysics Data System (ADS)

    Shoji, D.; Hussmann, H.

    2016-10-01

Due to its tidally heated interior, Io is a geologically very active satellite bearing many volcanic features. It is observed that the mean nearest-neighbor distance between volcanic features, called paterae, is larger than that of a random distribution, which implies that the spatial distribution of paterae is uniform rather than random. However, it is unclear how the paterae come to be uniformly distributed. We propose a mechanism for Io's uniformly distributed paterae based on localized obliteration of old features. Instead of geological modeling, we performed stochastic simulations and statistical analyses of the obliteration of quiescent paterae. Monte Carlo calculations with a Gaussian obliteration probability show that if the width of the obliteration probability is approximately 80 km and the volcanic generation rate is ~5.0 × 10⁻⁶ km⁻² Ma⁻¹, a uniform distribution and the observed number density of paterae are attained at the 2σ level on a time scale of approximately 6 Myr. With this generation rate and width of the obliteration probability, the mean nearest-neighbor distance (the average distance from one patera to its nearest patera) is approximately 200 km, consistent with the observed value. The uniformity of the distribution is maintained once it is achieved. On regional scales, Io's paterae would naturally evolve from random to uniform distributions through the obliteration of old and quiescent features.
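The uniformity diagnostic used here, the mean nearest-neighbor distance, can be computed and compared against the random (Poisson) expectation via the Clark-Evans ratio (an illustrative sketch, not the paper's simulation):

```python
import math
import random

def mean_nn_distance(points):
    """Mean nearest-neighbour distance of a planar point pattern."""
    nn = [min(math.dist(p, q) for j, q in enumerate(points) if j != i)
          for i, p in enumerate(points)]
    return sum(nn) / len(nn)

random.seed(0)
side, n = 100.0, 200   # pattern of n points in a side x side region
pts = [(random.uniform(0.0, side), random.uniform(0.0, side))
       for _ in range(n)]
observed = mean_nn_distance(pts)
expected_random = 0.5 / math.sqrt(n / side ** 2)  # Clark-Evans expectation
R = observed / expected_random   # R ~ 1: random; R > 1: more uniform
```

For a completely random (Poisson) pattern R is close to 1; the abstract's observation that paterae have a larger-than-random mean nearest-neighbor distance corresponds to R > 1.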

  1. Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.

    NASA Technical Reports Server (NTRS)

    Larsen, Curtis E.

    1988-01-01

A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment E[R^m] is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values of E[R^m] for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.
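A minimal autoregressive simulation in the spirit of the technique, here a first-order AR(1) process (illustrative only; the thesis fits AR models to the extrema of a target psd, which is not reproduced here):

```python
import random

def simulate_ar1(phi, sigma, n, seed=0):
    """AR(1) process x_t = phi * x_{t-1} + sigma * e_t with e_t ~ N(0, 1);
    its stationary variance is sigma**2 / (1 - phi**2)."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + sigma * rng.gauss(0.0, 1.0))
    return x

xs = simulate_ar1(phi=0.8, sigma=1.0, n=50_000)
var = sum(v * v for v in xs) / len(xs)   # should approach 1/0.36 ≈ 2.78
```

The efficiency advantage over full Gaussian simulation comes from generating each new value by one recursion rather than synthesizing and post-processing a complete sample path.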

  2. A stochastic model for leukocyte random motility and chemotaxis based on receptor binding fluctuations

    PubMed Central

    1988-01-01

    Two central features of polymorphonuclear leukocyte chemosensory movement behavior demand fundamental theoretical understanding. In uniform concentrations of chemoattractant, these cells exhibit a persistent random walk, with a characteristic "persistence time" between significant changes in direction. In chemoattractant concentration gradients, they demonstrate a biased random walk, with an "orientation bias" characterizing the fraction of cells moving up the gradient. A coherent picture of cell movement responses to chemoattractant requires that both the persistence time and the orientation bias be explained within a unifying framework. In this paper, we offer the possibility that "noise" in the cellular signal perception/response mechanism can simultaneously account for these two key phenomena. In particular, we develop a stochastic mathematical model for cell locomotion based on kinetic fluctuations in chemoattractant/receptor binding. This model can simulate cell paths similar to those observed experimentally, under conditions of uniform chemoattractant concentrations as well as chemoattractant concentration gradients. Furthermore, this model can quantitatively predict both cell persistence time and dependence of orientation bias on gradient size. Thus, the concept of signal "noise" can quantitatively unify the major characteristics of leukocyte random motility and chemotaxis. The same level of noise large enough to account for the observed frequency of turning in uniform environments is simultaneously small enough to allow for the observed degree of directional bias in gradients. PMID:3339093

  3. Networked Fusion Filtering from Outputs with Stochastic Uncertainties and Correlated Random Transmission Delays

    PubMed Central

    Caballero-Águila, Raquel; Hermoso-Carazo, Aurora; Linares-Pérez, Josefa

    2016-01-01

    This paper is concerned with the distributed and centralized fusion filtering problems in sensor networked systems with random one-step delays in transmissions. The delays are described by Bernoulli variables correlated at consecutive sampling times, with different characteristics at each sensor. The measured outputs are subject to uncertainties modeled by random parameter matrices, thus providing a unified framework to describe a wide variety of network-induced phenomena; moreover, the additive noises are assumed to be one-step autocorrelated and cross-correlated. Under these conditions, without requiring the knowledge of the signal evolution model, but using only the first and second order moments of the processes involved in the observation model, recursive algorithms for the optimal linear distributed and centralized filters under the least-squares criterion are derived by an innovation approach. Firstly, local estimators based on the measurements received from each sensor are obtained and, after that, the distributed fusion filter is generated as the least-squares matrix-weighted linear combination of the local estimators. Also, a recursive algorithm for the optimal linear centralized filter is proposed. In order to compare the estimators performance, recursive formulas for the error covariance matrices are derived in all the algorithms. The effects of the delays in the filters accuracy are analyzed in a numerical example which also illustrates how some usual network-induced uncertainties can be dealt with using the current observation model described by random matrices. PMID:27338387

  4. A Stochastic Framework For Sediment Concentration Estimation By Accounting Random Arrival Processes Of Incoming Particles Into Receiving Waters

    NASA Astrophysics Data System (ADS)

    Tsai, C.; Hung, R. J.

    2015-12-01

    This study applies queueing theory to develop a stochastic framework that accounts for the random-sized batch arrivals of incoming sediment particles into receiving waters. Sediment particles, the control volume, and the mechanics of sediment transport (such as suspension, deposition and resuspension) are treated as the customers, the service facility and the server, respectively. In the framework, the stochastic diffusion particle tracking model (SD-PTM) and resuspension of particles are included to simulate the random transport trajectories of suspended particles. The most distinctive characteristic of queueing theory is that customers arrive at the service facility at random. In analogy to sediment transport, this characteristic is adopted to model the random-sized batch arrival process of sediment particles, including both the random occurrences and the random magnitudes of incoming sediment particles. The occurrences of arrivals are simulated by a Poisson process, while the number of sediment particles in each arrival is simulated by a binomial distribution. Simulations of random arrivals alone and of random magnitudes alone are carried out for comparison with the random-sized batch arrival simulations. The simulations yield a probabilistic description of discrete sediment transport through ensemble statistics (i.e. ensemble means and ensemble variances) of sediment concentrations and transport rates. Results reveal that different mechanisms of incoming particles produce different ensemble variances of concentrations and transport rates under the same mean incoming rate of sediment particles.
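    The batch-arrival process described above can be sketched as a compound Poisson simulation: exponential inter-arrival times give the random occurrences, and a binomial draw gives the particle count of each batch. All parameter names below are illustrative, not taken from the paper.

```python
import random

def simulate_batch_arrivals(rate, n_max, p, t_end, seed=0):
    """Simulate random-sized batch arrivals of particles.

    Arrival instants follow a Poisson process with the given rate;
    the particle count of each batch is drawn from Binomial(n_max, p).
    """
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate)  # exponential inter-arrival time
        if t > t_end:
            break
        # Binomial(n_max, p) draw via n_max Bernoulli trials.
        batch = sum(rng.random() < p for _ in range(n_max))
        arrivals.append((t, batch))
    return arrivals

events = simulate_batch_arrivals(rate=2.0, n_max=10, p=0.3, t_end=50.0)
total = sum(n for _, n in events)
```

Ensemble statistics of concentration would then follow by repeating this simulation over many seeds and averaging, as the abstract's ensemble means and variances suggest.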

  5. Stochastic simulation of reaction subnetworks: Exploiting synergy between the chemical master equation and the Gillespie algorithm

    NASA Astrophysics Data System (ADS)

    Albert, J.

    2016-12-01

    Stochastic simulation of reaction networks is limited by two factors: accuracy and time. The Gillespie algorithm (GA) is a Monte Carlo-type method for constructing probability distribution functions (pdfs) from statistical ensembles; its accuracy is therefore a function of the computing time. The chemical master equation (CME) is a more direct route to obtaining the pdfs; however, solving the CME is generally very difficult for large networks. We propose a method that combines both approaches in order to stochastically simulate part of a network. The network is first divided into two parts, A and B. Part A is simulated using the GA, while the solution of the CME for part B, with initial conditions imposed by the simulation results of part A, is fed back into the GA. This cycle is then repeated a desired number of times. The advantages of this synergy are: 1) the GA needs to simulate only part of the whole network, and hence is faster, and 2) the CME is necessarily simpler to solve, as the part of the network it describes is smaller. We demonstrate the utility of this approach on two examples: a positive feedback (genetic switch) and oscillations driven by a negative feedback.
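    The Gillespie half of the hybrid scheme rests on the standard SSA. As a point of reference, here is a minimal SSA for a birth-death process; this is a generic illustration of the GA the abstract builds on, not the authors' hybrid itself:

```python
import random

def gillespie_birth_death(k_birth, k_death, x0, t_end, seed=1):
    """Minimal Gillespie SSA for X -> X+1 (propensity k_birth)
    and X -> X-1 (propensity k_death * X)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    traj = [(t, x)]
    while True:
        a1, a2 = k_birth, k_death * x
        a0 = a1 + a2
        if a0 == 0:
            break
        t += rng.expovariate(a0)       # exponential waiting time
        if t > t_end:
            break
        if rng.random() * a0 < a1:     # pick reaction by propensity
            x += 1
        else:
            x -= 1
        traj.append((t, x))
    return traj

traj = gillespie_birth_death(k_birth=5.0, k_death=0.5, x0=0, t_end=100.0)
```

In the hybrid scheme described above, propensities involving part-B species would be updated between GA steps from the CME solution rather than tracked directly.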

  6. Selecting Random Distributed Elements for HIFU using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yufeng

    2011-09-01

    As an effective and noninvasive therapeutic modality for tumor treatment, high-intensity focused ultrasound (HIFU) has attracted attention from both physicians and patients. New generations of HIFU systems with the ability to electrically steer the HIFU focus using phased-array transducers have been under development. The presence of side and grating lobes may cause undesired thermal accumulation at the interface of the coupling medium (i.e. water) and skin, or in the intervening tissue. Although sparse randomly distributed piston elements can reduce the amplitude of grating lobes, there are theoretically no grating lobes with the use of concave elements in the new phased-array HIFU. A new HIFU transmission strategy is proposed in this study: firing only a subset of elements for a certain period and then switching to another group for the next firing sequence. The advantages are: 1) the asymmetric position of active elements may reduce the side lobes, and 2) each element has some resting time during the entire HIFU ablation (up to several hours for some clinical applications), so that the loss of transducer efficiency due to thermal accumulation is minimized. A genetic algorithm was used to select the randomly distributed elements in a HIFU array, with the amplitudes of the first side lobes at the focal plane used as the fitness value in the optimization. Overall, it is suggested that the proposed strategy could reduce the side lobes and their consequent side effects, and that the genetic algorithm is effective in selecting randomly distributed elements in a HIFU array.
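    The selection step can be sketched as a toy genetic algorithm searching over subsets of active elements. The acoustic side-lobe fitness used in the study is replaced here by a caller-supplied stand-in, so everything below is illustrative:

```python
import random

def ga_select_elements(n_elements, n_active, fitness, generations=50,
                       pop_size=30, seed=2):
    """Toy GA: find a subset of `n_active` out of `n_elements` array
    elements minimizing `fitness(subset)`."""
    rng = random.Random(seed)

    def random_subset():
        return tuple(sorted(rng.sample(range(n_elements), n_active)))

    def mutate(sub):
        sub = set(sub)
        sub.remove(rng.choice(sorted(sub)))        # drop one element
        sub.add(rng.choice([i for i in range(n_elements) if i not in sub]))
        return tuple(sorted(sub))                  # add a different one

    pop = [random_subset() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]           # truncation selection
        pop = survivors + [mutate(rng.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=fitness)

# Stand-in fitness: prefer spread-out element placements (maximize the
# smallest gap); the paper's fitness is the first side-lobe amplitude.
best = ga_select_elements(
    64, 16, fitness=lambda s: -min(b - a for a, b in zip(s, s[1:])))
```

Truncation selection plus a single swap mutation is deliberately minimal; a field simulation computing side-lobe amplitudes at the focal plane would replace the stand-in lambda.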

  7. A new stochastic algorithm for proton exchange membrane fuel cell stack design optimization

    NASA Astrophysics Data System (ADS)

    Chakraborty, Uttara

    2012-10-01

    This paper develops a new stochastic heuristic for proton exchange membrane fuel cell stack design optimization. The problem involves finding the optimal size and configuration of stand-alone, fuel-cell-based power supply systems: the stack is to be configured so that it delivers the maximum power output at the load's operating voltage. The problem looks straightforward but is analytically intractable and computationally hard. No exact solution can be found, nor is it easy to determine the exact number of local optima; we are therefore forced to settle for approximate or near-optimal solutions. This real-world problem, first reported in Journal of Power Sources 131, poses both engineering and computational challenges and is representative of many of today's open problems in fuel cell design involving a mix of discrete and continuous parameters. The new algorithm is compared against a genetic algorithm, simulated annealing, and the (1+1)-EA. Statistical tests of significance show that the results produced by our method are better than the best-known solutions for this problem published in the literature. A finite Markov chain analysis of the new algorithm establishes an upper bound on the expected time to find the optimum solution.

  8. A new algorithm for calculating the curvature perturbations in stochastic inflation

    SciTech Connect

    Fujita, Tomohiro; Kawasaki, Masahiro; Tada, Yuichiro; Takesako, Tomohiro

    2013-12-01

    We propose a new approach for calculating the curvature perturbations produced during inflation in the stochastic formalism. In our formalism, the fluctuations of the e-foldings are directly calculated without perturbatively expanding the inflaton field and they are connected to the curvature perturbations by the δN formalism. The result automatically includes the contributions of the higher order perturbations because we solve the equation of motion non-perturbatively. In this paper, we analytically prove that our result (the power spectrum and the nonlinearity parameter) is consistent with the standard result in single field slow-roll inflation. We also describe the algorithm for numerical calculations of the curvature perturbations in more general inflation models.

  9. Stochastic characterization of phase detection algorithms in phase-shifting interferometry

    SciTech Connect

    Munteanu, Florin

    2016-11-01

    Phase-shifting interferometry (PSI) is the preferred non-contact method for profiling sub-nanometer surfaces. Based on monochromatic light interference, the method computes the surface profile from a set of interferograms collected at separate stepping positions. Errors in the estimated profile are introduced when these positions are not located correctly. In order to cope with this problem, various algorithms that minimize the effects of certain types of stepping errors (linear, sinusoidal, etc.) have been developed. Despite the relatively large number of algorithms suggested in the literature, there is no unified way of characterizing their performance when additional unaccounted random errors are present. Here, we suggest a procedure for quantifying the expected behavior of each algorithm in the presence of independent and identically distributed (i.i.d.) random stepping errors, which can occur in addition to the systematic errors for which the algorithm has been designed. As a result, the usefulness of this method derives from the fact that it can guide the selection of the best algorithm for specific measurement situations.

  10. Stochastic characterization of phase detection algorithms in phase-shifting interferometry

    DOE PAGES

    Munteanu, Florin

    2016-11-01

    Phase-shifting interferometry (PSI) is the preferred non-contact method for profiling sub-nanometer surfaces. Based on monochromatic light interference, the method computes the surface profile from a set of interferograms collected at separate stepping positions. Errors in the estimated profile are introduced when these positions are not located correctly. In order to cope with this problem, various algorithms that minimize the effects of certain types of stepping errors (linear, sinusoidal, etc.) have been developed. Despite the relatively large number of algorithms suggested in the literature, there is no unified way of characterizing their performance when additional unaccounted random errors are present. Here, we suggest a procedure for quantifying the expected behavior of each algorithm in the presence of independent and identically distributed (i.i.d.) random stepping errors, which can occur in addition to the systematic errors for which the algorithm has been designed. As a result, the usefulness of this method derives from the fact that it can guide the selection of the best algorithm for specific measurement situations.

  11. Development of a voltage-dependent current noise algorithm for conductance-based stochastic modelling of auditory nerve fibres.

    PubMed

    Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J

    2016-12-01

    This study presents the development of an alternative noise current term and a novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus, which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour, or microscopic fluctuations, of the voltage-dependent sodium channels at the node of Ranvier, whose intensity is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: they are independent of membrane voltage, and they cannot inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining a low computational cost and ease of implementation compared to other conductance-based and Markovian stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics, such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter, and a decrease in latency with an increase in stimulus intensity.

  12. Stochastic generation of explicit pore structures by thresholding Gaussian random fields

    SciTech Connect

    Hyman, Jeffrey D.; Winter, C. Larrabee

    2014-11-15

    We provide a description and computational investigation of an efficient method to stochastically generate realistic pore structures. Smolarkiewicz and Winter introduced this specific method in pore-resolving simulations of Darcy flows (Smolarkiewicz and Winter, 2010 [1]) without giving a complete formal description or analysis of the method, or indicating how to control the parameterization of the ensemble. We address both issues in this paper. The method consists of two steps. First, a realization of a correlated Gaussian field, or topography, is produced by convolving a prescribed kernel with an initial field of independent, identically distributed random variables. The intrinsic length scales of the kernel determine the correlation structure of the topography. Next, a sample pore space is generated by applying a level threshold to the Gaussian field realization: points are assigned to the void phase or the solid phase depending on whether the topography over them is above or below the threshold. Hence, the topology and geometry of the pore space depend on the form of the kernel and the level threshold. Manipulating these two user-prescribed quantities allows good control of pore space observables, in particular the Minkowski functionals. Extensions of the method to generate media with multiple pore structures and preferential flow directions are also discussed. To demonstrate its usefulness, the method is used to generate a pore space with physical and hydrological properties similar to a sample of Berea sandstone.
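    The two-step recipe (convolve an i.i.d. Gaussian field with a kernel, then apply a level threshold) is easy to sketch. The following 1-D version, with illustrative parameter names and a periodic boundary, is a simplification of the multidimensional method described above:

```python
import math
import random

def generate_pore_space(n, kernel_width, threshold, seed=3):
    """Binary 1-D pore structure from a thresholded correlated Gaussian field."""
    rng = random.Random(seed)
    noise = [rng.gauss(0.0, 1.0) for _ in range(n)]  # i.i.d. initial field

    # Step 1: correlated topography via convolution with a truncated
    # Gaussian kernel (periodic boundary); normalized to unit variance.
    half = 3 * kernel_width
    kernel = [math.exp(-0.5 * (i / kernel_width) ** 2)
              for i in range(-half, half + 1)]
    norm = math.sqrt(sum(w * w for w in kernel))
    topo = [sum(kernel[j] * noise[(i + j - half) % n]
                for j in range(len(kernel))) / norm
            for i in range(n)]

    # Step 2: level threshold -> void (1) above, solid (0) below.
    return [1 if h > threshold else 0 for h in topo]

pores = generate_pore_space(n=200, kernel_width=4, threshold=0.3)
porosity = sum(pores) / len(pores)
```

As the abstract notes, the kernel width controls the correlation structure and the threshold controls the void fraction, so these two knobs tune the pore-space observables.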

  13. Stochastic chemical kinetics and the total quasi-steady-state assumption: application to the stochastic simulation algorithm and chemical master equation.

    PubMed

    Macnamara, Shev; Bersani, Alberto M; Burrage, Kevin; Sidje, Roger B

    2008-09-07

    Recently the application of the quasi-steady-state approximation (QSSA) to the stochastic simulation algorithm (SSA) was suggested for the purpose of speeding up stochastic simulations of chemical systems that involve both relatively fast and slow chemical reactions [Rao and Arkin, J. Chem. Phys. 118, 4999 (2003)] and further work has led to the nested and slow-scale SSA. Improved numerical efficiency is obtained by respecting the vastly different time scales characterizing the system and then by advancing only the slow reactions exactly, based on a suitable approximation to the fast reactions. We considerably extend these works by applying the QSSA to numerical methods for the direct solution of the chemical master equation (CME) and, in particular, to the finite state projection algorithm [Munsky and Khammash, J. Chem. Phys. 124, 044104 (2006)], in conjunction with Krylov methods. In addition, we point out some important connections to the literature on the (deterministic) total QSSA (tQSSA) and place the stochastic analogue of the QSSA within the more general framework of aggregation of Markov processes. We demonstrate the new methods on four examples: Michaelis-Menten enzyme kinetics, double phosphorylation, the Goldbeter-Koshland switch, and the mitogen activated protein kinase cascade. Overall, we report dramatic improvements by applying the tQSSA to the CME solver.

  14. An asymptotic-preserving stochastic Galerkin method for the radiative heat transfer equations with random inputs and diffusive scalings

    NASA Astrophysics Data System (ADS)

    Jin, Shi; Lu, Hanqing

    2017-04-01

    In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem, the random inputs arise from uncertainties in the cross section, initial data or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, combined with the micro-macro decomposition based deterministic AP framework in order to handle the diffusive regime efficiently. For the linearized problem, we prove the regularity of the solution in the random space and, consequently, the spectral accuracy of the gPC-SG method. We also prove uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.

  15. Random matrix approach to quantum adiabatic evolution algorithms

    SciTech Connect

    Boulatov, A.; Smelyanskiy, V.N.

    2005-05-15

    We analyze the power of the quantum adiabatic evolution algorithm (QAA) for solving random computationally hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in the matrix space. We use the Brownian motion model to obtain a description of multiple avoided crossing phenomena. We show that nonadiabatic corrections in the QAA are due to the interaction of the ground state with the 'cloud' formed by most of the excited states, confirming that in driven RMT models, the Landau-Zener scenario of pairwise level repulsions is not relevant for the description of nonadiabatic corrections. We show that the QAA has a finite probability of success in a certain range of parameters, implying a polynomial complexity of the algorithm. The second model corresponds to the standard QAA with the problem Hamiltonian taken from the RMT Gaussian unitary ensemble (GUE). We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. For this reason, the driven GUE model can also lead to polynomial complexity of the QAA. The main contribution to the failure probability of the QAA comes from the nonadiabatic corrections to the eigenstates, which only depend on the absolute values of the transition amplitudes. Due to the mapping between the two models, these absolute values are the same in both cases. Our results indicate that this 'phase irrelevance' is the leading effect that can make both the Markovian- and GUE-type QAAs successful.

  16. Stochastic resonance whole-body vibration improves postural control in health care professionals: a worksite randomized controlled trial.

    PubMed

    Elfering, Achim; Schade, Volker; Stoecklin, Lukas; Baur, Simone; Burger, Christian; Radlinger, Lorenz

    2014-05-01

    Slip, trip, and fall injuries are frequent among health care workers. Stochastic resonance whole-body vibration training was tested to improve postural control. Participants included 124 employees of a Swiss university hospital. The randomized controlled trial included an experimental group given 8 weeks of training and a control group with no intervention. In both groups, postural control was assessed as mediolateral sway on a force plate before and after the 8-week trial. Mediolateral sway was significantly decreased by stochastic resonance whole-body vibration training in the experimental group but not in the control group that received no training (p < .05). Stochastic resonance whole-body vibration training is an option in the primary prevention of balance-related injury at work.

  17. A stochastic simulation method for the assessment of resistive random access memory retention reliability

    SciTech Connect

    Berco, Dan; Tseng, Tseung-Yuen

    2015-12-21

    This study presents an evaluation method for resistive random access memory retention reliability based on the Metropolis Monte Carlo algorithm and Gibbs free energy. The method, which does not rely on a time evolution, provides an extremely efficient way to compare the relative retention properties of metal-insulator-metal structures. It requires a small number of iterations and may be used for statistical analysis. The presented approach is used to compare the relative robustness of a single-layer ZrO₂ device with a double-layer ZnO/ZrO₂ one, obtaining results in good agreement with experimental data.
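    The core of such an approach is the Metropolis acceptance rule: accept a proposed move with probability min(1, exp(-β·ΔE)). A generic sketch, with the Gibbs-free-energy landscape of the RRAM study replaced by an arbitrary toy energy function:

```python
import math
import random

def metropolis_minimize(energy, state0, propose, beta=1.0, steps=5000, seed=4):
    """Generic Metropolis Monte Carlo walk over an energy landscape."""
    rng = random.Random(seed)
    state = state0
    e = energy(state)
    for _ in range(steps):
        cand = propose(state, rng)
        de = energy(cand) - e
        # Accept downhill moves always, uphill with Boltzmann probability.
        if de <= 0 or rng.random() < math.exp(-beta * de):
            state, e = cand, energy(cand)
    return state, e

# Toy landscape: a double well with minima at x = +/-1; starting from
# x = 3, the sampler settles near a minimum.
state, e = metropolis_minimize(
    energy=lambda x: (x * x - 1.0) ** 2,
    state0=3.0,
    propose=lambda x, rng: x + rng.uniform(-0.2, 0.2),
    beta=20.0,
)
```

In the retention setting, the energy function would encode the Gibbs free energy of a candidate filament configuration; the walk then compares how readily different stacks relax away from their programmed state.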

  18. Dynamics of Random Boolean Networks under Fully Asynchronous Stochastic Update Based on Linear Representation

    PubMed Central

    Luo, Chao; Wang, Xingyuan

    2013-01-01

    A novel algebraic approach is proposed to study the dynamics of asynchronous random Boolean networks in which a random number of nodes can be updated at each time step (ARBNs). In this article, the logical equations of ARBNs are converted into a discrete-time linear representation and the dynamical behaviors of the systems are investigated. We provide a general formula for the network transition matrices of ARBNs, as well as a necessary and sufficient algebraic criterion to determine whether a given group of states composes an attractor of a given length in ARBNs. Consequently, algorithms are obtained to find all of the attractors and basins in ARBNs. Examples are shown to demonstrate the feasibility of the proposed scheme. PMID:23785502
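    A fully asynchronous update, in which a random subset of nodes is updated at each time step, can be sketched directly; the three-node network and its update functions below are illustrative, not from the paper:

```python
import random

def arbn_step(state, funcs, rng):
    """One fully-asynchronous update: a random subset of nodes is
    updated simultaneously, each from the *previous* state."""
    n = len(state)
    to_update = [i for i in range(n) if rng.random() < 0.5]
    new = list(state)
    for i in to_update:
        new[i] = funcs[i](state)
    return tuple(new)

rng = random.Random(7)
# Tiny 3-node network: x0 <- x1 AND x2, x1 <- NOT x0, x2 <- x0 OR x1.
funcs = [
    lambda s: s[1] & s[2],
    lambda s: 1 - s[0],
    lambda s: s[0] | s[1],
]
state = (1, 0, 1)
for _ in range(20):
    state = arbn_step(state, funcs, rng)
```

Enumerating the reachable states of such a walk gives a brute-force view of attractors and basins; the paper's contribution is the algebraic (transition-matrix) criterion that avoids this enumeration.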

  19. A matrix product algorithm for stochastic dynamics on locally tree-like graphs

    NASA Astrophysics Data System (ADS)

    Barthel, Thomas; de Bacco, Caterina; Franz, Silvio

    In this talk, I describe a novel algorithm for the efficient simulation of generic stochastic dynamics of classical degrees of freedom defined on the vertices of locally tree-like graphs. Such models correspond, for example, to spin-glass systems, Boolean networks, neural networks, or other technological, biological, and social networks. Building upon the cavity method and ideas from quantum many-body theory, the algorithm is based on a matrix product approximation of the so-called edge messages - conditional probabilities of vertex variable trajectories. The matrix product edge messages (MPEM) are constructed recursively. Computational cost and accuracy can be tuned by controlling the matrix dimensions of the MPEM in truncations. In contrast to Monte Carlo simulations, the approach has a better error scaling and works both for single instances and in the thermodynamic limit. Due to the absence of cancellation effects, observables with small expectation values can be evaluated accurately, allowing for the study of decay processes and temporal correlations with unprecedented accuracy. The method is demonstrated for the prototypical non-equilibrium Glauber dynamics of an Ising spin system. Reference: arXiv:1508.03295.

  20. Stochastic resource allocation in emergency departments with a multi-objective simulation optimization algorithm.

    PubMed

    Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li

    2017-03-01

    The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.

  1. MULTILEVEL ACCELERATION OF STOCHASTIC COLLOCATION METHODS FOR PDE WITH RANDOM INPUT DATA

    SciTech Connect

    Webster, Clayton G; Jantsch, Peter A; Teckentrup, Aretha L; Gunzburger, Max D

    2013-01-01

    Stochastic Collocation (SC) methods for stochastic partial differential equations (SPDEs) suffer from the curse of dimensionality, whereby increases in the stochastic dimension cause an explosion of computational effort. To combat these challenges, multilevel approximation methods seek to decrease computational complexity by balancing spatial and stochastic discretization errors. As a form of variance reduction, multilevel techniques have been successfully applied to Monte Carlo (MC) methods, but they may be extended to accelerate other methods for SPDEs in which the stochastic and spatial degrees of freedom are decoupled. This article presents a general convergence and computational complexity analysis of a multilevel method for SPDEs, demonstrating its advantages over standard, single-level approximation. The numerical results highlight conditions under which multilevel sparse grid SC is preferable to the more traditional MC and SC approaches.

  2. A stochastic mechanism for signal propagation in the brain: Force of rapid random fluctuations in membrane potentials of individual neurons.

    PubMed

    Hong, Dawei; Man, Shushuang; Martin, Joseph V

    2016-01-21

    There are two functionally important factors in signal propagation in a brain structural network: the very first synaptic delay - a time delay of about 1 ms - from the moment when signals originate to the moment when observation of the signal propagation can begin; and rapid random fluctuations in the membrane potentials of every individual neuron in the network at a timescale of microseconds. We provide a stochastic analysis of signal propagation in a general setting. The analysis shows that the two factors together result in a stochastic mechanism for signal propagation, as described below. A brain structural network is not a rigid circuit but rather a very flexible framework that guides signals to propagate without guaranteeing success of the signal propagation. In such a framework, given the very first synaptic delay, rapid random fluctuations in every individual neuron in the network cause an "alter-and-concentrate effect" that almost surely forces signals to propagate successfully. Through this stochastic mechanism, we provide analytic evidence for the existence of a force behind signal propagation in a brain structural network, caused by rapid random fluctuations in every individual neuron at a timescale of microseconds with a time delay of 1 ms.

  3. A Hybrid of the Chemical Master Equation and the Gillespie Algorithm for Efficient Stochastic Simulations of Sub-Networks.

    PubMed

    Albert, Jaroslav

    2016-01-01

    Modeling stochastic behavior of chemical reaction networks is an important endeavor in many aspects of chemistry and systems biology. The chemical master equation (CME) and the Gillespie algorithm (GA) are the two most fundamental approaches to such modeling; however, each of them has its own limitations: the GA may require long computing times, while the CME may demand unrealistic memory storage capacity. We propose a method that combines the CME and the GA that allows one to simulate stochastically a part of a reaction network. First, a reaction network is divided into two parts. The first part is simulated via the GA, while the solution of the CME for the second part is fed into the GA in order to update its propensities. The advantage of this method is that it avoids the need to solve the CME or stochastically simulate the entire network, which makes it highly efficient. One of its drawbacks, however, is that most of the information about the second part of the network is lost in the process. Therefore, this method is most useful when only partial information about a reaction network is needed. We tested this method against the GA on two systems of interest in biology--the gene switch and the Griffith model of a genetic oscillator--and have shown it to be highly accurate. Comparing this method to four different stochastic algorithms revealed it to be at least an order of magnitude faster than the fastest among them.

  4. Model-based analyses of bioequivalence crossover trials using the stochastic approximation expectation maximisation algorithm.

    PubMed

    Dubois, Anne; Lavielle, Marc; Gsteiger, Sandro; Pigeolet, Etienne; Mentré, France

    2011-09-20

    In this work, we develop a bioequivalence analysis using nonlinear mixed effects models (NLMEM) that mimics the standard noncompartmental analysis (NCA). We estimate NLMEM parameters, including between-subject and within-subject variability as well as treatment, period and sequence effects. We explain how to perform a Wald test on a secondary parameter, and we propose an extension of the likelihood ratio test for bioequivalence. We compare these NLMEM-based bioequivalence tests with standard NCA-based tests. We evaluate by simulation the NCA and NLMEM estimates and the type I error of the bioequivalence tests. For NLMEM, we use the stochastic approximation expectation maximisation (SAEM) algorithm implemented in Monolix. We simulate crossover trials under H(0) using different numbers of subjects and of samples per subject, with different settings for between-subject and within-subject variability and for the residual error variance. The simulation study illustrates the accuracy of NLMEM-based geometric means estimated with the SAEM algorithm, whereas the NCA estimates are biased for sparse designs. NCA-based bioequivalence tests show good type I error except under high variability. For a rich design, the type I errors of NLMEM-based bioequivalence tests (Wald test and likelihood ratio test) do not differ from the nominal level of 5%; type I errors are inflated for sparse designs. We apply the bioequivalence Wald test based on NCA and NLMEM estimates to a three-way crossover trial, showing that Omnitrope® (Sandoz GmbH, Kundl, Austria) powder and solution are bioequivalent to Genotropin® (Pfizer Pharma GmbH, Karlsruhe, Germany). NLMEM-based bioequivalence tests are an alternative to standard NCA-based tests; however, caution is needed for small sample sizes and highly variable drugs.

  5. A stochastic simulation framework for the prediction of strategic noise mapping and occupational noise exposure using the random walk approach.

    PubMed

    Han, Lim Ming; Haron, Zaiton; Yahya, Khairulzan; Bakar, Suhaimi Abu; Dimon, Mohamad Ngasri

    2015-01-01

    Strategic noise mapping provides important information for noise impact assessment and noise abatement. However, producing reliable strategic noise maps in a dynamic, complex working environment is difficult. This study proposes the random walk approach as a new stochastic technique to simulate noise mapping and to predict the noise exposure level in a workplace. A stochastic simulation framework and software, namely RW-eNMS, were developed to facilitate the random walk approach in noise mapping prediction. This framework accounts for the randomness and complexity of machinery operation and noise emission levels, and it assesses the impact of noise on the workers and the surrounding environment. For validation, three case studies were conducted to check the accuracy of the predictions and to determine the efficiency and effectiveness of the approach. The results showed high prediction accuracy, with most absolute differences below 2 dBA, and the predicted noise doses were mostly within the range of measurement. The random walk approach was therefore effective in dealing with environmental noise and can predict strategic noise maps to facilitate noise monitoring and noise control in workplaces.
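    The random walk idea can be illustrated with a minimal sketch: a worker takes a lattice random walk and accumulates the level received from a fixed point source. The geometry, attenuation law and parameter values are assumptions for illustration only, not the RW-eNMS model:

```python
import math
import random

def predict_noise_dose(source_xy, level_at_1m, steps, seed=5):
    """Equivalent continuous level (energy average) received by a worker
    on a 2-D lattice random walk near a fixed point source; the level
    falls by 20*log10(r) with distance r, clamped at 1 m."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    levels = []
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x += dx
        y += dy
        r = max(1.0, math.hypot(x - source_xy[0], y - source_xy[1]))
        levels.append(level_at_1m - 20.0 * math.log10(r))
    # Energy-average the instantaneous levels into a single Leq in dB.
    return 10.0 * math.log10(
        sum(10 ** (l / 10.0) for l in levels) / len(levels))

leq = predict_noise_dose(source_xy=(10.0, 0.0), level_at_1m=95.0, steps=2000)
```

Repeating the walk over many seeds and many sources would yield the ensemble of exposure levels from which a stochastic noise map and dose distribution could be built.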

  6. The role of random fluctuations in the magnetosphere-ionosphere system: a dynamic stochastic model for AE-index variations

    NASA Astrophysics Data System (ADS)

    Pulkkinen, A.; Klimas, A.; Vassiliadis, D.; Uritsky, V.

    2005-12-01

Understanding the evolution of bursts of activity in the magnetosphere-ionosphere system has been one of the central challenges in space physics since, and even prior to, the introduction of the term "substorm". An extensive amount of work has been put into characterizing the average behavior of the near-space plasma environment during substorms, and several more or less deterministic models have been introduced to explain the observations. However, although most substorms share some common characteristics (otherwise any classification would be meaningless), such as the intensification of auroral electric currents, dipolarization of the magnetotail and injections of plasma-sheet charged particles, each substorm has its own distinct features in the form of strong fluctuations around the average "typical" behavior. This highly complex nature of individual substorms suggests that stochastic processes may play a role, even a central one, in the evolution of substorms. In this work, we develop a simple stochastic model for AE-index variations to investigate the role of random fluctuations in the substorm phenomenon. We show that by introducing a stochastic component we are able to capture some fundamental features of the AE-index variations; in particular, the complex variations associated with individual bursts are a central part of the model. We demonstrate that by analyzing the structure of the constructed stochastic model, some presently open questions about substorm-related bursts of the AE-index can be addressed quantitatively. First and foremost, we show that the stochastic fluctuations are a fundamental part of the AE-index evolution and cannot be neglected even when only the average properties of the index are of interest.

  7. Diffusion and stochastic island generation in the magnetic field line random walk

    SciTech Connect

    Vlad, M.; Spineanu, F.

    2014-08-10

    The cross-field diffusion of field lines in stochastic magnetic fields described by the 2D+slab model is studied using a semi-analytic statistical approach, the decorrelation trajectory method. We show that field line trapping and the associated stochastic magnetic islands strongly influence the diffusion coefficients, leading to dependences on the parameters that are different from the quasilinear and Bohm regimes. A strong amplification of the diffusion is produced by a small slab field in the presence of trapping. The diffusion regimes are determined and the corresponding physical processes are identified.

  8. Trace determination of carbendazim and thiabendazole in drinking water by liquid chromatography and using linear modulated stochastic resonance algorithm.

    PubMed

    Deng, Haishan; Xiang, Bingren; Xie, Shaofei; Zhou, Xiaohua

    2007-01-01

This paper addresses the determination of trace levels of two benzimidazole fungicides (carbendazim, CAS 10605-21-7, and thiabendazole, CAS 148-79-8) in drinking water samples using the newly proposed linear modulated stochastic resonance algorithm. To make the algorithm adaptive and intelligent, a two-step optimization procedure was developed for parameter selection that attends to both the signal-to-noise ratio and the peak shape of the output signal; how to limit the ranges of the parameters to be searched is discussed in detail. The limits of detection for carbendazim and thiabendazole were improved to 0.012 microg x L(-1) and 0.015 microg x L(-1), respectively. The successful application demonstrates the ability of the algorithm to detect two or more weak chromatographic peaks simultaneously.
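
    The underlying stochastic-resonance effect, a subthreshold signal becoming detectable once noise is added, can be illustrated with a toy threshold detector. This is the textbook mechanism, not the paper's linear modulated algorithm, and the amplitudes and thresholds below are invented:

```python
import math
import random

def threshold_crossings(amplitude, sigma, threshold=1.0, n=2000, seed=7):
    """Count samples of a subthreshold sine that exceed the detector
    threshold once Gaussian noise of standard deviation sigma is added."""
    rng = random.Random(seed)
    count = 0
    for k in range(n):
        s = amplitude * math.sin(2 * math.pi * k / 50)  # weak periodic signal
        if s + rng.gauss(0.0, sigma) > threshold:
            count += 1
    return count

quiet = threshold_crossings(0.5, 0.0)   # no noise: the signal never crosses
noisy = threshold_crossings(0.5, 0.5)   # moderate noise: crossings appear
print(quiet, noisy)
```

    With no noise the 0.5-amplitude signal never reaches the threshold of 1.0; with moderate noise, crossings cluster around the signal peaks, which is the effect stochastic-resonance detection methods exploit.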

  9. A novel quantum random number generation algorithm used by smartphone camera

    NASA Astrophysics Data System (ADS)

    Wu, Nan; Wang, Kun; Hu, Haixing; Song, Fangmin; Li, Xiangdong

    2015-05-01

We study an efficient algorithm to extract quantum random numbers (QRN) from the raw data obtained by charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) based sensors, like the camera used in a commercial smartphone. Based on the NIST statistical tests for random number generators, the proposed algorithm has a high QRN generation rate and high statistical randomness. The algorithm makes a simple, low-cost and reliable device usable as a QRN generator for quantum key distribution (QKD) or other cryptographic applications.
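
    A standard post-processing step for raw sensor bits of this kind is von Neumann debiasing, sketched below on simulated biased bits. The 75% bias and the idea of coarse thresholding are assumptions for illustration, not the paper's extraction algorithm:

```python
import random

def von_neumann_extract(raw_bits):
    """Von Neumann debiasing: consume raw bits in pairs; emit 0 for the
    pair (0,1), 1 for (1,0), and discard (0,0)/(1,1). The output is
    unbiased whenever pairs are independent, even if the raw bits are
    biased -- a classical post-processing step for sensor-noise RNGs."""
    out = []
    for a, b in zip(raw_bits[::2], raw_bits[1::2]):
        if a != b:
            out.append(a)
    return out

# simulate heavily biased raw sensor bits (75% ones), as thresholded
# camera pixel noise might produce
rng = random.Random(0)
raw = [1 if rng.random() < 0.75 else 0 for _ in range(20000)]
bits = von_neumann_extract(raw)
bias = sum(bits) / len(bits)
print(len(bits), round(bias, 3))
```

    The cost is throughput: a fraction of the pairs is discarded, which is why practical extractors trade off output rate against statistical quality.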

  10. Stochastic nonlinear wave equation with memory driven by compensated Poisson random measures

    SciTech Connect

    Liang, Fei; Gao, Hongjun

    2014-03-15

    In this paper, we study a class of stochastic nonlinear wave equation with memory driven by Lévy noise. We first show the existence and uniqueness of global mild solutions using a suitable energy function. Second, under some additional assumptions we prove the exponential stability of the solutions.

  11. A hybrid meta-heuristic algorithm for the vehicle routing problem with stochastic travel times considering the driver's satisfaction

    NASA Astrophysics Data System (ADS)

    Tavakkoli-Moghaddam, Reza; Alinaghian, Mehdi; Salamat-Bakhsh, Alireza; Norouzi, Narges

    2012-05-01

The vehicle routing problem is a significant problem that has attracted great attention from researchers in recent years. Its main objectives are to minimize the traveled distance, the total traveling time, the number of vehicles and the cost of transportation; reducing these quantities lowers the total cost and increases the driver's satisfaction level. This satisfaction, which decreases as service time increases, is an important logistic consideration for a company. Service times governed by a random variable vary stochastically, an effect ignored in classical routing problems. This paper investigates the problem of increasing service time by using a stochastic time for each tour, such that the total traveling time of the vehicles stays within a specified limit with a defined probability. Since exact solutions of the vehicle routing problem, which belongs to the category of NP-hard problems, are not practical at a large scale, a hybrid algorithm based on simulated annealing with genetic operators is proposed to obtain an efficient solution with reasonable computational cost and time. Finally, for some small cases, the results of the proposed algorithm are compared with results obtained with the Lingo 8 software. The obtained results indicate the efficiency of the proposed hybrid simulated annealing algorithm.
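
    The simulated-annealing core of such a hybrid can be sketched on a toy symmetric routing instance. This omits the genetic operators, the stochastic travel times and the probabilistic time limit of the paper, and all parameters (initial temperature, cooling rate, instance size) are illustrative:

```python
import math
import random

def tour_length(tour, pts):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(pts, t0=10.0, cooling=0.995, iters=5000, seed=3):
    """Simulated annealing with a 2-swap neighbourhood: accept worse
    tours with probability exp(-delta/T), so the search can escape
    local minima, while T is cooled geometrically."""
    rng = random.Random(seed)
    tour = list(range(len(pts)))
    best = cost = tour_length(tour, pts)
    t = t0
    for _ in range(iters):
        i, j = rng.sample(range(len(pts)), 2)
        tour[i], tour[j] = tour[j], tour[i]
        new = tour_length(tour, pts)
        if new < cost or rng.random() < math.exp((cost - new) / t):
            cost = new
            best = min(best, cost)
        else:
            tour[i], tour[j] = tour[j], tour[i]  # reject: undo the swap
        t *= cooling
    return best

rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(15)]
initial = tour_length(list(range(15)), pts)
annealed = anneal(pts)
print(initial, annealed)
```

    A hybrid variant replaces or augments the 2-swap neighbourhood with genetic operators such as crossover between stored tours, which is the direction the abstract describes.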

  12. A stochastic thermostat algorithm for coarse-grained thermomechanical modeling of large-scale soft matters: Theory and application to microfilaments

    SciTech Connect

    Li, Tong; Gu, YuanTong

    2014-04-15

As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the accessible length scales when modeling the mechanical behavior of soft matter. However, the classical thermostat algorithm in highly coarse-grained molecular dynamics would underestimate the thermodynamic behavior of soft matter (e.g. microfilaments in cells), which can weaken the ability of the material to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of the thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated by all-atom MD simulation. The new algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matter.

  13. An improved label propagation algorithm based on the similarity matrix using random walk

    NASA Astrophysics Data System (ADS)

    Zhang, Xian-Kun; Song, Chen; Jia, Jia; Lu, Zeng-Lei; Zhang, Qian

    2016-05-01

Community detection based on the label propagation algorithm (LPA) has attracted widespread attention because of its high efficiency, but the accuracy of community detection is difficult to guarantee because label spreading in the algorithm is random. In response to this problem, an improved LPA based on random walks (RWLPA) is proposed in this paper. First, a matrix measuring the similarity among the nodes in the network is obtained through calculation. Second, during label propagation, when a node has more than one neighbor label with the highest frequency, the label of the neighbor with the highest similarity, rather than the label of a random neighbor, is chosen for the update; this prevents labels from propagating randomly among communities. Finally, we test LPA and the improved LPA on benchmark networks and real-world networks. The results show that the quality of the communities discovered by the improved algorithm is higher than with the traditional algorithm.
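
    For context, the plain LPA baseline that RWLPA improves on can be sketched as follows; the graph and the random tie-breaking rule are illustrative, and the similarity-guided update of RWLPA is not implemented here:

```python
import random
from collections import Counter

def label_propagation(adj, seed=5, rounds=20):
    """Plain LPA: each node repeatedly adopts the most frequent label
    among its neighbours. Ties are broken at random here, which is
    exactly the source of instability that similarity-guided variants
    such as RWLPA address."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}          # every node starts as its own label
    nodes = list(adj)
    for _ in range(rounds):
        rng.shuffle(nodes)                # asynchronous updates in random order
        changed = False
        for v in nodes:
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            choice = rng.choice([l for l, c in counts.items() if c == top])
            if choice != labels[v]:
                labels[v] = choice
                changed = True
        if not changed:
            break
    return labels

# toy network: two 4-cliques joined by a single bridge edge (3 - 4)
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [5, 6, 7, 3], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
labels = label_propagation(adj)
print(labels)
```

    In the RWLPA variant described above, the `rng.choice` tie-break would be replaced by picking the tied neighbour with the highest random-walk similarity to the updating node.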

  14. Exact Mapping of the Stochastic Field Theory for Manna Sandpiles to Interfaces in Random Media

    NASA Astrophysics Data System (ADS)

    Le Doussal, Pierre; Wiese, Kay Jörg

    2015-03-01

    We show that the stochastic field theory for directed percolation in the presence of an additional conservation law [the conserved directed-percolation (C-DP) class] can be mapped exactly to the continuum theory for the depinning of an elastic interface in short-range correlated quenched disorder. Along one line of the parameters commonly studied, this mapping leads to the simplest overdamped dynamics. Away from this line, an additional memory term arises in the interface dynamics; we argue that this does not change the universality class. Since C-DP is believed to describe the Manna class of self-organized criticality, this shows that Manna stochastic sandpiles and disordered elastic interfaces (i.e., the quenched Edwards-Wilkinson model) share the same universal large-scale behavior.

  15. A Randomized Gossip Consensus Algorithm on Convex Metric Spaces

    DTIC Science & Technology

    2012-01-01


  16. Introducing Stochastic Simulation of Chemical Reactions Using the Gillespie Algorithm and MATLAB: Revisited and Augmented

    ERIC Educational Resources Information Center

    Argoti, A.; Fan, L. T.; Cruz, J.; Chou, S. T.

    2008-01-01

    The stochastic simulation of chemical reactions, specifically, a simple reversible chemical reaction obeying the first-order, i.e., linear, rate law, has been presented by Martinez-Urreaga and his collaborators in this journal. The current contribution is intended to complement and augment their work in two aspects. First, the simple reversible…
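
    A minimal Gillespie direct-method sketch for the reversible first-order reaction A ⇌ B discussed above (the rate constants and initial population are invented, and Python is used here where the original article uses MATLAB):

```python
import random

def gillespie_reversible(na, nb, kf, kr, t_end, seed=2):
    """Gillespie's direct method for A <-> B: draw an exponential
    waiting time from the total propensity, then pick the forward or
    reverse channel in proportion to its propensity."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        a_f, a_r = kf * na, kr * nb        # channel propensities
        a0 = a_f + a_r
        if a0 == 0.0:
            break                          # no reaction can fire
        t += rng.expovariate(a0)           # exponential waiting time
        if t > t_end:
            break
        if rng.random() * a0 < a_f:
            na, nb = na - 1, nb + 1        # A -> B fires
        else:
            na, nb = na + 1, nb - 1        # B -> A fires
    return na, nb

na, nb = gillespie_reversible(100, 0, kf=1.0, kr=1.0, t_end=5.0)
print(na, nb)
```

    With equal rate constants the populations fluctuate around a 50/50 split at equilibrium, and the total molecule count is conserved by construction.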

  17. A Randomized Approximate Nearest Neighbors Algorithm - A Short Version

    DTIC Science & Technology

    2011-01-13


  18. Effective algorithm for random mask generation used in secured optical data encryption and communication

    NASA Astrophysics Data System (ADS)

    Liu, Yuexin; Metzner, John J.; Guo, Ruyan; Yu, Francis T. S.

    2005-09-01

An efficient and secure algorithm for generating the random phase mask used in an optical data encryption and transmission system is proposed, based on Diffie-Hellman public key distribution. The mask generated in this way has higher security because it is never exposed on the vulnerable transmission channels. The effectiveness of retrieving the original image and the robustness against blind manipulation have been demonstrated by our numerical results. In addition, the algorithm can easily be extended to a multicast networking system, and refreshing the shared random key is also very simple to implement.
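
    The key-agreement idea can be sketched as follows: both parties run Diffie-Hellman, hash the shared secret into a seed, and expand it locally into a phase mask, so the mask itself never travels over the channel. The toy group parameters and the SHA-256/PRG expansion are assumptions made for illustration, not the paper's construction:

```python
import hashlib
import math
import random

# toy Diffie-Hellman parameters; a real system would use a large safe
# prime or an elliptic-curve group (these values are illustrative only)
P, G = 2 ** 127 - 1, 5

def dh_keypair(rng):
    """Secret exponent and the corresponding public value G^secret mod P."""
    secret = rng.randrange(2, P - 1)
    return secret, pow(G, secret, P)

def shared_mask(own_secret, other_public, size):
    """Derive the DH shared secret, hash it into a seed, and expand the
    seed into a random phase mask with values in [0, 2*pi)."""
    shared = pow(other_public, own_secret, P)
    seed = hashlib.sha256(str(shared).encode()).digest()
    prg = random.Random(seed)
    return [prg.uniform(0.0, 2 * math.pi) for _ in range(size)]

rng = random.Random(11)
a_sec, a_pub = dh_keypair(rng)   # party A
b_sec, b_pub = dh_keypair(rng)   # party B
mask_a = shared_mask(a_sec, b_pub, 64)
mask_b = shared_mask(b_sec, a_pub, 64)
print(mask_a == mask_b)
```

    Refreshing the shared mask then amounts to a new DH exchange (or hashing the old secret with a counter), with nothing mask-related ever transmitted.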

  19. The Application of Imperialist Competitive Algorithm for Fuzzy Random Portfolio Selection Problem

    NASA Astrophysics Data System (ADS)

    EhsanHesamSadati, Mir; Bagherzadeh Mohasefi, Jamshid

    2013-10-01

This paper presents an implementation of the Imperialist Competitive Algorithm (ICA) for solving the fuzzy random portfolio selection problem, where the asset returns are represented by fuzzy random variables. Portfolio optimization is an important research field in modern finance. Using the necessity-based model, the problem with fuzzy random variables is reformulated as a linear program, and ICA is designed to find the optimum solution. To show the efficiency of the proposed method, a numerical example illustrates the whole procedure of applying ICA to the fuzzy random portfolio selection problem.

  20. An accelerated algorithm for discrete stochastic simulation of reaction–diffusion systems using gradient-based diffusion and tau-leaping

    PubMed Central

    Koh, Wonryull; Blackwell, Kim T.

    2011-01-01

    Stochastic simulation of reaction–diffusion systems enables the investigation of stochastic events arising from the small numbers and heterogeneous distribution of molecular species in biological cells. Stochastic variations in intracellular microdomains and in diffusional gradients play a significant part in the spatiotemporal activity and behavior of cells. Although an exact stochastic simulation that simulates every individual reaction and diffusion event gives a most accurate trajectory of the system's state over time, it can be too slow for many practical applications. We present an accelerated algorithm for discrete stochastic simulation of reaction–diffusion systems designed to improve the speed of simulation by reducing the number of time-steps required to complete a simulation run. This method is unique in that it employs two strategies that have not been incorporated in existing spatial stochastic simulation algorithms. First, diffusive transfers between neighboring subvolumes are based on concentration gradients. This treatment necessitates sampling of only the net or observed diffusion events from higher to lower concentration gradients rather than sampling all diffusion events regardless of local concentration gradients. Second, we extend the non-negative Poisson tau-leaping method that was originally developed for speeding up nonspatial or homogeneous stochastic simulation algorithms. This method calculates each leap time in a unified step for both reaction and diffusion processes while satisfying the leap condition that the propensities do not change appreciably during the leap and ensuring that leaping does not cause molecular populations to become negative. Numerical results are presented that illustrate the improvement in simulation speed achieved by incorporating these two new strategies. PMID:21513371
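
    The tau-leaping ingredient can be illustrated on the simplest possible system, a decay reaction with a fixed leap size. This omits the diffusion channels, the gradient-based diffusion sampling and the adaptive leap selection of the method above, and all rates are invented:

```python
import math
import random

def poisson(lam, rng):
    """Sample a Poisson variate by Knuth's method (adequate for the
    small leap means used in this example)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def tau_leap_decay(x0, rate, tau, t_end, seed=12):
    """Tau-leaping for the decay reaction A -> 0: instead of firing
    events one at a time as in the exact SSA, fire a Poisson number of
    events per fixed leap tau, clamping so the population cannot go
    negative (the non-negativity concern raised in the abstract)."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end and x > 0:
        fired = poisson(rate * x * tau, rng)  # events during this leap
        x = max(x - fired, 0)
        t += tau
    return x

x_final = tau_leap_decay(1000, rate=1.0, tau=0.01, t_end=1.0)
print(x_final)   # near 1000 * exp(-1), i.e. roughly 368
```

    The speedup comes from taking one Poisson draw per leap rather than one exponential draw per reaction event; the leap condition requires that the propensity `rate * x` changes little over each tau.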

  1. Fault Detection of Aircraft System with Random Forest Algorithm and Similarity Measure

    PubMed Central

    Park, Wookje; Jung, Sikhang

    2014-01-01

A fault detection algorithm was developed based on a similarity measure and the random forest algorithm, and applied to an unmanned aerial vehicle (UAV) that we prepared. The similarity measure was designed with the help of distance information, and its usefulness was verified by proof. Fault decisions were carried out by calculating a weighted similarity measure, using twelve available coefficients from the healthy-status and faulty-status data groups. The weighting of the similarity measure was obtained through the random forest algorithm (RFA), which provides data priorities. In order to obtain a fast decision response, a limited number of coefficients was also considered. The relation between detection rate and the amount of feature data was analyzed and illustrated, and a useful data amount was obtained by repeated trials of the similarity calculation. PMID:25057508

  2. Algorithms for Performance, Dependability, and Performability Evaluation using Stochastic Activity Networks

    NASA Technical Reports Server (NTRS)

    Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.

    1997-01-01

Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling with Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state-space explosion, where the number of states increases exponentially with the size of the high-level model. The corresponding Markov model thus becomes prohibitively large, and its solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that tolerate such large state spaces and do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on-the-fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel-based solvers that avoids the need to generate rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
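
    The on-the-fly idea, generating matrix rows when needed instead of storing the matrix, can be sketched with a Gauss-Seidel sweep on a toy tridiagonal system. The matrix, right-hand side and function names are invented for the example; this is not the stochastic-activity-network solver itself:

```python
def row(i, n):
    """Generate row i of a diagonally dominant tridiagonal matrix on
    the fly, as (column, value) pairs -- the matrix is never stored,
    mirroring the on-the-fly approach described above."""
    entries = [(i, 4.0)]
    if i > 0:
        entries.append((i - 1, -1.0))
    if i < n - 1:
        entries.append((i + 1, -1.0))
    return entries

def gauss_seidel(n, b, sweeps=100):
    """Gauss-Seidel: sweep the rows, updating x[i] in place using the
    freshly generated row; convergence holds here because the matrix
    is strictly diagonally dominant."""
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            diag, acc = 0.0, 0.0
            for j, a_ij in row(i, n):
                if j == i:
                    diag = a_ij
                else:
                    acc += a_ij * x[j]
            x[i] = (b[i] - acc) / diag
    return x

n = 10
b = [2.0] * n
x = gauss_seidel(n, b)
# residual check: the solution should satisfy Ax = b to high accuracy
residual = max(abs(sum(a * x[j] for j, a in row(i, n)) - b[i]) for i in range(n))
print(residual)
```

    For a transition-rate matrix, `row(i, n)` would instead enumerate the transitions out of state i directly from the high-level model, trading recomputation time for the memory of an explicitly stored matrix.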

  3. A random fatigue of mechanize titanium abutment studied with Markoff chain and stochastic finite element formulation.

    PubMed

    Prados-Privado, María; Prados-Frutos, Juan Carlos; Calvo-Guirado, José Luis; Bea, José Antonio

    2016-11-01

To measure fatigue in dental implants and their components, it is necessary to use a probabilistic analysis, since the randomness of the output depends on a number of parameters (such as the fatigue properties of titanium and the applied loads, unknown beforehand as they depend on mastication habits). The purpose is to apply a probabilistic approximation in order to predict fatigue life, taking into account the randomness of the variables. More accurate results have been obtained by taking into account different load blocks with different amplitudes, as happens with bite forces during the day, allowing us to determine the effect that different types of bruxism have on the piece analysed.

  4. Solving the chemical master equation by a fast adaptive finite state projection based on the stochastic simulation algorithm.

    PubMed

    Sidje, R B; Vo, H D

    2015-11-01

The mathematical framework of the chemical master equation (CME) uses a Markov chain to model the biochemical reactions that are taking place within a biological cell. Computing the transient probability distribution of this Markov chain allows us to track the composition of molecules inside the cell over time, with important practical applications in a number of areas such as molecular biology or medicine. However, the CME is typically difficult to solve, since the state space involved can be very large or even countably infinite. We present a novel way of using the stochastic simulation algorithm (SSA) to reduce the size of the finite state projection (FSP) method. Numerical experiments that demonstrate the effectiveness of the reduction are included.

  5. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1983-01-01

An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm, and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner; a simple model was developed which explained the qualitative aspects of the errors.
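
    The standard construction for correlated normal pairs, combining two independent standard normals via ρ and √(1−ρ²), can be sketched as below. The original is a FORTRAN routine; the parameter values here are illustrative:

```python
import math
import random

def bivariate_normal(mu1, mu2, s1, s2, rho, n, seed=4):
    """Generate correlated normal pairs: draw independent standard
    normals (z1, z2) and set y = rho*z1 + sqrt(1-rho^2)*z2, which is
    exact in theory -- accuracy is limited only by the underlying
    uniform generator and floating point, as the abstract notes."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        x = mu1 + s1 * z1
        y = mu2 + s2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
        pairs.append((x, y))
    return pairs

pairs = bivariate_normal(0.0, 0.0, 1.0, 1.0, 0.8, 50000)
xs, ys = zip(*pairs)
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((a - mx) * (b - my) for a, b in pairs) / len(pairs)
sx = math.sqrt(sum((a - mx) ** 2 for a in xs) / len(xs))
sy = math.sqrt(sum((b - my) ** 2 for b in ys) / len(ys))
print(round(cov / (sx * sy), 3))   # sample correlation, close to 0.8
```

    The sample correlation converges to the requested ρ as n grows, which is the property the FORTRAN verification in the abstract checks.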

  6. Stochastic Simulation Techniques for Partition Function Approximation of Gibbs Random Field Images

    DTIC Science & Technology

    1991-06-01


  7. Eigenvalue density of linear stochastic dynamical systems: A random matrix approach

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Pastur, L.; Lytova, A.; Du Bois, J.

    2012-02-01

Eigenvalue problems play an important role in the dynamic analysis of engineering systems modeled using the theory of linear structural mechanics. When uncertainties are considered, the eigenvalue problem becomes a random eigenvalue problem. In this paper the density of the eigenvalues of a discretized continuous system with uncertainty is discussed by considering a model where the system matrices are Wishart random matrices. An analytical expression involving the Stieltjes transform is derived for the density of the eigenvalues when the dimension of the corresponding random matrix becomes asymptotically large. The mean matrices and the dispersion parameters associated with the mass and stiffness matrices are necessary to obtain the density of the eigenvalues in the framework of the proposed approach. The applicability of a simple eigenvalue density function, known as the Marčenko-Pastur (MP) density, is investigated. The analytical results are demonstrated by numerical examples involving a plate and the tail boom of a helicopter with uncertain properties. The new results are validated using an experiment on a vibrating plate with randomly attached spring-mass oscillators, where 100 nominally identical samples are physically created and individually tested within a laboratory framework.

  8. Multisite updating Markov chain Monte Carlo algorithm for morphologically constrained Gibbs random fields

    NASA Astrophysics Data System (ADS)

    Sivakumar, Krishnamoorthy; Goutsias, John I.

    1998-09-01

We study the problem of simulating a class of Gibbs random field models, called morphologically constrained Gibbs random fields, using Markov chain Monte Carlo sampling techniques. Traditional single-site updating Markov chain Monte Carlo sampling algorithms, like the Metropolis algorithm, tend to converge extremely slowly when used to simulate these models, particularly at low temperatures and for constraints involving large geometrical shapes. Moreover, morphologically constrained Gibbs random fields are not, in general, Markov; hence, a Markov chain Monte Carlo sampling algorithm based on the Gibbs sampler is not possible. We propose a variant of the Metropolis algorithm that, at each iteration, allows multi-site updating and converges substantially faster than the traditional single-site updating algorithm. The set of sites updated at a particular iteration is specified in terms of a shape parameter and a size parameter. Computation of the acceptance probability involves a 'test ratio', which requires computing the ratio of the probabilities of the current and new realizations. Because of the special structure of our energy function, this computation can be done by means of a simple, local iterative procedure, so the lack of Markovianity does not impose any additional computational burden for model simulation. The proposed algorithm has been used to simulate a number of image texture models, both synthetic and natural.
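
    A generic multi-site Metropolis move can be sketched on a 1D Ising chain: flip a block of adjacent spins and accept with the usual test ratio. This is not the morphologically constrained model of the abstract; the energy is recomputed globally here for clarity, whereas the abstract's point is that the test ratio can be computed by a local procedure:

```python
import math
import random

def energy(s):
    """Ferromagnetic 1D Ising energy with periodic boundary, J = 1."""
    return -sum(s[i] * s[(i + 1) % len(s)] for i in range(len(s)))

def metropolis(n=64, block=4, temp=0.5, steps=20000, seed=9):
    """Metropolis with a multi-site move: flip `block` adjacent spins
    and accept with probability min(1, exp(-dE/T)). Only the two
    boundary bonds of the block actually change the energy, which is
    what makes a local test-ratio computation possible."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    e = energy(s)
    for _ in range(steps):
        i = rng.randrange(n)
        for k in range(block):
            s[(i + k) % n] *= -1          # propose: flip the block
        de = energy(s) - e                # (a local update would suffice)
        if de <= 0 or rng.random() < math.exp(-de / temp):
            e += de                       # accept the block flip
        else:
            for k in range(block):        # reject: undo the flip
                s[(i + k) % n] *= -1
    return e, -n                          # final energy vs ground-state energy

final_e, ground_e = metropolis()
print(final_e, ground_e)
```

    At low temperature the block moves let whole domain walls hop and annihilate, which is the mechanism behind the faster convergence claimed for multi-site updating.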

  9. Stochastic analysis of the lateral-torsional buckling resistance of steel beams with random imperfections

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk

    2013-10-01

The paper deals with the statistical analysis of the resistance of a hot-rolled steel IPE beam under major-axis bending. The lateral-torsional buckling stability problem of the imperfect beam is described, and the influence of bending moments and warping torsion on the ultimate limit state of an IPE beam with random imperfections is analyzed. The resistance is calculated by means of a closed-form solution. The initial geometrical imperfections of the beam are assumed to have the shape of the first buckling eigenmode. Changes in the mean values and variances of the resistance and of the internal bending moments were studied as functions of the beam's non-dimensional slenderness, and the values of non-dimensional slenderness for which the statistical characteristics of the internal moments associated with random resistance are maximal were determined.

  10. Tissue segmentation of computed tomography images using a Random Forest algorithm: a feasibility study

    NASA Astrophysics Data System (ADS)

    Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.

    2016-09-01

There is a need for robust, fully automated whole body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine-learning as a means to develop a fully automated tissue segmentation tool developed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast enhanced fluid, and bone tissue using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise reduction and edge preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features including features derived from maximum, mean, variance Gaussian and Kuwahara filters. Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm segmented images from 21

  11. Extinction transition in stochastic population dynamics in a random, convective environment

    NASA Astrophysics Data System (ADS)

    Juhász, Róbert

    2013-10-01

Motivated by modeling the dynamics of a population living in a flowing medium where the environmental factors are random in space, we have studied an asymmetric variant of the one-dimensional contact process, where the quenched random reproduction rates are systematically greater in one direction than in the opposite one. The spatial disorder turns out to be a relevant perturbation but, according to results of Monte Carlo simulations, the behavior of the model at the extinction transition is different from the (infinite-randomness) critical behavior of the disordered symmetric contact process. Depending on the strength a of the asymmetry, the critical population drifts either with a finite velocity or with an asymptotically vanishing velocity as x(t) ∼ t^μ(a), where μ(a) < 1. Dynamical quantities are non-self-averaging at the extinction transition; the survival probability, for instance, shows multiscaling, i.e. it is characterized by a broad spectrum of effective exponents. For a sufficiently weak asymmetry, a Griffiths phase appears below the extinction transition, where the survival probability decays as a non-universal power of the time while, above the transition, another extended phase emerges, where the front of the population advances anomalously with a diffusion exponent continuously varying with the control parameter.

  12. Vaccine enhanced extinction in stochastic epidemic models

    NASA Astrophysics Data System (ADS)

    Billings, Lora; Mier-Y-Teran, Luis; Schwartz, Ira

    2012-02-01

    We address the problem of developing new and improved stochastic control methods that enhance extinction in disease models. In finite populations, extinction occurs when fluctuations owing to random transitions act as an effective force that drives one or more components or species to vanish. Using large deviation theory, we identify the location of the optimal path to extinction in epidemic models with stochastic vaccine controls. These models not only capture internal noise from random transitions, but also external fluctuations, such as stochastic vaccination scheduling. We quantify the effectiveness of the randomly applied vaccine over all possible distributions by using the location of the optimal path, and we identify the most efficient control algorithms. We also discuss how mean extinction times scale with epidemiological and social parameters.

  13. Bad News Comes in Threes: Stochastic Structure in Random Events (Invited)

    NASA Astrophysics Data System (ADS)

    Newman, W. I.; Turcotte, D. L.; Malamud, B. D.

    2013-12-01

    Plots of random numbers have been known for nearly a century to show repetitive peak-to-peak sequences with an average length of 3. Geophysical examples include events such as earthquakes, geyser eruptions, and magnetic substorms. We consider a classic model in statistical physics, the Langevin equation x[n+1] = α*x[n] + η[n], where x[n] is the nth value of a measured quantity and η[n] is a random number, commonly a Gaussian white noise. Here, α is a parameter that ranges from 0, corresponding to independent random data, to 1, corresponding to Brownian motion which preserves memory of past steps. We show that, for α = 0, the mean peak-to-peak sequence length is 3 while, for α = 1, the mean sequence length is 4. We obtain the physical and mathematical properties of this model, including the distribution of peak-to-peak sequence lengths that can be expected. We compare the theory with observations of earthquake magnitudes emerging from large events, observations of the auroral electrojet index as a measure of global electrojet activity, and time intervals observed between successive eruptions of Old Faithful Geyser in Yellowstone National Park. We demonstrate that the largest earthquake events as described by their magnitudes are consistent with our theory for α = 0, thereby confronting the aphorism (and our analytic theory) that "bad news comes in threes." Electrojet activity, on the other hand, demonstrates some memory effects, consistent with the intuitive picture of the magnetosphere presenting a capacitor-plate like system that preserves memory. Old Faithful Geyser, finally, shows strong antipersistence effects between successive events, i.e. long-time intervals are followed by short ones, and vice versa. As an additional application, we apply our theory to the observed 3-4 year mammalian population cycles.
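
    The α = 0 result, a mean peak-to-peak spacing of 3 for independent random data, is easy to check numerically. The simulation below of the stated Langevin recursion is an illustration with invented sample sizes:

```python
import random

def mean_peak_spacing(alpha, n=200000, seed=6):
    """Simulate x[n+1] = alpha*x[n] + eta[n] with Gaussian white noise
    eta, then measure the average spacing between successive local
    maxima (peaks) of the series. For alpha = 0 (iid noise) the
    classical result is a mean spacing of 3; memory (alpha -> 1)
    stretches the spacing toward 4."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n):
        x.append(alpha * x[-1] + rng.gauss(0.0, 1.0))
    peaks = [i for i in range(1, n) if x[i - 1] < x[i] > x[i + 1]]
    spacings = [b - a for a, b in zip(peaks, peaks[1:])]
    return sum(spacings) / len(spacings)

print(round(mean_peak_spacing(0.0), 2))   # close to 3
print(round(mean_peak_spacing(0.95), 2))  # larger, approaching 4
```

    The α = 0 value follows from symmetry: any interior point of an iid sequence is a local maximum with probability 1/3, so peaks occur on average every third point.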

  14. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    SciTech Connect

    Atanassov, E.; Dimitrov, D.; Gurov, T.

    2015-10-28

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators, such as GPUs and the Intel Xeon Phi, has become mainstream, and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics, the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.
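As an illustration of why low-discrepancy sequences matter for this workload, the sketch below prices a European call by quasi-Monte Carlo over a base-2 Halton (van der Corput) sequence; the Black-Scholes parameters are arbitrary and this is not the authors' actual kernel:

```python
import math
from statistics import NormalDist

def halton(i, base):
    """i-th element (i >= 1) of the van der Corput/Halton sequence."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_call_price(s0, k, r, sigma, t, n=50_000):
    """Quasi-Monte Carlo European call price: push base-2 Halton points
    through the inverse normal CDF into GBM terminal stock prices."""
    inv = NormalDist().inv_cdf
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    payoff = 0.0
    for i in range(1, n + 1):
        z = inv(halton(i, 2))
        payoff += max(s0 * math.exp(drift + vol * z) - k, 0.0)
    return math.exp(-r * t) * payoff / n

price = qmc_call_price(100.0, 100.0, 0.05, 0.2, 1.0)
```

For these parameters the closed-form Black-Scholes value is about 10.45, and the Halton estimate lands close to it with far fewer points than a pseudorandom estimator would need for the same accuracy.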

  15. Stochastic models and numerical algorithms for a class of regulatory gene networks.

    PubMed

    Fournier, Thomas; Gabriel, Jean-Pierre; Pasquier, Jerôme; Mazza, Christian; Galbete, José; Mermod, Nicolas

    2009-08-01

    Regulatory gene networks contain generic modules, like those involving feedback loops, which are essential for the regulation of many biological functions (Guido et al. in Nature 439:856-860, 2006). We consider a class of self-regulated genes which are the building blocks of many regulatory gene networks, and study the steady-state distribution of the associated Gillespie algorithm by providing efficient numerical algorithms. We also study a regulatory gene network of interest in gene therapy, using mean-field models with time delays. Convergence of the related time-nonhomogeneous Markov chain is established for a class of linear catalytic networks with feedback loops.
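The Gillespie algorithm the authors analyze draws an exponential waiting time from the total propensity and picks the next reaction in proportion to its propensity. A toy sketch for a single negatively self-regulated gene (the Hill-type production rate and all constants are invented for illustration, not taken from the paper):

```python
import random

def gillespie(t_max, seed):
    """One trajectory of a self-repressed gene: protein count n is
    produced at rate k_on*K/(K + n) and degraded at rate k_deg*n."""
    rng = random.Random(seed)
    k_on, k_deg, K = 10.0, 1.0, 5.0          # assumed rate constants
    t, n = 0.0, 0
    while True:
        a_prod = k_on * K / (K + n)          # repressive Hill-type production
        a_deg = k_deg * n
        a_total = a_prod + a_deg
        t += rng.expovariate(a_total)        # exponential waiting time
        if t > t_max:
            return n
        if rng.random() * a_total < a_prod:  # pick reaction by propensity
            n += 1
        else:
            n -= 1

# deterministic balance: 10*5/(5 + n) = n gives n = 5 at steady state
samples = [gillespie(50.0, s) for s in range(300)]
```

Averaging many trajectory endpoints recovers a steady-state copy number near the deterministic balance point, the kind of stationary behavior the paper characterizes exactly.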

  16. A stochastic study of electron transfer kinetics in nano-particulate photocatalysis: a comparison of the quasi-equilibrium approximation with a random walking model.

    PubMed

    Liu, Baoshun; Zhao, Xiujian; Yu, Jiaguo; Fujishima, Akira; Nakata, Kazuya

    2016-11-23

    In the photocatalysis of porous nano-crystalline materials, the transfer of electrons to O2 plays an important role; it comprises electron transport to the photocatalytic active centers and the subsequent interfacial transfer to O2. The slower of these two processes determines the overall rate of electron transfer in the photocatalytic reaction. Taking the photocatalysis of porous nano-crystalline TiO2 as an example, although some experimental results have shown that the electron kinetics are limited by the interfacial transfer, we still lack a deep theoretical understanding of the microscopic mechanism. In the present research, a stochastic quasi-equilibrium (QE) model and a stochastic random walking (RW) model were established to describe electron transport and electron interfacial transfer, taking into account electron multi-trapping transport and interfacial transfer from the photocatalytic active centers to O2. By carefully investigating the effect of the electron Fermi level (EF) and the number of photocatalytic centers on electron transport, we showed that the time taken for an electron to reach a photocatalytic center predicted by the stochastic RW model was much lower than that predicted by the stochastic QE model, indicating that electrons cannot reach a QE state during their transport to photocatalytic centers. The stochastic QE model predicted that the electron kinetics of a real photocatalysis on porous nano-crystalline TiO2 should be limited by electron transport, whereas the stochastic RW model showed that they can be limited by the interfacial transfer. Our simulation results show that the stochastic RW model is more in line with the electron kinetics observed in experiments; we therefore conclude that the photoinduced electrons cannot reach a QE state before transferring to O2.

  17. Extension and field application of an integrated DNAPL source identification algorithm that utilizes stochastic modeling and a Kalman filter

    NASA Astrophysics Data System (ADS)

    Dokou, Zoi; Pinder, George F.

    2011-02-01

    The design of an effective groundwater remediation system involves the determination of the source zone characteristics and subsequent source zone removal. The work presented in this paper focuses on the three-dimensional extension and field application of a previously described source zone identification and delineation algorithm. The three-dimensional search algorithm defines how to achieve an acceptable level of accuracy regarding the strength, geographic location and depth of a dense non-aqueous phase liquid (DNAPL) source while using the least possible number of water quality samples. Target locations and depths of potential sources are identified and given initial importance measures or weights using a technique that exploits expert knowledge. The weights reflect the expert's confidence that the particular source location is the correct one and they are updated as the investigation proceeds. The overall strategy uses stochastic groundwater flow and transport modeling assuming that hydraulic conductivity is known with uncertainty (Monte Carlo approach). Optimal water quality samples are selected according to the degree to which they contribute to the total concentration uncertainty reduction across all model layers and the proximity of the samples to the potential source locations. After a sample is taken, the contaminant concentration plume is updated using a Kalman filter. The set of optimal source strengths is determined using linear programming by minimizing the sum of the absolute differences between modeled and measured concentration values at sampling locations. The Monte Carlo generated suite of plumes emanating from each individual source is calculated and compared with the updated plume. The scores obtained from this comparison serve to update the weights initially assigned by the expert, and the above steps are repeated until the optimal source characteristics are determined. The algorithm's effectiveness is demonstrated by performing a
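The Kalman-filter step in this strategy conditions the modeled plume on each new water quality sample. In the scalar case the update reduces to two lines; this is a generic textbook sketch, not the paper's field implementation:

```python
def kalman_update(mean, var, z, obs_var):
    """Fuse a prior estimate (mean, var) with a measurement z whose
    error variance is obs_var; returns the posterior mean and variance."""
    gain = var / (var + obs_var)          # Kalman gain
    post_mean = mean + gain * (z - mean)  # pull the estimate toward z
    post_var = (1.0 - gain) * var         # uncertainty always shrinks
    return post_mean, post_var

post_mean, post_var = kalman_update(10.0, 4.0, 12.0, 1.0)
```

Updating a prior concentration of 10 with variance 4 using a measurement of 12 with variance 1 gives gain 0.8, posterior mean 11.6 and posterior variance 0.8, illustrating how each optimally chosen sample reduces plume uncertainty.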

  18. Muscle forces during running predicted by gradient-based and random search static optimisation algorithms.

    PubMed

    Miller, Ross H; Gillette, Jason C; Derrick, Timothy R; Caldwell, Graham E

    2009-04-01

    Muscle forces during locomotion are often predicted using static optimisation with sequential quadratic programming (SQP). SQP has been criticised for over-estimating force magnitudes and under-estimating co-contraction. These problems may be related to SQP's difficulty in locating the global minimum of complex optimisation problems. Algorithms designed to locate the global minimum may be useful in addressing these problems. Muscle forces for 18 flexors and extensors of the lower extremity were predicted for 10 subjects during the stance phase of running. Static optimisation using SQP and two random search (RS) algorithms (a genetic algorithm and simulated annealing) estimated muscle forces by minimising the sum of cubed muscle stresses. The RS algorithms predicted smaller peak forces (42% smaller on average) and smaller muscle impulses (46% smaller on average) than SQP, and located solutions with smaller cost function scores. Results suggest that RS may be a more effective tool than SQP for minimising the sum of cubed muscle stresses in static optimisation.
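A toy version of the static-optimisation problem shows how a random-search method such as simulated annealing applies: two hypothetical muscles share a joint moment, the second force is eliminated through the moment constraint, and the sum of cubed stresses is minimised (all numbers invented; the paper's problem has 18 muscles and measured moments):

```python
import math
import random

def cost(f1, moment=100.0, arms=(1.0, 1.0), areas=(10.0, 10.0)):
    """Sum of cubed muscle stresses for two muscles; the second force is
    chosen so the joint-moment constraint holds exactly."""
    f2 = (moment - arms[0] * f1) / arms[1]
    if f2 < 0.0:
        return float("inf")
    return (f1 / areas[0]) ** 3 + (f2 / areas[1]) ** 3

def anneal(seed=0, steps=20_000):
    """Simulated annealing over the single free force f1 in [0, 100]."""
    rng = random.Random(seed)
    f1 = rng.uniform(0.0, 100.0)
    best, best_c = f1, cost(f1)
    temp = 10.0
    for _ in range(steps):
        cand = min(100.0, max(0.0, f1 + rng.gauss(0.0, 2.0)))
        delta = cost(cand) - cost(f1)
        if delta < 0.0 or rng.random() < math.exp(-delta / temp):
            f1 = cand                     # accept improvements, and worse
        if cost(f1) < best_c:             # moves with Boltzmann probability
            best, best_c = f1, cost(f1)
        temp *= 0.9995                    # geometric cooling
    return best, best_c

x_best, c_best = anneal()
```

By symmetry the optimum shares the load equally (both forces 50, cost 250), and the annealer recovers that solution; with more muscles the same loop applies with a vector state.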

  19. Biased Random-Key Genetic Algorithms for the Winner Determination Problem in Combinatorial Auctions.

    PubMed

    de Andrade, Carlos Eduardo; Toso, Rodrigo Franco; Resende, Mauricio G C; Miyazawa, Flávio Keidi

    2015-01-01

    In this paper we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit using the first-price model. This winner determination problem assumes that a single bidding round is held to determine both the winners and prices to be paid. We introduce six variants of biased random-key genetic algorithms for this problem. Three of them use a novel initialization technique that makes use of solutions of intermediate linear programming relaxations of an exact mixed integer linear programming model as initial chromosomes of the population. An experimental evaluation compares the effectiveness of the proposed algorithms with the standard mixed integer linear programming formulation, a specialized exact algorithm, and the best-performing heuristics proposed for this problem. The proposed algorithms are competitive and offer strong results, mainly for large-scale auctions.
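The essence of a biased random-key GA is that chromosomes are vectors of keys in [0,1) and a problem-specific decoder turns them into solutions. A compact hypothetical sketch for winner determination, with a greedy decoder and illustrative parameter choices (not any of the paper's six variants):

```python
import random

def decode(keys, bids):
    """Visit bids in the order given by their random keys and accept a
    bid if its items do not clash with already-sold items."""
    order = sorted(range(len(bids)), key=lambda j: keys[j])
    sold, profit = set(), 0.0
    for j in order:
        items, price = bids[j]
        if sold.isdisjoint(items):
            sold |= set(items)
            profit += price
    return profit

def brkga(bids, pop=40, elite=10, mutants=5, gens=60, seed=0):
    """Minimal BRKGA: elites survive unchanged; each offspring key comes
    from an elite parent with probability 0.7, else a non-elite parent."""
    rng = random.Random(seed)
    n = len(bids)
    P = [[rng.random() for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda c: -decode(c, bids))
        nxt = P[:elite]
        nxt += [[rng.random() for _ in range(n)] for _ in range(mutants)]
        while len(nxt) < pop:
            e, o = rng.choice(P[:elite]), rng.choice(P[elite:])
            nxt.append([e[k] if rng.random() < 0.7 else o[k] for k in range(n)])
        P = nxt
    return max(decode(c, bids) for c in P)
```

On a four-bid toy auction the optimal selection is found within a few generations; the decoder, not the GA machinery, is what would change for the paper's LP-seeded variants.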

  20. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    PubMed

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique used in the preprocessing stages of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they search exhaustively for the thresholds that optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. Numerical experiments, reporting Otsu's objective values and standard deviations, show the new algorithm to be robust and effective when benchmarked against other state-of-the-art evolutionary algorithms.
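Otsu's criterion is cheap for a single threshold; the cost the paper targets arises when several thresholds must be searched jointly. For reference, the standard single-threshold Otsu sweep over a 256-bin histogram (this is the classical formulation, not the proposed FPA search):

```python
def otsu_threshold(hist):
    """Return the threshold t maximizing the between-class variance
    w0*w1*(m0 - m1)^2, with class 0 the bins [0..t]."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue                     # class 0 still empty
        w1 = total - w0
        if w1 == 0:
            break                        # class 1 empty from here on
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

With k simultaneous thresholds the sweep becomes a k-dimensional search, which is exactly where population-based optimizers such as the flower pollination algorithm come in.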

  1. Extending OPNET Modeler with External Pseudo Random Number Generators and Statistical Evaluation by the Limited Relative Error Algorithm

    NASA Astrophysics Data System (ADS)

    Becker, Markus; Weerawardane, Thushara Lanka; Li, Xi; Görg, Carmelita

    Pseudo Random Number Generators (PRNGs) are the basis for stochastic simulations. The use of good generators is essential for valid simulation results. OPNET Modeler, a well-known tool for the simulation of communication networks, provides a Pseudo Random Number Generator. The extension of OPNET Modeler with external generators and additional statistical evaluation methods, performed for this paper, increases the flexibility and options available in simulation studies.

  2. Community Detection Algorithm Combining Stochastic Block Model and Attribute Data Clustering

    NASA Astrophysics Data System (ADS)

    Kataoka, Shun; Kobayashi, Takuto; Yasuda, Muneki; Tanaka, Kazuyuki

    2016-11-01

    We propose a new algorithm to detect the community structure in a network that utilizes both the network structure and vertex attribute data. Suppose we have the network structure together with the vertex attribute data, that is, the information assigned to each vertex associated with the community to which it belongs. The problem addressed in this paper is the detection of the community structure from both the network structure and the vertex attribute data. Our method takes a Bayesian approach that models the posterior probability distribution of the community labels. The community structure is detected using belief propagation and an EM algorithm. We numerically verified the performance of our method using computer-generated networks and real-world networks.

  3. Scintillation index of a stochastic electromagnetic beam propagating in random media

    NASA Astrophysics Data System (ADS)

    Korotkova, Olga

    2008-05-01

    We study the behavior of the scintillation index (the normalized variance of fluctuating intensity) of a wide-sense statistically stationary, quasi-monochromatic, electromagnetic beam propagating in a homogeneous isotropic medium. In particular, we show that, when the beam is treated electromagnetically, apart from the correlation properties of the medium in which the beam travels, not only its degree of coherence but also its degree of polarization in the source plane can affect the values of the scintillation index along the propagation path. We find that, generally, beams generated by unpolarized sources have a reduced level of scintillation compared with beams generated by fully polarized sources, provided they have the same intensity distribution and the same state of coherence in the source plane. An example illustrating the theory examines the scintillation index of an electromagnetic Gaussian Schell-model beam propagating in the turbulent atmosphere. These results may find applications in optical communications through random media and in remote sensing.

  4. A surrogate-primary replacement algorithm for response-adaptive randomization in stroke clinical trials.

    PubMed

    Nowacki, Amy S; Zhao, Wenle; Palesch, Yuko Y

    2015-01-12

    Response-adaptive randomization (RAR) offers clinical investigators benefits by modifying the treatment allocation probabilities to optimize the ethical, operational, or statistical performance of the trial. Delayed primary outcomes and their effect on RAR have been studied in the literature; however, the incorporation of surrogate outcomes has not been fully addressed. We explore the benefits and limitations of surrogate outcome utilization in RAR in the context of acute stroke clinical trials. We propose a novel surrogate-primary (S-P) replacement algorithm where a patient's surrogate outcome is used in the RAR algorithm only until their primary outcome becomes available to replace it. Computer simulations investigate the effect of both the delay in obtaining the primary outcome and the underlying discrepancies between the surrogate and primary outcome distributions on complete randomization, standard RAR, and the S-P replacement algorithm. Results show that when the primary outcome is delayed, the S-P replacement algorithm reduces the variability of the treatment allocation probabilities and achieves stabilization sooner. Additionally, the benefit of the S-P replacement algorithm proved robust in that it preserved power and reduced the expected number of failures across a variety of scenarios.

  5. Nonconvergence of the Wang-Landau algorithms with multiple random walkers.

    PubMed

    Belardinelli, R E; Pereyra, V D

    2016-05-01

    This paper discusses some convergence properties of entropic sampling Monte Carlo methods with multiple random walkers, particularly the Wang-Landau (WL) and 1/t algorithms. The classical algorithms are modified by the use of m independent random walkers in the energy landscape to calculate the density of states (DOS). The Ising model is used to show the convergence properties in the calculation of the DOS, as well as the critical temperature, while the calculation of the number π by multidimensional integration is used in the continuum approximation. In each case, the error is obtained separately for each walker at a fixed time t; then, the average over m walkers is performed. It is observed that the error goes as 1/√m. However, if the number of walkers increases above a certain critical value m > m_x, the error reaches a constant value (i.e., it saturates). This occurs for both algorithms; however, it is shown that for a given system the 1/t algorithm is more efficient and accurate than the comparable version of the WL algorithm. It follows that it makes no sense to increase the number of walkers above the critical value m_x, since doing so does not reduce the error in the calculation. Therefore, increasing the number of walkers does not guarantee convergence.
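The mechanics of a single Wang-Landau walker are easy to demonstrate on a system whose density of states is known exactly. This sketch estimates g(k) for the number of up spins among n non-interacting spins, where g(k) = C(n, k); it is a toy stand-in for the Ising and π calculations discussed in the paper, with modification-factor and flatness settings chosen arbitrarily:

```python
import math
import random

def wang_landau(n=12, log_f_final=1e-6, flatness=0.8, seed=0):
    """Single-walker Wang-Landau for g(k), k = number of up spins among
    n free spins; the exact answer is the binomial coefficient C(n, k)."""
    rng = random.Random(seed)
    spins = [rng.randrange(2) for _ in range(n)]
    k = sum(spins)
    log_g = [0.0] * (n + 1)
    hist = [0] * (n + 1)
    log_f, steps = 1.0, 0
    while log_f > log_f_final:
        i = rng.randrange(n)                 # propose one spin flip
        k_new = k + 1 - 2 * spins[i]
        # accept with probability min(1, g[k] / g[k_new])
        if rng.random() < math.exp(min(0.0, log_g[k] - log_g[k_new])):
            spins[i] = 1 - spins[i]
            k = k_new
        log_g[k] += log_f
        hist[k] += 1
        steps += 1
        if steps % 1000 == 0 and min(hist) > flatness * sum(hist) / len(hist):
            hist = [0] * (n + 1)             # histogram flat: halve ln f
            log_f *= 0.5
    return log_g

log_g = wang_landau()
```

Since only ratios of g are meaningful, comparing log_g[k] - log_g[0] against log C(n, k) checks the estimate; the multiple-walker question studied in the paper amounts to averaging m such runs.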

  6. A Stochastic Optimization Algorithm using Intelligent Agents: With Constraints and Rate of Convergence

    DTIC Science & Technology

    2010-11-01

    …application type of analysis; only the methodology is presented here, which includes an algorithm for optimization and a corresponding conservative rate of convergence based on no learning. The application part will be presented in the near future once data are available. It is expected that the …

  7. Extraction of Cole parameters from the electrical bioimpedance spectrum using stochastic optimization algorithms.

    PubMed

    Gholami-Boroujeny, Shiva; Bolic, Miodrag

    2016-04-01

    Fitting measured bioimpedance spectroscopy (BIS) data to the Cole model and then extracting the Cole parameters is common practice in BIS applications. The extracted Cole parameters can then be analysed as descriptors of tissue electrical properties. For a better evaluation of the physiological or pathological properties of biological tissue, accurate extraction of the Cole parameters is of great importance. This paper proposes an improved Cole parameter extraction based on the bacterial foraging optimization (BFO) algorithm. We employed simulated datasets to test the performance of the BFO fitting method regarding parameter extraction accuracy and noise sensitivity, and we compared the results with those of a least squares (LS) fitting method. The BFO method showed better robustness to noise and higher accuracy in terms of the extracted parameters. In addition, we applied our method to experimental data where bioimpedance measurements were obtained from the forearm in three different arm positions. The goal of the experiment was to explore how robust the Cole parameters are in classifying arm position for different people and measurement times. The Cole parameters extracted by the LS and BFO methods were fed to different classifiers; two other evolutionary algorithms, GA and PSO, were also used for comparison. We showed that when the classifiers are fed with the feature sets extracted by the BFO fitting method, higher accuracy is obtained on both training and test data.
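The Cole model being fitted is Z(ω) = R∞ + (R0 − R∞)/(1 + (jωτ)^α). As a simple stand-in for BFO, the sketch below recovers τ and α from synthetic noise-free data by a stochastic search with multiplicative Gaussian perturbations (R0 and R∞ are assumed known; all values, step schedules, and frequencies are invented):

```python
import math
import random

def cole_z(w, r0, rinf, tau, alpha):
    """Cole impedance Z(w) = Rinf + (R0 - Rinf) / (1 + (j*w*tau)**alpha)."""
    return rinf + (r0 - rinf) / (1.0 + (1j * w * tau) ** alpha)

def fit_tau_alpha(freqs, data, r0=100.0, rinf=20.0, iters=20_000, seed=0):
    """Stochastic search: perturb (tau, alpha) multiplicatively, keep
    improvements, and shrink the step size over the run."""
    rng = random.Random(seed)
    p = [1e-2, 0.6]                                   # initial guess
    def cost(q):
        return sum(abs(cole_z(w, r0, rinf, q[0], q[1]) - z) ** 2
                   for w, z in zip(freqs, data))
    c = cost(p)
    for i in range(iters):
        s = 0.3 * (1.0 - i / iters) + 0.01            # decaying step size
        q = [v * math.exp(rng.gauss(0.0, s)) for v in p]
        q[1] = min(q[1], 1.0)                         # keep alpha <= 1
        cq = cost(q)
        if cq < c:
            p, c = q, cq
    return p, c

# synthetic spectrum from "true" parameters tau = 1e-3 s, alpha = 0.8
freqs = [10.0 ** (0.2 * k) for k in range(1, 26)]
data = [cole_z(w, 100.0, 20.0, 1e-3, 0.8) for w in freqs]
```

The actual BFO algorithm adds chemotaxis, swarming and reproduction steps on top of this basic perturb-and-select loop, which is what the paper credits for its noise robustness.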

  8. Rotorcraft Blade Mode Damping Identification from Random Responses Using a Recursive Maximum Likelihood Algorithm

    NASA Technical Reports Server (NTRS)

    Molusis, J. A.

    1982-01-01

    An on line technique is presented for the identification of rotor blade modal damping and frequency from rotorcraft random response test data. The identification technique is based upon a recursive maximum likelihood (RML) algorithm, which is demonstrated to have excellent convergence characteristics in the presence of random measurement noise and random excitation. The RML technique requires virtually no user interaction, provides accurate confidence bands on the parameter estimates, and can be used for continuous monitoring of modal damping during wind tunnel or flight testing. Results are presented from simulated random response data which quantify the identified parameter convergence behavior for various levels of random excitation. The data length required for acceptable parameter accuracy is shown to depend upon the amplitude of random response and the modal damping level. Random response amplitudes of 1.25 degrees to 0.05 degrees are investigated. The RML technique is applied to hingeless rotor test data. The inplane lag regressing mode is identified at different rotor speeds. The identification from the test data is compared with the simulation results and with other available estimates of frequency and damping.

  9. A partially reflecting random walk on spheres algorithm for electrical impedance tomography

    SciTech Connect

    Maire, Sylvain; Simon, Martin

    2015-12-15

    In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
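The classical (non-reflecting) walk on spheres is simple to state: from the current point, jump to a uniform point on the largest sphere contained in the domain, and stop once within ε of the boundary. A minimal sketch for the Laplace equation on the unit disk (constant coefficient, pure Dirichlet data, so none of the paper's replacement or reflection machinery is needed):

```python
import math
import random

def walk_on_spheres(x, y, g, eps=1e-4, n_walks=20_000, seed=0):
    """Estimate the harmonic function with boundary data g on the unit
    disk at the point (x, y) by averaging boundary values hit by walks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        px, py = x, y
        while True:
            d = 1.0 - math.hypot(px, py)    # distance to the unit circle
            if d < eps:
                break
            theta = rng.uniform(0.0, 2.0 * math.pi)
            px += d * math.cos(theta)       # jump to a uniform point on
            py += d * math.sin(theta)       # the inscribed circle
        r = math.hypot(px, py)
        total += g(px / r, py / r)          # project onto the boundary
    return total / n_walks

# boundary data x^2 - y^2 is itself harmonic, so the exact solution at
# (0.3, 0.2) is 0.09 - 0.04 = 0.05
est = walk_on_spheres(0.3, 0.2, lambda u, v: u * u - v * v)
```

Each walk is independent, which is the "embarrassingly parallel" property the abstract highlights; the paper's contribution is making this machinery work with mixed boundary and interface conditions.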

  10. Search Control Algorithm Based on Random Step Size Hill-Climbing Method for Adaptive PMD Compensation

    NASA Astrophysics Data System (ADS)

    Tanizawa, Ken; Hirose, Akira

    Adaptive polarization mode dispersion (PMD) compensation is required for the speed-up and advancement of present optical communications. The combination of a tunable PMD compensator and its adaptive control method achieves adaptive PMD compensation. In this paper, we report an effective search control algorithm for the feedback control of the PMD compensator. The algorithm is based on the hill-climbing method; however, the step size changes randomly to prevent the convergence from being trapped at a local maximum or on a flat region, unlike in the conventional hill-climbing method. The randomness is based on Gaussian probability density functions. We conducted transmission simulations at 160 Gb/s, and the results show that the proposed method provides more effective compensator control than the conventional hill-climbing method.
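The idea can be isolated in one dimension: a greedy hill climber whose step is redrawn from a Gaussian at every iteration occasionally takes a large step and so can leave a local maximum. A toy objective stands in for the compensator's feedback signal (this is not a PMD model; the landscape and σ are invented):

```python
import math
import random

def f(x):
    """Toy feedback signal: local optimum near x = -2, global near x = 3."""
    return math.exp(-(x - 3.0) ** 2) + 0.6 * math.exp(-(x + 2.0) ** 2)

def random_step_hill_climb(x0, sigma=1.5, iters=5000, seed=0):
    """Greedy hill climbing with a Gaussian step size drawn each
    iteration; rare large steps can hop over local maxima."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        cand = x + rng.gauss(0.0, sigma)
        fc = f(cand)
        if fc > fx:              # keep only improvements
            x, fx = cand, fc
    return x

x_star = random_step_hill_climb(-2.0)   # starts on the local optimum
```

A fixed small step size would leave the climber stuck near x = -2; the Gaussian step distribution eventually proposes a jump into the global basin and the climber finishes near x = 3.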

  11. Enhancing network robustness against targeted and random attacks using a memetic algorithm

    NASA Astrophysics Data System (ADS)

    Tang, Xianglong; Liu, Jing; Zhou, Mingxing

    2015-08-01

    In the past decades, there has been much interest in the resilience of infrastructures to targeted and random attacks. In recent work by Schneider C. M. et al., Proc. Natl. Acad. Sci. U.S.A., 108 (2011) 3838, the authors proposed an effective measure (namely R; here we label it R_t to denote the measure for targeted attacks) to evaluate network robustness against targeted node attacks. Using a greedy algorithm, they found that the optimal structure is an onion-like one. However, real systems are often under threat of both targeted attacks and random failures, so enhancing network robustness against both is of great importance. In this paper, we first design a random-robustness index (R_r). We find that onion-like networks destroy the original strong ability of BA networks to resist random attacks. Moreover, the structure of an R_r-optimized network is found to differ from that of an onion-like network. To design robust scale-free networks (RSF) which are resistant to both targeted and random attacks (TRA) without changing the degree distribution, a memetic algorithm (MA) is proposed, labeled MA-RSFTRA. In the experiments, both synthetic scale-free networks and real-world networks are used to validate the performance of MA-RSFTRA. The results show that MA-RSFTRA has a great ability to search for the most robust network structure that is resistant to both targeted and random attacks.
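Schneider et al.'s measure is R = (1/N) Σ_q s(q), where s(q) is the fraction of nodes in the largest connected component after removing the q highest-degree nodes one at a time, recomputing degrees after each removal. A pure-Python sketch of the targeted-attack version:

```python
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component, ignoring removed nodes."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        q, size = deque([s]), 0
        seen.add(s)
        while q:
            u = q.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, size)
    return best

def robustness_r(adj):
    """R = average fraction of nodes in the largest component while
    nodes are removed in order of current degree (highest first)."""
    n = len(adj)
    removed, total = set(), 0.0
    degree = {u: len(adj[u]) for u in adj}
    for _ in range(n):
        u = max((v for v in adj if v not in removed), key=lambda v: degree[v])
        removed.add(u)
        for v in adj[u]:
            if v not in removed:
                degree[v] -= 1
        total += largest_component(adj, removed) / n
    return total / n
```

On a 5-node star, removing the hub first shatters the graph immediately, giving R = 4/25 = 0.16; the memetic algorithm of the paper searches degree-preserving rewirings that raise R_t and R_r jointly.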

  12. Performance of the quantum adiabatic algorithm on random instances of two optimization problems on regular hypergraphs

    NASA Astrophysics Data System (ADS)

    Farhi, Edward; Gosset, David; Hen, Itay; Sandvik, A. W.; Shor, Peter; Young, A. P.; Zamponi, Francesco

    2012-11-01

    In this paper we study the performance of the quantum adiabatic algorithm on random instances of two combinatorial optimization problems, 3-regular 3-XORSAT and 3-regular max-cut. The cost functions associated with these two clause-based optimization problems are similar as they are both defined on 3-regular hypergraphs. For 3-regular 3-XORSAT the clauses contain three variables and for 3-regular max-cut the clauses contain two variables. The quantum adiabatic algorithms we study for these two problems use interpolating Hamiltonians which are amenable to sign-problem free quantum Monte Carlo and quantum cavity methods. Using these techniques we find that the quantum adiabatic algorithm fails to solve either of these problems efficiently, although for different reasons.

  13. Representation of high frequency Space Shuttle data by ARMA algorithms and random response spectra

    NASA Technical Reports Server (NTRS)

    Spanos, P. D.; Mushung, L. J.

    1990-01-01

    High frequency Space Shuttle lift-off data are treated by autoregressive (AR) and autoregressive-moving-average (ARMA) digital algorithms. These algorithms provide useful information on the spectral densities of the data. Further, they yield spectral models which lend themselves to incorporation into the concept of the random response spectrum. This concept yields a reasonably smooth power spectrum for the design of structural and mechanical systems when the available data bank is limited. Due to the non-stationarity of the lift-off event, the pertinent data are split into three slices. Each slice is associated with a rather distinct phase of the lift-off event, where stationarity can be expected. The presented results are preliminary in nature; the aim is to call attention to the availability of the discussed digital algorithms and to the need to augment the Space Shuttle data bank as more flights are completed.
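An AR model turns estimated autocovariances into a spectral density via the Yule-Walker equations. A second-order sketch (generic textbook estimator, not the Shuttle processing itself):

```python
import cmath
import math
import random

def yule_walker_ar2(x):
    """Fit x[t] = a1*x[t-1] + a2*x[t-2] + e[t] by solving the 2x2
    Yule-Walker system built from sample autocovariances r0, r1, r2."""
    n = len(x)
    mean = sum(x) / n
    def r(k):
        return sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / n
    r0, r1, r2 = r(0), r(1), r(2)
    det = r0 * r0 - r1 * r1
    a1 = r1 * (r0 - r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    return a1, a2

def ar2_psd(a1, a2, sigma2, f):
    """Power spectral density of the fitted AR(2) model at frequency f
    (in cycles per sample)."""
    h = 1.0 - a1 * cmath.exp(-2j * math.pi * f) - a2 * cmath.exp(-4j * math.pi * f)
    return sigma2 / abs(h) ** 2

# demo: recover the coefficients of a synthetic AR(2) series
rng = random.Random(3)
x = [0.0, 0.0]
for _ in range(20_000):
    x.append(0.5 * x[-1] - 0.3 * x[-2] + rng.gauss(0.0, 1.0))
a1, a2 = yule_walker_ar2(x[100:])
```

Fitting each stationary slice separately, as the abstract describes, simply means running such an estimator three times and reading off a smooth spectrum per slice.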

  14. The backtracking survey propagation algorithm for solving random K-SAT problems

    PubMed Central

    Marino, Raffaele; Parisi, Giorgio; Ricci-Tersenghi, Federico

    2016-01-01

    Discrete combinatorial optimization has a central role in many scientific disciplines; however, for hard problems we lack linear-time algorithms that would allow us to solve very large instances. Moreover, it is still unclear what key features make a discrete combinatorial optimization problem hard to solve. Here we study random K-satisfiability problems with K=3,4, which are known to be very hard close to the SAT-UNSAT threshold, where problems stop having solutions. We show that the backtracking survey propagation algorithm, in a time practically linear in the problem size, is able to find solutions very close to the threshold, in a region unreachable by any other algorithm. All solutions found have no frozen variables, thus supporting the conjecture that only unfrozen solutions can be found in linear time, and that a problem becomes impossible to solve in linear time when all solutions contain frozen variables. PMID:27694952

  15. The backtracking survey propagation algorithm for solving random K-SAT problems

    NASA Astrophysics Data System (ADS)

    Marino, Raffaele; Parisi, Giorgio; Ricci-Tersenghi, Federico

    2016-10-01

    Discrete combinatorial optimization has a central role in many scientific disciplines; however, for hard problems we lack linear-time algorithms that would allow us to solve very large instances. Moreover, it is still unclear what key features make a discrete combinatorial optimization problem hard to solve. Here we study random K-satisfiability problems with K=3,4, which are known to be very hard close to the SAT-UNSAT threshold, where problems stop having solutions. We show that the backtracking survey propagation algorithm, in a time practically linear in the problem size, is able to find solutions very close to the threshold, in a region unreachable by any other algorithm. All solutions found have no frozen variables, thus supporting the conjecture that only unfrozen solutions can be found in linear time, and that a problem becomes impossible to solve in linear time when all solutions contain frozen variables.
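Backtracking survey propagation itself is involved; for contrast, the classic stochastic local search baseline for random K-SAT, WalkSAT, fits in a few lines (a generic sketch with arbitrary noise and cutoff parameters, not the authors' algorithm):

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=20_000, seed=0):
    """WalkSAT: repeatedly pick an unsatisfied clause and flip one of its
    variables -- a random one with probability p, otherwise the one whose
    flip leaves the fewest clauses unsatisfied. Literals are signed ints;
    assignments are 1-indexed booleans."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]
    def satisfied(cl):
        return any(assign[abs(l)] == (l > 0) for l in cl)
    for _ in range(max_flips):
        unsat = [cl for cl in clauses if not satisfied(cl)]
        if not unsat:
            return assign
        cl = rng.choice(unsat)
        if rng.random() < p:
            v = abs(rng.choice(cl))          # noisy random flip
        else:
            def broken(v):
                assign[v] = not assign[v]
                count = sum(1 for c in clauses if not satisfied(c))
                assign[v] = not assign[v]
                return count
            v = min({abs(l) for l in cl}, key=broken)
        assign[v] = not assign[v]
    return None
```

Near the SAT-UNSAT threshold such local search stalls, which is precisely the regime where the paper shows backtracking survey propagation still finds (unfrozen) solutions.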

  16. Stochastic resonance whole body vibration increases perceived muscle relaxation but not cardiovascular activation: A randomized controlled trial

    PubMed Central

    Elfering, Achim; Burger, Christian; Schade, Volker; Radlinger, Lorenz

    2016-01-01

    AIM To investigate the acute effects of stochastic resonance whole body vibration (SR-WBV), including muscle relaxation and cardiovascular activation. METHODS Sixty-four healthy students participated. The participants were randomly assigned to sham SR-WBV training at a low intensity (1.5 Hz) or verum SR-WBV training at a higher intensity (5 Hz). Systolic blood pressure (SBP), diastolic blood pressure (DBP), heart rate (HR) and self-reported muscle relaxation were assessed before and immediately after SR-WBV. RESULTS A two-factorial analysis of variance (ANOVA) showed a significant interaction between pre- vs post-SR-WBV measurements and SR-WBV conditions for muscle relaxation in the neck and back [F(1,55) = 3.35, P = 0.048, η2 = 0.07]. Muscle relaxation in the neck and back increased in verum SR-WBV, but not in sham SR-WBV. No significant changes between pre- and post-training levels of SBP, DBP and HR were observed in either the sham or verum SR-WBV condition. With verum SR-WBV, the improvement in muscle relaxation was most pronounced in participants who reported back, neck or shoulder pain more than once a month (P < 0.05). CONCLUSION A single session of SR-WBV increased muscle relaxation in young healthy individuals, while cardiovascular load was low. An increase in musculoskeletal relaxation in the neck and back is a potential mediator of pain reduction in preventive worksite SR-WBV trials. PMID:27900274

  17. Randomized algorithms for high quality treatment planning in volumetric modulated arc therapy

    NASA Astrophysics Data System (ADS)

    Yang, Yu; Dong, Bin; Wen, Zaiwen

    2017-02-01

    In recent years, volumetric modulated arc therapy (VMAT) has become an increasingly important radiation technique, widely used in clinical cancer treatment. One of the key problems in VMAT is treatment plan optimization, which is complicated by the constraints imposed by the equipment involved. In this paper, we consider a model with four major constraints: a bound on the beam intensity, an upper bound on the rate of change of the beam intensity, the moving speed of the leaves of the multi-leaf collimator (MLC), and its directional convexity. We solve the model by a two-stage algorithm that alternately minimizes with respect to the aperture shapes and the beam intensities. Specifically, the aperture shapes are obtained by a greedy algorithm whose performance is enhanced by random sampling of the leaf pairs with a decremental rate. The beam intensity is optimized using a gradient projection method with non-monotone line search. We further improve the proposed algorithm by incremental random importance sampling of the voxels to reduce the computational cost of the energy functional. Numerical simulations on two clinical cancer data sets demonstrate that our method is highly competitive with state-of-the-art algorithms in terms of both computational time and quality of treatment planning.

  18. Three-Dimensional Analysis of the Effect of Material Randomness on the Damage Behaviour of CFRP Laminates with Stochastic Cohesive-Zone Elements

    NASA Astrophysics Data System (ADS)

    Khokhar, Zahid R.; Ashcroft, Ian A.; Silberschmidt, Vadim V.

    2014-02-01

    Laminated carbon fibre-reinforced polymer (CFRP) composites are already well established in structural applications where high specific strength and stiffness are required. Damage in these laminates is usually localised and may involve numerous mechanisms, such as matrix cracking, laminate delamination, fibre de-bonding or fibre breakage. Microstructures in CFRPs are non-uniform and irregular, resulting in an element of randomness in the localised damage. This may in turn affect the global properties and failure parameters of components made of CFRPs. This raises the question of whether the inherent stochasticity of localised damage is of significance in terms of the global properties and design methods for such materials. This paper presents a numerical modelling based analysis of the effect of material randomness on delamination damage in CFRP materials by the implementation of a stochastic cohesive-zone model (CZM) within the framework of the finite-element (FE) method. The initiation and propagation of delamination in a unidirectional CFRP double-cantilever beam (DCB) specimen loaded under mode-I was analyzed, accounting for the inherent microstructural stochasticity exhibited by such laminates via the stochastic CZM. Various statistical realizations for a half-scatter of 50 % of fracture energy were performed, with a probability distribution based on Weibull's two-parameter probability density function. The damaged area and the crack lengths in laminates were analyzed, and the results showed higher values of those parameters for random realizations compared to the uniform case for the same levels of applied displacement. This indicates that deterministic analysis of composites using average properties may be non-conservative and a method based on probability may be more appropriate.
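Operationally, such a stochastic CZM assigns each interface element its own fracture energy drawn from a two-parameter Weibull law. A sketch of that sampling step (the shape parameter and nominal mean G_c are illustrative values, not the paper's calibration):

```python
import math
import random

def weibull_fracture_energies(mean_gc, shape, n_elements, seed=0):
    """Per-element fracture energies from a two-parameter Weibull
    distribution; the scale is chosen so the distribution mean equals
    mean_gc (mean of Weibull = scale * Gamma(1 + 1/shape))."""
    rng = random.Random(seed)
    scale = mean_gc / math.gamma(1.0 + 1.0 / shape)
    return [rng.weibullvariate(scale, shape) for _ in range(n_elements)]

# assumed nominal mode-I fracture energy of 0.26 N/mm and shape 4
gc = weibull_fracture_energies(0.26, 4.0, 50_000)
```

Each statistical realization of the DCB model then uses one such draw per cohesive element, which is what produces the scatter in damaged area and crack length reported in the paper.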

  19. Stochastic Pseudo-Boolean Optimization

    DTIC Science & Technology

    2011-07-31

    analysis of two-stage stochastic minimum s-t cut problems; (iv) exact solution algorithm for a class of stochastic bilevel knapsack problems; (v) exact... Contents include: Bilevel Knapsack Problems with Stochastic Right-Hand Sides; Two-Stage Stochastic Assignment Problems... programming formulations and related computational complexity issues. Section 5 considers a specific stochastic extension of the bilevel knapsack

  20. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-02-01

    A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.

  1. Random generation of periodic hard ellipsoids based on molecular dynamics: A computationally-efficient algorithm

    NASA Astrophysics Data System (ADS)

    Ghossein, Elias; Lévesque, Martin

    2013-11-01

    This paper presents a computationally-efficient algorithm for generating random periodic packings of hard ellipsoids. The algorithm is based on molecular dynamics where the ellipsoids are set in translational and rotational motion and their volumes gradually increase. Binary collision times are computed by simply finding the roots of a non-linear function. In addition, an original and efficient method to compute the collision time between an ellipsoid and a cube face is proposed. The algorithm can generate all types of ellipsoids (prolate, oblate and scalene) with very high aspect ratios (i.e., >10). It is the first time that such packings are reported in the literature. Orientation tensors were computed for the generated packings and it has been shown that the ellipsoids had a uniform distribution of orientations. Moreover, it seems that for low aspect ratios (i.e., ⩽10), the volume fraction is the most influential parameter on the algorithm's CPU time. For higher aspect ratios, the influence of the aspect ratio becomes as important as that of the volume fraction. All necessary pseudo-codes are given so that the reader can easily implement the algorithm.

  2. Effects of time delay and random rewiring on the stochastic resonance in excitable small-world neuronal networks

    NASA Astrophysics Data System (ADS)

    Yu, Haitao; Wang, Jiang; Du, Jiwei; Deng, Bin; Wei, Xile; Liu, Chen

    2013-05-01

    The effects of time delay and rewiring probability on stochastic resonance and spatiotemporal order in small-world neuronal networks are studied in this paper. Numerical results show that, irrespective of the pacemaker introduced to one single neuron or all neurons of the network, the phenomenon of stochastic resonance occurs. The time delay in the coupling process can either enhance or destroy stochastic resonance on small-world neuronal networks. In particular, appropriately tuned delays can induce multiple stochastic resonances, which appear intermittently at integer multiples of the oscillation period of the pacemaker. More importantly, it is found that the small-world topology can significantly affect the stochastic resonance on excitable neuronal networks. For small time delays, increasing the rewiring probability can largely enhance the efficiency of pacemaker-driven stochastic resonance. We argue that the time delay and the rewiring probability both play a key role in determining the ability of the small-world neuronal network to improve the noise-induced outreach of the localized subthreshold pacemaker.

  4. Stochastic differential equations

    SciTech Connect

    Sobczyk, K.

    1990-01-01

    This book provides a unified treatment of both regular (or random) and Ito stochastic differential equations. It focuses on solution methods, including some developed only recently. Applications are discussed; in particular, insight is given into both the mathematical structure and the most efficient solution methods (analytical as well as numerical). Starting from basic notions and results of the theory of stochastic processes and stochastic calculus (including Ito's stochastic integral), many principal mathematical problems and results related to stochastic differential equations are expounded here for the first time. Applications treated include those relating to road vehicles, earthquake excitations and offshore structures.

  5. Stochastic modeling of carbon oxidation

    SciTech Connect

    Chen, W.Y.; Kulkarni, A.; Milum, J.L.; Fan, L.T.

    1999-12-01

    Recent studies of carbon oxidation by scanning tunneling microscopy indicate that measured rates of carbon oxidation can be affected by randomly distributed defects in the carbon structure, which vary in size. Nevertheless, the impact of this observation on the analysis or modeling of the oxidation rate has not been critically assessed. This work focuses on the stochastic analysis of the dynamics of carbon clusters' conversions during the oxidation of a carbon sheet. According to the classic model of Nagle and Strickland-Constable (NSC), two classes of carbon clusters are involved in three types of reactions: gasification of basal-carbon clusters, gasification of edge-carbon clusters, and conversion of the edge-carbon clusters to the basal-carbon clusters due to the thermal annealing. To accommodate the dilution of basal clusters, however, the NSC model is modified for the later stage of oxidation in this work. Master equations governing the numbers of three classes of carbon clusters, basal, edge and gasified, are formulated from stochastic population balance. The stochastic pathways of three different classes of carbon during oxidation, that is, their means and the fluctuations around these means, have been numerically simulated independently by the algorithm derived from the master equations, as well as by an event-driven Monte Carlo algorithm. Both algorithms have given rise to identical results.
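The event-driven Monte Carlo simulation of the three cluster conversions described above can be sketched with a Gillespie-style algorithm. This is a minimal sketch: the three reaction channels follow the modified NSC picture in the abstract, but the rate constants and initial populations below are illustrative placeholders, not fitted kinetics.

```python
import random

def gillespie_oxidation(n_basal, n_edge, k_b, k_e, k_a, t_end, rng=None):
    """Event-driven Monte Carlo (Gillespie) simulation of the three cluster
    conversions: basal gasification, edge gasification, and edge-to-basal
    annealing. Returns the simulated trajectory (t, basal, edge, gasified)."""
    rng = rng or random.Random(0)
    t, gasified = 0.0, 0
    history = [(t, n_basal, n_edge, gasified)]
    while t < t_end and n_basal + n_edge > 0:
        # propensities of the three NSC-style reaction channels
        a = [k_b * n_basal, k_e * n_edge, k_a * n_edge]
        a_tot = a[0] + a[1] + a[2]
        t += rng.expovariate(a_tot)          # waiting time to the next event
        r = rng.random() * a_tot             # choose which channel fired
        if r < a[0]:
            n_basal -= 1; gasified += 1      # basal-carbon cluster gasified
        elif r < a[0] + a[1]:
            n_edge -= 1; gasified += 1       # edge-carbon cluster gasified
        else:
            n_edge -= 1; n_basal += 1        # edge cluster annealed to basal
        history.append((t, n_basal, n_edge, gasified))
    return history
```

Averaging many such trajectories recovers the means of the master equations, while individual runs expose the fluctuations around those means.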

  6. A well-posed and stable stochastic Galerkin formulation of the incompressible Navier–Stokes equations with random data

    SciTech Connect

    Pettersson, Per; Nordström, Jan; Doostan, Alireza

    2016-02-01

    We present a well-posed stochastic Galerkin formulation of the incompressible Navier–Stokes equations with uncertainty in model parameters or the initial and boundary conditions. The stochastic Galerkin method involves representation of the solution through generalized polynomial chaos expansion and projection of the governing equations onto stochastic basis functions, resulting in an extended system of equations. A relatively low-order generalized polynomial chaos expansion is sufficient to capture the stochastic solution for the problem considered. We derive boundary conditions for the continuous form of the stochastic Galerkin formulation of the velocity and pressure equations. The resulting problem formulation leads to an energy estimate for the divergence. With suitable boundary data on the pressure and velocity, the energy estimate implies zero divergence of the velocity field. Based on the analysis of the continuous equations, we present a semi-discretized system where the spatial derivatives are approximated using finite difference operators with a summation-by-parts property. With a suitable choice of dissipative boundary conditions imposed weakly through penalty terms, the semi-discrete scheme is shown to be stable. Numerical experiments in the laminar flow regime corroborate the theoretical results and we obtain high-order accurate results for the solution variables and the velocity divergence converges to zero as the mesh is refined.

  7. Parallel and deterministic algorithms from MRFs (Markov Random Fields): Surface reconstruction and integration. Memorandum report

    SciTech Connect

    Geiger, D.; Girosi, F.

    1989-05-01

    In recent years many researchers have investigated the use of Markov random fields (MRFs) for computer vision. They can be applied, for example, in the output of the visual processes to reconstruct surfaces from sparse and noisy depth data, or to integrate early vision processes to label physical discontinuities. Drawbacks of MRF models have been the computational complexity of the implementation and the difficulty in estimating the parameters of the model. This paper derives deterministic approximations to MRF models. One of the considered models is shown to yield in a natural way the graduated non-convexity (GNC) algorithm. This model can be applied to smooth a field while preserving its discontinuities. A new model is then proposed: it allows the gradient of the field to be enhanced at the discontinuities and smoothed elsewhere. All the theoretical results are obtained in the framework of mean field theory, a well-known statistical mechanics technique. A fast, parallel, and iterative algorithm to solve the deterministic equations of the two models is presented, together with experiments on synthetic and real images. The algorithm is applied to the problem of surface reconstruction in the case of sparse data. A fast algorithm is also described that solves the problem of aligning the discontinuities of different visual models with intensity edges via integration.

  8. Optical double image security using random phase fractional Fourier domain encoding and phase-retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Rajput, Sudheesh K.; Nishchal, Naveen K.

    2017-04-01

    We propose a novel security scheme based on the double random phase fractional domain encoding (DRPE) and modified Gerchberg-Saxton (G-S) phase retrieval algorithm for securing two images simultaneously. Any one of the images to be encrypted is converted into a phase-only image using modified G-S algorithm and this function is used as a key for encrypting another image. The original images are retrieved employing the concept of known-plaintext attack and following the DRPE decryption steps with all correct keys. The proposed scheme is also used for encryption of two color images with the help of convolution theorem and phase-truncated fractional Fourier transform. With some modification, the scheme is extended for simultaneous encryption of gray-scale and color images. As a proof-of-concept, simulation results have been presented for securing two gray-scale images, two color images, and simultaneous gray-scale and color images.

  9. Modification of the random forest algorithm to avoid statistical dependence problems when classifying remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Cánovas-García, Fulgencio; Alonso-Sarría, Francisco; Gomariz-Castillo, Francisco; Oñate-Valdivieso, Fernando

    2017-06-01

    Random forest is a classification technique widely used in remote sensing. One of its advantages is that it produces an estimation of classification accuracy based on the so-called out-of-bag cross-validation method. It is usually assumed that such estimation is not biased and may be used instead of validation based on an external data-set or a cross-validation external to the algorithm. In this paper we show that this is not necessarily the case when classifying remote sensing imagery using training areas with several pixels or objects. According to our results, out-of-bag cross-validation clearly overestimates accuracy, both overall and per class. The reason is that, in a training patch, pixels or objects are not independent (from a statistical point of view) of each other; however, they are split by bootstrapping into in-bag and out-of-bag as if they were really independent. We believe that putting the whole patch, rather than its pixels/objects, in one set or the other would produce a less biased out-of-bag cross-validation. To deal with the problem, we propose a modification of the random forest algorithm to split training patches instead of the pixels (or objects) that compose them. This modified algorithm does not overestimate accuracy and has no lower predictive capability than the original. When its results are validated with an external data-set, the accuracy is not different from that obtained with the original algorithm. We analysed three remote sensing images with different classification approaches (pixel and object based); in the three cases reported, the modification we propose produces a less biased accuracy estimation.
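The core of the proposed modification, bootstrapping whole training patches rather than individual pixels, can be sketched as follows. `patch_bootstrap` is a hypothetical helper standing in for the per-tree split inside the forest, not the authors' implementation:

```python
import random

def patch_bootstrap(patch_ids, rng=None):
    """Bootstrap at the patch (training-area) level: every pixel/object of a
    patch goes in-bag or out-of-bag together, so the out-of-bag set stays
    statistically independent of the in-bag set. `patch_ids[i]` is the patch
    label of sample i."""
    rng = rng or random.Random(42)
    patches = sorted(set(patch_ids))
    # draw as many patches as exist, with replacement, as in ordinary bagging
    drawn = {rng.choice(patches) for _ in patches}
    in_bag = [i for i, p in enumerate(patch_ids) if p in drawn]
    out_bag = [i for i, p in enumerate(patch_ids) if p not in drawn]
    return in_bag, out_bag
```

With the standard per-pixel bootstrap, pixels of the same patch end up on both sides of the split, which is what inflates the out-of-bag accuracy estimate.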

  10. A novel chaotic block image encryption algorithm based on dynamic random growth technique

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Liu, Lintao; Zhang, Yingqian

    2015-03-01

    This paper proposes a new block image encryption scheme based on hybrid chaotic maps and a dynamic random growth technique. Since the cat map is periodic and can easily be cracked by a chosen-plaintext attack, we use it in a more secure way that completely eliminates the cyclical phenomenon and resists chosen-plaintext attack. In the diffusion process, an intermediate parameter is calculated according to the image block. The intermediate parameter is used as the initial parameter of the chaotic map to generate a random data stream. In this way, the generated key streams are dependent on the plaintext image, which makes the scheme resistant to chosen-plaintext attack. The experimental results show that the proposed encryption algorithm is secure enough to be used in image transmission systems.
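For reference, the classic cat-map scrambling step that the paper hardens can be sketched as below, assuming a square grey-scale image stored as a list of lists. The plaintext-dependent key generation and the diffusion stage, which are the paper's actual contributions, are omitted:

```python
def cat_map(image, a=1, b=1, rounds=1):
    """Arnold cat map scrambling of an N x N image (list of lists):
    (x, y) -> ((x + a*y) mod N, (b*x + (a*b + 1)*y) mod N).
    The map is a bijection of the pixel grid, and it is periodic, which is
    exactly why it must be combined with plaintext-dependent keys."""
    n = len(image)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + a * y) % n][(b * x + (a * b + 1) * y) % n] = image[x][y]
        image = out
    return image
```

Because the map only permutes pixel positions, the histogram of the image is unchanged; diffusion is needed on top of it.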

  11. Random search algorithm for solving the nonlinear Fredholm integral equations of the second kind.

    PubMed

    Hong, Zhimin; Yan, Zaizai; Yan, Jiao

    2014-01-01

    In this paper, a randomized numerical approach is used to obtain approximate solutions for a class of nonlinear Fredholm integral equations of the second kind. The proposed approach consists of two steps. First, we define a discretized form of the integral equation by quadrature formula methods; under some conditions on the kernel of the integral equation, the solution of this discretized form converges to the exact solution of the integral equation. We then convert the problem to an optimal control problem by introducing an artificial control function. In the second step, the solution of the discretized form is approximated by a kind of Monte Carlo (MC) random search algorithm. Finally, some examples are given to show the efficiency of the proposed approach.
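The two steps can be sketched as follows. This is a simplified illustration: it assumes a linear kernel K(x, t) for brevity (the paper treats the nonlinear case) and uses a greedy Monte Carlo random search in place of the authors' exact scheme:

```python
import random

def random_search_fredholm(f, K, nodes, weights, iters=20000, step=0.1, rng=None):
    """Greedy Monte Carlo random search on the residual of the quadrature-
    discretised second-kind Fredholm equation
        u(x_i) = f(x_i) + sum_j w_j K(x_i, x_j) u(x_j)."""
    rng = rng or random.Random(1)
    n = len(nodes)

    def residual(u):
        # squared residual of the discretised integral equation
        return sum((u[i] - f(nodes[i])
                    - sum(weights[j] * K(nodes[i], nodes[j]) * u[j] for j in range(n))) ** 2
                   for i in range(n))

    u = [f(x) for x in nodes]                # start from the inhomogeneous term
    best = residual(u)
    for _ in range(iters):
        cand = [ui + rng.gauss(0.0, step) for ui in u]
        r = residual(cand)
        if r < best:                         # keep only improving candidates
            u, best = cand, r
    return u, best
```

For K(x, t) = 1/2 and f(x) = 1 on [0, 1] with trapezoid weights, the exact solution is the constant u = 2, which the search approaches.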

  12. Evolving random fractal Cantor superlattices for the infrared using a genetic algorithm.

    PubMed

    Bossard, Jeremy A; Lin, Lan; Werner, Douglas H

    2016-01-01

    Ordered and chaotic superlattices have been identified in Nature that give rise to a variety of colours reflected by the skin of various organisms. In particular, organisms such as silvery fish possess superlattices that reflect a broad range of light from the visible to the UV. Such superlattices have previously been identified as 'chaotic', but we propose that apparent 'chaotic' natural structures, which have been previously modelled as completely random structures, should have an underlying fractal geometry. Fractal geometry, often described as the geometry of Nature, can be used to mimic structures found in Nature, but deterministic fractals produce structures that are too 'perfect' to appear natural. Introducing variability into fractals produces structures that appear more natural. We suggest that the 'chaotic' (purely random) superlattices identified in Nature are more accurately modelled by multi-generator fractals. Furthermore, we introduce fractal random Cantor bars as a candidate for generating both ordered and 'chaotic' superlattices, such as the ones found in silvery fish. A genetic algorithm is used to evolve optimal fractal random Cantor bars with multiple generators targeting several desired optical functions in the mid-infrared and the near-infrared. We present optimized superlattices demonstrating broadband reflection as well as single and multiple pass bands in the near-infrared regime.
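The multi-generator fractal random Cantor bar construction can be sketched as below; the genetic-algorithm optimisation of the generators against optical targets is not reproduced. The middle-third generator used in the test is the classic deterministic case:

```python
import random

def random_cantor_bar(generators, levels, span=(0.0, 1.0), rng=None):
    """Multi-generator fractal random Cantor bar: at each level, every solid
    segment is subdivided by a generator chosen at random, which mixes order
    and apparent 'chaos'. A generator is a list of (start, end) fractions of
    the segment to keep; the classic middle-third rule is [(0, 1/3), (2/3, 1)]."""
    rng = rng or random.Random(3)
    segments = [span]
    for _ in range(levels):
        nxt = []
        for a, b in segments:
            gen = rng.choice(generators)         # random generator per segment
            nxt.extend((a + (b - a) * s, a + (b - a) * e) for s, e in gen)
        segments = nxt
    return segments
```

With a single generator this reduces to a deterministic Cantor bar; supplying several generators introduces the controlled variability the paper advocates.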

  14. Resolution for Stochastic Boolean Satisfiability

    NASA Astrophysics Data System (ADS)

    Teige, Tino; Fränzle, Martin

    The stochastic Boolean satisfiability (SSAT) problem was introduced by Papadimitriou in 1985 by adding a probabilistic model of uncertainty to propositional satisfiability through randomized quantification. SSAT has many applications, e.g., in probabilistic planning and, more recently by integrating arithmetic, in probabilistic model checking. In this paper, we first present a new result on the computational complexity of SSAT: SSAT remains PSPACE-complete even for its restriction to 2CNF. Second, we propose a sound and complete resolution calculus for SSAT complementing the classical backtracking search algorithms.

  15. Stochastic gravity

    NASA Astrophysics Data System (ADS)

    Ross, D. K.; Moreau, William

    1995-08-01

    We investigate stochastic gravity as a potentially fruitful avenue for studying quantum effects in gravity. Following the approach of stochastic electrodynamics (SED), as a representation of the quantum gravity vacuum we construct a classical state of isotropic random gravitational radiation, expressed as a spin-2 field, h_μν(x), composed of plane waves of random phase on a flat spacetime manifold. Requiring Lorentz invariance leads to the result that the spectral composition function of the gravitational radiation, h(ω), must be proportional to 1/ω². The proportionality constant is determined by the Planck condition that the energy density consist of ħω/2 per normal mode, and this condition sets the amplitude scale of the random gravitational radiation at the order of the Planck length, giving a spectral composition function h(ω) = √(16π) c²L_p/ω². As an application of stochastic gravity, we investigate the Davies-Unruh effect. We calculate the two-point correlation function ⟨R_{i0j0}(O, τ − δτ/2) R_{k0l0}(O, τ + δτ/2)⟩ of the measurable geodesic deviation tensor field, R_{i0j0}, for two situations: (i) at a point detector uniformly accelerating through the random gravitational radiation, and (ii) at an inertial detector in a heat bath of the random radiation at a finite temperature. We find that the two correlation functions agree to first order in aδτ/c provided that the temperature and acceleration satisfy the relation kT = ħa/2πc.

  16. Fault diagnosis in spur gears based on genetic algorithm and random forest

    NASA Astrophysics Data System (ADS)

    Cerrada, Mariela; Zurita, Grover; Cabrera, Diego; Sánchez, René-Vinicio; Artés, Mariano; Li, Chuan

    2016-03-01

    There are growing demands for condition-based monitoring of gearboxes, and therefore new methods to improve the reliability, effectiveness, and accuracy of gear fault detection ought to be evaluated. Feature selection is still an important aspect in machine-learning-based diagnosis in order to reach good performance of the diagnostic models. On the other hand, random forest classifiers are suitable models in industrial environments where large data samples are not usually available for training such diagnostic models. The main aim of this research is to build up a robust system for multi-class fault diagnosis in spur gears, by selecting the best set of condition parameters in the time, frequency and time-frequency domains, which are extracted from vibration signals. The diagnostic system is built using genetic algorithms and a classifier based on random forest, in a supervised environment. The genetic algorithm reduces the original set of condition parameters by around 66% of its initial size, while an acceptable classification precision of over 97% is still achieved. The approach is tested on real vibration signals by considering several fault classes, one of them being an incipient fault, under different running conditions of load and velocity.
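The genetic-algorithm stage can be sketched as a search over feature-subset bitmasks. In the sketch below, `fitness` stands in for the paper's random-forest classification score (any callable mapping a mask to a float), and the GA parameters are illustrative:

```python
import random

def ga_feature_select(n_features, fitness, pop_size=30, gens=40, p_mut=0.02, rng=None):
    """Genetic algorithm over feature-subset bitmasks. `fitness` scores a mask,
    e.g. the cross-validated accuracy of a random forest trained on the selected
    condition parameters."""
    rng = rng or random.Random(7)
    pop = [[rng.random() < 0.5 for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(gens):
        pop = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]  # elitist selection
        parents = list(pop)
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]                                  # one-point crossover
            pop.append([(not g) if rng.random() < p_mut else g for g in child])
    return max(pop, key=fitness)
```

Elitism guarantees the best subset found so far is never lost, which matters when each fitness evaluation (a forest retraining) is expensive.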

  17. Urban Road Detection in Airborne Laser Scanning Point Cloud Using Random Forest Algorithm

    NASA Astrophysics Data System (ADS)

    Kaczałek, B.; Borkowski, A.

    2016-06-01

    The objective of this research is to detect points that describe a road surface in an unclassified point cloud of airborne laser scanning (ALS). For this purpose we use the Random Forest learning algorithm. The proposed methodology consists of two stages: preparation of features and supervised point cloud classification. In this approach we consider only ALS points representing the last echo. For these points, RGB values, intensity, the normal vectors, and their mean values and standard deviations are provided. Moreover, local and global height variations are taken into account as components of a feature vector. The feature vectors are calculated on the basis of a 3D Delaunay triangulation. The proposed methodology was tested on point clouds with an average point density of 12 pts/m² that represent a large urban scene. A significance level of 15% was set for the decision trees of the learning algorithm. As a result of the Random Forest classification we received two subsets of ALS points, one of which represents points belonging to the road network. The classification evaluation showed an overall accuracy of 90%. Finally, the ALS points representing roads were merged and simplified into road network polylines using morphological operations.

  18. Cooperative mobile agents search using beehive partitioned structure and Tabu Random search algorithm

    NASA Astrophysics Data System (ADS)

    Ramazani, Saba; Jackson, Delvin L.; Selmic, Rastko R.

    2013-05-01

    In search and surveillance operations, deploying a team of mobile agents provides a robust solution that has multiple advantages over using a single agent in efficiency and minimizing exploration time. This paper addresses the challenge of identifying a target in a given environment when using a team of mobile agents by proposing a novel method of mapping and movement of agent teams in a cooperative manner. The approach consists of two parts. First, the region is partitioned into a hexagonal beehive structure in order to provide equidistant movements in every direction and to allow for more natural and flexible environment mapping. Additionally, in search environments that are partitioned into hexagons, mobile agents have an efficient travel path while performing searches due to this partitioning approach. Second, we use a team of mobile agents that move in a cooperative manner and utilize the Tabu Random algorithm to search for the target. Due to the ever-increasing use of robotics and Unmanned Aerial Vehicle (UAV) platforms, the field of cooperative multi-agent search has recently developed many applications that would benefit from the approach presented in this work, including: search and rescue operations, surveillance, data collection, and border patrol. In this paper, the increased efficiency of the Tabu Random Search algorithm in combination with hexagonal partitioning is simulated and analyzed, and the advantages of this approach are presented and discussed.
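A minimal single-agent sketch of Tabu Random search over a hexagonal partition in axial coordinates is shown below. The bounded search region, tabu length and step budget are illustrative assumptions, and the multi-agent cooperation is omitted:

```python
import random

def tabu_random_search(start, target, radius=3, tabu_len=15, max_steps=50000, rng=None):
    """One agent doing Tabu Random search over a bounded hexagonal-cell region
    in axial coordinates (q, r): move to a random neighbouring hex, avoiding
    the most recently visited cells. Returns (found_cell, steps) on success or
    (None, max_steps) on failure."""
    rng = rng or random.Random(5)
    HEX_MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]  # six equidistant moves
    cell, tabu = start, [start]
    for step in range(max_steps):
        if cell == target:
            return cell, step
        moves = [(cell[0] + dq, cell[1] + dr) for dq, dr in HEX_MOVES]
        # stay inside the hexagonal region of the given radius
        moves = [m for m in moves if max(abs(m[0]), abs(m[1]), abs(m[0] + m[1])) <= radius]
        fresh = [m for m in moves if m not in tabu]
        cell = rng.choice(fresh or moves)    # fall back if the agent is boxed in
        tabu.append(cell)
        tabu = tabu[-tabu_len:]              # bounded tabu memory
    return None, max_steps
```

The tabu list is what distinguishes this from a plain random walk: recently searched cells are not immediately revisited, so coverage grows faster.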

  19. Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus

    NASA Astrophysics Data System (ADS)

    Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.

    2014-09-01

    There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving and non-geostationary objects as drifting in the field of view. We would like to achieve high sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g. Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter and the intrinsically low false alarm rate of detecting signature candidates in 3-D, based on an iterative method called "RANdom SAmple Consensus" and one that can run real-time on a typical PC. The technique is tailored for searching objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers while identifying inliers. In the pre-processing phase, candidate blobs were selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling rejects candidates that conform to the predictable motion of the stars. Data collected with a 17-inch telescope by AFRL/RH and a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center is
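The RANSAC step for a constant-velocity (linear motion) model can be sketched as follows. The two-point minimal sample and the inlier tolerance are illustrative choices, not the RANSAC-MT parameters:

```python
import random

def ransac_motion(detections, n_iter=500, tol=1.5, rng=None):
    """RANSAC fit of a constant-velocity track to (t, x, y) detections that are
    contaminated by clutter. Returns ((x0, vx, y0, vy), inliers)."""
    rng = rng or random.Random(0)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        (t1, x1, y1), (t2, x2, y2) = rng.sample(detections, 2)  # minimal sample
        if t1 == t2:
            continue                      # two distinct epochs define a velocity
        vx, vy = (x2 - x1) / (t2 - t1), (y2 - y1) / (t2 - t1)
        x0, y0 = x1 - vx * t1, y1 - vy * t1
        inliers = [d for d in detections
                   if abs(d[1] - (x0 + vx * d[0])) < tol
                   and abs(d[2] - (y0 + vy * d[0])) < tol]
        if len(inliers) > len(best_inliers):                     # consensus check
            best_model, best_inliers = (x0, vx, y0, vy), inliers
    return best_model, best_inliers
```

Detections conforming to the predictable stellar motion would be removed before this step, so the consensus set that survives is the drifting target.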

  20. Recursive random forest algorithm for constructing multilayered hierarchical gene regulatory networks that govern biological pathways

    PubMed Central

    Zhang, Kui; Busov, Victor; Wei, Hairong

    2017-01-01

    Background: Present knowledge indicates a multilayered hierarchical gene regulatory network (ML-hGRN) often operates above a biological pathway. Although the ML-hGRN is very important for understanding how a pathway is regulated, there is almost no computational algorithm for directly constructing ML-hGRNs. Results: A backward elimination random forest (BWERF) algorithm was developed for constructing the ML-hGRN operating above a biological pathway. For each pathway gene, BWERF used a random forest model to calculate the importance values of all transcription factors (TFs) to this pathway gene recursively, with a portion (e.g. 1/10) of the least important TFs being excluded in each round of modeling; the importance values of all TFs to the pathway gene were updated and ranked in each round until only one TF remained in the list. After that, the importance values of a TF to all pathway genes were aggregated and fitted to a Gaussian mixture model to determine the TF retention for the regulatory layer immediately above the pathway layer. The acquired TFs at the secondary layer were then set to be the new bottom layer to infer the next upper layer, and this process was repeated until an ML-hGRN with the expected layers was obtained. Conclusions: BWERF improved the accuracy for constructing ML-hGRNs because it used backward elimination to exclude the noise genes, and aggregated the individual importance values for determining the TFs' retention. We validated BWERF by using it for constructing ML-hGRNs operating above the mouse pluripotency maintenance pathway and the Arabidopsis lignocellulosic pathway. Compared to GENIE3, BWERF showed an improvement in recognizing authentic TFs regulating a pathway. Compared to the bottom-up Gaussian graphical model algorithm we developed for constructing ML-hGRNs, BWERF can construct ML-hGRNs with significantly reduced edges that enable biologists to choose the implicit edges for experimental
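The backward-elimination loop for a single pathway gene can be sketched as below. The `importance` callable is a stand-in for refitting a random-forest model on the surviving TFs and reading its feature importances (e.g. scikit-learn's `feature_importances_`); the drop fraction mirrors the paper's example of 1/10:

```python
def backward_elimination(tfs, importance, drop_frac=0.1):
    """Backward-elimination loop of BWERF for one pathway gene: repeatedly
    re-rank the surviving transcription factors and drop the least important
    fraction until a single TF remains. Returns (top TF, last full ranking)."""
    tfs = list(tfs)
    ranking = tfs[:]
    while len(tfs) > 1:
        scores = importance(tfs)                 # refit the model on surviving TFs
        tfs.sort(key=lambda t: scores[t], reverse=True)
        ranking = tfs[:]                         # latest full ranking
        n_drop = max(1, int(len(tfs) * drop_frac))
        tfs = tfs[:-n_drop]                      # exclude the weakest TFs
    return tfs[0], ranking
```

Refitting after every elimination round is what lets the importance values of the remaining TFs be updated, rather than ranked once and truncated.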

  1. An efficient voting algorithm for finding additive biclusters with random background.

    PubMed

    Xiao, Jing; Wang, Lusheng; Liu, Xiaowen; Jiang, Tao

    2008-12-01

    The biclustering problem has been extensively studied in many areas, including e-commerce, data mining, machine learning, pattern recognition, statistics, and, more recently, computational biology. Given an n x m matrix A (n >or= m), the main goal of biclustering is to identify a subset of rows (called objects) and a subset of columns (called properties) such that some objective function that specifies the quality of the found bicluster (formed by the subsets of rows and of columns of A) is optimized. The problem has been proved or conjectured to be NP-hard for various objective functions. In this article, we study a probabilistic model for the implanted additive bicluster problem, where each element in the n x m background matrix is a random integer from [0, L - 1] for some integer L, and a k x k implanted additive bicluster is obtained from an error-free additive bicluster by randomly changing each element to a number in [0, L - 1] with probability theta. We propose an O(n(2)m) time algorithm based on voting to solve the problem. We show that when k >or= Omega(square root of (n log n)), the voting algorithm can correctly find the implanted bicluster with probability at least 1 - (9/n(2)). We also implement our algorithm as a C++ program named VOTE. The implementation incorporates several ideas for estimating the size of an implanted bicluster, adjusting the threshold in voting, dealing with small biclusters, and dealing with overlapping implanted biclusters. Our experimental results on both simulated and real datasets show that VOTE can find biclusters with a high accuracy and speed.

  2. Identifying and Analyzing Novel Epilepsy-Related Genes Using Random Walk with Restart Algorithm

    PubMed Central

    Guo, Wei; Shang, Dong-Mei; Cao, Jing-Hui; Feng, Kaiyan; Wang, ShaoPeng

    2017-01-01

    As a pathological condition, epilepsy is caused by abnormal neuronal discharge in brain which will temporarily disrupt the cerebral functions. Epilepsy is a chronic disease which occurs in all ages and would seriously affect patients' personal lives. Thus, it is highly required to develop effective medicines or instruments to treat the disease. Identifying epilepsy-related genes is essential in order to understand and treat the disease because the corresponding proteins encoded by the epilepsy-related genes are candidates of the potential drug targets. In this study, a pioneering computational workflow was proposed to predict novel epilepsy-related genes using the random walk with restart (RWR) algorithm. As reported in the literature RWR algorithm often produces a number of false positive genes, and in this study a permutation test and functional association tests were implemented to filter the genes identified by RWR algorithm, which greatly reduce the number of suspected genes and result in only thirty-three novel epilepsy genes. Finally, these novel genes were analyzed based upon some recently published literatures. Our findings implicate that all novel genes were closely related to epilepsy. It is believed that the proposed workflow can also be applied to identify genes related to other diseases and deepen our understanding of the mechanisms of these diseases. PMID:28255556
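
    The core RWR iteration can be sketched in a few lines. This is a generic illustration, not the authors' implementation: the adjacency matrix, the restart probability of 0.15, and the convergence tolerance are all assumptions.

```python
import numpy as np

def rwr(adj, seeds, restart=0.15, tol=1e-10, max_iter=1000):
    """Random walk with restart: iterate p <- (1-r) W p + r e until stable.

    adj: (n, n) adjacency matrix; seeds: indices of known disease genes.
    Returns stationary visiting probabilities used to rank candidate genes.
    """
    adj = np.asarray(adj, dtype=float)
    col = adj.sum(axis=0)
    W = adj / np.where(col == 0.0, 1.0, col)   # column-stochastic transitions
    e = np.zeros(adj.shape[0])
    e[seeds] = 1.0 / len(seeds)                # restart distribution over seeds
    p = e.copy()
    for _ in range(max_iter):
        p_new = (1.0 - restart) * W @ p + restart * e
        if np.abs(p_new - p).sum() < tol:
            break
        p = p_new
    return p_new

# Path graph 0-1-2-3 with node 0 as the known disease gene.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
scores = rwr(A, seeds=[0])
```

    Genes with high stationary probability but no known disease annotation are the candidates that the permutation and functional association tests would then filter.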

  3. Randomized selection on the GPU

    SciTech Connect

    Monroe, Laura Marie; Wendelberger, Joanne R; Michalak, Sarah E

    2011-01-13

    We implement here a fast and memory-sparing probabilistic top-N selection algorithm on the GPU. To our knowledge, this is the first direct selection algorithm in the literature for the GPU. The algorithm proceeds via a probabilistic guess-and-check process searching for the Nth element. It always gives a correct result and always terminates. The use of randomization reduces the amount of data that needs heavy processing, and so reduces the average time required for the algorithm. Probabilistic Las Vegas algorithms of this kind are a form of stochastic optimization and can be well suited to more general parallel processors with limited amounts of fast memory.
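
    The abstract does not reproduce the GPU kernel; a minimal CPU sketch of a Las Vegas selection of the Nth largest element (where the random pivot affects only the run time, never the answer) might look like:

```python
import random

def randomized_select(data, n):
    """Return the Nth largest element (1-indexed) by probabilistic
    guess-and-check: a random pivot narrows the candidate range each round."""
    items = list(data)
    k = len(items) - n          # 0-based rank of the answer in ascending order
    lo, hi = 0, len(items)
    while True:
        pivot = items[random.randrange(lo, hi)]
        lows = [v for v in items[lo:hi] if v < pivot]
        eqs = [v for v in items[lo:hi] if v == pivot]
        highs = [v for v in items[lo:hi] if v > pivot]
        items[lo:hi] = lows + eqs + highs
        if k < lo + len(lows):
            hi = lo + len(lows)             # answer is among the smaller values
        elif k < lo + len(lows) + len(eqs):
            return pivot                    # guess verified
        else:
            lo = lo + len(lows) + len(eqs)  # answer is among the larger values
```

    Each round discards the partitions that provably cannot contain the Nth element, which is the property that lets the GPU version avoid heavy processing of most of the data.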

  4. Comparing Algorithms for Graph Isomorphism Using Discrete- and Continuous-Time Quantum Random Walks

    DOE PAGES

    Rudinger, Kenneth; Gamble, John King; Bach, Eric; ...

    2013-07-01

    Berry and Wang [Phys. Rev. A 83, 042317 (2011)] show numerically that a discrete-time quantum random walk of two noninteracting particles is able to distinguish some non-isomorphic strongly regular graphs from the same family. Here we analytically demonstrate how it is possible for these walks to distinguish such graphs, while continuous-time quantum walks of two noninteracting particles cannot. We show analytically and numerically that even single-particle discrete-time quantum random walks can distinguish some strongly regular graphs, though not as many as two-particle noninteracting discrete-time walks. Additionally, we demonstrate how, given the same quantum random walk, subtle differences in the graph certificate construction algorithm can nontrivially impact the walk's distinguishing power. We also show that no continuous-time walk of a fixed number of particles can distinguish all strongly regular graphs when used in conjunction with any of the graph certificates we consider. We extend this constraint to discrete-time walks of fixed numbers of noninteracting particles for one kind of graph certificate; it remains an open question as to whether or not this constraint applies to the other graph certificates we consider.

  5. Comparing Algorithms for Graph Isomorphism Using Discrete- and Continuous-Time Quantum Random Walks

    SciTech Connect

    Rudinger, Kenneth; Gamble, John King; Bach, Eric; Friesen, Mark; Joynt, Robert; Coppersmith, S. N.

    2013-07-01

    Berry and Wang [Phys. Rev. A 83, 042317 (2011)] show numerically that a discrete-time quantum random walk of two noninteracting particles is able to distinguish some non-isomorphic strongly regular graphs from the same family. Here we analytically demonstrate how it is possible for these walks to distinguish such graphs, while continuous-time quantum walks of two noninteracting particles cannot. We show analytically and numerically that even single-particle discrete-time quantum random walks can distinguish some strongly regular graphs, though not as many as two-particle noninteracting discrete-time walks. Additionally, we demonstrate how, given the same quantum random walk, subtle differences in the graph certificate construction algorithm can nontrivially impact the walk's distinguishing power. We also show that no continuous-time walk of a fixed number of particles can distinguish all strongly regular graphs when used in conjunction with any of the graph certificates we consider. We extend this constraint to discrete-time walks of fixed numbers of noninteracting particles for one kind of graph certificate; it remains an open question as to whether or not this constraint applies to the other graph certificates we consider.

  6. Addressing methodological challenges in implementing the nursing home pain management algorithm randomized controlled trial

    PubMed Central

    Ersek, Mary; Polissar, Nayak; Du Pen, Anna; Jablonski, Anita; Herr, Keela; Neradilek, Moni B

    2015-01-01

    Background Unrelieved pain among nursing home (NH) residents is a well-documented problem. Attempts have been made to enhance pain management for older adults, including those in NHs. Several evidence-based clinical guidelines have been published to assist providers in assessing and managing acute and chronic pain in older adults. Despite the proliferation and dissemination of these practice guidelines, research has shown that intensive systems-level implementation strategies are necessary to change clinical practice and patient outcomes within a health-care setting. One promising approach is the embedding of guidelines into explicit protocols and algorithms to enhance decision making. Purpose The goal of the article is to describe several issues that arose in the design and conduct of a study that compared the effectiveness of pain management algorithms coupled with a comprehensive adoption program versus the effectiveness of education alone in improving evidence-based pain assessment and management practices, decreasing pain and depressive symptoms, and enhancing mobility among NH residents. Methods The study used a cluster-randomized controlled trial (RCT) design in which the individual NH was the unit of randomization. Rogers' Diffusion of Innovations theory provided the framework for the intervention. Outcome measures were surrogate-reported usual pain, self-reported usual and worst pain, and self-reported pain-related interference with activities, depression, and mobility. Results The final sample consisted of 485 NH residents from 27 NHs. The investigators were able to use a staggered enrollment strategy to recruit and retain facilities. The adaptive randomization procedures were successful in balancing intervention and control sites on key NH characteristics. Several strategies were successfully implemented to enhance the adoption of the algorithm. 
Limitations/Lessons The investigators encountered several methodological challenges that were inherent to

  7. Simulation of co-phase error correction of optical multi-aperture imaging system based on stochastic parallel gradient descent algorithm

    NASA Astrophysics Data System (ADS)

    He, Xiaojun; Ma, Haotong; Luo, Chuanxin

    2016-10-01

    The optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system; the difficulty lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with current methods, the SPGD method avoids having to detect the co-phase error directly. This paper analyzed the influence of piston error and tilt error on image quality for a double-aperture imaging system, introduced the basic principle of the SPGD algorithm, and discussed the influence of the SPGD algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced. An adaptive gain coefficient can solve this problem appropriately. These results can provide a theoretical reference for co-phase error correction in multi-aperture imaging systems.
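
    The gain/amplitude trade-off described above can be seen in a toy SPGD sketch. The quadratic metric below merely stands in for an image-quality measure, and the gain, amplitude, and iteration count are arbitrary illustrative choices, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)

def spgd_minimize(metric, u0, gain=0.5, amplitude=0.05, iters=2000):
    """SPGD: perturb all control parameters in parallel, estimate the gradient
    from the two-sided metric change, and step against it."""
    u = np.array(u0, dtype=float)
    for _ in range(iters):
        du = amplitude * rng.choice([-1.0, 1.0], size=u.shape)  # Bernoulli dither
        dj = metric(u + du) - metric(u - du)
        u -= gain * dj * du   # larger gain/amplitude: faster convergence, less stability
    return u

# Toy metric with its minimum at zero co-phase error (piston and tilt terms).
residual = spgd_minimize(lambda u: float(np.sum(u**2)), u0=[0.8, -0.5, 0.3])
```

    Note that the update never needs the error itself, only the measured change in the metric, which is why SPGD avoids explicit co-phase error detection.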

  8. Fast conical surface evaluation via randomized algorithm in the null-screen test

    NASA Astrophysics Data System (ADS)

    Aguirre-Aguirre, D.; Díaz-Uribe, R.; Villalobos-Mendoza, B.

    2017-01-01

    This work shows a method to recover the shape of a surface via randomized algorithms when the null-screen test is used, instead of the integration process that is commonly performed; this is because the majority of the errors are introduced during the reconstruction of the surface (i.e., the integration process). Large surfaces of this kind are widely used in the aerospace sector and in industry in general, and testing them is a significant problem. The null-screen method is a low-cost test, and a complete surface analysis can be done by using this method. In this paper, we show the simulations done for the analysis of fast conic surfaces, where it was shown that the quality and shape of a surface under study can be recovered with a percentage error of less than 2%.

  9. Simulation of Anderson localization in a random fiber using a fast Fresnel diffraction algorithm

    NASA Astrophysics Data System (ADS)

    Davis, Jeffrey A.; Cottrell, Don M.

    2016-06-01

    Anderson localization has been previously demonstrated both theoretically and experimentally for transmission of a Gaussian beam through long distances in an optical fiber consisting of a random array of smaller fibers, each having either a higher or lower refractive index. However, the computational times were extremely long. We show how to simulate these results using a fast Fresnel diffraction algorithm. In each iteration of this approach, the light passes through a phase mask, undergoes Fresnel diffraction over a small distance, and then passes through the same phase mask. We also show results where we use a binary amplitude mask at the input that selectively illuminates either the higher or the lower index fibers. Additionally, we examine imaging of various sized objects through these fibers. In all cases, our results are consistent with other computational methods and experimental results, but with a much reduced computational time.
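
    The iteration described (mask, short Fresnel propagation, mask again) can be sketched with FFT-based propagation. The grid size, sampling, wavelength, and random binary mask below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def fresnel_step(field, phase_mask, dz, wavelength, dx):
    """One iteration: apply mask, Fresnel-propagate a distance dz, apply mask again."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(fx, fx, indexing="ij")
    h = np.exp(-1j * np.pi * wavelength * dz * (kx**2 + ky**2))  # paraxial propagator
    field = field * np.exp(1j * phase_mask)
    field = np.fft.ifft2(np.fft.fft2(field) * h)
    return field * np.exp(1j * phase_mask)

# Gaussian input beam through a random binary phase mask (illustrative values).
n, dx, wavelength, dz = 128, 1e-6, 633e-9, 10e-6
x = (np.arange(n) - n // 2) * dx
xx, yy = np.meshgrid(x, x, indexing="ij")
beam = np.exp(-(xx**2 + yy**2) / (10e-6) ** 2).astype(complex)
mask = np.pi * np.random.default_rng(0).integers(0, 2, (n, n))
out = beam.copy()
for _ in range(5):
    out = fresnel_step(out, mask, dz, wavelength, dx)
```

    Because the transfer function and the phase masks are unit-modulus, each step conserves optical power, and the cost per step is just two FFTs rather than a full propagation simulation.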

  10. Development and Evaluation of a New Air Exchange Rate Algorithm for the Stochastic Human Exposure and Dose Simulation Model

    EPA Science Inventory

    between-home and between-city variability in residential pollutant infiltration. This is likely a result of differences in home ventilation, or air exchange rates (AER). The Stochastic Human Exposure and Dose Simulation (SHEDS) model is a population exposure model that uses a pro...

  11. An improved random walk algorithm for the implicit Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Keady, Kendra P.; Cleveland, Mathew A.

    2017-01-01

    In this work, we introduce a modified Implicit Monte Carlo (IMC) Random Walk (RW) algorithm, which increases simulation efficiency for multigroup radiative transfer problems with strongly frequency-dependent opacities. To date, the RW method has only been implemented in "fully-gray" form; that is, the multigroup IMC opacities are group-collapsed over the full frequency domain of the problem to obtain a gray diffusion problem for RW. This formulation works well for problems with large spatial cells and/or opacities that are weakly dependent on frequency; however, the efficiency of the RW method degrades when the spatial cells are thin or the opacities are a strong function of frequency. To address this inefficiency, we introduce a RW frequency group cutoff in each spatial cell, which divides the frequency domain into optically thick and optically thin components. In the modified algorithm, opacities for the RW diffusion problem are obtained by group-collapsing IMC opacities below the frequency group cutoff. Particles with frequencies above the cutoff are transported via standard IMC, while particles below the cutoff are eligible for RW. This greatly increases the total number of RW steps taken per IMC time-step, which in turn improves the efficiency of the simulation. We refer to this new method as Partially-Gray Random Walk (PGRW). We present numerical results for several multigroup radiative transfer problems, which show that the PGRW method is significantly more efficient than standard RW for several problems of interest. In general, PGRW decreases runtimes by a factor of ∼2-4 compared to standard RW, and a factor of ∼3-6 compared to standard IMC. While PGRW is slower than frequency-dependent Discrete Diffusion Monte Carlo (DDMC), it is also easier to adapt to unstructured meshes and can be used in spatial cells where DDMC is not applicable. This suggests that it may be optimal to employ both DDMC and PGRW in a single simulation.

  12. SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation

    SciTech Connect

    Yao, W; Farr, J

    2015-06-15

    Purpose: To develop a random walk model algorithm for calculating proton dose with a balanced computation burden and accuracy. Methods: The random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of a Gaussian angular distribution of protons due to multiple Coulomb scattering (MCS) is convenient, but in RW the use of a Gaussian angular distribution requires extremely large computation and memory. Thus, our RW model adopts a spatial distribution derived from the angular one to accelerate the computation and to decrease the memory usage. From the physics and from comparison with MC simulations, we have determined and analytically expressed the critical variables affecting the dose accuracy in our RW model. Results: Besides variables such as MCS, stopping power, and the energy spectrum after energy absorption, which have been extensively discussed in the literature, the following variables were found to be critical in our RW model: (1) the inverse square law, which can significantly reduce the computation burden and memory; (2) the non-Gaussian spatial distribution after MCS; and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water embedded with a 10 cm cortical bone and a 10 cm lung in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined the key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model substantially improves the dose accuracy in heterogeneous regions, and it is about 10 times faster than MC simulations.

  13. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    NASA Astrophysics Data System (ADS)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
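
    The RANSAC loop that the hardware implements can be sketched in software. As a simplification, a 2-parameter line model stands in here for the paper's projective model (whose minimal sample is 4 point correspondences rather than 2 points); the iteration count and inlier tolerance are illustrative.

```python
import random

def ransac_line(points, iters=300, tol=0.01):
    """Fit y = a*x + b while rejecting false matches (outliers).

    Repeatedly fit a model to a minimal random sample and keep the
    hypothesis that explains the most points within the tolerance.
    """
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.sample(points, 2)   # minimal sample
        if x1 == x2:
            continue                                    # degenerate sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

    The consensus count is what identifies false matches: they land outside the tolerance band of the best hypothesis and are excluded from the final model fit.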

  14. From analytical solutions of solute transport equations to multidimensional time-domain random walk (TDRW) algorithms

    NASA Astrophysics Data System (ADS)

    Bodin, Jacques

    2015-03-01

    In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
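
    To make the time-domain idea concrete: in the 1-D advection-dispersion case, the transit time over a distance L is the first-passage time of drifted Brownian motion, which is inverse Gaussian with mean L/v and shape L²/(2D). A single TDRW step could then be sampled as below; this is a generic sketch using the Michael-Schucany-Haas sampler, not the paper's algorithm, and the symbols v (velocity) and D (dispersion coefficient) are the usual conventions rather than quotations from the text.

```python
import math, random

def sample_travel_time(L, v, D, rng=random):
    """Transit time over distance L for 1-D advection (v) and dispersion (D):
    inverse Gaussian with mean mu = L/v and shape lam = L^2/(2D), drawn with
    the Michael-Schucany-Haas sampler."""
    mu = L / v
    lam = L * L / (2.0 * D)
    y = rng.gauss(0.0, 1.0) ** 2
    x = (mu + mu * mu * y / (2.0 * lam)
         - (mu / (2.0 * lam)) * math.sqrt(4.0 * mu * lam * y + (mu * y) ** 2))
    if rng.random() <= mu / (mu + x):
        return x
    return mu * mu / x
```

    A heterogeneous domain would be handled as the abstract describes: the particle draws one such transit time per homogeneous segment between checkpoints on the interface boundaries.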

  15. Harmonics elimination algorithm for operational modal analysis using random decrement technique

    NASA Astrophysics Data System (ADS)

    Modak, S. V.; Rawal, Chetan; Kundra, T. K.

    2010-05-01

    Operational modal analysis (OMA) extracts the modal parameters of a structure using only its output response, generally recorded during operation. OMA, when applied to mechanical engineering structures, is often faced with the problem of harmonics present in the output response, which can cause erroneous modal extraction. This paper demonstrates for the first time that the random decrement (RD) method can be efficiently employed to eliminate the harmonics from the randomdec signatures. Further, the research shows effective elimination of large-amplitude harmonics as well, by proposing the inclusion of additional random excitation. This excitation obviously need not be recorded for analysis, as is the case with any other OMA method. The free decays obtained from RD have been used for modal identification of the system using the eigensystem realization algorithm (ERA). The proposed harmonic elimination method has an advantage over previous methods in that it does not require the harmonic frequencies to be known and can be used for multiple harmonics, including periodic signals. The theory behind harmonic elimination is first developed and validated. The effectiveness of the method is demonstrated through a simulated study and then by experimental studies on a beam and a more complex F-shaped structure, which resembles the skeleton of a drilling or milling machine tool.
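
    The RD triggering-and-averaging step can be sketched as follows. This is a minimal level-crossing variant for illustration; the paper's trigger condition and signal model may differ, and the demo signal is synthetic.

```python
import numpy as np

def random_decrement(x, threshold, seg_len):
    """Random decrement signature: average the segments that follow each
    up-crossing of the trigger threshold. Uncorrelated content averages out,
    leaving a free-decay-like signature of the structure."""
    starts = [i for i in range(1, len(x) - seg_len)
              if x[i - 1] < threshold <= x[i]]   # up-crossing triggers
    if not starts:
        raise ValueError("no trigger points found")
    return np.mean([x[i:i + seg_len] for i in starts], axis=0)

# Illustrative noisy response (not the paper's data).
t = np.linspace(0.0, 60.0, 6000)
x = np.sin(2.0 * t) + 0.1 * np.random.default_rng(0).standard_normal(6000)
signature = random_decrement(x, threshold=0.5, seg_len=40)
```

    The free decays produced this way are what the paper then feeds to ERA for modal identification.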

  16. Feature selection for outcome prediction in oesophageal cancer using genetic algorithm and random forest classifier.

    PubMed

    Paul, Desbordes; Su, Ruan; Romain, Modzelewski; Sébastien, Vauclin; Pierre, Vera; Isabelle, Gardin

    2016-12-28

    The outcome prediction of patients can greatly help to personalize cancer treatment. A large number of quantitative features (clinical exams, imaging, …) are potentially useful to assess patient outcome. The challenge is to choose the most predictive subset of features. In this paper, we propose a new feature selection strategy called GARF (genetic algorithm based on random forest), applied to features extracted from positron emission tomography (PET) images and clinical data. The most relevant features, predictive of the therapeutic response or prognostic of patient survival 3 years after the end of treatment, were selected using GARF on a cohort of 65 patients with locally advanced oesophageal cancer eligible for chemo-radiation therapy. The most relevant predictive results were obtained with a subset of 9 features, leading to a random forest misclassification rate of 18±4% and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.823±0.032. The most relevant prognostic results were obtained with 8 features, leading to an error rate of 20±7% and an AUC of 0.750±0.108. Both predictive and prognostic results show better performance using GARF than using the 4 other studied methods.
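
    A GARF-style search can be skeletonized as a genetic algorithm over binary feature masks. In the sketch below the random forest fitness is replaced by a toy score, and every hyperparameter (population size, generations, mutation rate, selection scheme) is an arbitrary choice, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def ga_select(fitness, n_features, pop=30, gens=40, p_mut=0.05):
    """Genetic algorithm over binary feature masks.

    fitness(mask) should return a score to maximize; in the real GARF this
    would be, e.g., cross-validated random forest performance.
    """
    popn = rng.random((pop, n_features)) < 0.5
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in popn])
        order = np.argsort(scores)[::-1]
        elite = popn[order[: pop // 2]]               # truncation selection
        parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
        cuts = rng.integers(1, n_features, size=pop)  # one-point crossover
        children = np.array([np.concatenate((a[:c], b[c:]))
                             for (a, b), c in zip(parents, cuts)])
        popn = children ^ (rng.random(children.shape) < p_mut)  # mutation
    scores = np.array([fitness(ind) for ind in popn])
    return popn[np.argmax(scores)]

# Toy fitness: reward masks matching a pretend "informative" feature subset.
target = np.zeros(20, dtype=bool)
target[[2, 5, 11]] = True
best = ga_select(lambda mask: -int(np.sum(mask ^ target)), 20)
```

    The expensive part in practice is the fitness call, since each candidate mask requires training and evaluating a random forest.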

  17. Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan

    2010-01-01

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.
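
    The interplay of initialization and restart can be sketched generically. The toy integer landscape and greedy acceptance rule below are illustrative only; the paper's Viterbi-based initialization and Stochastic Greedy Search are not reproduced here.

```python
import random

def local_search_with_restarts(score, neighbors, init, restarts=20, steps=200):
    """Greedy stochastic local search with restarts: climb from each
    initialization and keep the best state found across all restarts."""
    best_state, best_score = None, float("-inf")
    for _ in range(restarts):
        state = init()                             # initialization strategy
        for _ in range(steps):
            cand = random.choice(neighbors(state))
            if score(cand) >= score(state):        # accept improving/equal moves
                state = cand
        if score(state) > best_score:
            best_state, best_score = state, score(state)
    return best_state

# Toy landscape: maximize -(x - 3)^2 over the integers.
best = local_search_with_restarts(
    score=lambda x: -(x - 3) ** 2,
    neighbors=lambda x: [x - 1, x + 1],
    init=lambda: random.randint(-50, 50),
)
```

    The two research questions above correspond to the `init` callable (an informed initializer can replace the uniform one) and the `restarts`/`steps` budget split.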

  18. A rigorous framework for multiscale simulation of stochastic cellular networks

    PubMed Central

    Chevalier, Michael W.; El-Samad, Hana

    2009-01-01

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions, where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-cell variability even in clonal populations. Stochastic biochemical networks are modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). The CME is not solvable except for the simplest cases, and one has to resort to kinetic Monte Carlo techniques to simulate the stochastic trajectories of the biochemical network under study. A commonly used algorithm of this kind is the stochastic simulation algorithm (SSA). Because it tracks every biochemical reaction that occurs in a given system, the SSA presents computational difficulties, especially when there is a vast disparity in the timescales of the reactions or in the number of molecules involved in these reactions. This is common in cellular networks, and many approximation algorithms have evolved to alleviate the computational burdens of the SSA. Here, we present a rigorously derived modified CME framework based on the partition of a biochemically reacting system into restricted and unrestricted reactions. Although this modified CME decomposition is as analytically difficult as the original CME, it can be naturally used to generate a hierarchy of approximations at different levels of accuracy. Most importantly, some previously derived algorithms are demonstrated to be limiting cases of our formulation. We apply our methods to biologically relevant test systems to demonstrate their accuracy and efficiency. PMID:19673546
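
    The SSA referenced above (Gillespie's direct method) can be sketched in a few lines; the birth-death example and its rate constants are illustrative, not from the paper.

```python
import math, random

def ssa(propensities, update, x0, t_end, rng=random):
    """Gillespie's stochastic simulation algorithm (direct method).

    propensities(x) -> list of reaction rates a_j(x); update(x, j) -> new
    state after reaction j fires. Returns the trajectory as (time, state).
    """
    t, x = 0.0, x0
    traj = [(t, x)]
    while t < t_end:
        a = propensities(x)
        a0 = sum(a)
        if a0 == 0.0:
            break                                    # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a0      # exponential waiting time
        r = rng.random() * a0                        # pick reaction j w.p. a_j/a0
        j, acc = 0, a[0]
        while acc < r:
            j += 1
            acc += a[j]
        x = update(x, j)
        traj.append((t, x))
    return traj

# Illustrative birth-death process: 0 -> X at rate k1; X -> 0 at rate k2 * X.
k1, k2 = 10.0, 1.0
traj = ssa(lambda n: [k1, k2 * n], lambda n, j: n + 1 if j == 0 else n - 1,
           x0=0, t_end=50.0)
```

    The computational burden the abstract describes is visible here: every single reaction event appends to the trajectory, so fast reactions or large molecule counts dominate the run time.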

  19. PIPS-SBB: A Parallel Distributed-Memory Branch-and-Bound Algorithm for Stochastic Mixed-Integer Programs

    SciTech Connect

    Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak

    2016-05-01

    Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive formulation mixed-integer programs, problem instances can exceed available memory on a single workstation. In order to overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating Branch and Bound (B&B) methods with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.

  20. PIPS-SBB: A Parallel Distributed-Memory Branch-and-Bound Algorithm for Stochastic Mixed-Integer Programs

    DOE PAGES

    Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak

    2016-05-01

    Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive formulation mixed-integer programs, problem instances can exceed available memory on a single workstation. In order to overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating Branch and Bound (B&B) methods with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.

  1. Application of Stochastic Labeling with Random-Sequence Barcodes for Simultaneous Quantification and Sequencing of Environmental 16S rRNA Genes

    PubMed Central

    Hoshino, Tatsuhiko; Inagaki, Fumio

    2017-01-01

    Next-generation sequencing (NGS) is a powerful tool for analyzing environmental DNA and provides a comprehensive molecular view of microbial communities. For obtaining the copy number of particular sequences in the NGS library, however, additional quantitative analysis such as quantitative PCR (qPCR) or digital PCR (dPCR) is required. Furthermore, the number of sequences in a sequence library does not always reflect the original copy number of a target gene because of biases caused by PCR amplification, making it difficult to convert the proportion of particular sequences in the NGS library to the copy number using the mass of input DNA. To address this issue, we applied a stochastic labeling approach with random-tag sequences and developed an NGS-based quantification protocol, which enables simultaneous sequencing and quantification of the targeted DNA. This quantitative sequencing (qSeq) is initiated from single-primer extension (SPE) using a primer with a random tag adjacent to the 5' end of the target-specific sequence. During SPE, each DNA molecule is stochastically labeled with the random tag. Subsequently, first-round PCR is conducted, specifically targeting the SPE product, followed by second-round PCR to index for NGS. The number of random tags is determined only during the SPE step and is therefore not affected by the two rounds of PCR that may introduce amplification biases. In the case of 16S rRNA genes, after NGS sequencing and taxonomic classification, the absolute number of 16S rRNA gene copies of a target phylotype can be estimated by Poisson statistics by counting the random tags incorporated at the end of each sequence. To test the feasibility of this approach, the 16S rRNA gene of Sulfolobus tokodaii was subjected to qSeq, which resulted in accurate quantification of 5.0 × 10³ to 5.0 × 10⁴ copies of the 16S rRNA gene. Furthermore, qSeq was applied to mock microbial communities and environmental samples, and the results were comparable to those obtained using digital PCR and
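
    The Poisson step can be illustrated with the standard occupancy estimate for stochastic labeling: assuming N equally likely tags, observing k distinct tags gives a maximum-likelihood estimate of -N·ln(1 - k/N) labeled molecules. This is the generic estimator, hedged as such; the paper's exact statistical treatment may differ.

```python
import math

def estimate_copies(distinct_tags, tag_space):
    """Occupancy/Poisson correction for stochastic labeling: with tag_space
    equally likely random tags and distinct_tags observed, the ML estimate of
    labeled molecules is -N * ln(1 - k/N), which accounts for tag collisions."""
    if distinct_tags >= tag_space:
        raise ValueError("tag space saturated; use a longer random tag")
    return -tag_space * math.log(1.0 - distinct_tags / tag_space)

# An 8-base random tag gives 4**8 = 65536 possible tag sequences.
copies = estimate_copies(30000, 4**8)
```

    When the tag space is much larger than the molecule count, the correction is negligible and the distinct-tag count itself is already the answer; it only matters as the tag space fills up.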

  2. Monte Carlo simulation of uncoupled continuous-time random walks yielding a stochastic solution of the space-time fractional diffusion equation.

    PubMed

    Fulger, Daniel; Scalas, Enrico; Germano, Guido

    2008-02-01

    We present a numerical method for the Monte Carlo simulation of uncoupled continuous-time random walks with a Lévy α-stable distribution of jumps in space and a Mittag-Leffler distribution of waiting times, and apply it to the stochastic solution of the Cauchy problem for a partial differential equation with fractional derivatives both in space and in time. The one-parameter Mittag-Leffler function is the natural survival probability leading to time-fractional diffusion equations. Transformation methods for Mittag-Leffler random variables were found later than the well-known transformation method by Chambers, Mallows, and Stuck for Lévy α-stable random variables and so far have not received as much attention; nor have they been used together with the latter in spite of their mathematical relationship due to the geometric stability of the Mittag-Leffler distribution. Combining the two methods, we obtain an accurate approximation of space- and time-fractional diffusion processes almost as easy and fast to compute as for standard diffusion processes.
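
    Combining the two transformation methods mentioned gives a compact CTRW simulator. The sketch below follows the published transformation formulas (Chambers-Mallows-Stuck for symmetric α-stable jumps, Kozubowski's for Mittag-Leffler waiting times); it is a generic illustration under those assumptions, not the authors' code.

```python
import math, random

def ml_waiting_time(beta, gamma_t=1.0, rng=random):
    """Mittag-Leffler waiting time via Kozubowski's transformation, 0 < beta <= 1."""
    u = 1.0 - rng.random()                     # uniform in (0, 1]
    v = 1.0 - rng.random()
    b = (math.sin(beta * math.pi) / math.tan(beta * math.pi * v)
         - math.cos(beta * math.pi))
    return -gamma_t * math.log(u) * max(b, 0.0) ** (1.0 / beta)  # guard roundoff

def stable_jump(alpha, rng=random):
    """Symmetric Levy alpha-stable jump via Chambers-Mallows-Stuck, 0 < alpha <= 2."""
    u = math.pi * (rng.random() - 0.5)         # uniform in (-pi/2, pi/2)
    w = -math.log(1.0 - rng.random())          # standard exponential
    return (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
            * (math.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

def ctrw(n_steps, alpha, beta):
    """Uncoupled CTRW: waiting times and jumps are drawn independently."""
    t = x = 0.0
    path = [(t, x)]
    for _ in range(n_steps):
        t += ml_waiting_time(beta)
        x += stable_jump(alpha)
        path.append((t, x))
    return path
```

    For beta = 1 the waiting times reduce to exponentials and for alpha = 2 the jumps reduce to Gaussians, recovering an ordinary diffusion process as a sanity check.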

  3. SU-D-201-06: Random Walk Algorithm Seed Localization Parameters in Lung Positron Emission Tomography (PET) Images

    SciTech Connect

    Soufi, M; Asl, A Kamali; Geramifar, P

    2015-06-15

    Purpose: The objective of this study was to find the best seed localization parameters for random walk algorithm application to lung tumor delineation in Positron Emission Tomography (PET) images. Methods: PET images suffer from statistical noise, and therefore tumor delineation in these images is a challenging task. The random walk algorithm, a graph-based image segmentation technique, has reliable robustness to image noise. Its fast computation and fast editing characteristics also make it powerful for clinical purposes. We implemented the random walk algorithm using MATLAB code. The validation and verification of the algorithm were done using a 4D-NCAT phantom with spherical lung lesions of different diameters from 20 to 90 mm (in incremental steps of 10 mm) and different tumor-to-background ratios of 4:1 and 8:1. STIR (Software for Tomographic Image Reconstruction) was applied to reconstruct the phantom PET images with different pixel sizes of 2×2×2 and 4×4×4 mm³. For seed localization, we selected pixels with different maximum Standardized Uptake Value (SUVmax) percentages, at least (70%, 80%, 90% and 100%) SUVmax for foreground seeds and up to (20% to 55%, in 5% increments) SUVmax for background seeds. Also, to investigate algorithm performance on clinical data, 19 patients with lung tumors were studied. The resulting contours from the algorithm were compared with manual contouring by a nuclear medicine expert as ground truth. Results: Phantom and clinical lesion segmentation showed that the best segmentation results were obtained by selecting the pixels with at least 70% SUVmax as foreground seeds and pixels up to 30% SUVmax as background seeds, respectively. A mean Dice Similarity Coefficient of 94% ± 5% (83% ± 6%) and a mean Hausdorff Distance of 1 (2) pixels were obtained for the phantom (clinical) study. Conclusion: The accurate results of the random walk algorithm in PET image segmentation assure its application for radiation treatment planning and
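
    The reported optimum (foreground seeds at ≥70% of SUVmax, background seeds at ≤30% of SUVmax) can be expressed as a simple labeling step; the array layout and integer label codes below are illustrative choices, and the random walk itself is not reproduced.

```python
import numpy as np

def suv_seeds(suv, fg_frac=0.70, bg_frac=0.30):
    """Label random-walker seeds from SUV thresholds: voxels at or above
    fg_frac * SUVmax become foreground (tumor) seeds, voxels at or below
    bg_frac * SUVmax become background seeds, and the rest stay unlabeled
    for the random walk to classify."""
    suv_max = float(suv.max())
    seeds = np.zeros(suv.shape, dtype=np.int8)   # 0 = unlabeled
    seeds[suv >= fg_frac * suv_max] = 1          # foreground seed
    seeds[suv <= bg_frac * suv_max] = 2          # background seed
    return seeds
```

    The random walk then assigns each unlabeled voxel to the seed label a walker starting there is most likely to reach first, which is what makes the seed thresholds the critical tuning parameters of the whole pipeline.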

  4. Identifying subcellular localizations of mammalian protein complexes based on graph theory with a random forest algorithm.

    PubMed

    Li, Zhan-Chao; Lai, Yan-Hua; Chen, Li-Li; Chen, Chao; Xie, Yun; Dai, Zong; Zou, Xiao-Yong

    2013-04-05

    In the post-genome era, one of the most important and challenging tasks is to identify the subcellular localizations of protein complexes, and further to elucidate their functions in human health, with applications to understanding disease mechanisms, diagnosis, and therapy. Although various experimental approaches have been developed and employed to identify the subcellular localizations of protein complexes, the laboratory technologies fall far behind the rapid accumulation of protein complexes. Therefore, it is highly desirable to develop a computational method to rapidly and reliably identify the subcellular localizations of protein complexes. In this study, a novel method is proposed for predicting subcellular localizations of mammalian protein complexes based on graph theory with a random forest algorithm. Protein complexes are modeled as weighted graphs containing nodes and edges, where nodes represent proteins, edges represent protein-protein interactions, and weights are descriptors of protein primary structures. Some topological structure features are proposed and adopted to characterize protein complexes based on graph theory. Random forest is employed to construct a model and predict subcellular localizations of protein complexes. Accuracies on a training set by a 10-fold cross-validation test for predicting plasma membrane/membrane attached, cytoplasm and nucleus are 84.78%, 71.30%, and 82.00%, respectively. Accuracies on the independent test set are 81.31%, 69.95% and 81.00%, respectively. These high prediction accuracies exhibit the state-of-the-art performance of the current method. It is anticipated that the proposed method may become a useful high-throughput tool and play a complementary role to the existing experimental techniques in identifying subcellular localizations of mammalian protein complexes. The MATLAB source code and the dataset can be obtained freely on request from the authors.

  5. Mean-field dynamics with stochastic decoherence (MF-SD): A new algorithm for nonadiabatic mixed quantum/classical molecular-dynamics simulations with nuclear-induced decoherence

    NASA Astrophysics Data System (ADS)

    Bedard-Hearn, Michael J.; Larsen, Ross E.; Schwartz, Benjamin J.

    2005-12-01

    The key factors that distinguish algorithms for nonadiabatic mixed quantum/classical (MQC) simulations from each other are how they incorporate quantum decoherence—the fact that classical nuclei must eventually cause a quantum superposition state to collapse into a pure state—and how they model the effects of decoherence on the quantum and classical subsystems. Most algorithms use distinct mechanisms for modeling nonadiabatic transitions between pure quantum basis states ("surface hops") and for calculating the loss of quantum-mechanical phase information (e.g., the decay of the off-diagonal elements of the density matrix). In our view, however, both processes should be unified in a single description of decoherence. In this paper, we start from the density matrix of the total system and use the frozen Gaussian approximation for the nuclear wave function to derive a nuclear-induced decoherence rate for the electronic degrees of freedom. We then use this decoherence rate as the basis for a new nonadiabatic MQC molecular-dynamics (MD) algorithm, which we call mean-field dynamics with stochastic decoherence (MF-SD). MF-SD begins by evolving the quantum subsystem according to the time-dependent Schrödinger equation, leading to mean-field dynamics. MF-SD then uses the nuclear-induced decoherence rate to determine stochastically at each time step whether the system remains in a coherent mixed state or decoheres. Once it is determined that the system should decohere, the quantum subsystem undergoes an instantaneous total wave-function collapse onto one of the adiabatic basis states and the classical velocities are adjusted to conserve energy. Thus, MF-SD combines surface hops and decoherence into a single idea: decoherence in MF-SD does not require the artificial introduction of reference states, auxiliary trajectories, or trajectory swarms, which also makes MF-SD much more computationally efficient than other nonadiabatic MQC MD algorithms. The unified definition of
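The stochastic-decoherence decision at the heart of MF-SD can be sketched as follows. This is a hedged illustration, not the authors' algorithm: it assumes the nuclear-induced decoherence rate Γ converts to a per-step collapse probability 1 − exp(−Γ·dt) (a standard rate-to-probability map the abstract does not spell out), and on collapse it selects an adiabatic basis state with quantum weight |c_i|²:

```python
import math
import random

def mf_sd_step(coeffs, gamma, dt, rng=random.random):
    """One stochastic-decoherence decision. coeffs: populations |c_i|^2 of the
    adiabatic basis states, normalized to 1. Returns the index of the state
    the wave function collapses onto, or None if the system stays coherent."""
    p_decohere = 1.0 - math.exp(-gamma * dt)  # assumed rate-to-probability map
    if rng() < p_decohere:
        # Instantaneous total wave-function collapse onto basis state i,
        # chosen with quantum probability |c_i|^2
        r, acc = random.random(), 0.0
        for i, w in enumerate(coeffs):
            acc += w
            if r <= acc:
                return i
        return len(coeffs) - 1
    return None  # remain in the coherent mean-field state
```

In the full method a collapse would also be followed by a classical velocity adjustment to conserve energy, which this fragment omits.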

  6. Automatic classification of endogenous seismic sources within a landslide body using random forest algorithm

    NASA Astrophysics Data System (ADS)

    Provost, Floriane; Hibert, Clément; Malet, Jean-Philippe; Stumpf, André; Doubre, Cécile

    2016-04-01

    Different studies have shown the presence of microseismic activity in soft-rock landslides. The seismic signals exhibit significantly different features in the time and frequency domains, which allow their classification and interpretation. Most of the classes can be associated with different mechanisms of deformation occurring within and at the surface (e.g. rockfall, slide-quake, fissure opening, fluid circulation). However, some signals remain not fully understood, and some classes contain too few examples to permit interpretation. To move toward a more complete interpretation of the links between the dynamics of soft-rock landslides and the physical processes controlling their behaviour, a complete catalog of the endogenous seismicity is needed. We propose a multi-class detection method based on the random forests algorithm to automatically classify the source of seismic signals. Random forests is a supervised machine learning technique based on the computation of a large number of decision trees. The multiple decision trees are constructed from training sets that include examples of each target class, described by a set of attributes; for seismic signals, these attributes may encompass spectral features as well as waveform characteristics, multi-station observations and other relevant information. The Random Forest classifier is used because it provides state-of-the-art performance when compared with other machine learning techniques (e.g. SVM, Neural Networks) and requires no fine tuning. Furthermore, it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems. In this work, we present the first results of the classification method applied to the seismicity recorded at the Super-Sauze landslide between 2013 and 2015. We selected a dozen seismic signal features that precisely characterize the spectral content of a signal (e.g. central frequency, spectrum width, energy in several frequency bands, spectrogram shape, spectrum local and global maxima
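Two of the named features, central frequency and spectrum width, can be read as the centroid and spread of a power spectrum. An illustrative Python sketch with assumed definitions (the study's exact feature formulas are not given in the abstract):

```python
import math

def spectral_features(freqs, power):
    """Central frequency (power-weighted centroid) and spectrum width
    (power-weighted standard deviation) of a discrete power spectrum --
    toy stand-ins for two of the classifier attributes named above."""
    total = sum(power)
    centroid = sum(f * p for f, p in zip(freqs, power)) / total
    spread = math.sqrt(sum(p * (f - centroid) ** 2
                           for f, p in zip(freqs, power)) / total)
    return centroid, spread

# Toy spectrum peaked at 2 Hz
freqs = [1.0, 2.0, 3.0]
power = [1.0, 2.0, 1.0]
c, s = spectral_features(freqs, power)
```

A feature vector of such scalars per event is what each decision tree in the forest would split on.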

  7. Land cover classification using random forest with genetic algorithm-based parameter optimization

    NASA Astrophysics Data System (ADS)

    Ming, Dongping; Zhou, Tianning; Wang, Min; Tan, Tian

    2016-07-01

    Land cover classification based on remote sensing imagery is an important means to monitor, evaluate, and manage land resources. However, it requires robust classification methods that allow accurate mapping of complex land cover categories. Random forest (RF) is a powerful machine-learning classifier that can be used in land remote sensing. However, two important parameters of RF classification, namely, the number of trees and the number of variables tried at each split, affect classification accuracy. Thus, optimal parameter selection is an inevitable problem in RF-based image classification. This study uses the genetic algorithm (GA) to optimize the two parameters of RF to produce optimal land cover classification accuracy. HJ-1B CCD2 image data are used to classify six different land cover categories in Changping, Beijing, China. Experimental results show that GA-RF can avoid arbitrariness in the selection of parameters. The experiments also compare land cover classification results obtained with the GA-RF method, the traditional RF method (with default parameters), and the support vector machine method. Relative to these two baselines, the GA-RF method improved classification accuracy by 1.02% and 6.64%, respectively. The comparison results show that GA-RF is a feasible solution for land cover classification without compromising accuracy or incurring excessive time.
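A minimal GA of the kind used to tune the two RF parameters might look as follows. This is a generic sketch: the fitness function is a hypothetical stand-in for RF cross-validated accuracy (here peaking at 100 trees and 4 variables per split), and the operators (elitism, one-point crossover, random-reset mutation) are illustrative choices, not the paper's configuration:

```python
import random

def ga_optimize(fitness, bounds, pop_size=10, generations=20, seed=1):
    """Tiny integer GA maximizing `fitness` over genes constrained by
    `bounds` = [(lo, hi), ...]. Keeps the top half each generation,
    fills the rest by one-point crossover plus occasional mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(bounds))
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:  # random-reset mutation
                i = rng.randrange(len(bounds))
                child[i] = rng.randint(*bounds[i])
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Hypothetical fitness: best accuracy at (n_trees=100, m_try=4)
best = ga_optimize(lambda g: -(g[0] - 100) ** 2 - 10 * (g[1] - 4) ** 2,
                   bounds=[(10, 500), (1, 10)])
```

In the actual study, evaluating `fitness` would mean training an RF on the HJ-1B data with the candidate parameters and scoring its accuracy.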

  8. Markov stochasticity coordinates

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2017-01-01

    Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. The method is also tweaked to quantify the stochasticity of the first-passage times of Markov dynamics, and the socioeconomic equality and mobility in human societies.
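The first-passage-time application mentioned above can be illustrated by direct simulation of a discrete-time Markov chain. This is a generic sketch of the quantity being gauged; the Markov Stochasticity Coordinates construction itself is not reproduced here:

```python
import random

def first_passage_time(P, start, target, rng, max_steps=10_000):
    """Number of steps for a discrete-time Markov chain with transition
    matrix P (rows sum to 1) to first reach `target` from `start`."""
    state, t = start, 0
    while state != target and t < max_steps:
        r, acc = rng.random(), 0.0
        for j, p in enumerate(P[state]):
            acc += p
            if r <= acc:
                state = j
                break
        t += 1
    return t

# Two-state chain: from either state, jump to each state with prob. 1/2,
# so the passage time 0 -> 1 is geometric with mean 2.
rng = random.Random(0)
P = [[0.5, 0.5], [0.5, 0.5]]
times = [first_passage_time(P, 0, 1, rng) for _ in range(1000)]
mean_fpt = sum(times) / len(times)
```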

  9. High-copy bacterial plasmids diffuse in the nucleoid-free space, replicate stochastically and are randomly partitioned at cell division.

    PubMed

    Reyes-Lamothe, Rodrigo; Tran, Tung; Meas, Diane; Lee, Laura; Li, Alice M; Sherratt, David J; Tolmasky, Marcelo E

    2014-01-01

    Bacterial plasmids play important roles in metabolism, pathogenesis and bacterial evolution, and are highly versatile biotechnological tools. Stable inheritance of plasmids depends on their autonomous replication and efficient partition to daughter cells at cell division. Active partition systems have not been identified for high-copy number plasmids, and it has been generally believed that they are partitioned randomly at cell division. Nevertheless, direct evidence for the cellular location of replicating and nonreplicating plasmids, and for the partition mechanism, has been lacking. We used as a model pJHCMW1, a plasmid isolated from Klebsiella pneumoniae that includes two β-lactamase and two aminoglycoside resistance genes. Here we report that individual ColE1-type plasmid molecules are mobile and tend to be excluded from the nucleoid, mainly localizing at the cell poles but occasionally moving between poles along the long axis of the cell. As a consequence, at the moment of cell division, most plasmid molecules are located at the poles, resulting in efficient random partition to the daughter cells. Complete replication of individual molecules occurred stochastically and independently in the nucleoid-free space throughout the cell cycle, with a constant probability of initiation per plasmid.
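The replication-and-partition picture reported here is easy to simulate. A toy Python sketch, assuming a constant per-copy initiation probability per cycle and independent coin-flip partitioning, both as described in the abstract (the copy number and probability values are illustrative, not measured):

```python
import random

def replicate(copies, p_init, rng):
    """Stochastic replication: each plasmid copy independently initiates
    replication during the cycle with constant probability p_init."""
    return copies + sum(1 for _ in range(copies) if rng.random() < p_init)

def divide(copies, rng):
    """Random partition at division: each copy goes to either daughter
    cell with probability 1/2 (binomial partitioning)."""
    d1 = sum(1 for _ in range(copies) if rng.random() < 0.5)
    return d1, copies - d1

rng = random.Random(42)
n = 20                        # copies at the start of the cycle
n = replicate(n, 0.9, rng)    # copy number roughly doubles
d1, d2 = divide(n, rng)       # binomial split between daughters
```

For high copy numbers the binomial split rarely leaves a daughter plasmid-free, which is why purely random partition can still give stable inheritance.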

  10. 3-D Ultrasound Segmentation of the Placenta Using the Random Walker Algorithm: Reliability and Agreement.

    PubMed

    Stevenson, Gordon N; Collins, Sally L; Ding, Jane; Impey, Lawrence; Noble, J Alison

    2015-12-01

    Volumetric segmentation of the placenta using 3-D ultrasound is currently performed clinically to investigate correlation between organ volume and fetal outcome or pathology. Previously, interpolative or semi-automatic contour-based methodologies were used to provide volumetric results. We describe the validation of an original random walker (RW)-based algorithm against manual segmentation and an existing semi-automated method, virtual organ computer-aided analysis (VOCAL), using initialization time, inter- and intra-observer variability of volumetric measurements and quantification accuracy (with respect to manual segmentation) as metrics of success. Both semi-automatic methods require initialization. Therefore, the first experiment compared initialization times. Initialization was timed by one observer using 20 subjects. This revealed significant differences (p < 0.001) in time taken to initialize the VOCAL method compared with the RW method. In the second experiment, 10 subjects were used to analyze intra-/inter-observer variability between two observers. Bland-Altman plots were used to analyze variability combined with intra- and inter-observer variability measured by intra-class correlation coefficients, which were reported for all three methods. Intra-class correlation coefficient values for intra-observer variability were higher for the RW method than for VOCAL, and both were similar to manual segmentation. Inter-observer variability was 0.94 (0.88, 0.97), 0.91 (0.81, 0.95) and 0.80 (0.61, 0.90) for manual, RW and VOCAL, respectively. Finally, a third observer with no prior ultrasound experience was introduced and volumetric differences from manual segmentation were reported. Dice similarity coefficients for observers 1, 2 and 3 were respectively 0.84 ± 0.12, 0.94 ± 0.08 and 0.84 ± 0.11, and the mean was 0.87 ± 0.13. The RW algorithm was found to provide results concordant with those for manual segmentation and to outperform VOCAL in aspects of observer
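The Dice similarity coefficient used above to compare observers is a simple overlap ratio, 2|A∩B| / (|A| + |B|). A minimal Python implementation for flat binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as flat
    0/1 sequences of equal length: 2|A∩B| / (|A| + |B|)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0  # two empty masks agree

# Two toy segmentations overlapping in 2 of their 3 labeled pixels
a = [1, 1, 1, 0, 0]
b = [0, 1, 1, 1, 0]
d = dice(a, b)  # 2*2 / (3+3)
```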

  11. Stochastic longshore current dynamics

    NASA Astrophysics Data System (ADS)

    Restrepo, Juan M.; Venkataramani, Shankar

    2016-12-01

    We develop a stochastic parametrization, based on a 'simple' deterministic model for the dynamics of steady longshore currents, that produces ensembles that are statistically consistent with field observations of these currents. Unlike deterministic models, stochastic parametrizations incorporate randomness and hence can only match the observations in a statistical sense. Unlike statistical emulators, in which the model is tuned to the statistical structure of the observations, stochastic parametrizations are not directly tuned to match the statistics of the observations. Rather, stochastic parametrization combines deterministic, i.e., physics-based, models with stochastic models for the "missing physics" to create hybrid models that are stochastic yet can be used for making predictions, especially in the context of data assimilation. We introduce a novel measure of the utility of stochastic models of complex processes, which we call consistency of sensitivity. A model with poor consistency of sensitivity requires a great deal of parameter tuning and has only a very narrow range of parameters that lead to outcomes consistent with a reasonable spectrum of physical outcomes. We apply this metric to our stochastic parametrization and show that the loss of certainty inherent in the model due to its stochastic nature is offset by the model's resulting consistency of sensitivity. In particular, the stochastic model still retains the forward sensitivity of the deterministic model and hence respects important structural/physical constraints, yet has a broader range of parameters capable of producing outcomes consistent with the field data used in evaluating the model. This leads to an expanded range of model applicability. We show, in the context of data assimilation, that the stochastic parametrization of longshore currents achieves good results in capturing the statistics of observations that were not used in tuning the model.

  12. Stochastic models of solute transport in highly heterogeneous geologic media

    SciTech Connect

    Semenov, V.N.; Korotkin, I.A.; Pruess, K.; Goloviznin, V.M.; Sorokovikova, O.S.

    2009-09-15

    A stochastic model of anomalous diffusion was developed in which transport occurs by random motion of Brownian particles, described by distribution functions of random displacements with heavy (power-law) tails. One variant of an effective algorithm for random function generation with a power-law asymptotic and arbitrary factor of asymmetry is proposed that is based on the Gnedenko-Levy limit theorem and makes it possible to reproduce all known Levy α-stable fractal processes. A two-dimensional stochastic random walk algorithm has been developed that approximates anomalous diffusion with streamline-dependent and space-dependent parameters. The motivation for introducing such a type of dispersion model is the observed fact that tracers in natural aquifers spread at different super-Fickian rates in different directions. For this and other important cases, stochastic random walk models are the only known way to solve the so-called multiscaling fractional order diffusion equation with space-dependent parameters. Some comparisons of model results and field experiments are presented.
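For the symmetric case (β = 0), a standard generator for Lévy α-stable variates is the Chambers-Mallows-Stuck transform. The sketch below is in that spirit only; it is not the authors' generator, which also handles an arbitrary factor of asymmetry:

```python
import math
import random

def symmetric_stable(alpha, rng):
    """One draw from a symmetric (beta = 0) Levy alpha-stable law,
    0 < alpha <= 2, via the Chambers-Mallows-Stuck transform:
    V ~ Uniform(-pi/2, pi/2), W ~ Exp(1)."""
    v = (rng.random() - 0.5) * math.pi
    w = -math.log(1.0 - rng.random())
    if alpha == 1.0:
        return math.tan(v)  # Cauchy special case
    return (math.sin(alpha * v) / math.cos(v) ** (1.0 / alpha)
            * (math.cos(v - alpha * v) / w) ** ((1.0 - alpha) / alpha))

rng = random.Random(0)
samples = [symmetric_stable(1.5, rng) for _ in range(100)]
```

Displacements drawn this way have the heavy power-law tails that drive the super-Fickian spreading described in the abstract; alpha = 2 recovers Gaussian (Fickian) behavior.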

  13. Algorithms for propagating uncertainty across heterogeneous domains

    SciTech Connect

    Cho, Heyrim; Yang, Xiu; Venturi, D.; Karniadakis, George E.

    2015-12-30

    We address an important research area in stochastic multi-scale modeling, namely the propagation of uncertainty across heterogeneous domains characterized by partially correlated processes with vastly different correlation lengths. This class of problems arises very often when computing stochastic PDEs and particle models with stochastic/stochastic domain interaction, but also with stochastic/deterministic coupling. The domains may be fully embedded, adjacent or partially overlapping. The fundamental open question we address is the construction of proper transmission boundary conditions that preserve global statistical properties of the solution across different subdomains. Often, the codes that model different parts of the domains are black-box, and hence a domain decomposition technique is required. No rigorous theory or even effective empirical algorithms have yet been developed for this purpose, although interfaces defined in terms of functionals of random fields (e.g., multi-point cumulants) can overcome the computationally prohibitive problem of preserving sample-path continuity across domains. The key idea of the different methods we propose relies on combining local reduced-order representations of random fields with multi-level domain decomposition. Specifically, we propose two new algorithms: The first one enforces the continuity of the conditional mean and variance of the solution across adjacent subdomains by using Schwarz iterations. The second algorithm is based on PDE-constrained multi-objective optimization, and it allows us to set more general interface conditions. The effectiveness of these new algorithms is demonstrated in numerical examples involving elliptic problems with random diffusion coefficients, stochastically advected scalar fields, and nonlinear advection-reaction problems with random reaction rates.

  14. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    PubMed Central

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. Accelerating magnetic resonance imaging (MRI) scanning can help improve hospital throughput, and patients benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, neither the computation time nor the reconstruction quality of traditional CS-MRI met the requirements of clinical use. Method. In this study, a novel method was proposed with the name exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three established components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068
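The iterative shrinkage-thresholding component of EWISTARS centers on the soft-thresholding operator. A minimal Python sketch of that operator and one generic ISTA iteration (the exponential wavelet transform and the random shift are omitted; this is the textbook building block, not the paper's full method):

```python
def soft_threshold(x, t):
    """Elementwise soft-thresholding: shrink each value toward zero by t,
    zeroing anything with magnitude below t. This is the sparsity-
    promoting step of iterative shrinkage-thresholding algorithms."""
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in x]

def ista_step(x, grad, step, lam):
    """One ISTA iteration: a gradient step on the data-fidelity term
    followed by soft-thresholding with threshold step * lam."""
    return soft_threshold([xi - step * gi for xi, gi in zip(x, grad)],
                          step * lam)
```

In a CS-MRI setting `x` would hold wavelet coefficients and `grad` the gradient of the k-space data-consistency term.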

  15. MODIS 250m burned area mapping based on an algorithm using change point detection and Markov random fields.

    NASA Astrophysics Data System (ADS)

    Mota, Bernardo; Pereira, Jose; Campagnolo, Manuel; Killick, Rebeca

    2013-04-01

    Area burned in tropical savannas of Brazil was mapped using MODIS-AQUA daily 250m resolution imagery by adapting one of the European Space Agency fire_CCI project burned area algorithms, based on change point detection and Markov random fields. The study area covers 1.44 Mkm², and the analysis was performed with data from 2005. The daily 1000 m image quality layer was used for cloud and cloud shadow screening. The algorithm treats each pixel as a time series and detects changes in the statistical properties of NIR reflectance values to identify potential burning dates. The first step of the algorithm is robust filtering, to exclude outlier observations, followed by application of the Pruned Exact Linear Time (PELT) change point detection technique. Near-infrared (NIR) spectral reflectance changes between time segments, and post-change NIR reflectance values, are combined into a fire likelihood score. Change points corresponding to an increase in reflectance are dismissed as potential burn events, as are those occurring outside of a pre-defined fire season. In the last step of the algorithm, monthly burned area probability maps and detection date maps are converted to dichotomous (burned/unburned) maps using Markov random fields, which take into account both spatial and temporal relations in the potential burned area maps. A preliminary assessment of our results is performed by comparison with data from the MODIS 1km active fires and the 500m burned area products, taking into account the differences in spatial resolution between the products.
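Change point detection by penalized segmentation can be illustrated with the exact O(n²) optimal-partitioning recursion that PELT accelerates by pruning candidate change points. A toy Python sketch for changes in mean with a squared-error segment cost (an illustration of the technique, not the fire_CCI implementation):

```python
def change_points(y, penalty):
    """Penalized least-squares segmentation by optimal partitioning.
    Returns the change point indices minimizing
    sum of segment costs + penalty per segment, where a segment's cost
    is its sum of squared deviations from the segment mean."""
    n = len(y)
    s1 = [0.0] * (n + 1)   # prefix sums for O(1) segment cost
    s2 = [0.0] * (n + 1)
    for i, v in enumerate(y):
        s1[i + 1] = s1[i] + v
        s2[i + 1] = s2[i] + v * v

    def cost(i, j):  # cost of segment y[i:j]
        m = j - i
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / m

    best = [0.0] * (n + 1)  # best[j]: optimal penalized cost of y[:j]
    prev = [0] * (n + 1)    # prev[j]: last change point before j
    for j in range(1, n + 1):
        best[j], prev[j] = min(
            (best[i] + cost(i, j) + penalty, i) for i in range(j))
    cps, j = [], n
    while j > 0:            # backtrack the optimal segmentation
        if prev[j] > 0:
            cps.append(prev[j])
        j = prev[j]
    return sorted(cps)

# A step in the mean at index 10 is recovered as a change point
cps = change_points([0.0] * 10 + [5.0] * 10, penalty=1.0)
```

PELT obtains the same optimum in roughly linear time by discarding candidate split indices `i` that can never be optimal for any later `j`.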

  16. On implementation of EM-type algorithms in the stochastic models for a matrix computing on GPU

    SciTech Connect

    Gorshenin, Andrey K.

    2015-03-10

    The paper discusses the main ideas of an implementation of EM-type algorithms for computing on graphics processors, and their application to probabilistic models based on Cox processes. An example of GPU-adapted MATLAB source code for finite normal mixtures with the expectation-maximization matrix formulas is given. Computational efficiency of the GPU versus the CPU is illustrated for different sample sizes.
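The expectation-maximization iteration for finite normal mixtures that the paper maps onto GPU matrix operations can be sketched in plain Python for the 1-D, two-component case (illustrative only; the paper's code is MATLAB with GPU-adapted matrix formulas):

```python
import math

def em_step(data, w, mu, sigma2):
    """One EM iteration for a two-component 1-D normal mixture.
    E-step: responsibilities of component 0; M-step: reweighted
    mixture weight, means, and variances."""
    def pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
    r = []
    for x in data:
        p0 = w * pdf(x, mu[0], sigma2[0])
        p1 = (1 - w) * pdf(x, mu[1], sigma2[1])
        r.append(p0 / (p0 + p1))
    n0 = sum(r)
    n1 = len(data) - n0
    mu0 = sum(ri * x for ri, x in zip(r, data)) / n0
    mu1 = sum((1 - ri) * x for ri, x in zip(r, data)) / n1
    v0 = sum(ri * (x - mu0) ** 2 for ri, x in zip(r, data)) / n0
    v1 = sum((1 - ri) * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1
    return n0 / len(data), (mu0, mu1), (max(v0, 1e-9), max(v1, 1e-9))

# Two well-separated clusters around 0 and 5
data = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]
w, mu, s2 = 0.5, (0.0, 5.0), (1.0, 1.0)
for _ in range(20):
    w, mu, s2 = em_step(data, w, mu, s2)
```

The E-step and M-step sums are exactly the per-element operations that become dense matrix products on a GPU.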

  17. Mutation-Based Artificial Fish Swarm Algorithm for Bound Constrained Global Optimization

    NASA Astrophysics Data System (ADS)

    Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.

    2011-09-01

    The mutation-based artificial fish swarm (AFS) algorithm presented herein includes mutation operators to prevent the algorithm from falling into local solutions, to diversify the search, and to accelerate convergence to the global optimum. Three mutation strategies are introduced into the AFS algorithm to define the trial points that emerge from random, leaping and searching behaviors. Computational results show that the new algorithm outperforms other well-known global stochastic solution methods.

  18. Heuristic-biased stochastic sampling

    SciTech Connect

    Bresina, J.L.

    1996-12-31

    This paper presents a search technique for scheduling problems, called Heuristic-Biased Stochastic Sampling (HBSS). The underlying assumption behind the HBSS approach is that strictly adhering to a search heuristic often does not yield the best solution and, therefore, exploration off the heuristic path can prove fruitful. Within the HBSS approach, the balance between heuristic adherence and exploration can be controlled according to the confidence one has in the heuristic. By varying this balance, encoded as a bias function, the HBSS approach encompasses a family of search algorithms of which greedy search and completely random search are extreme members. We present empirical results from an application of HBSS to the real-world problem of observation scheduling. These results show that with the proper bias function, HBSS can readily outperform greedy search.
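The core of HBSS is sampling a candidate by rank-biased weights rather than always taking the heuristic's top choice. A minimal Python sketch with a 1/rank bias as an example bias function (an illustration of the idea, not Bresina's scheduler code):

```python
import random

def hbss_choose(candidates, heuristic, bias, rng):
    """Rank candidates by the heuristic, weight rank r by bias(r), then
    sample one candidate with probability proportional to its weight.
    A bias concentrated on rank 1 recovers greedy search; a constant
    bias recovers completely random search."""
    ranked = sorted(candidates, key=heuristic, reverse=True)
    weights = [bias(r) for r in range(1, len(ranked) + 1)]
    target = rng.random() * sum(weights)
    acc = 0.0
    for cand, wt in zip(ranked, weights):
        acc += wt
        if target <= acc:
            return cand
    return ranked[-1]

rng = random.Random(3)
pick = hbss_choose([4, 9, 1, 7], heuristic=lambda x: x,
                   bias=lambda r: 1.0 / r, rng=rng)
```

With `bias=lambda r: 1.0 if r == 1 else 0.0` the sampler always returns the heuristic's first choice, recovering the greedy extreme of the family.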

  19. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.

    PubMed

    Dhar, Amrit; Minin, Vladimir N

    2017-02-08

    Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.

  20. Numerical method for the stochastic projected Gross-Pitaevskii equation

    NASA Astrophysics Data System (ADS)

    Rooney, S. J.; Blakie, P. B.; Bradley, A. S.

    2014-01-01

    We present a method for solving the stochastic projected Gross-Pitaevskii equation (SPGPE) for a three-dimensional weakly interacting Bose gas in a harmonic-oscillator trapping potential. The SPGPE poses the dual challenge of accurately evolving all modes in the low-energy classical region of the system and evaluating terms from the number-conserving scattering reservoir process. We give an accurate and efficient procedure for evaluating the scattering terms using a Hermite-polynomial based spectral-Galerkin representation, which allows us to precisely implement the low-energy mode restriction. Stochastic integration is performed using the weak semi-implicit Euler method. We extensively characterize the accuracy of our method, finding a faster-than-expected rate of stochastic convergence. Physical consistency of the algorithm is demonstrated by considering thermalization of initially random states.

  1. Algorithm to solve a chance-constrained network capacity design problem with stochastic demands and finite support

    DOE PAGES

    Schumacher, Kathryn M.; Chen, Richard Li-Yang; Cohn, Amy E. M.; ...

    2016-04-15

    Here, we consider the problem of determining the capacity to assign to each arc in a given network, subject to uncertainty in the supply and/or demand of each node. This design problem underlies many real-world applications, such as the design of power transmission and telecommunications networks. We first consider the case where a set of supply/demand scenarios are provided, and we must determine the minimum-cost set of arc capacities such that a feasible flow exists for each scenario. We briefly review existing theoretical approaches to solving this problem and explore implementation strategies to reduce run times. With this as a foundation, our primary focus is on a chance-constrained version of the problem in which α% of the scenarios must be feasible under the chosen capacity, where α is a user-defined parameter and the specific scenarios to be satisfied are not predetermined. We describe an algorithm which utilizes a separation routine for identifying violated cut-sets which can solve the problem to optimality, and we present computational results. We also present a novel greedy algorithm, our primary contribution, which can be used to solve for a high quality heuristic solution. We present computational analysis to evaluate the performance of our proposed approaches.

  2. Algorithm to solve a chance-constrained network capacity design problem with stochastic demands and finite support

    SciTech Connect

    Schumacher, Kathryn M.; Chen, Richard Li-Yang; Cohn, Amy E. M.; Castaing, Jeremy

    2016-04-15

    Here, we consider the problem of determining the capacity to assign to each arc in a given network, subject to uncertainty in the supply and/or demand of each node. This design problem underlies many real-world applications, such as the design of power transmission and telecommunications networks. We first consider the case where a set of supply/demand scenarios are provided, and we must determine the minimum-cost set of arc capacities such that a feasible flow exists for each scenario. We briefly review existing theoretical approaches to solving this problem and explore implementation strategies to reduce run times. With this as a foundation, our primary focus is on a chance-constrained version of the problem in which α% of the scenarios must be feasible under the chosen capacity, where α is a user-defined parameter and the specific scenarios to be satisfied are not predetermined. We describe an algorithm which utilizes a separation routine for identifying violated cut-sets which can solve the problem to optimality, and we present computational results. We also present a novel greedy algorithm, our primary contribution, which can be used to solve for a high quality heuristic solution. We present computational analysis to evaluate the performance of our proposed approaches.

  3. Final Technical Report: Sparse Grid Scenario Generation and Interior Algorithms for Stochastic Optimization in a Parallel Computing Environment

    SciTech Connect

    Mehrotra, Sanjay

    2016-09-07

    The support from this grant resulted in seven published papers and a technical report. Two papers are published in SIAM J. on Optimization [87, 88]; two papers are published in IEEE Transactions on Power Systems [77, 78]; one paper is published in Smart Grid [79]; one paper is published in Computational Optimization and Applications [44] and one in INFORMS J. on Computing [67]. The works in [44, 67, 87, 88] were funded primarily by this DOE grant. The applied papers in [77, 78, 79] were also supported through a subcontract from the Argonne National Lab. We start by presenting our main research results on the scenario generation problem in Sections 1–2. We present our algorithmic results on interior point methods for convex optimization problems in Section 3. We describe a new ‘central’ cutting surface algorithm developed for solving large scale convex programming problems (as is the case with our proposed research) with a semi-infinite number of constraints in Section 4. In Sections 5–6 we present our work on two application problems of interest to DOE.

  4. Using a stochastic gradient boosting algorithm to analyse the effectiveness of Landsat 8 data for montado land cover mapping: Application in southern Portugal

    NASA Astrophysics Data System (ADS)

    Godinho, Sérgio; Guiomar, Nuno; Gil, Artur

    2016-07-01

    This study aims to develop and propose a methodological approach for montado ecosystem mapping using Landsat 8 multi-spectral data, vegetation indices, and the Stochastic Gradient Boosting (SGB) algorithm. Two Landsat 8 scenes (images from spring and summer 2014) of the same area in southern Portugal were acquired. Six vegetation indices were calculated for each scene: the Enhanced Vegetation Index (EVI), the Short-Wave Infrared Ratio (SWIR32), the Carotenoid Reflectance Index 1 (CRI1), the Green Chlorophyll Index (CIgreen), the Normalised Multi-band Drought Index (NMDI), and the Soil-Adjusted Total Vegetation Index (SATVI). Based on this information, two datasets were prepared: (i) Dataset I only included multi-temporal Landsat 8 spectral bands (LS8), and (ii) Dataset II included the same information as Dataset I plus vegetation indices (LS8 + VIs). The integration of the vegetation indices into the classification scheme resulted in a significant improvement in the accuracy of Dataset II's classifications when compared to Dataset I (McNemar test: Z-value = 4.50), leading to a difference of 4.90% in overall accuracy and 0.06 in the Kappa value. For the montado ecosystem, adding vegetation indices in the classification process yielded a notable increase in producer's and user's accuracies of 3.64% and 6.26%, respectively. By using the variable importance function of the SGB algorithm, it was found that the six most prominent variables (from a total of 24 tested variables) were the following: EVI_summer; CRI1_spring; SWIR32_spring; B6_summer; B5_summer; and CIgreen_summer.

  5. A Circuit-Based Neural Network with Hybrid Learning of Backpropagation and Random Weight Change Algorithms

    PubMed Central

    Yang, Changju; Kim, Hyongsuk; Adhikari, Shyam Prasad; Chua, Leon O.

    2016-01-01

    A hybrid learning method combining software-based backpropagation (BP) learning with hardware-based random weight change (RWC) learning is proposed for the development of circuit-based neural networks. Backpropagation is one of the most efficient learning algorithms, but its hardware implementation is extremely difficult. The RWC algorithm, by contrast, is very easy to implement in hardware circuits but requires too many iterations to learn. The proposed algorithm is a hybrid of the two. The main learning is first performed with a software version of the BP algorithm, and the learned weights are then transplanted onto a hardware version of the neural circuit. At the time of weight transplantation, a significant amount of output error can occur due to the characteristic differences between the software and the hardware. In the proposed method, this error is reduced via complementary learning with the RWC algorithm, implemented in simple hardware. The usefulness of the proposed hybrid learning system is verified via simulations on several classical learning problems. PMID:28025566

  6. A Circuit-Based Neural Network with Hybrid Learning of Backpropagation and Random Weight Change Algorithms.

    PubMed

    Yang, Changju; Kim, Hyongsuk; Adhikari, Shyam Prasad; Chua, Leon O

    2016-12-23

    A hybrid learning method combining software-based backpropagation (BP) learning with hardware-based random weight change (RWC) learning is proposed for the development of circuit-based neural networks. Backpropagation is one of the most efficient learning algorithms, but its hardware implementation is extremely difficult. The RWC algorithm, by contrast, is very easy to implement in hardware circuits but requires too many iterations to learn. The proposed algorithm is a hybrid of the two. The main learning is first performed with a software version of the BP algorithm, and the learned weights are then transplanted onto a hardware version of the neural circuit. At the time of weight transplantation, a significant amount of output error can occur due to the characteristic differences between the software and the hardware. In the proposed method, this error is reduced via complementary learning with the RWC algorithm, implemented in simple hardware. The usefulness of the proposed hybrid learning system is verified via simulations on several classical learning problems.

  7. Multi-Objective Random Search Algorithm for Simultaneously Optimizing Wind Farm Layout and Number of Turbines

    NASA Astrophysics Data System (ADS)

    Feng, Ju; Shen, Wen Zhong; Xu, Chang

    2016-09-01

    A new algorithm for multi-objective wind farm layout optimization is presented. It formulates the wind turbine locations as continuous variables and is capable of optimizing the number of turbines and their locations in the wind farm simultaneously. Two objectives are considered. One is to maximize the total power production, which is calculated by considering the wake effects using the Jensen wake model combined with the local wind distribution. The other is to minimize the total electrical cable length. This length is assumed to be the total length of the minimal spanning tree that connects all turbines and is calculated using Prim's algorithm. Constraints on the wind farm boundary and wind turbine proximity are also considered. An ideal test case shows that the proposed algorithm substantially outperforms the well-known multi-objective genetic algorithm NSGA-II. In a real test case based on the Horns Rev 1 wind farm, the algorithm also obtains useful Pareto frontiers and provides a wide range of Pareto-optimal layouts with different numbers of turbines for a real-life wind farm developer.
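The cable-length objective described above, the total length of the minimal spanning tree over the turbine positions, can be sketched with Prim's algorithm as follows (illustrative Python with hypothetical turbine coordinates, not the authors' code):

```python
import math

def prim_mst_length(points):
    """Total length of the minimal spanning tree over 2-D points (Prim's algorithm),
    used here as a proxy for the total electrical cable length of a layout."""
    n = len(points)
    if n < 2:
        return 0.0
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    in_tree = [False] * n
    best = [math.inf] * n        # cheapest edge connecting each node to the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], dist(points[u], points[v]))
    return total

# Hypothetical turbine positions (metres)
layout = [(0, 0), (500, 0), (0, 500), (500, 500)]
print(prim_mst_length(layout))  # 1500.0: three 500 m edges connect the four corners
```

This O(n²) form is adequate for wind-farm-sized point sets; a heap-based variant would scale further.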

  8. An automated bladder volume measurement algorithm by pixel classification using random forests.

    PubMed

    Annangi, Pavan; Frigstad, Sigmund; Subin, S B; Torp, Anders; Ramasubramaniam, Sundararajan; Varna, Srinivas

    2016-08-01

    Residual bladder volume is a very important marker for patients with urinary retention problems. Handheld ultrasound devices are extremely useful for monitoring patients with these conditions, whether at the bedside by nurses or in an outpatient setting by general physicians. However, to increase the usage of these devices by non-traditional users, automated tools that aid them in the scanning and measurement process are of great help. In this paper, we develop a robust segmentation algorithm to automatically measure bladder volume by segmenting bladder contours from sagittal and transverse ultrasound views using a combination of machine learning and active contour algorithms. The algorithm is tested on 50 unseen images and 23 transverse and longitudinal image pairs, and the performance is reported.

  9. GUESS-ing Polygenic Associations with Multiple Phenotypes Using a GPU-Based Evolutionary Stochastic Search Algorithm

    PubMed Central

    Hastie, David I.; Zeller, Tanja; Liquet, Benoit; Newcombe, Paul; Yengo, Loic; Wild, Philipp S.; Schillert, Arne; Ziegler, Andreas; Nielsen, Sune F.; Butterworth, Adam S.; Ho, Weang Kee; Castagné, Raphaële; Munzel, Thomas; Tregouet, David; Falchi, Mario; Cambien, François; Nordestgaard, Børge G.; Fumeron, Fredéric; Tybjærg-Hansen, Anne; Froguel, Philippe; Danesh, John; Petretto, Enrico; Blankenberg, Stefan; Tiret, Laurence; Richardson, Sylvia

    2013-01-01

    Genome-wide association studies (GWAS) have yielded significant advances in defining the genetic architecture of complex traits and disease. Still, a major hurdle of GWAS is narrowing down multiple genetic associations to a few causal variants for functional studies. This becomes critical in multi-phenotype GWAS, where the detection and interpretability of complex SNP(s)-trait(s) associations are complicated by complex linkage disequilibrium patterns between SNPs and correlation between traits. Here we propose a computationally efficient algorithm (GUESS) to explore complex genetic-association models and maximize genetic variant detection. We integrated our algorithm with a new Bayesian strategy for multi-phenotype analysis to identify the specific contribution of each SNP to different trait combinations and study genetic regulation of lipid metabolism in the Gutenberg Health Study (GHS). Despite the relatively small size of GHS (n = 3,175), when compared with the largest published meta-GWAS (n > 100,000), GUESS recovered most of the major associations and was better at refining multi-trait associations than alternative methods. Amongst the new findings provided by GUESS, we revealed a strong association of SORT1 with the TG-APOB phenotypic group and of LIPC with the TG-HDL phenotypic group, which were overlooked in the larger meta-GWAS and not revealed by competing approaches; we replicated these associations in two independent cohorts. Moreover, we demonstrated the increased power of GUESS over alternative multi-phenotype approaches, both Bayesian and non-Bayesian, in a simulation study that mimics real-case scenarios. We showed that our parallel implementation based on graphics processing units outperforms alternative multi-phenotype methods. Beyond multivariate modelling of multi-phenotypes, our Bayesian model employs a flexible hierarchical prior structure for genetic effects that adapts to any correlation structure of the predictors and increases the power to identify associated variants.

  10. Peaks and dips in Gaussian random fields: a new algorithm for the shear eigenvalues, and the excursion set theory

    NASA Astrophysics Data System (ADS)

    Rossi, Graziano

    2013-04-01

    We present a new algorithm to sample the constrained eigenvalues of the initial shear field associated with Gaussian statistics, called the `peak/dip excursion-set-based' algorithm, at positions which correspond to peaks or dips of the correlated density field. The computational procedure is based on a new formula which extends Doroshkevich's unconditional distribution for the eigenvalues of the linear tidal field, to account for the fact that haloes and voids may correspond to maxima or minima of the density field. The ability to differentiate between random positions and special points in space around which haloes or voids may form (i.e. peaks/dips), encoded in the new formula and reflected in the algorithm, naturally leads to a straightforward implementation of an excursion set model for peaks and dips in Gaussian random fields - one of the key advantages of this sampling procedure. In addition, it offers novel insights into the statistical description of the cosmic web. As a first physical application, we show how the standard distributions of shear ellipticity and prolateness in triaxial models of structure formation are modified by the constraint. In particular, we provide a new expression for the conditional distribution of shape parameters given the density peak constraint, which generalizes some previous literature work. The formula has important implications for the modelling of non-spherical dark matter halo shapes, in relation to their initial shape distribution. We also test and confirm our theoretical predictions for the individual distributions of eigenvalues subjected to the extremum constraint, along with other directly related conditional probabilities. Finally, we indicate how the proposed sampling procedure naturally integrates into the standard excursion set model, potentially solving some of its well-known problems, and into the ellipsoidal collapse framework.

  11. A production-inventory model with permissible delay incorporating learning effect in random planning horizon using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Kar, Mohuya B.; Bera, Shankar; Das, Debasis; Kar, Samarjit

    2015-10-01

    This paper presents a production-inventory model for deteriorating items with stock-dependent demand under inflation over a random planning horizon. The supplier offers the retailer a fully permissible delay in payment. It is assumed that the time horizon of the business period is random in nature and follows an exponential distribution with a known mean. A learning effect is also introduced for the production cost and the setup cost. The model is formulated as a profit maximization problem with respect to the retailer and solved with the help of a genetic algorithm (GA) and particle swarm optimization (PSO). Moreover, the convergence of the two methods is studied against generation number, and it is seen that the GA converges more rapidly than PSO. The optimum results from the two methods are compared both numerically and graphically, and it is observed that the performance of the GA is marginally better than that of PSO. We provide some numerical examples and sensitivity analyses to illustrate the model.

  12. Hierarchical stochastic image grammars for classification and segmentation.

    PubMed

    Wang, Wiley; Pollak, Ilya; Wong, Tak-Shing; Bouman, Charles A; Harper, Mary P; Siskind, Jeffrey M

    2006-10-01

    We develop a new class of hierarchical stochastic image models called spatial random trees (SRTs) which admit polynomial-complexity exact inference algorithms. Our framework of multitree dictionaries is the starting point for this construction. SRTs are stochastic hidden tree models whose leaves are associated with image data. The states at the tree nodes are random variables, and, in addition, the structure of the tree is random and is generated by a probabilistic grammar. We describe an efficient recursive algorithm for obtaining the maximum a posteriori estimate of both the tree structure and the tree states given an image. We also develop an efficient procedure for performing one iteration of the expectation-maximization algorithm and use it to estimate the model parameters from a set of training images. We address other inference problems arising in applications such as maximization of posterior marginals and hypothesis testing. Our models and algorithms are illustrated through several image classification and segmentation experiments, ranging from the segmentation of synthetic images to the classification of natural photographs and the segmentation of scanned documents. In each case, we show that our method substantially improves accuracy over a variety of existing methods.

  13. Development of Solution Algorithm and Sensitivity Analysis for Random Fuzzy Portfolio Selection Model

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Katagiri, Hideki

    2010-10-01

    This paper focuses on the proposition of a portfolio selection problem considering an investor's subjectivity and on the sensitivity analysis for changes in that subjectivity. Since the proposed problem involves both randomness and subjectivity represented by fuzzy numbers, it is formulated as a random fuzzy programming problem, which is not well defined. Therefore, by introducing the Sharpe ratio, one of the most important performance measures of portfolio models, the main problem is transformed into a standard fuzzy programming problem. Furthermore, using sensitivity analysis with respect to the fuzziness, the analytical optimal portfolio with the sensitivity factor is obtained.
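The Sharpe ratio used above to reformulate the problem is the excess expected return per unit of portfolio volatility. A minimal sketch with hypothetical asset data follows; the paper's fuzzy-number machinery is not reproduced:

```python
import math

def sharpe_ratio(weights, mean_returns, cov, risk_free=0.0):
    """Sharpe ratio of a portfolio: (expected return - risk-free rate) / volatility,
    with volatility the square root of w' * Cov * w."""
    n = len(weights)
    port_mean = sum(wi * ri for wi, ri in zip(weights, mean_returns))
    port_var = sum(weights[i] * cov[i][j] * weights[j]
                   for i in range(n) for j in range(n))
    return (port_mean - risk_free) / math.sqrt(port_var)

# Hypothetical two-asset example: annual mean returns and covariance matrix
w = [0.6, 0.4]
mu = [0.08, 0.12]
cov = [[0.04, 0.01],
       [0.01, 0.09]]
print(round(sharpe_ratio(w, mu, cov, risk_free=0.02), 3))
```

Maximizing this ratio over the weights (subject to budget and sign constraints) is the crisp core of the portfolio problem that the paper's transformation reduces to.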

  14. Finite-Size Scaling in Random K-SAT Problems

    NASA Astrophysics Data System (ADS)

    Ha, Meesoon; Lee, Sang Hoon; Jeon, Chanil; Jeong, Hawoong

    2010-03-01

    We propose a comprehensive view of threshold behaviors in random K-satisfiability (K-SAT) problems, in the context of the finite-size scaling (FSS) concept of nonequilibrium absorbing phase transitions using the average SAT (ASAT) algorithm. In particular, we focus on the value of the FSS exponent to characterize the SAT/UNSAT phase transition, which is still debatable. We also discuss the role of the noise (temperature-like) parameter in stochastic local heuristic search algorithms.
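A stochastic local heuristic search with a noise (temperature-like) parameter, of the general family the abstract refers to, can be sketched as follows. This is a WalkSAT-style sketch, not the ASAT algorithm itself, and the tiny instance is hypothetical:

```python
import random

def stochastic_sat(clauses, n_vars, p=0.5, max_flips=100_000, seed=1):
    """Stochastic local search for SAT. `clauses` is a list of tuples of nonzero
    ints: literal v means variable |v| with the sign giving its polarity. The
    noise parameter p plays the temperature-like role mentioned in the abstract."""
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars + 1)]
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign[1:]            # satisfying assignment found
        clause = rng.choice(unsat)
        if rng.random() < p:             # noise move: flip a random clause variable
            v = abs(rng.choice(clause))
        else:                            # greedy move: minimise broken clauses
            def broken(var):
                assign[var] = not assign[var]
                b = sum(not any(sat(l) for l in c) for c in clauses)
                assign[var] = not assign[var]
                return b
            v = min((abs(l) for l in clause), key=broken)
        assign[v] = not assign[v]
    return None

# Small satisfiable 3-SAT instance (hypothetical)
cnf = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]
model = stochastic_sat(cnf, 3)
print(model is not None)  # True: a satisfying assignment was found
```

Sweeping p while measuring solved fraction at fixed size is one simple way to probe the role of the noise parameter near the SAT/UNSAT threshold.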

  15. A generalized stochastic perturbation technique for plasticity problems

    NASA Astrophysics Data System (ADS)

    Kamiński, Marcin Marek

    2010-03-01

    The main aim of this paper is to present an algorithm and the solution to the nonlinear plasticity problems with random parameters. This methodology is based on the finite element method covering physical and geometrical nonlinearities and, on the other hand, on the generalized nth order stochastic perturbation method. The perturbation approach resulting from the Taylor series expansion with uncertain parameters is provided in two different ways: (i) via the straightforward differentiation of the initial incremental equation and (ii) using the modified response surface method. This methodology is illustrated with the analysis of the elasto-plastic plane truss with random Young’s modulus leading to the determination of the probabilistic moments by the hybrid stochastic symbolic-finite element method computations.
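The nth-order perturbation idea can be illustrated at second order for a scalar response. The sketch below uses a hypothetical elastic bar with random Young's modulus and finite-difference derivatives; the paper's symbolic-FEM machinery is not reproduced:

```python
def perturbation_moments(f, mu, sigma2):
    """Second-order perturbation (Taylor-series) estimates of response moments:
        E[f(X)]   ~= f(mu) + 0.5 * f''(mu) * sigma2
        Var[f(X)] ~= f'(mu)**2 * sigma2
    for X with mean mu and variance sigma2; derivatives by central differences."""
    h = 1e-5 * abs(mu) if mu else 1e-5
    d1 = (f(mu + h) - f(mu - h)) / (2 * h)
    d2 = (f(mu + h) - 2 * f(mu) + f(mu - h)) / h ** 2
    return f(mu) + 0.5 * d2 * sigma2, d1 ** 2 * sigma2

# Hypothetical elastic bar: elongation u = P*L/(E*A) with random Young's modulus E
P, L, A = 1e4, 2.0, 1e-3                 # load [N], length [m], area [m^2]
u = lambda E: P * L / (E * A)
mean_u, var_u = perturbation_moments(u, mu=210e9, sigma2=(10e9) ** 2)
print(mean_u, var_u)
```

Because u is convex in E, the second-order mean correction shifts E[u] above u evaluated at the mean modulus, which first-order perturbation would miss.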

  16. Sublinear scaling for time-dependent stochastic density functional theory

    SciTech Connect

    Gao, Yi; Neuhauser, Daniel; Baer, Roi; Rabani, Eran

    2015-01-21

    A stochastic approach to time-dependent density functional theory is developed for computing the absorption cross section and the random phase approximation (RPA) correlation energy. The core idea of the approach involves time-propagation of a small set of stochastic orbitals which are first projected on the occupied space and then propagated in time according to the time-dependent Kohn-Sham equations. The evolving electron density is exactly represented when the number of random orbitals is infinite, but even a small number (≈16) of such orbitals is enough to obtain meaningful results for absorption spectrum and the RPA correlation energy per electron. We implement the approach for silicon nanocrystals using real-space grids and find that the overall scaling of the algorithm is sublinear with computational time and memory.

  17. Fast Numerical Algorithms for 3-D Scattering from PEC and Dielectric Random Rough Surfaces in Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Lisha

    We present fast and robust numerical algorithms for 3-D scattering from perfectly electrically conducting (PEC) and dielectric random rough surfaces in microwave remote sensing. The Coifman wavelets, or Coiflets, are employed to implement Galerkin's procedure in the method of moments (MoM). Due to the high-precision one-point quadrature, the Coiflets yield fast evaluations of most of the off-diagonal entries, reducing the matrix fill effort from O(N²) to O(N). The orthogonality and Riesz basis of the Coiflets generate a well-conditioned impedance matrix with rapid convergence for the conjugate gradient solver. The resulting impedance matrix is further sparsified by the matrix-formed standard fast wavelet transform (SFWT). By properly selecting the multiresolution levels of the total transformation matrix, the solution precision can be enhanced while matrix sparsity and memory consumption are not noticeably sacrificed. The unified fast scattering algorithm for dielectric random rough surfaces asymptotically reduces to the PEC case when the loss tangent grows extremely large. Numerical results demonstrate that the reduced PEC model does not suffer from ill-posedness. Compared with previous publications and laboratory measurements, good agreement is observed.

  18. Statistical validation of stochastic models

    SciTech Connect

    Hunter, N.F.; Barney, P.; Paez, T.L.; Ferregut, C.; Perez, L.

    1996-12-31

    It is common practice in structural dynamics to develop mathematical models for system behavior, and the authors are now capable of developing stochastic models, i.e., models whose parameters are random variables. Such models have random characteristics that are meant to simulate the randomness in characteristics of experimentally observed systems. This paper suggests a formal statistical procedure for the validation of mathematical models of stochastic systems when data taken during operation of the stochastic system are available. The statistical characteristics of the experimental system are obtained using the bootstrap, a technique for the statistical analysis of non-Gaussian data. The authors propose a procedure to determine whether or not a mathematical model is an acceptable model of a stochastic system with regard to user-specified measures of system behavior. A numerical example is presented to demonstrate the application of the technique.
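The bootstrap step described above, estimating the statistical characteristics of an experimentally observed system without Gaussian assumptions, can be sketched as a percentile bootstrap confidence interval (illustrative Python with hypothetical measurement data, not the authors' procedure):

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=7):
    """Percentile bootstrap confidence interval for a statistic: resample the
    data with replacement n_boot times, evaluate the statistic on each
    resample, and take the alpha/2 and 1 - alpha/2 quantiles."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)]) for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical damping ratios measured in repeated experiments
sample = [0.021, 0.025, 0.019, 0.030, 0.024, 0.022, 0.027, 0.023, 0.026, 0.020]
lo, hi = bootstrap_ci(sample)
print(lo, hi)
```

Model validation in the paper's sense then amounts to checking whether the model's predicted measure of system behavior falls inside such data-derived intervals.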

  19. Stochastic methods for zero energy quantum scattering

    NASA Astrophysics Data System (ADS)

    Koch, Justus H.; Mall, Hubertus R.; Lenz, Stefan

    1998-02-01

    We investigate the use of stochastic methods for zero energy quantum scattering based on a path integral approach. With the application to the scattering of a projectile from a nuclear many-body target in mind, we use the potential scattering of a particle as a test for the accuracy and efficiency of several methods. To be able to deal with complex potentials, we introduce a path sampling action and a modified scattering observable. The approaches considered are the random walk, where the points of a path are sequentially generated, and the Langevin algorithm, which updates an entire path. Several improvements are investigated. A cluster algorithm for dealing with scattering problems is finally proposed, which shows the best accuracy and stability.

  20. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.

  1. A new logistic dynamic particle swarm optimization algorithm based on random topology.

    PubMed

    Ni, Qingjian; Deng, Jianming

    2013-01-01

    The population topology of particle swarm optimization (PSO) directly affects the dissemination of optimal information during the evolutionary process and has a significant impact on the performance of PSO. Classic static population topologies, such as the fully connected, ring, star, and square topologies, are usually used in PSO. In this paper, the performance of PSO with the proposed random topologies is analyzed, and the relationship between population topology and the performance of PSO is explored from the perspective of the graph-theoretic characteristics of population topologies. Further, in a relatively new PSO variant named logistic dynamic particle swarm optimization, an extensive simulation study is presented to discuss the effectiveness of the random topology and the design strategies of population topology. Finally, the experimental data are analyzed and discussed, and some useful conclusions about the design and use of population topology in PSO are proposed, which can provide a basis for further discussion and research.
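One simple way to realise a random population topology, each particle drawing a fresh random set of informants every iteration, can be sketched as follows. This is an illustrative variant under that assumption, not the paper's exact algorithm:

```python
import random

def pso_random_topology(f, dim, bounds, n_particles=20, k=3, iters=200, seed=3):
    """Minimise f with a PSO in which each particle's neighbourhood is a random
    set of k informants, redrawn every iteration (one possible random topology)."""
    rng = random.Random(seed)
    lo_b, hi_b = bounds
    pos = [[rng.uniform(lo_b, hi_b) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    w, c1, c2 = 0.72, 1.49, 1.49          # common constriction-style coefficients
    for _ in range(iters):
        for i in range(n_particles):
            informants = rng.sample(range(n_particles), k) + [i]
            g = min(informants, key=lambda j: pbest_val[j])
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (pbest[g][d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]
    best = min(range(n_particles), key=lambda j: pbest_val[j])
    return pbest[best], pbest_val[best]

sphere = lambda x: sum(v * v for v in x)
x_star, f_star = pso_random_topology(sphere, dim=5, bounds=(-10, 10))
print(f_star)  # close to 0 for the sphere function
```

Redrawing the informants each iteration keeps the expected information-flow graph connected while avoiding the premature convergence a fully connected topology can cause.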

  2. The adaptive dynamic community detection algorithm based on the non-homogeneous random walking

    NASA Astrophysics Data System (ADS)

    Xin, Yu; Xie, Zhi-Qiang; Yang, Jing

    2016-05-01

    As habits and customs change, people's social activity also changes over time, so a community evolution analysis method is required to mine the dynamic information in social networks. To that end, we design a random-walking possibility function and a topology gain function to calculate the global influence matrix of the nodes. From the global influence matrix, the clustering directions of the nodes can be obtained, and thus the NRW (Non-Homogeneous Random Walk) method for detecting static overlapping communities can be established. Building on the NRW, we design the ANRW (Adaptive Non-Homogeneous Random Walk) method, which adapts the nodes impacted by dynamic events. The ANRW combines local community detection with dynamic adaptive adjustment to decrease its computational cost. Furthermore, the ANRW treats the node as the unit of computation, so it is well suited to parallel computing and can meet the requirements of large-scale data mining. Finally, experimental analysis verifies the efficiency of the ANRW for dynamic community detection.

  3. Land cover and land use mapping of the iSimangaliso Wetland Park, South Africa: comparison of oblique and orthogonal random forest algorithms

    NASA Astrophysics Data System (ADS)

    Bassa, Zaakirah; Bob, Urmilla; Szantoi, Zoltan; Ismail, Riyad

    2016-01-01

    In recent years, the popularity of tree-based ensemble methods for land cover classification has increased significantly. Using WorldView-2 image data, we evaluate the potential of the oblique random forest algorithm (oRF) to classify a highly heterogeneous protected area. In contrast to the random forest (RF) algorithm, the oRF algorithm builds multivariate trees by learning the optimal split using a supervised model. The oRF binary algorithm is adapted to a multiclass land cover and land use application using both the "one-against-one" and "one-against-all" combination approaches. Results show that the oRF algorithms are capable of achieving high classification accuracies (>80%). However, there was no statistical difference in classification accuracies obtained by the oRF algorithms and the more popular RF algorithm. For all the algorithms, user accuracies (UAs) and producer accuracies (PAs) >80% were recorded for most of the classes. Both the RF and oRF algorithms poorly classified the indigenous forest class as indicated by the low UAs and PAs. Finally, the results from this study advocate and support the utility of the oRF algorithm for land cover and land use mapping of protected areas using WorldView-2 image data.

  4. Nonlinear optimization for stochastic simulations.

    SciTech Connect

    Johnson, Michael M.; Yoshimura, Ann S.; Hough, Patricia Diane; Ammerlahn, Heidi R.

    2003-12-01

    This report describes research targeting development of stochastic optimization algorithms and their application to mission-critical optimization problems in which uncertainty arises. The first section of this report covers the enhancement of the Trust Region Parallel Direct Search (TRPDS) algorithm to address stochastic responses and the incorporation of the algorithm into the OPT++ optimization library. The second section describes the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of systems analysis tools and motivates the use of stochastic optimization techniques in such non-deterministic simulations. The third section details a batch programming interface designed to facilitate criteria-based or algorithm-driven execution of system-of-system simulations. The fourth section outlines the use of the enhanced OPT++ library and batch execution mechanism to perform systems analysis and technology trade-off studies in the WMD detection and response problem domain.

  5. Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm

    NASA Astrophysics Data System (ADS)

    Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne

    2010-02-01

    Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate the position with the precision necessary for a gripping robot to pick it up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidates for object match. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, in this paper exemplified by localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.

  6. Stochastic games

    PubMed Central

    Solan, Eilon; Vieille, Nicolas

    2015-01-01

    In 1953, Lloyd Shapley contributed his paper “Stochastic games” to PNAS. In this paper, he defined the model of stochastic games, which were the first general dynamic model of a game to be defined, and proved that it admits a stationary equilibrium. In this Perspective, we summarize the historical context and the impact of Shapley’s contribution. PMID:26556883

  7. Convergence of stochastic learning in perceptrons with binary synapses.

    PubMed

    Senn, Walter; Fusi, Stefano

    2005-06-01

    The efficacy of a biological synapse is naturally bounded and, at some resolution, discrete, at the latest at the level of single vesicles. The finite number of synaptic states dramatically reduces the storage capacity of a network when online learning is considered (i.e., the synapses are immediately modified by each pattern): the trace of old memories decays exponentially with the number of new memories (palimpsest property). Moreover, finding the discrete synaptic strengths which enable the classification of linearly separable patterns is a combinatorially hard problem known to be NP-complete. In this paper we show that learning with discrete (binary) synapses is nevertheless possible with high probability if a randomly selected fraction of synapses is modified following each stimulus presentation (slow stochastic learning). As an additional constraint, the synapses are only changed if the output neuron does not give the desired response, as in classical perceptron learning. We prove that for linearly separable classes of patterns the stochastic learning algorithm converges with arbitrarily high probability in a finite number of presentations, provided that the number of neurons encoding the patterns is large enough. The stochastic learning algorithm is successfully applied to a standard classification problem of nonlinearly separable patterns by using multiple, stochastically independent output units, achieving performance comparable to the best reached for the task.
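The slow stochastic learning rule described above, changing a random fraction of active binary synapses only when the output is wrong, can be sketched as follows. The threshold choice and the toy task are hypothetical, chosen only to make the rule concrete:

```python
import random

def stochastic_binary_perceptron(patterns, labels, n_epochs=500, flip_prob=0.1, seed=0):
    """Perceptron with binary (0/1) synapses trained by slow stochastic learning:
    on a wrong response, each active synapse is switched towards the target
    state only with small probability flip_prob."""
    rng = random.Random(seed)
    n = len(patterns[0])
    w = [rng.choice([0, 1]) for _ in range(n)]
    theta = n / 4                      # fixed firing threshold (hypothetical choice)
    for _ in range(n_epochs):
        errors = 0
        for x, y in zip(patterns, labels):
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else 0
            if out == y:
                continue               # synapses change only on wrong responses
            errors += 1
            for j in range(n):
                if x[j] and rng.random() < flip_prob:
                    w[j] = y           # potentiate if target is 1, depress if 0
        if errors == 0:
            break
    return w

# Hypothetical linearly separable task: fire iff the first half of the input is active
pats = [[1] * 8 + [0] * 8, [0] * 8 + [1] * 8, [1] * 16, [0] * 16]
labs = [1, 0, 1, 0]
w = stochastic_binary_perceptron(pats, labs)
print(w)
```

The small flip probability is what makes the learning "slow": each error nudges only a random subset of synapses, which is the mechanism the paper shows suffices for convergence on separable tasks.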

  8. Statistical Signal Models and Algorithms for Image Analysis

    DTIC Science & Technology

    1984-10-25

    In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction.

  9. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
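The channel-wise view described above, with each reaction channel carrying its own propensity and stream of random events, can be made concrete with a minimal Gillespie simulation of a birth-death network. This is a sketch only; the paper's Sobol-Hoeffding variance decomposition is not reproduced:

```python
import math
import random

def birth_death_ssa(x0, birth_rate, death_rate, t_end, seed=42):
    """Stochastic simulation (Gillespie) of a birth-death process, with the
    propensity of each reaction channel kept explicit."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    history = [(t, x)]
    while t < t_end:
        a_birth = birth_rate             # propensity of the birth channel
        a_death = death_rate * x         # propensity of the death channel
        a_total = a_birth + a_death
        if a_total == 0.0:
            break
        t += -math.log(1.0 - rng.random()) / a_total   # exponential waiting time
        if rng.random() * a_total < a_birth:
            x += 1                                     # birth channel fires
        else:
            x -= 1                                     # death channel fires
        history.append((t, x))
    return history

hist = birth_death_ssa(x0=0, birth_rate=10.0, death_rate=1.0, t_end=50.0)
print(hist[-1][1])  # fluctuates around the stationary mean birth_rate/death_rate = 10
```

Attributing each firing to its channel, as above, is the bookkeeping that the random-time-change reformulation exploits to decompose the solution variance channel by channel.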

  10. Development and Evaluation of a New Air Exchange Rate Algorithm for the Stochastic Human Exposure and Dose Simulation Model (ISES Presentation)

    EPA Science Inventory

    Previous exposure assessment panel studies have observed considerable seasonal, between-home and between-city variability in residential pollutant infiltration. This is likely a result of differences in home ventilation, or air exchange rates (AER). The Stochastic Human Exposure ...

  11. Fock space, symbolic algebra, and analytical solutions for small stochastic systems

    NASA Astrophysics Data System (ADS)

    Santos, Fernando A. N.; Gadêlha, Hermes; Gaffney, Eamonn A.

    2015-12-01

    Randomness is ubiquitous in nature. From single-molecule biochemical reactions to macroscale biological systems, stochasticity permeates individual interactions and often regulates emergent properties of the system. While such systems are regularly studied from a modeling viewpoint using stochastic simulation algorithms, numerous potential analytical tools can be inherited from statistical and quantum physics, replacing randomness due to quantum fluctuations with low-copy-number stochasticity. Nevertheless, classical studies have remained limited to the abstract level, demonstrating a more general applicability and equivalence between systems in physics and biology rather than exploiting the physics tools to study biological systems. Here the Fock space representation, used in quantum mechanics, is combined with the symbolic algebra of creation and annihilation operators to consider explicit solutions for the chemical master equations describing small, well-mixed biochemical or biological systems. This is illustrated with an exact solution for a Michaelis-Menten single enzyme interacting with limited substrate, including a consideration of very short time scales, which emphasizes when stiffness is present even for small copy numbers. Furthermore, we present a general matrix representation for Michaelis-Menten kinetics with an arbitrary number of enzymes and substrates that, following diagonalization, leads to the solution of this ubiquitous, nonlinear enzyme kinetics problem. For this, a flexible symbolic Maple code is provided, demonstrating the prospective advantages of this framework compared to stochastic simulation algorithms. This further highlights the possibilities for analytically based studies of stochastic systems in biology and chemistry using tools from theoretical quantum physics.

  12. Fock space, symbolic algebra, and analytical solutions for small stochastic systems.

    PubMed

    Santos, Fernando A N; Gadêlha, Hermes; Gaffney, Eamonn A

    2015-12-01

    Randomness is ubiquitous in nature. From single-molecule biochemical reactions to macroscale biological systems, stochasticity permeates individual interactions and often regulates emergent properties of the system. While such systems are regularly studied from a modeling viewpoint using stochastic simulation algorithms, numerous potential analytical tools can be inherited from statistical and quantum physics, replacing randomness due to quantum fluctuations with low-copy-number stochasticity. Nevertheless, classical studies have remained limited to the abstract level, demonstrating a more general applicability and equivalence between systems in physics and biology rather than exploiting the physics tools to study biological systems. Here the Fock space representation, used in quantum mechanics, is combined with the symbolic algebra of creation and annihilation operators to consider explicit solutions for the chemical master equations describing small, well-mixed biochemical or biological systems. This is illustrated with an exact solution for a Michaelis-Menten single enzyme interacting with limited substrate, including a consideration of very short time scales, which emphasizes when stiffness is present even for small copy numbers. Furthermore, we present a general matrix representation for Michaelis-Menten kinetics with an arbitrary number of enzymes and substrates that, following diagonalization, leads to the solution of this ubiquitous, nonlinear enzyme kinetics problem. For this, a flexible symbolic Maple code is provided, demonstrating the prospective advantages of this framework compared to stochastic simulation algorithms. This further highlights the possibilities for analytically based studies of stochastic systems in biology and chemistry using tools from theoretical quantum physics.

  13. Stochastic Models of Polymer Systems

    DTIC Science & Technology

    2016-01-01

    …algorithms for big data applications. (2) We studied stochastic dynamics of polymer systems in the mean field limit. (3) We studied noisy Hegselmann-Krause… Final Report: Stochastic Models of Polymer Systems (Distribution Unlimited).

  14. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    PubMed

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes how the first statistical moments of model predictions vary with the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
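For intuition, a first-order Sobol index can be estimated with a generic pick-freeze scheme (a textbook estimator, not the sampling algorithm of the paper): two model evaluations share the input whose influence is measured, and their covariance is compared with the total variance.

```python
import random

def sobol_first_order(f, n_samples=20000, seed=0):
    """Pick-freeze sketch of the first-order Sobol index of input x1 for a
    model y = f(x1, x2) with independent uniform inputs on [0, 1):
    S1 = Cov(Y, Y') / Var(Y), where Y and Y' share x1 but redraw x2."""
    rng = random.Random(seed)
    ys, yfs = [], []
    for _ in range(n_samples):
        x1 = rng.random()
        ys.append(f(x1, rng.random()))
        yfs.append(f(x1, rng.random()))  # same x1, fresh x2 ("frozen" pair)
    m = sum(ys) / n_samples
    mf = sum(yfs) / n_samples
    cov = sum((a - m) * (b - mf) for a, b in zip(ys, yfs)) / n_samples
    var = sum((a - m) ** 2 for a in ys) / n_samples
    return cov / var
```

For the linear test model y = x1 + 0.5*x2, the exact index is Var(x1)/Var(Y) = 0.8, which the estimator should approach for large sample sizes.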

  15. Efficient asymmetric image authentication schemes based on photon counting-double random phase encoding and RSA algorithms.

    PubMed

    Moon, Inkyu; Yi, Faliu; Han, Mingu; Lee, Jieun

    2016-06-01

    Recently, double random phase encoding (DRPE) has been integrated with the photon counting (PC) imaging technique for the purpose of secure image authentication. In this scheme, the same key should be securely distributed and shared between the sender and receiver, but this is one of the most vexing problems of symmetric cryptosystems. In this study, we propose an efficient asymmetric image authentication scheme by combining the PC-DRPE and RSA algorithms, which solves key management and distribution problems. The retrieved image from the proposed authentication method contains photon-limited encrypted data obtained by means of PC-DRPE. Therefore, the original image can be protected while the retrieved image can be efficiently verified using a statistical nonlinear correlation approach. Experimental results demonstrate the feasibility of our proposed asymmetric image authentication method.

  16. Automatic segmentation of ground-glass opacities in lung CT images by using Markov random field-based algorithms.

    PubMed

    Zhu, Yanjie; Tan, Yongqing; Hua, Yanqing; Zhang, Guozhen; Zhang, Jianguo

    2012-06-01

    Chest radiologists rely on the segmentation and quantitative analysis of ground-glass opacities (GGO) to perform imaging diagnoses that evaluate the disease severity or recovery stages of diffuse parenchymal lung diseases. However, it is computationally difficult to segment and analyze patterns of GGO compared with those of other lung diseases, since GGO usually do not have clear boundaries. In this paper, we present a new approach which automatically segments GGO in lung computed tomography (CT) images using algorithms derived from Markov random field theory. Further, we systematically evaluate the performance of the algorithms in segmenting GGO in lung CT images under different situations. CT image studies from 41 patients with diffuse lung diseases were enrolled in this research. The local distributions were modeled with both simple and adaptive (AMAP) maximum a posteriori (MAP) models. For best segmentation, we used the simulated annealing algorithm with a Gibbs sampler to solve the combinatorial optimization problem of MAP estimators, and we applied a knowledge-guided strategy to reduce false positive regions. We achieved AMAP-based GGO segmentation results of 86.94%, 94.33%, and 94.06% in average sensitivity, specificity, and accuracy, respectively, and we evaluated the performance using radiologists' subjective evaluation and quantitative analysis and diagnosis. We also compared the results of AMAP-based GGO segmentation with those of support vector machine-based methods, and we discuss the reliability and other issues of AMAP-based GGO segmentation. Our research results demonstrate the acceptability and usefulness of AMAP-based GGO segmentation for assisting radiologists in detecting GGO in high-resolution CT diagnostic procedures.

  17. Building a genetic risk model for bipolar disorder from genome-wide association data with random forest algorithm

    PubMed Central

    Chuang, Li-Chung; Kuo, Po-Hsiu

    2017-01-01

    A genetic risk score could be beneficial in assisting clinical diagnosis for complex diseases with high heritability. With large-scale genome-wide association (GWA) data, the current study constructed a genetic risk model with a machine learning approach for bipolar disorder (BPD). The GWA dataset of BPD from the Genetic Association Information Network was used as the training data for model construction, and the Systematic Treatment Enhancement Program (STEP) GWA data were used as the validation dataset. A random forest algorithm was applied for pre-filtered markers, and variable importance indices were assessed. 289 candidate markers were selected by random forest procedures with good discriminability; the area under the receiver operating characteristic curve was 0.944 (0.935–0.953) in the training set and 0.702 (0.681–0.723) in the STEP dataset. Using a score with the cutoff of 184, the sensitivity and specificity for BPD was 0.777 and 0.854, respectively. Pathway analyses revealed important biological pathways for identified genes. In conclusion, the present study identified informative genetic markers to differentiate BPD from healthy controls with acceptable discriminability in the validation dataset. In the future, diagnosis classification can be further improved by assessing more comprehensive clinical risk factors and jointly analysing them with genetic data in large samples. PMID:28045094

  18. Stochastic approximation methods-Powerful tools for simulation and optimization: A survey of some recent work on multi-agent systems and cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Yin, George; Wang, Le Yi; Zhang, Hongwei

    2014-12-01

    Stochastic approximation methods have found extensive and diversified applications. Recent emergence of networked systems and cyber-physical systems has generated renewed interest in advancing stochastic approximation into a general framework to support algorithm development for information processing and decisions in such systems. This paper presents a survey on some recent developments in stochastic approximation methods and their applications. Using connected vehicles in platoon formation and coordination as a platform, we highlight some traditional and new methodologies of stochastic approximation algorithms and explain how they can be used to capture essential features in networked systems. Distinct features of networked systems with randomly switching topologies, dynamically evolving parameters, and unknown delays are presented, and control strategies are provided.
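The prototype of the methods surveyed above is the Robbins-Monro recursion, sketched here in a generic form (the gain schedule and the noisy test function are illustrative assumptions): a root of an expectation is located from noisy evaluations alone.

```python
import random

def robbins_monro(noisy_f, theta0, n_iter=20000, a=1.0, seed=0):
    """Generic Robbins-Monro sketch: drive E[noisy_f(theta)] to zero using
    decreasing gains a_n = a/(n+1), which average out the observation noise."""
    rng = random.Random(seed)
    theta = theta0
    for n in range(n_iter):
        theta -= (a / (n + 1)) * noisy_f(theta, rng)
    return theta

# Find the root of E[f(theta)] = theta - 3 from noisy observations only.
root = robbins_monro(lambda th, rng: (th - 3.0) + rng.gauss(0.0, 1.0), 0.0)
```

The decreasing-gain condition (sum of gains diverges, sum of squared gains converges) is what lets the iterate forget its starting point while suppressing the noise, the same mechanism exploited by the networked-system algorithms discussed in the survey.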

  19. Stochastic approximation methods-Powerful tools for simulation and optimization: A survey of some recent work on multi-agent systems and cyber-physical systems

    SciTech Connect

    Yin, George; Wang, Le Yi; Zhang, Hongwei

    2014-12-10

    Stochastic approximation methods have found extensive and diversified applications. Recent emergence of networked systems and cyber-physical systems has generated renewed interest in advancing stochastic approximation into a general framework to support algorithm development for information processing and decisions in such systems. This paper presents a survey on some recent developments in stochastic approximation methods and their applications. Using connected vehicles in platoon formation and coordination as a platform, we highlight some traditional and new methodologies of stochastic approximation algorithms and explain how they can be used to capture essential features in networked systems. Distinct features of networked systems with randomly switching topologies, dynamically evolving parameters, and unknown delays are presented, and control strategies are provided.

  20. Network motif identification in stochastic networks

    NASA Astrophysics Data System (ADS)

    Jiang, Rui; Tu, Zhidong; Chen, Ting; Sun, Fengzhu

    2006-06-01

    Network motifs have been identified in a wide range of networks across many scientific disciplines and are suggested to be the basic building blocks of most complex networks. Nonetheless, many networks come with intrinsic and/or experimental uncertainties and should be treated as stochastic networks. The building blocks in these networks thus may also have stochastic properties. In this article, we study stochastic network motifs derived from families of mutually similar but not necessarily identical patterns of interconnections. We establish a finite mixture model for stochastic networks and develop an expectation-maximization algorithm for identifying stochastic network motifs. We apply this approach to the transcriptional regulatory networks of Escherichia coli and Saccharomyces cerevisiae, as well as the protein-protein interaction networks of seven species, and identify several stochastic network motifs that are consistent with current biological knowledge. Keywords: expectation-maximization algorithm; mixture model; transcriptional regulatory network; protein-protein interaction network.
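The expectation-maximization machinery behind the finite mixture model can be illustrated on a much simpler mixture (a generic two-component 1-D Gaussian example, not the network-motif model itself): the E-step assigns soft responsibilities, the M-step re-estimates the component parameters.

```python
import math

def em_two_gaussians(data, n_iter=200):
    """Generic EM sketch for a two-component 1-D Gaussian mixture.
    Initialization and iteration count are illustrative choices."""
    mu = [min(data), max(data)]   # crude but serviceable initialization
    sigma = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] / (sigma[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and standard deviations.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            sigma[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                     for r, x in zip(resp, data)) / nk) or 1e-6
    return w, mu, sigma
```

The motif-identification algorithm follows the same E/M alternation, with the Gaussian densities replaced by probabilities of subgraph patterns under the mixture components.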

  1. The bi-objective stochastic covering tour problem.

    PubMed

    Tricoire, Fabien; Graf, Alexandra; Gutjahr, Walter J

    2012-07-01

    We formulate a bi-objective covering tour model with stochastic demand where the two objectives are given by (i) cost (opening cost for distribution centers plus routing cost for a fleet of vehicles) and (ii) expected uncovered demand. In the model, it is assumed that depending on the distance, a certain percentage of clients go from their homes to the nearest distribution center. An application in humanitarian logistics is envisaged. For the computational solution of the resulting bi-objective two-stage stochastic program with recourse, a branch-and-cut technique, applied to a sample-average version of the problem obtained from a fixed random sample of demand vectors, is used within an epsilon-constraint algorithm. Computational results on real-world data for rural communities in Senegal show the viability of the approach.

  2. Optimization of Monte Carlo transport simulations in stochastic media

    SciTech Connect

    Liang, C.; Ji, W.

    2012-07-01

    This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, like the Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
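The baseline RSA procedure that the paper accelerates can be sketched with plain rejection sampling in 2-D (names and parameters are illustrative; the paper's speedup replaces the brute-force overlap check below with a fast nearest-neighbor search and handles 3-D poly-sized spheres):

```python
import random

def rsa_disks(n, radius, seed=0, max_tries=100000):
    """Naive Random Sequential Addition sketch: place n non-overlapping
    disks of a given radius uniformly in the unit square by rejection.
    Each candidate center is tested against all accepted disks, which is
    the O(n) step a nearest-neighbor grid search would accelerate."""
    rng = random.Random(seed)
    centers = []
    tries = 0
    while len(centers) < n and tries < max_tries:
        tries += 1
        p = (rng.uniform(radius, 1 - radius),
             rng.uniform(radius, 1 - radius))
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= (2 * radius) ** 2
               for q in centers):
            centers.append(p)  # accepted: no overlap with existing disks
    return centers
```

At low packing fractions rejection is cheap; near the jamming limit almost every candidate is rejected, which is why the fast neighbor search and tracking methods developed in the paper matter for TRISO-scale packings.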

  3. Stochastic Vorticity and Associated Filtering Theory

    SciTech Connect

    Amirdjanova, A.; Kallianpur, G.

    2002-12-19

    The focus of this work is on a two-dimensional stochastic vorticity equation for an incompressible homogeneous viscous fluid. We consider a signed measure-valued stochastic partial differential equation for a vorticity process based on the Skorohod-Ito evolution of a system of N randomly moving point vortices. A nonlinear filtering problem associated with the evolution of the vorticity is considered and a corresponding Fujisaki-Kallianpur-Kunita stochastic differential equation for the optimal filter is derived.

  4. Stochastic Linear Quadratic Optimal Control Problems

    SciTech Connect

    Chen, S.; Yong, J.

    2001-07-01

    This paper is concerned with the stochastic linear quadratic optimal control problem (LQ problem, for short) for which the coefficients are allowed to be random and the cost functional is allowed to have a negative weight on the square of the control variable. Some intrinsic relations among the LQ problem, the stochastic maximum principle, and the (linear) forward-backward stochastic differential equations are established. Some results involving Riccati equation are discussed as well.

  5. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    ERIC Educational Resources Information Center

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  6. A fast and memory-sparing probabilistic selection algorithm for the GPU

    SciTech Connect

    Monroe, Laura M; Wendelberger, Joanne; Michalak, Sarah

    2010-09-29

    A fast and memory-sparing probabilistic top-N selection algorithm is implemented on the GPU. This probabilistic algorithm gives a deterministic result and always terminates. The use of randomization reduces the amount of data that needs heavy processing, and so reduces both the memory requirements and the average time required for the algorithm. This algorithm is well-suited to more general parallel processors with multiple layers of memory hierarchy. Probabilistic Las Vegas algorithms of this kind are a form of stochastic optimization and can be especially useful for processors having a limited amount of fast memory available.
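The flavor of such a Las Vegas scheme can be sketched with a generic random-pivot top-N selection (an illustration of the algorithm class, not the GPU implementation): randomization affects only how much data survives each pass, never the final answer.

```python
import random

def top_n(values, n, seed=None):
    """Las Vegas sketch of probabilistic top-N selection: random pivots
    discard most of the data cheaply; the result is deterministic and the
    algorithm always terminates, only the running time is random."""
    rng = random.Random(seed)
    vals = list(values)
    result = []          # elements already guaranteed to be in the top n
    while vals:
        pivot = rng.choice(vals)
        upper = [v for v in vals if v > pivot]
        equal = [v for v in vals if v == pivot]
        if len(result) + len(upper) >= n:
            vals = upper                    # the rest of the top n lies here
        else:
            result += upper                 # everything above the pivot is in
            result += equal[:n - len(result)]
            if len(result) >= n:
                break
            vals = [v for v in vals if v < pivot]
    return sorted(result, reverse=True)[:n]
```

Each pass keeps only the partition that can still contain top-n elements, so the data needing "heavy processing" shrinks geometrically in expectation, which is the memory-sparing property the abstract refers to.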

  7. A non-stochastic iterative computational method to model light propagation in turbid media

    NASA Astrophysics Data System (ADS)

    McIntyre, Thomas J.; Zemp, Roger J.

    2015-03-01

    Monte Carlo models are widely used to model light transport in turbid media, however their results implicitly contain stochastic variations. These fluctuations are not ideal, especially for inverse problems where Jacobian matrix errors can lead to large uncertainties upon matrix inversion. Yet Monte Carlo approaches are more computationally favorable than solving the full Radiative Transport Equation. Here, a non-stochastic computational method of estimating fluence distributions in turbid media is proposed, which is called the Non-Stochastic Propagation by Iterative Radiance Evaluation method (NSPIRE). Rather than using stochastic means to determine a random walk for each photon packet, the propagation of light from any element to all other elements in a grid is modelled simultaneously. For locally homogeneous anisotropic turbid media, the matrices used to represent scattering and projection are shown to be block Toeplitz, which leads to computational simplifications via convolution operators. To evaluate the accuracy of the algorithm, 2D simulations were done and compared against Monte Carlo models for the cases of an isotropic point source and a pencil beam incident on a semi-infinite turbid medium. The model was shown to have a mean percent error less than 2%. The algorithm represents a new paradigm in radiative transport modelling and may offer a non-stochastic alternative to modeling light transport in anisotropic scattering media for applications where the diffusion approximation is insufficient.

  8. Prediction of protein-protein interactions using chaos game representation and wavelet transform via the random forest algorithm.

    PubMed

    Jia, J H; Liu, Z; Chen, X; Xiao, X; Liu, B X

    2015-10-02

    Studying the network of protein-protein interactions (PPIs) will provide valuable insights into the inner workings of cells. It is vitally important to develop an automated, high-throughput tool that efficiently predicts protein-protein interactions. This study proposes a new model for PPI prediction based on the concept of chaos game representation and the wavelet transform, which means that a considerable amount of sequence-order effects can be incorporated into a set of discrete numbers. The advantage of using chaos game representation and the wavelet transform to formulate the protein sequence is that it can more effectively reflect its overall sequence-order characteristics than the conventional correlation factors. Using such a formulation frame to represent the protein sequences means that the random forest algorithm can be used to conduct the prediction. The results for a large-scale independent test dataset show that the proposed model can achieve an excellent performance with an accuracy value of about 0.86 and a geometry mean value of about 0.85. The model is therefore a useful supplementary tool for PPI predictions. The predictor used in this article is freely available at http://www.jci-bioinfo.cn/PPI.

  9. Comparison between WorldView-2 and SPOT-5 images in mapping the bracken fern using the random forest algorithm

    NASA Astrophysics Data System (ADS)

    Odindi, John; Adam, Elhadi; Ngubane, Zinhle; Mutanga, Onisimo; Slotow, Rob

    2014-01-01

    Plant species invasion is known to be a major threat to socioeconomic and ecological systems. Due to high cost and limited extents of urban green spaces, high mapping accuracy is necessary to optimize the management of such spaces. We compare the performance of the new-generation WorldView-2 (WV-2) and SPOT-5 images in mapping the bracken fern [Pteridium aquilinum (L) kuhn] in a conserved urban landscape. Using the random forest algorithm, grid-search approaches based on out-of-bag estimate error were used to determine the optimal ntree and mtry combinations. The variable importance and backward feature elimination techniques were further used to determine the influence of the image bands on mapping accuracy. Additionally, the value of the commonly used vegetation indices in enhancing the classification accuracy was tested on the better performing image data. Results show that the performance of the new WV-2 bands was better than that of the traditional bands. Overall classification accuracies of 84.72 and 72.22% were achieved for the WV-2 and SPOT images, respectively. Use of selected indices from the WV-2 bands increased the overall classification accuracy to 91.67%. The findings in this study show the suitability of the new generation in mapping the bracken fern within the often vulnerable urban natural vegetation cover types.

  10. Stochastic Microlensing: Mathematical Theory and Applications

    NASA Astrophysics Data System (ADS)

    Teguia, Alberto Mokak

    Stochastic microlensing is a central tool in probing dark matter on galactic scales. From first principles, we initiate the development of a mathematical theory of stochastic microlensing. We first construct a natural probability space for stochastic microlensing and characterize the general behaviour of the random time delay functions' random critical sets. Next we study stochastic microlensing in two distinct random microlensing scenarios: The uniform stars' distribution with constant mass spectrum and the spatial stars' distribution with general mass spectrum. For each scenario, we determine exact and asymptotic (in the large number of point masses limit) stochastic properties of the random time delay functions and associated random lensing maps and random shear tensors, including their moments and asymptotic density functions. We use these results to study certain random observables, such as random fixed lensed images, random bending angles, and random magnifications. These results are relevant to the theory of random fields and provide a platform for further generalizations as well as analytical limits for checking astrophysical studies of stochastic microlensing. Continuing our development of a mathematical theory of stochastic microlensing, we study the stochastic version of the Image Counting Problem, first considered in the non-random setting by Einstein and generalized by Petters. In particular, we employ the Kac-Rice formula and Morse theory to deduce general formulas for the expected total number of images and the expected number of saddle images for a general random lensing scenario. We further generalize these results by considering random sources defined on a countable compact covering of the light source plane. This is done to introduce the notion of global expected number of positive parity images due to a general lensing map. Applying the result to the uniform stars' distribution random microlensing scenario, we calculate the asymptotic global

  11. Stochastic damage evolution in textile laminates

    NASA Technical Reports Server (NTRS)

    Dzenis, Yuris A.; Bogdanovich, Alexander E.; Pastore, Christopher M.

    1993-01-01

    A probabilistic model utilizing random material characteristics to predict damage evolution in textile laminates is presented. The model is based on a division of each ply into two sublaminae consisting of cells. The probability of cell failure is calculated using stochastic function theory and a maximum strain failure criterion. Three modes of failure, i.e. fiber breakage, matrix failure in the transverse direction, and matrix or interface shear cracking, are taken into account. Computed failure probabilities are utilized in reducing cell stiffness based on the mesovolume concept. A numerical algorithm is developed to predict the damage evolution and deformation history of textile laminates. The effect of scatter in fiber orientation on cell properties is discussed. The influence of the weave on damage accumulation is illustrated with an example of a Kevlar/epoxy laminate.

  12. Universality in numerical computations with random data.

    PubMed

    Deift, Percy A; Menon, Govind; Olver, Sheehan; Trogdon, Thomas

    2014-10-21

    The authors present evidence for universality in numerical computations with random data. Given a (possibly stochastic) numerical algorithm with random input data, the time (or number of iterations) to convergence (within a given tolerance) is a random variable, called the halting time. Two-component universality is observed for the fluctuations of the halting time--i.e., the histogram for the halting times, centered by the sample average and scaled by the sample variance, collapses to a universal curve, independent of the input data distribution, as the dimension increases. Thus, up to two components--the sample average and the sample variance--the statistics for the halting time are universally prescribed. The case studies include six standard numerical algorithms as well as a model of neural computation and decision-making. A link to relevant software is provided for readers who would like to do computations of their own.

  13. From Complex to Simple: Interdisciplinary Stochastic Models

    ERIC Educational Resources Information Center

    Mazilu, D. A.; Zamora, G.; Mazilu, I.

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…
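The one-dimensional random-walk machinery the authors build on has a well-known signature that is easy to check numerically (a generic illustration, not the microtubule model itself): the mean squared displacement after n unbiased ±1 steps equals n.

```python
import random

def msd(n_steps, n_walkers, seed=0):
    """One-dimensional symmetric random walk sketch: estimate the mean
    squared displacement over many independent walkers; the exact value
    is n_steps, the hallmark of diffusive spreading."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walkers):
        pos = sum(rng.choice((-1, 1)) for _ in range(n_steps))
        total += pos * pos
    return total / n_walkers
```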

  14. New "Tau-Leap" Strategy for Accelerated Stochastic Simulation.

    PubMed

    Ramkrishna, Doraiswami; Shu, Che-Chi; Tran, Vu

    2014-12-10

    The "Tau-Leap" strategy for stochastic simulations of chemical reaction systems due to Gillespie and co-workers has had considerable impact on various applications. This strategy is reexamined with Chebyshev's inequality for random variables, as it provides a rigorous probabilistic basis for a measured τ-leap, thus adding significantly to simulation efficiency. It is also shown that existing strategies for simulation times have no probabilistic assurance that they satisfy the τ-leap criterion, while the use of Chebyshev's inequality leads to a specified degree of certainty with which the τ-leap criterion is satisfied. This reduces the loss of sample paths which do not comply with the τ-leap criterion. The performance of the present algorithm is assessed with respect to one discussed by Cao et al. (J. Chem. Phys. 2006, 124, 044109), a second pertaining to the binomial leap (Tian and Burrage J. Chem. Phys. 2004, 121, 10356; Chatterjee et al. J. Chem. Phys. 2005, 122, 024112; Peng et al. J. Chem. Phys. 2007, 126, 224109), and a third regarding the midpoint Poisson leap (Peng et al., 2007; Gillespie J. Chem. Phys. 2001, 115, 1716). The performance assessment is made by estimating the error in the histogram measured against that obtained with the so-called stochastic simulation algorithm. It is shown that the current algorithm displays notably less histogram error than its predecessor for a fixed computation time and, conversely, less computation time for a fixed accuracy. This computational advantage is an asset in repetitive calculations essential for modeling stochastic systems. Stochastic simulations are important in diverse areas of application in the physical and biological sciences, process systems, and economics; computational improvements such as those reported herein are therefore of considerable significance.
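For orientation, the basic fixed-step tau-leap (Gillespie's original strategy, not the Chebyshev-based leap selection proposed in the paper) can be sketched for a birth-death system; a small stdlib Poisson sampler is included because Python's random module does not provide one.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler; adequate for the modest leap means used here."""
    if lam <= 0:
        return 0
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def tau_leap_birth_death(x0, k_birth, k_death, t_end, tau, seed=0):
    """Fixed-step tau-leaping sketch: within each leap of length tau, each
    channel fires a Poisson-distributed number of times with propensities
    frozen at the start of the leap."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end:
        births = poisson(rng, k_birth * tau)
        deaths = poisson(rng, k_death * x * tau)
        x = max(x + births - deaths, 0)  # clamp to avoid negative copy numbers
        t += tau
    return x
```

The τ-selection question the paper addresses is exactly how large tau may be while keeping the frozen-propensity approximation within a prescribed tolerance; Chebyshev's inequality supplies the probabilistic guarantee that a chosen leap respects that criterion.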

  15. Derivation of Randomized Algorithms.

    DTIC Science & Technology

    1985-10-01

    …multiple comparisons between keys are allowed on each step. Thus a comparison tree machine with p processors is allowed a maximum of p comparisons at… …be generated from a single original RAM by execution of a fork operation. This model, known as PRAM, allows multiple concurrent reads but prohibits…

  16. Epidemiologic programs for computers and calculators. Simple algorithms for the representation of deterministic and stochastic versions of the Reed-Frost epidemic model using a programmable calculator.

    PubMed

    Franco, E L; Simons, A R

    1986-05-01

    Two programs are described for the emulation of the dynamics of Reed-Frost progressive epidemics in a handheld programmable calculator (HP-41C series). The programs provide a complete record of cases, susceptibles, and immunes at each epidemic period using either the deterministic formulation or the trough analogue of the mechanical model for the stochastic version. Both programs can compute epidemics that include a constant rate of influx or outflux of susceptibles and single or double infectivity time periods.
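    A minimal sketch of both formulations, in Python rather than on an HP-41C (initial conditions and the infection probability below are illustrative assumptions, not values from the paper):

```python
import random

random.seed(7)

def reed_frost_deterministic(S, C, p, periods):
    """Expected new cases: C_{t+1} = S_t * (1 - (1-p)^{C_t})."""
    history = [(S, C)]
    for _ in range(periods):
        new_c = S * (1.0 - (1.0 - p) ** C)
        S, C = S - new_c, new_c
        history.append((S, C))
    return history

def reed_frost_stochastic(S, C, p, periods):
    """New cases drawn as Binomial(S_t, 1 - (1-p)^{C_t})."""
    history = [(S, C)]
    for _ in range(periods):
        q = 1.0 - (1.0 - p) ** C
        new_c = sum(1 for _ in range(S) if random.random() < q)
        S, C = S - new_c, new_c
        history.append((S, C))
    return history
```

    Each history entry records (susceptibles, cases) at one epidemic period; with p = 1 every susceptible is infected in a single period.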

  17. Stochastic thermodynamics

    NASA Astrophysics Data System (ADS)

    Eichhorn, Ralf; Aurell, Erik

    2014-04-01

    'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response
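    A toy illustration of the trajectory-wise first-law bookkeeping described above, for an assumed overdamped particle dragged in a harmonic trap (all parameter values are invented; this sketches only the work/heat accounting, not the review's formalism):

```python
import math
import random

random.seed(2)

# Overdamped particle in a dragged harmonic trap U(x, l) = k/2 (x - l)^2.
k, D, dt, v = 1.0, 1.0, 1e-3, 0.5
U = lambda x, l: 0.5 * k * (x - l) ** 2

x, lam = 0.0, 0.0
W = Q = 0.0
U0 = U(x, lam)
for _ in range(5000):
    lam_new = lam + v * dt
    W += U(x, lam_new) - U(x, lam)            # work: energy change at fixed x
    noise = math.sqrt(2 * D * dt) * random.gauss(0, 1)
    x_new = x - k * (x - lam_new) * dt + noise
    Q += U(x_new, lam_new) - U(x, lam_new)    # heat from the bath: change at fixed l
    x, lam = x_new, lam_new
# The first-law balance dU = dW + dQ holds step by step (telescoping sum).
```

    The balance is exact by construction here; the physical content of stochastic energetics lies in identifying these two increments with work and heat along a single fluctuating trajectory.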

  18. Discontinuity Detection in Multivariate Space for Stochastic Simulations

    SciTech Connect

    Archibald, Richard K; Gelb, Anne; Saxena, Rishu; Xiu, Dongbin

    2009-01-01

    Edge detection has traditionally been associated with detecting physical-space jump discontinuities in one dimension, e.g. seismic signals, and two dimensions, e.g. digital images. Hence most research on edge detection algorithms is restricted to these contexts. High-dimensional edge detection can, however, be of significant importance. For instance, stochastic variants of classical differential equations not only have variables in the space/time dimensions, but additional dimensions are often introduced to the problem by the nature of the random inputs. The stochastic solutions to such problems sometimes contain discontinuities in the corresponding random space, and prior knowledge of jump locations can be very helpful in increasing the accuracy of the final solution. Traditional edge detection methods typically require a uniform grid point distribution. They also often involve the computation of gradients and/or Laplacians, which can become very complicated to compute as the number of dimensions increases. The polynomial annihilation edge detection method, on the other hand, is more flexible in terms of its geometric specifications and is furthermore relatively easy to apply. This paper discusses the numerical implementation of the polynomial annihilation edge detection method for high-dimensional functions that arise when solving stochastic partial differential equations.

  19. Stochastic model of the residual acceleration environment in microgravity

    NASA Technical Reports Server (NTRS)

    Vinals, Jorge

    1994-01-01

    We describe a theoretical investigation of the effects that stochastic residual accelerations (g-jitter) onboard spacecraft can have on experiments conducted in a microgravity environment. We first introduce a stochastic model of the residual acceleration field, and develop a numerical algorithm to solve the equations governing fluid flow that allow for a stochastic body force. We next summarize our studies of two generic situations: stochastic parametric resonance and the onset of convective flow induced by a fluctuating acceleration field.

  20. Stochastic models for human gene mapping

    SciTech Connect

    Goradia, T.M.

    1992-01-01

    This thesis examines a variety of gene mapping experiments and recommends, on the basis of stochastic and combinatorial analysis, improved experimental designs. Somatic cell hybrid panels can localize genes to particular chromosomes or chromosomal regions. Although the redundancy within randomly generated panels may be beneficial, probability calculations reveal their inefficiency. Equally good panels with far fewer clones can be constructed by choosing clones from a pre-existing collection. The method of simulated annealing is suggested for judiciously selecting small, informative panels from larger existing collections of clones. A more difficult exercise is mapping a gene relative to syntenic genes on the basis of genetic distance. Traditional methods of pedigree analysis are able to accomplish this to a great extent. Automatic genotype elimination algorithms for a single locus play a central role in making likelihood computations on human pedigree data feasible. A simple algorithm that is fully efficient in pedigrees without loops is presented. This algorithm can be easily coded and is instrumental in reducing computing times for pedigree analysis. Alternative methods are needed for high-resolution gene mapping. Three-locus sperm typing and its implications for the estimation of recombination fractions and for locus ordering are examined. Comparisons are made among some sequential stopping rules for three-locus order assignment. Poissonization and other stochastic methods are used for approximating the mean stopping times and error probabilities. A trisection strategy for ordering a new locus relative to an existing set of loci is proposed. When used in conjunction with Bayesian methods, this trisection strategy has attractive optimality properties. The genetic distance between the (G)gamma-globin locus and the parathyroid hormone locus is verified by a probabilistic model that accounts for the major sources of laboratory error.

  1. Handling packet dropouts and random delays for unstable delayed processes in NCS by optimal tuning of PIλDμ controllers with evolutionary algorithms.

    PubMed

    Pan, Indranil; Das, Saptarshi; Gupta, Amitava

    2011-10-01

    The issues of stochastically varying network delays and packet dropouts in Networked Control System (NCS) applications are addressed simultaneously by time-domain optimal tuning of fractional order (FO) PID controllers. Different variants of evolutionary algorithms are used for the tuning process and their performances are compared. The effectiveness of the fractional order PI(λ)D(μ) controllers over their integer-order counterparts is also examined. Two standard test bench plants with time delay and unstable poles, as encountered in process control applications, are tuned with the proposed method to establish the validity of the tuning methodology. The proposed methodology is independent of the specific choice of plant and is also applicable to less complicated systems, making it useful in a wide variety of scenarios. The paper also shows the superiority of FOPID controllers over their conventional PID counterparts for NCS applications.
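    A stripped-down version of this kind of tuning loop, with an assumed first-order-plus-delay plant, an integer-order PID, and a simple (1+1) evolution strategy standing in for the paper's GA/PSO variants (the plant, cost, and all parameter values are illustrative assumptions):

```python
import random

random.seed(0)

def itae_cost(gains, dt=0.01, T=8.0, delay=0.2):
    """ITAE of the closed-loop unit-step response of G(s) = 1/(s+1)
    with an input delay, under a discrete PID controller."""
    kp, ki, kd = gains
    n, nd = int(T / dt), int(delay / dt)
    buf = [0.0] * nd                       # delay line for the controller output
    y, integ, e_prev, cost = 0.0, 0.0, 1.0, 0.0
    for step in range(n):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        e_prev = e
        buf.append(u)
        y += dt * (-y + buf.pop(0))        # Euler step of the plant
        cost += step * dt * abs(e) * dt    # integral of t * |e|
    return cost

# (1+1)-evolution strategy: mutate the gains, keep any improvement.
gains = [1.0, 0.5, 0.0]
start = best = itae_cost(gains)
for _ in range(200):
    cand = [max(0.0, g + random.gauss(0, 0.2)) for g in gains]
    c = itae_cost(cand)
    if c < best:
        gains, best = cand, c
```

    By elitism the final cost can never exceed the starting cost; the evolutionary variants in the paper differ mainly in how candidate gain vectors are proposed.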

  2. Time-ordered product expansions for computational stochastic system biology.

    PubMed

    Mjolsness, Eric

    2013-06-01

    The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie's stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems.

  3. Stochastic learning via optimizing the variational inequalities.

    PubMed

    Tao, Qing; Gao, Qian-Kun; Chu, De-Jun; Wu, Gao-Wei

    2014-10-01

    A wide variety of learning problems can be posed in the framework of convex optimization, and many efficient algorithms have been developed based on solving the induced optimization problems. However, there exists a gap between the theoretically unbeatable convergence rate and the practically efficient learning speed. In this paper, we use variational inequality (VI) convergence to describe the learning speed. To this end, we avoid the hard concept of regret in online learning and directly discuss stochastic learning algorithms. We first cast the regularized learning problem as a VI. Then, we present a stochastic version of the alternating direction method of multipliers (ADMM) to solve the induced VI. We define a new VI criterion to measure the convergence of stochastic algorithms. While the rate of convergence of any iterative algorithm for solving nonsmooth convex optimization problems cannot be better than O(1/√t), the proposed stochastic ADMM (SADMM) is proved to have an O(1/t) VI-convergence rate for l1-regularized hinge loss problems without strong convexity and smoothness. The derived VI-convergence results also support the viewpoint that the standard online analysis is too loose to analyze the stochastic setting properly. The experiments demonstrate that SADMM has almost the same performance as state-of-the-art stochastic learning algorithms, while its O(1/t) VI-convergence rate tightly characterizes the real learning speed.

  4. Stochastic thermodynamics of resetting

    NASA Astrophysics Data System (ADS)

    Fuchs, Jaco; Goldt, Sebastian; Seifert, Udo

    2016-03-01

    Stochastic dynamics with random resetting leads to a non-equilibrium steady state. Here, we consider the thermodynamics of resetting by deriving the first and second law for resetting processes far from equilibrium. We identify the contributions to the entropy production of the system which arise due to resetting and show that they correspond to the rate with which information is either erased or created. Using Landauer's principle, we derive a bound on the amount of work that is required to maintain a resetting process. We discuss different regimes of resetting, including a Maxwell demon scenario where heat is extracted from a bath at constant temperature.
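    A quick numerical illustration of why resetting produces a non-equilibrium steady state: free diffusion spreads without bound, while diffusion with Poissonian resetting to the origin saturates (the diffusion constant, resetting rate, and horizon below are arbitrary choices):

```python
import math
import random

random.seed(3)

def mean_square_displacement(T=20.0, dt=0.02, D=1.0, r=None, n=300):
    """Average x^2 at time T over n Brownian walkers, optionally reset
    to the origin with rate r (Poissonian resetting)."""
    total = 0.0
    for _ in range(n):
        x = 0.0
        for _ in range(int(T / dt)):
            if r is not None and random.random() < r * dt:
                x = 0.0                                   # reset event
            else:
                x += math.sqrt(2 * D * dt) * random.gauss(0, 1)
        total += x * x
    return total / n

free = mean_square_displacement()          # grows like 2*D*T
reset = mean_square_displacement(r=1.0)    # saturates near 2*D/r
```

    Maintaining that saturated state costs work, which is what the Landauer-type bound in the paper quantifies.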

  5. Bayesian Estimation and Inference Using Stochastic Electronics

    PubMed Central

    Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan; van Schaik, André

    2016-01-01

    In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers due to the low noise margin, the effect of high-energy cosmic rays, and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream. PMID:27047326
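    The Bayesian recursive (predict/update) equation that the BEAST hardware solves can be written in a few lines of ordinary software. This sketch assumes a 10-cell grid, a nearest-neighbour random-walk transition model, and a noisy-sensor likelihood, all invented for illustration:

```python
N = 10                                  # discrete target positions

# Transition model: move to a neighbouring cell or stay, uniformly.
T = [[0.0] * N for _ in range(N)]
for i in range(N):
    neighbours = [j for j in (i - 1, i, i + 1) if 0 <= j < N]
    for j in neighbours:
        T[i][j] = 1.0 / len(neighbours)

def likelihood(z, j, eps=0.2):
    """Sensor reports the true cell with prob. 1 - eps, else any other cell."""
    return (1 - eps) if z == j else eps / (N - 1)

belief = [1.0 / N] * N                  # uniform prior
for z in [3, 3, 4, 4, 5]:               # a stream of noisy observations
    # predict step: push the belief through the transition model
    pred = [sum(belief[i] * T[i][j] for i in range(N)) for j in range(N)]
    # update step: weight by the observation likelihood and renormalize
    post = [pred[j] * likelihood(z, j) for j in range(N)]
    norm = sum(post)
    belief = [p / norm for p in post]
```

    The stochastic-electronics version encodes these probabilities as bit streams rather than floating-point numbers, but the recursion is the same.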

  7. Fluctuating currents in stochastic thermodynamics. II. Energy conversion and nonequilibrium response in kinesin models

    NASA Astrophysics Data System (ADS)

    Altaner, Bernhard; Wachtel, Artur; Vollmer, Jürgen

    2015-10-01

    Unlike macroscopic engines, the molecular machinery of living cells is strongly affected by fluctuations. Stochastic thermodynamics uses Markovian jump processes to model the random transitions between the chemical and configurational states of these biological macromolecules. A recently developed theoretical framework [A. Wachtel, J. Vollmer, and B. Altaner, Phys. Rev. E 92, 042132 (2015), 10.1103/PhysRevE.92.042132] provides a simple algorithm for the determination of macroscopic currents and correlation integrals of arbitrary fluctuating currents. Here we use it to discuss energy conversion and nonequilibrium response in different models for the molecular motor kinesin. Methodologically, our results demonstrate the effectiveness of the algorithm in dealing with parameter-dependent stochastic models. For the concrete biophysical problem our results reveal two interesting features in experimentally accessible parameter regions: the validity of a nonequilibrium Green-Kubo relation at mechanical stalling as well as a negative differential mobility for superstalling forces.

  8. Analysis of stochastic effects in Kaldor-type business cycle discrete model

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina; Ryashko, Lev; Sysolyatina, Anna

    2016-07-01

    We study nonlinear stochastic phenomena in the discrete Kaldor model of business cycles. A numerical parametric analysis of stochastically forced attractors (equilibria, closed invariant curves, discrete cycles) of this model is performed using the stochastic sensitivity functions technique. A spatial arrangement of random states in stochastic attractors is modeled by confidence domains. The phenomenon of noise-induced transitions "chaos-order" is discussed.

  9. A Simple Stochastic Model for Generating Broken Cloud Optical Depth and Top Height Fields

    NASA Technical Reports Server (NTRS)

    Prigarin, Sergei M.; Marshak, Alexander

    2007-01-01

    A simple and fast algorithm for generating two correlated stochastic two-dimensional (2D) cloud fields is described. The algorithm is illustrated with two broken cumulus cloud fields: cloud optical depth and cloud top height retrieved from the Moderate Resolution Imaging Spectroradiometer (MODIS). Only two 2D fields are required as input. The algorithm output is statistical realizations of these two fields with approximately the same correlation and joint distribution functions as the original ones. The major assumption of the algorithm is statistical isotropy of the fields. In contrast to fractals and the Fourier filtering methods frequently used for stochastic cloud modeling, the proposed method is based on spectral models of homogeneous random fields. To keep the same probability density function as the (first) original field, the method of the inverse distribution function is used. Once the spatial distribution of the first field has been generated, a realization of the correlated second field is simulated using a conditional distribution matrix. This paper serves as a theoretical justification for the publicly available software recently released by the authors, which can be freely downloaded from http://i3rc.gsfc.nasa.gov/Public codes clouds.htm. Though 2D rather than full 3D, stochastic realizations of two correlated cloud fields that mimic the statistics of given fields have proved very useful for studying the 3D radiative transfer features of broken cumulus clouds, for better understanding of shortwave radiation and interpretation of remote sensing retrievals.
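    The inverse-distribution-function step can be illustrated with a rank mapping: a Gaussian surrogate is reordered onto the empirical distribution of the original field, so the output exactly reproduces the original one-point histogram (the lognormal "optical depth" sample below is an invented stand-in for real MODIS data):

```python
import random

random.seed(5)

original = [random.lognormvariate(0.0, 0.8) for _ in range(1000)]   # stand-in field
surrogate = [random.gauss(0.0, 1.0) for _ in range(1000)]           # Gaussian model field

# Map the i-th smallest surrogate value onto the i-th smallest original value.
order = sorted(range(len(surrogate)), key=lambda i: surrogate[i])
target = sorted(original)
mapped = [0.0] * len(surrogate)
for rank, idx in enumerate(order):
    mapped[idx] = target[rank]
```

    The spatial arrangement (here simply the index order) follows the surrogate field, while the marginal distribution is inherited exactly from `original`.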

  10. QB1 - Stochastic Gene Regulation

    SciTech Connect

    Munsky, Brian

    2012-07-23

    Summaries of this presentation are: (1) Stochastic fluctuations or 'noise' are present in the cell - random motion and competition between reactants, low copy numbers and quantization of reactants, upstream processes; (2) Fluctuations may be very important - cell-to-cell variability, cell fate decisions (switches), signal amplification or damping, stochastic resonances; and (3) Some tools are available to model these - kinetic Monte Carlo simulations (SSA and variants), moment approximation methods, Finite State Projection. We will see how modeling these reactions can tell us more about the underlying processes of gene regulation.

  11. Meta-analysis of randomized controlled trials reveals an improved clinical outcome of using genotype plus clinical algorithm for warfarin dosing.

    PubMed

    Liao, Zhenqi; Feng, Shaoguang; Ling, Peng; Zhang, Guoqing

    2015-02-01

    Previous studies have raised interest in using genotyping of CYP2C9 and VKORC1 to guide warfarin dosing. However, there is a lack of solid evidence that a genotype plus clinical algorithm provides improved clinical outcomes over a clinical algorithm alone. The results of recently reported clinical trials are contradictory and need to be systematically evaluated. In this study, we aim to assess whether the genotype plus clinical algorithm for warfarin is superior to the clinical algorithm alone through a meta-analysis of randomized controlled trials (RCTs). All relevant studies from PubMed and reference lists from Jan 1, 1995 to Jan 13, 2014 were extracted and screened. Eligible studies included randomized trials that compared clinical plus pharmacogenetic algorithm groups to clinical-algorithm-only groups using adult (≥ 18 years) patients with disease conditions that require warfarin use. We further used fixed-effect models to calculate the mean difference or the risk ratios (RRs) and 95% CIs to analyze the extracted data. Statistical heterogeneity was calculated using I(2). The percentage of time within the therapeutic INR range was considered the primary clinical outcome. The initial search strategy identified 50 citations, of which 7 trials were eligible. These seven trials included 1,910 participants: 960 patients who received genotype plus clinical algorithm warfarin dosing and 950 patients who received the clinical algorithm only. We found that the percentage of time within the therapeutic INR range of the genotype-guided group was improved compared with the standard group in the RCTs when the initial standard dose was fixed (95% CI 0.09-0.40; I(2) = 47.8%). However, for the studies using non-fixed initial doses, the genotype-guided group failed to exhibit a statistically significant improvement over the standard group. No significant difference was observed in the incidence of adverse events (RR 0.94, 95% CI 0.84-1.04; I(2) = 0%, p
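    The pooling machinery named in the abstract (inverse-variance fixed-effect weights, Cochran's Q, and I²) is generic; a sketch with invented study values, not the paper's data:

```python
def fixed_effect(estimates, ses):
    """Inverse-variance fixed-effect pooling of study estimates with
    standard errors; returns the pooled estimate, its SE, and I^2 (%)."""
    w = [1.0 / s ** 2 for s in ses]
    pooled = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    se_pooled = (1.0 / sum(w)) ** 0.5
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, estimates))  # Cochran's Q
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, se_pooled, i2

# e.g. three hypothetical trials reporting mean differences and SEs:
result = fixed_effect([0.30, 0.20, 0.25], [0.10, 0.12, 0.08])
```

    When all studies report the same estimate, the pooled value equals that estimate and I² is zero.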

  12. Principal axes for stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Vasconcelos, V. V.; Raischel, F.; Haase, M.; Peinke, J.; Wächter, M.; Lind, P. G.; Kleinhans, D.

    2011-09-01

    We introduce a general procedure for directly ascertaining how many independent stochastic sources exist in a complex system modeled through a set of coupled Langevin equations of arbitrary dimension. The procedure is based on the computation of the eigenvalues and the corresponding eigenvectors of local diffusion matrices. We demonstrate our algorithm by applying it to two examples of systems showing Hopf bifurcation. We argue that computing the eigenvectors associated with the eigenvalues of the diffusion matrix at local mesh points in the phase space enables one to define vector fields of stochastic eigendirections. In particular, the eigenvector associated with the lowest eigenvalue defines the path of minimum stochastic forcing in phase space, and a transform to a new coordinate system aligned with the eigenvectors can increase the predictability of the system.
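    A bare-bones version of the construction for an assumed two-dimensional Langevin system with anisotropic noise (the diffusion matrix is estimated from increments and diagonalized by hand; all coefficients are invented):

```python
import math
import random

random.seed(6)

dt, n = 1e-3, 20000
sx, sy = 0.1, 1.0            # weak noise along x, strong noise along y
x = y = 0.0
incs = []
for _ in range(n):
    dx = -x * dt + sx * math.sqrt(dt) * random.gauss(0, 1)
    dy = -y * dt + sy * math.sqrt(dt) * random.gauss(0, 1)
    incs.append((dx, dy))
    x += dx
    y += dy

# Local diffusion matrix from increments: D_ij ~ <dX_i dX_j> / (2 dt).
dxx = sum(a * a for a, _ in incs) / (2 * dt * n)
dyy = sum(b * b for _, b in incs) / (2 * dt * n)
dxy = sum(a * b for a, b in incs) / (2 * dt * n)

# Smallest eigenvalue of the symmetric 2x2 matrix [[dxx, dxy], [dxy, dyy]].
tr, det = dxx + dyy, dxx * dyy - dxy * dxy
lam_min = tr / 2 - math.sqrt(tr * tr / 4 - det)

# Its eigenvector: the direction of minimum stochastic forcing.
vx, vy = dxy, lam_min - dxx
norm = math.hypot(vx, vy)
if norm == 0.0:
    vx, vy = 1.0, 0.0        # degenerate (isotropic) estimate
else:
    vx, vy = vx / norm, vy / norm
```

    With this noise structure the minimum-forcing eigendirection recovers the weak-noise x axis, and the smallest eigenvalue approaches sx²/2.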

  13. Multidimensional stochastic approximation using locally contractive functions

    NASA Technical Reports Server (NTRS)

    Lawton, W. M.

    1975-01-01

    A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
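    The scheme itself is short; a sketch for an assumed one-dimensional contraction g(x) = 0.5x + 1 (fixed point x* = 2) observed only through additive noise:

```python
import random

random.seed(8)

def noisy_g(x):
    """Contractive regression function g(x) = 0.5*x + 1, observed with noise."""
    return 0.5 * x + 1.0 + random.gauss(0, 0.1)

x = 0.0
for n in range(1, 5001):
    a_n = 1.0 / n                           # sum a_n diverges, sum a_n^2 converges
    x = (1 - a_n) * x + a_n * noisy_g(x)    # Robbins-Monro fixed-point iteration
```

    The 1/n step sizes average out the observation noise while the contraction pulls the iterate toward the fixed point; the cited report extends this picture to the multidimensional case.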

  14. Simple stochastic simulation.

    PubMed

    Schilstra, Maria J; Martin, Stephen R

    2009-01-01

    Stochastic simulations may be used to describe changes with time of a reaction system in a way that explicitly accounts for the fact that molecules show a significant degree of randomness in their dynamic behavior. The stochastic approach is almost invariably used when small numbers of molecules or molecular assemblies are involved because this randomness leads to significant deviations from the predictions of the conventional deterministic (or continuous) approach to the simulation of biochemical kinetics. Advances in computational methods over the three decades that have elapsed since the publication of Daniel Gillespie's seminal paper in 1977 (J. Phys. Chem. 81, 2340-2361) have allowed researchers to produce highly sophisticated models of complex biological systems. However, these models are frequently highly specific for the particular application and their description often involves mathematical treatments inaccessible to the nonspecialist. For anyone completely new to the field to apply such techniques in their own work might seem at first sight to be a rather intimidating prospect. However, the fundamental principles underlying the approach are in essence rather simple, and the aim of this article is to provide an entry point to the field for a newcomer. It focuses mainly on these general principles, both kinetic and computational, which tend to be not particularly well covered in specialist literature, and shows that interesting information may even be obtained using very simple operations in a conventional spreadsheet.
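    In the spirit of the article's spreadsheet examples, Gillespie's direct method for the single irreversible reaction A → B fits in a few lines (the rate constant and time horizon are arbitrary choices):

```python
import math
import random

random.seed(9)

def ssa_isomerization(a0, c=1.0, t_end=10.0):
    """Direct-method SSA for A -> B with propensity c*A: draw an
    exponential waiting time, then fire the (only) reaction."""
    A, B, t = a0, 0, 0.0
    while A > 0:
        tau = -math.log(1.0 - random.random()) / (c * A)
        if t + tau > t_end:
            break
        t += tau
        A -= 1
        B += 1
    return A, B
```

    Each run is one sample path; averaging many runs recovers the deterministic exponential decay, while individual runs show the fluctuations that matter at low copy numbers.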

  15. Intrinsic optimization using stochastic nanomagnets

    NASA Astrophysics Data System (ADS)

    Sutton, Brian; Camsari, Kerem Yunus; Behin-Aein, Behtash; Datta, Supriyo

    2017-03-01

    This paper draws attention to a hardware system which can be engineered so that its intrinsic physics is described by the generalized Ising model and can encode the solution to many important NP-hard problems as its ground state. The basic constituents are stochastic nanomagnets which switch randomly between the ±1 Ising states and can be monitored continuously with standard electronics. Their mutual interactions can be short or long range, and their strengths can be reconfigured as needed to solve specific problems and to anneal the system at room temperature. The natural laws of statistical mechanics guide the network of stochastic nanomagnets at GHz speeds through the collective states with an emphasis on the low energy states that represent optimal solutions. As proof-of-concept, we present simulation results for standard NP-complete examples including a 16-city traveling salesman problem using experimentally benchmarked models for spin-transfer torque driven stochastic nanomagnets.
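    In software, the annealing dynamics can be caricatured with Metropolis single-spin flips on a small ferromagnetic Ising ring (a generic stand-in, not a model of the nanomagnet hardware; the coupling, schedule, and size are arbitrary):

```python
import math
import random

random.seed(10)

N, J = 8, 1.0
spins = [random.choice((-1, 1)) for _ in range(N)]

def energy(s):
    """Ferromagnetic ring: E = -J * sum_i s_i s_{i+1}; ground state E = -N*J."""
    return -J * sum(s[i] * s[(i + 1) % N] for i in range(N))

best = energy(spins)
T = 2.0
for _ in range(2000):
    i = random.randrange(N)
    dE = 2 * J * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i] = -spins[i]            # Metropolis accept
    best = min(best, energy(spins))
    T = max(0.05, T * 0.999)            # geometric cooling schedule
```

    The stochastic nanomagnets play the role of the flipping spins, with the physics itself supplying the Metropolis-like acceptance at GHz rates.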

  17. Competitive Facility Location with Random Demands

    NASA Astrophysics Data System (ADS)

    Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke

    2009-10-01

    This paper proposes a new location problem for competitive facilities, e.g. shops and stores, with uncertain demands in the plane. By representing the demands for facilities as random variables, the location problem is formulated as a stochastic programming problem, and for finding its solution three deterministic programming problems are considered: an expectation maximizing problem, a probability maximizing problem, and a satisfying level maximizing problem. After showing that an optimal solution of each can be found by solving 0-1 programming problems, a solution method is proposed that improves the tabu search algorithm with strategic vibration. The efficiency of the solution method is shown by applying it to numerical examples of the facility location problems.

  18. The Stochastic Gradient Approximation: An application to lithium nanoclusters

    NASA Astrophysics Data System (ADS)

    Nissenbaum, Daniel

    The Stochastic Gradient Approximation (SGA) is the natural extension of Quantum Monte Carlo (QMC) methods to the variational optimization of quantum wave function parameters. While many deterministic applications impose stochasticity, the SGA fruitfully takes advantage of the natural stochasticity already present in QMC: it uses a small number of QMC samples and approaches the minimum more quickly by averaging out the random noise in the samples. The increasing efficiency of the method for systems with larger numbers of particles, and its nearly ideal scaling when running on parallel processors, is evidence that the SGA is well suited to the study of nanoclusters. In this thesis, I discuss the SGA algorithm in detail. I also describe its application both to quantum dots and to the Resonating Valence Bond (RVB) wave function. The RVB is a sophisticated model of electronic systems that captures electronic correlation effects directly and improves the nodal structure of quantum wave functions. The RVB is receiving renewed attention in the study of nanoclusters because calculations of RVB wave functions have become feasible with recent advances in computer hardware and software.

  19. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS), together with several improvement strategies: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with multiple extrema and multiple parameters; this is the theoretical principle of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm and gives full play to the advantages of each. The method is verified on universal standard Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms the single algorithms in the accuracy of the calculated protein sequence energy value, proving it an effective way to predict the structure of proteins.

  20. Accelerating Pseudo-Random Number Generator for MCNP on GPU

    NASA Astrophysics Data System (ADS)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Hu, Qingfeng; Deng, Li; Gong, Zhenghu

    2010-09-01

    Pseudo-random number generators (PRNGs) are used intensively in many stochastic algorithms, including particle simulations, artificial neural networks, and other scientific computations. The PRNG in the Monte Carlo N-Particle Transport Code (MCNP) must have a long period and high statistical quality, support flexible jump-ahead, and be fast. In this paper, we implement such a PRNG for MCNP on NVIDIA GTX200 Graphics Processing Units (GPUs) using the CUDA programming model. Results show speedups of 3.80 to 8.10 times over 4- to 6-core CPUs, with more than 679.18 million double-precision random numbers generated per second on the GPU.
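    The "flexible jump" requirement can be met for any linear congruential generator by composing the affine update map with itself, giving O(log k) skip-ahead. A sketch using Knuth's MMIX LCG constants (illustrative only; these are not MCNP's actual parameters, and no GPU code is shown):

```python
def lcg_step(x, a, c, m):
    return (a * x + c) % m

def lcg_jump(x, a, c, m, k):
    # Advance the LCG state by k steps in O(log k) multiplications by
    # repeatedly squaring the affine map x -> a*x + c.
    A, C = 1, 0                                  # accumulated map: x -> A*x + C
    while k > 0:
        if k & 1:
            A, C = (a * A) % m, (a * C + c) % m  # compose accumulated map with g
        a, c = (a * a) % m, (a * c + c) % m      # square g
        k >>= 1
    return (A * x + C) % m

# Knuth's MMIX LCG constants (illustrative; not MCNP's parameters).
a, c, m = 6364136223846793005, 1442695040888963407, 2 ** 64
state = 12345
for _ in range(1000):
    state = lcg_step(state, a, c, m)
jumped = lcg_jump(12345, a, c, m, 1000)   # same state, logarithmic work
```

On a GPU, each thread would jump its own stream to a disjoint offset before generating; this sketch shows only the jump arithmetic.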

  1. Stochastic modeling of short-term exposure close to an air pollution source in a naturally ventilated room: an autocorrelated random walk method.

    PubMed

    Cheng, Kai-Chung; Acevedo-Bolton, Viviana; Jiang, Ruo-Ting; Klepeis, Neil E; Ott, Wayne R; Kitanidis, Peter K; Hildemann, Lynn M

    2014-01-01

    For an actively emitting source such as cooking or smoking, indoor measurements have shown a strong "proximity effect" within 1 m. The significant increase in both the magnitude and variation of concentration near a source is attributable to transient high peaks that occur sporadically, and these "microplumes" cause great uncertainty in estimating personal exposure. Recent field studies in naturally ventilated rooms show that close-proximity concentrations are approximately lognormally distributed. We use the autocorrelated random walk method to represent the time-varying directionality of indoor emissions, thereby predicting the time series and frequency distributions of concentrations close to an actively emitting point source. The predicted 5-min concentrations show good agreement with measurements from a point source of CO in a naturally ventilated house: the measured and predicted frequency distributions at 0.5- and 1-m distances are similar and approximately lognormal over a concentration range spanning three orders of magnitude. By including the transient peak concentrations, this random airflow modeling method offers a way to more accurately assess acute exposure levels for cases where well-defined airflow patterns in an indoor space are not available.
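    The persistence that distinguishes an autocorrelated random walk from white noise can be illustrated with a first-order autoregressive (AR(1)) series, which could drive, for example, a time-varying emission direction; the parameters below are illustrative, not taken from the study:

```python
import random

def ar1_series(n, phi=0.8, sigma=1.0, seed=7):
    # AR(1): x_t = phi * x_{t-1} + eps_t has lag-1 autocorrelation phi,
    # so successive values are correlated rather than independent.
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        out.append(x)
    return out

def lag1_autocorr(xs):
    # Sample lag-1 autocorrelation of a series.
    n = len(xs)
    mu = sum(xs) / n
    num = sum((xs[i] - mu) * (xs[i + 1] - mu) for i in range(n - 1))
    den = sum((v - mu) ** 2 for v in xs)
    return num / den

r = lag1_autocorr(ar1_series(20000))   # close to phi = 0.8
```

Feeding such a series into the step direction of a 2D walk yields the sporadic, persistent microplumes the abstract describes, as opposed to the rapid mixing an uncorrelated walk would produce.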

  2. Output Feedback Stabilization for a Class of Multi-Variable Bilinear Stochastic Systems with Stochastic Coupling Attenuation

    SciTech Connect

    Zhang, Qichun; Zhou, Jinglin; Wang, Hong; Chai, Tianyou

    2016-01-01

    In this paper, stochastic coupling attenuation is investigated for a class of multi-variable bilinear stochastic systems, and a novel output-feedback m-block backstepping controller with a linear estimator is designed, where gradient-descent optimization is used to tune the design parameters of the controller. It is shown that the trajectories of the closed-loop stochastic systems are bounded in probability and that the stochastic coupling of the system outputs can be effectively attenuated by the proposed control algorithm. Moreover, the stability of the stochastic systems is analyzed, and the effectiveness of the proposed method is demonstrated on a simulated example.

  3. Application of Monte Carlo techniques to optimization of high-energy beam transport in a stochastic environment

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.

    1971-01-01

    An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained from a 600 MeV cyclotron are given.
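    A creeping random search of the kind described reduces, in essence, to "perturb, clip to the feasible box, keep if better". A minimal sketch on a hypothetical two-parameter objective standing in for the beam-transport cost:

```python
import random

def creeping_random_search(f, x0, bounds, step=0.5, iters=3000, seed=3):
    # Sequential random perturbation: accept a perturbed point only if it
    # improves the objective; clipping to bounds rules out infeasible points.
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [min(max(xi + rng.uniform(-step, step), lo), hi)
                for xi, (lo, hi) in zip(x, bounds)]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

# Hypothetical quadratic objective with optimum at (1, -2).
f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best, val = creeping_random_search(f, [0.0, 0.0], [(-5, 5), (-5, 5)])
```

The improvement-only acceptance rule is what makes the search "creep": it never leaves a feasible, improving trajectory, which is also what makes it easy to interrupt and steer interactively as the abstract describes.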

  4. Stochastic Wireless Channel Modeling, Estimation and Identification from Measurements

    SciTech Connect

    Olama, Mohammed M; Djouadi, Seddik M; Li, Yanyan

    2008-07-01

    This paper is concerned with stochastic modeling of wireless fading channels, parameter estimation, and system identification from measurement data. Wireless channels are represented by stochastic state-space form, whose parameters and state variables are estimated using the expectation maximization algorithm and Kalman filtering, respectively. The latter are carried out solely from received signal measurements. These algorithms estimate the channel inphase and quadrature components and identify the channel parameters recursively. The proposed algorithm is tested using measurement data, and the results are presented.
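    The state-estimation half of such a scheme is a standard Kalman filter; the expectation-maximization parameter updates are omitted here. A scalar sketch with a hypothetical AR(1) channel component observed in noise (all parameters are illustrative):

```python
import random

def kalman_1d(zs, a=0.9, q=0.1, r=1.0):
    # Scalar Kalman filter for x_t = a*x_{t-1} + w_t,  z_t = x_t + v_t,
    # with process-noise variance q and measurement-noise variance r.
    x, p = 0.0, 1.0
    est = []
    for z in zs:
        x, p = a * x, a * a * p + q          # predict
        g = p / (p + r)                      # Kalman gain
        x, p = x + g * (z - x), (1 - g) * p  # update with the measurement
        est.append(x)
    return est

# Simulate a hypothetical channel component and noisy measurements of it.
rng = random.Random(0)
xs, x = [], 0.0
for _ in range(2000):
    x = 0.9 * x + rng.gauss(0.0, 0.1 ** 0.5)
    xs.append(x)
zs = [xi + rng.gauss(0.0, 1.0) for xi in xs]
est = kalman_1d(zs)
```

The filtered estimates track the hidden state with substantially lower error than the raw measurements, which is the role the filter plays for the inphase and quadrature components in the paper.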

  5. Stochastic Cooling

    SciTech Connect

    Blaskiewicz, M.

    2011-01-01

    Stochastic Cooling was invented by Simon van der Meer and was demonstrated at the CERN ISR and ICE (Initial Cooling Experiment). Operational systems were developed at Fermilab and CERN. A complete theory of cooling of unbunched beams was developed, and was applied at CERN and Fermilab. Several new and existing rings employ coasting beam cooling. Bunched beam cooling was demonstrated in ICE and has been observed in several rings designed for coasting beam cooling. High energy bunched beams have proven more difficult. Signal suppression was achieved in the Tevatron, though operational cooling was not pursued at Fermilab. Longitudinal cooling was achieved in the RHIC collider. More recently a vertical cooling system in RHIC cooled both transverse dimensions via betatron coupling.

  6. Detecting Character Dependencies in Stochastic Models of Evolution.

    PubMed

    Chakrabarty, Deeparnab; Kannan, Sampath; Tian, Kevin

    2016-03-01

    Stochastic models of biological evolution generally assume that different characters (runs of the stochastic process) are independent and identically distributed. In this article we determine the asymptotic complexity of detecting dependence for some fairly general models of evolution, but simple models of dependence. A key difference from much of the previous work is that our algorithms work without knowledge of the tree topology. Specifically, we consider various stochastic models of evolution ranging from the common ones used by biologists (such as Cavender-Farris-Neyman and Jukes-Cantor models) to very general ones where evolution of different characters can be governed by different transition matrices on each edge of the evolutionary tree (phylogeny). We also consider several models of dependence between two characters. In the most specific model, on each edge of the phylogeny the joint distribution of the dependent characters undergoes a perturbation of a fixed magnitude, in a fixed direction from what it would be if the characters were evolving independently. More general dependence models don't require such a strong "signal." Instead they only require that on each edge, the perturbation of the joint distribution has a significant component in a specific direction. Our main results are nearly tight bounds on the induced or operator norm of the transition matrices that would allow us to detect dependence efficiently for most models of evolution and dependence that we consider. We make essential use of a new concentration result for multistate random variables of a Markov random field on arbitrary trivalent trees: We show that the random variable counting the number of leaves in any particular state has variance that is subquadratic in the number of leaves.

  7. Competitive Facility Location with Fuzzy Random Demands

    NASA Astrophysics Data System (ADS)

    Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke

    2010-10-01

    This paper proposes a new location problem for competitive facilities, e.g., shops, under uncertainty and vagueness in the demands for the facilities in a plane. By representing the demands for facilities as fuzzy random variables, the location problem can be formulated as a fuzzy random programming problem. To solve it, the α-level sets of fuzzy numbers are first used to transform it into a stochastic programming problem; second, using their expectations and variances, it can be reformulated as a deterministic programming problem. After showing that an optimal solution can be found by solving 0-1 programming problems, a solution method is proposed based on an improved tabu search algorithm with strategic oscillation. The efficiency of the proposed method is shown by applying it to numerical examples of facility location problems.

  8. Optimisation of simulations of stochastic processes by removal of opposing reactions

    NASA Astrophysics Data System (ADS)

    Spill, Fabian; Maini, Philip K.; Byrne, Helen M.

    2016-02-01

    Models invoking the chemical master equation are used in many areas of science, and, hence, their simulation is of interest to many researchers. The complexity of the problems at hand often requires considerable computational power, so a large number of algorithms have been developed to speed up simulations. However, a drawback of many of these algorithms is that their implementation is more complicated than, for instance, the Gillespie algorithm, which is widely used to simulate the chemical master equation, and can be implemented with a few lines of code. Here, we present an algorithm which does not modify the way in which the master equation is solved, but instead modifies the transition rates. It works for all models in which reversible reactions occur by replacing such reversible reactions with effective net reactions. Examples of such systems include reaction-diffusion systems, in which diffusion is modelled by a random walk. The random movement of particles between neighbouring sites is then replaced with a net random flux. Furthermore, as we modify the transition rates of the model, rather than its implementation on a computer, our method can be combined with existing algorithms that were designed to speed up simulations of the stochastic master equation. By focusing on some specific models, we show how our algorithm can significantly speed up model simulations while maintaining essential features of the original model.
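    For reference, the Gillespie direct method the abstract alludes to really does fit in a few lines; the optimisation described here would change only the rate table (replacing a reversible pair with its effective net reaction), not this loop. A sketch for a reversible isomerisation A <-> B with illustrative rates:

```python
import random

def gillespie(rates, stoich, x0, t_end, seed=42):
    # Gillespie direct method: draw an exponential waiting time with total
    # rate a0, then pick a reaction with probability proportional to its
    # propensity and apply its stoichiometry.
    rng = random.Random(seed)
    x, t = list(x0), 0.0
    while True:
        props = [r(x) for r in rates]
        a0 = sum(props)
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)
        if t > t_end:
            break
        u, cum = rng.random() * a0, 0.0
        for pick, p in enumerate(props):
            cum += p
            if u < cum:
                break
        for i, d in enumerate(stoich[pick]):
            x[i] += d
    return x

rates  = [lambda x: 1.0 * x[0],   # A -> B, forward rate 1.0
          lambda x: 0.5 * x[1]]   # B -> A, backward rate 0.5
stoich = [(-1, +1), (+1, -1)]
final = gillespie(rates, stoich, [100, 0], 10.0)
```

In this example the two opposing reactions are exactly the kind of pair the paper's method would fuse into a single net reaction, thinning the event stream without touching the solver.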

  9. Cubic-scaling algorithm and self-consistent field for the random-phase approximation with second-order screened exchange.

    PubMed

    Moussa, Jonathan E

    2014-01-07

    The random-phase approximation with second-order screened exchange (RPA+SOSEX) is a model of electron correlation energy with two caveats: its accuracy depends on an arbitrary choice of mean field, and it scales as O(n^5) operations and O(n^3) memory for n electrons. We derive a new algorithm that reduces its scaling to O(n^3) operations and O(n^2) memory using controlled approximations and a new self-consistent field that approximates Brueckner coupled-cluster doubles theory with RPA+SOSEX, referred to as Brueckner RPA theory. The algorithm comparably reduces the scaling of second-order Møller-Plesset perturbation theory with smaller cost prefactors than RPA+SOSEX. Within a semiempirical model, we study H2 dissociation to test accuracy and Hn rings to verify scaling.

  10. Automated Flight Routing Using Stochastic Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ng, Hok K.; Morando, Alex; Grabbe, Shon

    2010-01-01

    Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm, based on stochastic dynamic programming, that reroutes flights in the presence of winds, en route convective weather, and congested airspace. A stochastic disturbance model incorporates capacity uncertainty into the reroute design process. A trajectory-based airspace demand model is employed to calculate current and future airspace demand. The optimal routes minimize the total expected travel time, weather incursion, and induced congestion costs. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have a smaller deviation probability than their deterministic counterparts when both have similar total flight distance. The stochastic rerouting algorithm accounts for all convective weather fields at all severity levels, while the deterministic algorithm only accounts for convective weather systems exceeding a specified severity level. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight times, and both spend about 1% of travel time crossing congested en route sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.

  11. Stochastic uncertainty analysis for solute transport in randomly heterogeneous media using a Karhunen-Loève-based moment equation approach

    USGS Publications Warehouse

    Liu, Gaisheng; Lu, Zhiming; Zhang, Dongxiao

    2007-01-01

    A new approach has been developed for solving solute transport problems in randomly heterogeneous media using the Karhunen-Loève-based moment equation (KLME) technique proposed by Zhang and Lu (2004). The KLME approach combines the Karhunen-Loève decomposition of the underlying random conductivity field and the perturbative and polynomial expansions of dependent variables including the hydraulic head, flow velocity, dispersion coefficient, and solute concentration. The equations obtained in this approach are sequential, and their structure is formulated in the same form as the original governing equations such that any existing simulator, such as Modular Three-Dimensional Multispecies Transport Model for Simulation of Advection, Dispersion, and Chemical Reactions of Contaminants in Groundwater Systems (MT3DMS), can be directly applied as the solver. Through a series of two-dimensional examples, the validity of the KLME approach is evaluated against the classical Monte Carlo simulations. Results indicate that under the flow and transport conditions examined in this work, the KLME approach provides an accurate representation of the mean concentration. For the concentration variance, the accuracy of the KLME approach is good when the conductivity variance is 0.5. As the conductivity variance increases up to 1.0, the mismatch on the concentration variance becomes large, although the mean concentration can still be accurately reproduced by the KLME approach. Our results also indicate that when the conductivity variance is relatively large, neglecting the effects of the cross terms between velocity fluctuations and local dispersivities, as done in some previous studies, can produce noticeable errors, and a rigorous treatment of the dispersion terms becomes more appropriate.

  12. Multiple Stochastic Point Processes in Gene Expression

    NASA Astrophysics Data System (ADS)

    Murugan, Rajamanickam

    2008-04-01

    We generalize the idea of multiple stochasticity in chemical reaction systems to gene expression. Using a Chemical Langevin Equation approach, we investigate how this multiple stochasticity can influence the overall fluctuations in molecular numbers. We show that the main sources of this multiple stochasticity in gene expression are the randomness in transcription and translation initiation times, which in turn originates from the underlying bio-macromolecular recognition processes, such as site-specific DNA-protein interactions, and can therefore be internally regulated by supra-molecular structural factors such as the condensation/super-coiling of DNA. Our theory predicts that (1) in a gene expression system, the variance (φ) introduced by the randomness in transcription and translation initiation times scales with the degree of condensation (s) of DNA or mRNA approximately as φ ∝ s^{-6}. From the theoretical analysis of the Fano factor and the coefficient of variation associated with the protein number fluctuations, we predict that (2) unlike the singly stochastic case, where the Fano factor has been shown to be a monotonic function of the translation rate, in multiple-stochastic gene expression the Fano factor is a turnover function with a definite minimum. This in turn suggests that multiple-stochastic processes can be tuned to behave like singly stochastic point processes by adjusting the rate parameters.
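    The Fano factor used here is Var(N)/E[N]: it equals 1 for a Poisson process and exceeds 1 when expression is bursty. A sketch contrasting the two regimes (the burst statistics are illustrative, not the paper's model):

```python
import math
import random

rng = random.Random(5)

def fano(samples):
    # Fano factor: sample variance of the counts divided by their mean.
    n = len(samples)
    mu = sum(samples) / n
    var = sum((s - mu) ** 2 for s in samples) / (n - 1)
    return var / mu

def poisson(lam):
    # Knuth's method: multiply uniforms until the product drops below e^-lam.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < limit:
            return k
        k += 1

def geometric(p):
    # Number of Bernoulli(p) trials up to and including the first success.
    k = 1
    while rng.random() > p:
        k += 1
    return k

poisson_counts = [poisson(20.0) for _ in range(2000)]
# Compound (bursty) expression: a Poisson number of bursts per cell,
# each burst producing a geometric number of proteins.
bursty_counts = [sum(geometric(0.2) for _ in range(poisson(4.0)))
                 for _ in range(2000)]
```

The Poisson counts give a Fano factor near 1, while the compound process is strongly super-Poissonian, which is the kind of contrast the predicted turnover behaviour modulates.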

  13. Some stochastic aspects of intergranular creep cavitation

    SciTech Connect

    Fariborz, S.J.; Farris, J.P.; Harlow, D.G.; Delph, T.J.

    1987-10-01

    We present some results obtained from a simplified stochastic model of intergranular creep cavitation. The probabilistic features of the model arise from the inclusion of random cavity placement on the grain boundary and time-discrete stochastic cavity nucleation. Among the predictions of the model are Weibull-distributed creep rupture failure times and a Weibull distribution of cavity radii. Both of these predictions have qualitative experimental support. 18 refs., 7 figs.

  14. Structural factoring approach for analyzing stochastic networks

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.; Shier, Douglas R.

    1991-01-01

    The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
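    On a network small enough to enumerate, the exact shortest-path-length distribution that the factoring algorithm targets can be computed directly, which makes a useful reference point. A sketch with made-up two-point edge-length distributions (conditional factoring itself, which avoids this full enumeration, is not shown):

```python
from itertools import product

# Directed network s -> {a, b} -> t; each edge length takes one of two
# equally likely values (illustrative data, not from the article).
edges = {
    ("s", "a"): [1, 3],
    ("s", "b"): [2, 2],
    ("a", "t"): [1, 5],
    ("b", "t"): [1, 3],
}

def shortest_st(lengths):
    # Only two s -> t paths exist in this toy network; take the shorter.
    sa, sb, at, bt = lengths
    return min(sa + at, sb + bt)

keys = list(edges)
counts = {}
for combo in product(*(edges[k] for k in keys)):
    L = shortest_st(combo)
    counts[L] = counts.get(L, 0) + 1
total = sum(counts.values())
dist = {L: c / total for L, c in sorted(counts.items())}
```

Here `dist` maps each possible shortest-path length to its exact probability; the factoring approach recovers the same distribution while decomposing the network instead of enumerating every joint realization.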

  15. A multilevel stochastic collocation method for SPDEs

    SciTech Connect

    Gunzburger, Max; Jantsch, Peter; Teckentrup, Aretha; Webster, Clayton

    2015-03-10

    We present a multilevel stochastic collocation method that, like multilevel Monte Carlo methods, uses a hierarchy of spatial approximations to reduce the overall computational complexity of solving partial differential equations with random inputs. For approximation in parameter space, a hierarchy of multi-dimensional interpolants of increasing fidelity is used. Rigorous convergence and computational cost estimates for the new multilevel stochastic collocation method are derived and used to demonstrate its advantages over standard single-level stochastic collocation approximations as well as multilevel Monte Carlo methods.
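    The multilevel idea (many samples on cheap coarse approximations, few on expensive fine ones, coupled through a telescoping sum) fits in a few lines for a toy quantity of interest. A sketch estimating the expectation over θ ~ U(0,1) of the integral of e^(θx) on [0,1], using midpoint rules of increasing resolution; the PDE setting of the paper is replaced by this scalar stand-in:

```python
import math
import random

def midpoint(theta, level):
    # Level-l approximation: midpoint rule with 2^l cells for the
    # integral of e^(theta*x) over [0, 1].
    n = 2 ** level
    h = 1.0 / n
    return h * sum(math.exp(theta * (i + 0.5) * h) for i in range(n))

def mlmc(levels=4, n0=20000, seed=11):
    rng = random.Random(seed)
    est = 0.0
    for l in range(levels + 1):
        n = max(n0 // 4 ** l, 100)      # fewer samples at finer, costlier levels
        s = 0.0
        for _ in range(n):
            theta = rng.random()
            fine = midpoint(theta, l)
            coarse = midpoint(theta, l - 1) if l > 0 else 0.0
            s += fine - coarse          # same theta couples the two levels
        est += s / n                    # telescoping sum over levels
    return est

estimate = mlmc()   # exact value is sum over k of 1/(k*k!) ≈ 1.3179
```

Because the coupled level differences have small variance, almost all sampling effort stays on the coarsest level; a multilevel collocation method replaces the random sampling per level with sparse interpolation but keeps the same telescoping structure.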

  16. Influence of atmospherically induced random wave fronts on diffraction imagery - A computer simulation model for testing image reconstruction algorithms

    NASA Technical Reports Server (NTRS)

    Barakat, Richard; Beletic, James W.

    1990-01-01

    This paper is devoted to the development of a two-dimensional computer-simulation model that is based on the rigid constraints of optical diffraction theory with careful attention paid to the generation of sample realizations of Gaussian-distributed, spatially random, isotropic wave fronts that have zero-mean and prescribed-covariance functions. Given a sample realization of the wave front, the corresponding centered point-spread function and optical-transfer function are evaluated. A detailed study is made of the statistics of random wave-front tilt, point-spread function, modulus squared of transfer function, and phase of transfer function.

  17. Connecting the dots: Semi-analytical and random walk numerical solutions of the diffusion–reaction equation with stochastic initial conditions

    SciTech Connect

    Paster, Amir; Bolster, Diogo; Benson, David A.

    2014-04-15

    We study a system with bimolecular irreversible kinetic reaction A+B→∅ where the underlying transport of reactants is governed by diffusion, and the local reaction term is given by the law of mass action. We consider the case where the initial concentrations are given in terms of an average and a white noise perturbation. Our goal is to solve the diffusion–reaction equation which governs the system, and we tackle it with both analytical and numerical approaches. To obtain an analytical solution, we develop the equations of moments and solve them approximately. To obtain a numerical solution, we develop a grid-less Monte Carlo particle tracking approach, where diffusion is modeled by a random walk of the particles, and reaction is modeled by annihilation of particles. The probability of annihilation is derived analytically from the particles' co-location probability. We rigorously derive the relationship between the initial number of particles in the system and the amplitude of white noise represented by that number. This enables us to compare the particle simulations and the approximate analytical solution and offer an explanation of the late time discrepancies.
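    The grid-less particle-tracking idea can be caricatured in one dimension: A and B particles random-walk, and an A-B pair that comes within a cutoff annihilates. The analytically derived co-location probability used in the paper is replaced here by a crude distance cutoff, so this is a sketch of the mechanism only:

```python
import random

def annihilation_walk(n_pairs=200, steps=200, dx=1.0, rcut=0.5, seed=9):
    # A + B -> 0: particles diffuse by a Gaussian random walk starting on
    # [0, 100]; each step, every A annihilates at most one B within rcut.
    rng = random.Random(seed)
    A = [rng.uniform(0, 100) for _ in range(n_pairs)]
    B = [rng.uniform(0, 100) for _ in range(n_pairs)]
    for _ in range(steps):
        A = [a + rng.gauss(0.0, dx) for a in A]
        B = [b + rng.gauss(0.0, dx) for b in B]
        keep_a, used_b = [], set()
        for a in A:
            hit = next((j for j, b in enumerate(B)
                        if j not in used_b and abs(a - b) < rcut), None)
            if hit is None:
                keep_a.append(a)
            else:
                used_b.add(hit)                 # this A-B pair annihilates
        A = keep_a
        B = [b for j, b in enumerate(B) if j not in used_b]
    return len(A), len(B)

na, nb = annihilation_walk()
```

Pairwise annihilation conserves the difference of the two populations, mirroring the stoichiometry of A+B→∅; the paper's probabilistic annihilation rule refines the cutoff into a proper co-location probability.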

  18. Efficient stochastic simulation of biochemical reactions with noise and delays

    NASA Astrophysics Data System (ADS)

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2017-02-01

    The stochastic simulation algorithm has been used to generate exact trajectories of biochemical reaction networks. At each simulation step, the algorithm selects a reaction and its firing time with probability proportional to the reaction propensity. In this paper, we investigate new, more efficient formulations of the stochastic simulation algorithm. We examine the selection of the next reaction firing and reduce its computational cost by reusing computation from the previous step. For biochemical reactions with delays, we present a new method for computing the firing time of the next reaction, based on recycling random numbers. Our new approach for generating the firing time of the next reaction is not only computationally efficient but also easy to implement. We further analyze and reduce the number of propensity updates when a delayed reaction occurs. We demonstrate the applicability of our improvements in experiments with concrete biological models.
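    Handling delays amounts to keeping a queue of pending product releases alongside the usual SSA clock. A minimal sketch with a single delayed conversion A -> B (rate and delay are illustrative; the paper's random-number recycling is not shown, and redrawing the waiting time after a completion is justified by the memorylessness of the exponential):

```python
import heapq
import random

def delayed_ssa(x0, t_end, rate=0.5, tau=1.0, seed=2):
    # SSA with one delayed reaction: A -> B initiates like an ordinary
    # Gillespie event, but the product B appears only tau time units later.
    rng = random.Random(seed)
    x = dict(x0)
    t, pending = 0.0, []        # min-heap of scheduled completion times
    while True:
        a0 = rate * x["A"]
        t_next = t + rng.expovariate(a0) if a0 > 0 else float("inf")
        if pending and pending[0] <= min(t_next, t_end):
            t = heapq.heappop(pending)        # a deferred product is released
            x["B"] += 1
        elif t_next <= t_end:
            t = t_next
            x["A"] -= 1                       # reactant consumed at initiation
            heapq.heappush(pending, t + tau)  # product deferred by tau
        else:
            break
    return x

out = delayed_ssa({"A": 50, "B": 0}, t_end=100.0)
```

The heap keeps the next completion cheap to find, which is where the paper's reduction in propensity updates after a delayed firing pays off.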

  19. Aesthetic considerations in algorithmic and generative composition

    NASA Astrophysics Data System (ADS)

    Hagan, Kerry L.

    Models of chance operations, random equations, stochastic processes, and chaos systems have inspired composers as far back as Wolfgang Amadeus Mozart. As these models advance and new processes are discovered or defined, composers continue to find new inspiration for musical composition. Yet the relative artistic merits of some of these works are limited. This paper explores the application of extra-musical processes to the sonic arts and proposes aesthetic considerations from the point of view of the artist. Musical examples demonstrate possibilities for working successfully with algorithmic and generative processes in sound, from formal decisions to synthesis.

  20. Joint inversion of marine seismic AVA and CSEM data using statistical rock-physics models and Markov random fields: Stochastic inversion of AVA and CSEM data

    SciTech Connect

    Chen, J.; Hoversten, G.M.

    2011-09-15

    Joint inversion of seismic AVA and CSEM data requires rock-physics relationships to link seismic attributes to electrical properties. Ideally, we can connect them through reservoir parameters (e.g., porosity and water saturation) by developing physics-based models, such as Gassmann's equations and Archie's law, using nearby borehole logs. This could be difficult in the exploration stage because the information available is typically insufficient for choosing suitable rock-physics models and for subsequently obtaining reliable estimates of the associated parameters. The use of improper rock-physics models and inaccurate estimates of model parameters may produce misleading inversion results. Conversely, it is easy to derive statistical relationships among seismic and electrical attributes and reservoir parameters from distant borehole logs. In this study, we develop a Bayesian model to jointly invert seismic AVA and CSEM data for reservoir parameter estimation using statistical rock-physics models; the spatial dependence of geophysical and reservoir parameters is captured by lithotypes through Markov random fields. We apply the developed model to a synthetic case, which simulates a CO2 monitoring application. We derive statistical rock-physics relations from borehole logs at one location and estimate the seismic P- and S-wave velocity ratio, acoustic impedance, density, electrical resistivity, lithotypes, porosity, and water saturation at three different locations by conditioning to seismic AVA and CSEM data. Comparison of the inversion results with their corresponding true values shows that the correlation-based statistical rock-physics models provide significant information for improving the joint inversion results.

  1. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data

  2. Classical analogs of quasifree quantum stochastic processes given by stochastic states of the quantized electromagnetic field

    NASA Astrophysics Data System (ADS)

    Hertfelder, C.; Kümmerer, B.

    2001-03-01

    The mathematical model describing a light beam prepared in an arbitrary quantum optical state is a quasifree quantum stochastic process on the C* algebra of the canonical commutation relations. For such quantum stochastic processes the concept of stochastic states is introduced. Stochastic quantum states have a classical analog in the following sense: If the light beam is prepared in a stochastic state, one can construct a generalized classical stochastic process, such that the distributions of the quantum observables and the classical random variables coincide. A sufficient algebraic condition for the stochasticity of a quantum state is formulated. The introduced formalism generalizes the Wigner representation from a single field mode to a continuum of modes. For the special case of a single field mode the stochasticity condition provides a new criterion for the positivity of the Wigner function related to the given state. As an example the quantized electromagnetic field in empty space at temperature T=0 is discussed. It turns out that the corresponding classical stochastic process is not a white noise but a colored noise with a linearly increasing spectrum.

  3. Stochastic inverse problems: Models and metrics

    SciTech Connect

    Sabbagh, Elias H.; Sabbagh, Harold A.; Murphy, R. Kim; Aldrin, John C.; Annis, Charles; Knopp, Jeremy S.

    2015-03-31

    In past work, we introduced model-based inverse methods, and applied them to problems in which the anomaly could be reasonably modeled by simple canonical shapes, such as rectangular solids. In these cases the parameters to be inverted would be length, width and height, as well as the occasional probe lift-off or rotation. We are now developing a formulation that allows more flexibility in modeling complex flaws. The idea consists of expanding the flaw in a sequence of basis functions, and then solving for the expansion coefficients of this sequence, which are modeled as independent random variables, uniformly distributed over their range of values. There are a number of applications of such modeling: 1. Connected cracks and multiple half-moons, which we have noted in a POD set. Ideally we would like to distinguish connected cracks from one long shallow crack. 2. Cracks of irregular profile and shape which have appeared in cold work holes during bolt-hole eddy-current inspection. One side of such cracks is much deeper than the other. 3. L- or C-shaped crack profiles at the surface, examples of which have been seen in bolt-hole cracks. By formulating problems in a stochastic sense, we are able to leverage the stochastic global optimization algorithms in NLSE, which is resident in VIC-3D®, to answer questions of global minimization and to compute confidence bounds using the sensitivity coefficients that we get from NLSE. We will also address the issue of surrogate functions which are used during the inversion process, and how they contribute to the quality of the estimation of the bounds.

  4. Numerical methods for the stochastic Landau-Lifshitz Navier-Stokes equations.

    PubMed

    Bell, John B; Garcia, Alejandro L; Williams, Sarah A

    2007-07-01

    The Landau-Lifshitz Navier-Stokes (LLNS) equations incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. This paper examines explicit Eulerian discretizations of the full LLNS equations. Several computational fluid dynamics approaches are considered (including MacCormack's two-step Lax-Wendroff scheme and the piecewise parabolic method) and are found to give good results for the variance of momentum fluctuations. However, neither of these schemes accurately reproduces the fluctuations in energy or density. We introduce a conservative centered scheme with a third-order Runge-Kutta temporal integrator that does accurately produce fluctuations in density, energy, and momentum. A variety of numerical tests, including the random walk of a standing shock wave, are considered and results from the stochastic LLNS solver are compared with theory, when available, and with molecular simulations using a direct simulation Monte Carlo algorithm.

  5. A probabilistic graphical model approach to stochastic multiscale partial differential equations

    SciTech Connect

    Wan, Jiang; Zabaras, Nicholas

    2013-10-01

    We develop a probabilistic graphical model based methodology to efficiently perform uncertainty quantification in the presence of both stochastic input and multiple scales. Both the stochastic input and model responses are treated as random variables in this framework. Their relationships are modeled by graphical models which give explicit factorization of a high-dimensional joint probability distribution. The hyperparameters in the probabilistic model are learned using a sequential Monte Carlo (SMC) method, which is superior to standard Markov chain Monte Carlo (MCMC) methods for multi-modal distributions. Finally, we make predictions from the probabilistic graphical model using the belief propagation algorithm. Numerical examples are presented to show the accuracy and efficiency of the predictive capability of the developed graphical model.

  6. Peculiarities of stochastic regime of Arctic ice cover time evolution over 1987-2014 from microwave satellite sounding on the basis of NASA team 2 algorithm

    NASA Astrophysics Data System (ADS)

    Raev, M. D.; Sharkov, E. A.; Tikhonov, V. V.; Repina, I. A.; Komarova, N. Yu.

    2015-12-01

    The GLOBAL-RT database (DB) is composed of long-term radio heat multichannel observation data received from DMSP F08-F17 satellites; it is permanently supplemented with new data on the Earth's exploration from the space department of the Space Research Institute, Russian Academy of Sciences. Arctic ice-cover areas for regions higher than 60° N latitude were calculated using the DB polar version and the NASA Team 2 algorithm, which is widely used in foreign scientific literature. According to the analysis of variability of Arctic ice cover during 1987-2014, two months were selected when the Arctic ice cover was maximal (February) and minimal (September), and the average ice cover area was calculated for these months. Confidence intervals of the average values lie within the 95-98% limits. Several approximations are derived for the time dependences of the ice-cover maximum and minimum over the period under study. Regression dependences were calculated for polynomials from the first degree (linear) to sextic. It was ascertained that the minimal root-mean-square error of deviation from the approximated curve sharply decreased for the biquadratic polynomial and then varied insignificantly: from 0.5593 for the polynomial of third degree to 0.4560 for the biquadratic polynomial. Hence, the commonly used strictly linear regression with a negative time gradient for the September Arctic ice cover minimum over 30 years should be considered incorrect.

  7. Dynamic Response Analysis of Fuzzy Stochastic Truss Structures under Fuzzy Stochastic Excitation

    NASA Astrophysics Data System (ADS)

    Ma, Juan; Chen, Jian-Jun; Gao, Wei

    2006-08-01

    A novel method (Fuzzy factor method) is presented, which is used in the dynamic response analysis of fuzzy stochastic truss structures under fuzzy stochastic step loads. Considering the fuzzy randomness of structural physical parameters, geometric dimensions and the amplitudes of step loads simultaneously, fuzzy stochastic dynamic response of the truss structures is developed using the mode superposition method and fuzzy factor method. The fuzzy numerical characteristics of dynamic response are then obtained by using the random variable’s moment method and the algebra synthesis method. The influences of the fuzzy randomness of structural physical parameters, geometric dimensions and step load on the fuzzy randomness of the dynamic response are demonstrated via an engineering example, and Monte-Carlo method is used to simulate this example, verifying the feasibility and validity of the modeling and method given in this paper.

  8. Estimation on the influence of uncertain parameters on stochastic thermal regime of embankment in permafrost regions

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhao, Xiaodong; Chen, Xing

    2017-03-01

    For embankments in permafrost regions, the soil properties and the upper boundary conditions are stochastic because of complex geological processes and a changeable atmospheric environment. These stochastic parameters mean that the conventionally deterministic temperature field of the embankment becomes stochastic. In order to estimate the influence of stochastic parameters on the random temperature field for embankments in permafrost regions, a series of simulated tests are conducted in this study. We consider the soil properties as random fields and the upper boundary conditions as stochastic processes. Taking the variability of each stochastic parameter into account individually or concurrently, the corresponding random temperature fields are investigated by the Neumann stochastic finite element method. The results show that the standard deviations both under the embankment and at the boundary increase with time when considering the stochastic effect of soil properties and boundary conditions. Stochastic boundary conditions and soil properties play a different role in the random temperature field of the embankment at different times. Each stochastic parameter has a different effect on the random temperature field. These results can improve our understanding of the influence of stochastic parameters on the random temperature field for embankments in permafrost regions.

  9. Mechanical Autonomous Stochastic Heat Engine.

    PubMed

    Serra-Garcia, Marc; Foehr, André; Molerón, Miguel; Lydon, Joseph; Chong, Christopher; Daraio, Chiara

    2016-07-01

    Stochastic heat engines are devices that generate work from random thermal motion using a small number of highly fluctuating degrees of freedom. Proposals for such devices have existed for more than a century and include the Maxwell demon and the Feynman ratchet. Only recently have they been demonstrated experimentally, using, e.g., thermal cycles implemented in optical traps. However, recent experimental demonstrations of classical stochastic heat engines are nonautonomous, since they require an external control system that prescribes a heating and cooling cycle and consume more energy than they produce. We present a heat engine consisting of three coupled mechanical resonators (two ribbons and a cantilever) subject to a stochastic drive. The engine uses geometric nonlinearities in the resonating ribbons to autonomously convert a random excitation into a low-entropy, nonpassive oscillation of the cantilever. The engine presents the anomalous heat transport property of negative thermal conductivity, consisting in the ability to passively transfer energy from a cold reservoir to a hot reservoir.

  10. Mechanical Autonomous Stochastic Heat Engine

    NASA Astrophysics Data System (ADS)

    Serra-Garcia, Marc; Foehr, André; Molerón, Miguel; Lydon, Joseph; Chong, Christopher; Daraio, Chiara

    2016-07-01

    Stochastic heat engines are devices that generate work from random thermal motion using a small number of highly fluctuating degrees of freedom. Proposals for such devices have existed for more than a century and include the Maxwell demon and the Feynman ratchet. Only recently have they been demonstrated experimentally, using, e.g., thermal cycles implemented in optical traps. However, recent experimental demonstrations of classical stochastic heat engines are nonautonomous, since they require an external control system that prescribes a heating and cooling cycle and consume more energy than they produce. We present a heat engine consisting of three coupled mechanical resonators (two ribbons and a cantilever) subject to a stochastic drive. The engine uses geometric nonlinearities in the resonating ribbons to autonomously convert a random excitation into a low-entropy, nonpassive oscillation of the cantilever. The engine presents the anomalous heat transport property of negative thermal conductivity, consisting in the ability to passively transfer energy from a cold reservoir to a hot reservoir.

  11. Multiscale stochastic simulations of chemical reactions with regulated scale separation

    NASA Astrophysics Data System (ADS)

    Koumoutsakos, Petros; Feigelman, Justin

    2013-07-01

    We present a coupling of multiscale frameworks with accelerated stochastic simulation algorithms for systems of chemical reactions with disparate propensities. The algorithms regulate the propensities of the fast and slow reactions of the system, using alternating micro and macro sub-steps simulated with accelerated algorithms such as τ and R-leaping. The proposed algorithms are shown to provide significant speedups in simulations of stiff systems of chemical reactions with a trade-off in accuracy as controlled by a regulating parameter. More importantly, the error of the methods exhibits a cutoff phenomenon that allows for optimal parameter choices. Numerical experiments demonstrate that hybrid algorithms involving accelerated stochastic simulations can be, in certain cases, more accurate while faster, than their corresponding stochastic simulation algorithm counterparts.
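As a rough illustration of the accelerated-simulation idea behind τ-leaping (a sketch only; the reaction, rate constant, and function names below are illustrative, not from the paper), the number of reaction firings over a leap of length τ is drawn from a Poisson distribution with mean equal to the propensity times τ:

```python
import numpy as np

def tau_leap_decay(x0, k, tau, t_end, seed=0):
    """Basic tau-leaping for the decay reaction X -> 0 with rate k.

    At each leap the number of firings is Poisson with mean a(x)*tau,
    where a(x) = k*x is the propensity of the decay channel.
    """
    rng = np.random.default_rng(seed)
    x, t = x0, 0.0
    traj = [(0.0, x0)]
    while t < t_end and x > 0:
        firings = rng.poisson(k * x * tau)
        x = max(x - firings, 0)   # clamp to avoid a negative population
        t += tau
        traj.append((t, x))
    return traj

traj = tau_leap_decay(x0=1000, k=1.0, tau=0.01, t_end=2.0)
# the mean of X(t) should roughly track 1000 * exp(-k*t)
```

The trade-off controlled by τ is visible here: larger leaps mean fewer Poisson draws but a larger discretization error in the propensities.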

  12. Maximum caliber inference and the stochastic Ising model.

    PubMed

    Cafaro, Carlo; Ali, Sean Alan

    2016-11-01

    We investigate the maximum caliber variational principle as an inference algorithm used to predict dynamical properties of complex nonequilibrium, stationary, statistical systems in the presence of incomplete information. Specifically, we maximize the path entropy over discrete time step trajectories subject to normalization, stationarity, and detailed balance constraints together with a path-dependent dynamical information constraint reflecting a given average global behavior of the complex system. A general expression for the transition probability values associated with the stationary random Markov processes describing the nonequilibrium stationary system is computed. By virtue of our analysis, we uncover that a convenient choice of the dynamical information constraint together with a perturbative asymptotic expansion with respect to its corresponding Lagrange multiplier of the general expression for the transition probability leads to a formal overlap with the well-known Glauber hyperbolic tangent rule for the transition probability for the stochastic Ising model in the limit of very high temperatures of the heat reservoir.
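The Glauber hyperbolic tangent rule mentioned above can be sketched for a one-dimensional Ising chain (a minimal illustration; the chain length, coupling, and temperature are assumptions, not from the paper):

```python
import math
import random

def glauber_step(spins, J, beta, rng):
    """One Glauber update of a periodic 1-D Ising chain with coupling J.

    The chosen spin s_i flips with probability
    (1/2) * [1 - s_i * tanh(beta * h_i)], where h_i = J*(s_{i-1} + s_{i+1})
    is the local field -- the hyperbolic tangent rule.
    """
    n = len(spins)
    i = rng.randrange(n)
    h = J * (spins[(i - 1) % n] + spins[(i + 1) % n])
    if rng.random() < 0.5 * (1.0 - spins[i] * math.tanh(beta * h)):
        spins[i] = -spins[i]
    return spins

rng = random.Random(42)
spins = [1] * 100
for _ in range(10000):
    glauber_step(spins, J=1.0, beta=0.1, rng=rng)
# at high temperature (small beta) the magnetization decays toward zero,
# consistent with the very-high-temperature limit discussed in the abstract
```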

  13. Maximum caliber inference and the stochastic Ising model

    NASA Astrophysics Data System (ADS)

    Cafaro, Carlo; Ali, Sean Alan

    2016-11-01

    We investigate the maximum caliber variational principle as an inference algorithm used to predict dynamical properties of complex nonequilibrium, stationary, statistical systems in the presence of incomplete information. Specifically, we maximize the path entropy over discrete time step trajectories subject to normalization, stationarity, and detailed balance constraints together with a path-dependent dynamical information constraint reflecting a given average global behavior of the complex system. A general expression for the transition probability values associated with the stationary random Markov processes describing the nonequilibrium stationary system is computed. By virtue of our analysis, we uncover that a convenient choice of the dynamical information constraint together with a perturbative asymptotic expansion with respect to its corresponding Lagrange multiplier of the general expression for the transition probability leads to a formal overlap with the well-known Glauber hyperbolic tangent rule for the transition probability for the stochastic Ising model in the limit of very high temperatures of the heat reservoir.

  14. Stochastic cooling in RHIC

    SciTech Connect

    Brennan,J.M.; Blaskiewicz, M. M.; Severino, F.

    2009-05-04

    After the success of longitudinal stochastic cooling of bunched heavy ion beam in RHIC, transverse stochastic cooling in the vertical plane of Yellow ring was installed and is being commissioned with proton beam. This report presents the status of the effort and gives an estimate, based on simulation, of the RHIC luminosity with stochastic cooling in all planes.

  15. Stochasticity and determinism in models of hematopoiesis.

    PubMed

    Kimmel, Marek

    2014-01-01

    This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.

  16. A termination criterion for parameter estimation in stochastic models in systems biology.

    PubMed

    Zimmer, Christoph; Sahle, Sven

    2015-11-01

    Parameter estimation procedures are a central aspect of modeling approaches in systems biology. They are often computationally expensive, especially when the models take stochasticity into account. Typically parameter estimation involves the iterative optimization of an objective function that describes how well the model fits some measured data with a certain set of parameter values. In order to limit the computational expenses it is therefore important to apply an adequate stopping criterion for the optimization process, so that the optimization continues at least until a reasonable fit is obtained, but not much longer. In the case of stochastic modeling, at least some parameter estimation schemes involve an objective function that is itself a random variable. This means that plain convergence tests are not a priori suitable as stopping criteria. This article suggests a termination criterion suited to optimization problems in parameter estimation arising from stochastic models in systems biology. The termination criterion is developed for optimization algorithms that involve populations of parameter sets, such as particle swarm or evolutionary algorithms. It is based on comparing the variance of the objective function over the whole population of parameter sets with the variance of repeated evaluations of the objective function at the best parameter set. The performance is demonstrated for several different algorithms. To test the termination criterion we choose polynomial test functions as well as systems biology models such as an Immigration-Death model and a bistable genetic toggle switch. The genetic toggle switch is an especially challenging test case as it shows a stochastic switching between two steady states which is qualitatively different from the model behavior in a deterministic model.
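A minimal sketch of such a variance-based stopping rule (the function name, inputs, and tolerance factor are illustrative assumptions, not the authors' exact criterion):

```python
import statistics

def should_terminate(population_scores, best_replicate_scores, factor=1.0):
    """Stop when the spread of objective values across the population
    is no larger than the noise observed by re-evaluating the stochastic
    objective repeatedly at the current best parameter set.
    """
    pop_var = statistics.variance(population_scores)
    noise_var = statistics.variance(best_replicate_scores)
    return pop_var <= factor * noise_var

# population nearly converged: its spread is comparable to objective noise
assert should_terminate([1.0, 1.1, 0.9, 1.05], [1.0, 1.2, 0.8, 1.1])
# population still spread out relative to the noise at the best point
assert not should_terminate([1.0, 5.0, 9.0, 3.0], [1.0, 1.05, 0.95, 1.02])
```

The point of comparing against repeated evaluations at the best parameter set is that a plain convergence test on a noisy objective would never (or spuriously) trigger.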

  17. Stochastic approximation boosting for incomplete data problems.

    PubMed

    Sexton, Joseph; Laake, Petter

    2009-12-01

    Boosting is a powerful approach to fitting regression models. This article describes a boosting algorithm for likelihood-based estimation with incomplete data. The algorithm combines boosting with a variant of stochastic approximation that uses Markov chain Monte Carlo to deal with the missing data. Applications to fitting generalized linear and additive models with missing covariates are given. The method is applied to the Pima Indians Diabetes Data where over half of the cases contain missing values.

  18. Stochastic many-body perturbation theory for anharmonic molecular vibrations

    SciTech Connect

    Hermes, Matthew R.; Hirata, So

    2014-08-28

    A new quantum Monte Carlo (QMC) method for anharmonic vibrational zero-point energies and transition frequencies is developed, which combines the diagrammatic vibrational many-body perturbation theory based on the Dyson equation with Monte Carlo integration. The infinite sums of the diagrammatic and thus size-consistent first- and second-order anharmonic corrections to the energy and self-energy are expressed as sums of a few m- or 2m-dimensional integrals of wave functions and a potential energy surface (PES) (m is the vibrational degrees of freedom). Each of these integrals is computed as the integrand (including the value of the PES) divided by the value of a judiciously chosen weight function evaluated on demand at geometries distributed randomly but according to the weight function via the Metropolis algorithm. In this way, the method completely avoids cumbersome evaluation and storage of high-order force constants necessary in the original formulation of the vibrational perturbation theory; it furthermore allows even higher-order force constants essentially up to an infinite order to be taken into account in a scalable, memory-efficient algorithm. The diagrammatic contributions to the frequency-dependent self-energies that are stochastically evaluated at discrete frequencies can be reliably interpolated, allowing the self-consistent solutions to the Dyson equation to be obtained. This method, therefore, can compute directly and stochastically the transition frequencies of fundamentals and overtones as well as their relative intensities as pole strengths, without fixed-node errors that plague some QMC. It is shown that, for an identical PES, the new method reproduces the correct deterministic values of the energies and frequencies within a few cm⁻¹ and pole strengths within a few thousandths. With the values of a PES evaluated on the fly at random geometries, the new method captures a noticeably greater proportion of anharmonic effects.
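The core sampling device described above, drawing points from a weight function via the Metropolis algorithm and averaging the integrand divided by the weight, can be sketched in one dimension (a toy illustration; the integrand and weight below are made up, not a vibrational PES):

```python
import math
import random

def metropolis_integral(f, w, x0=0.0, n=200000, step=1.0, seed=3):
    """Estimate the integral of f by sampling from the normalized weight
    density w with a Metropolis random walk and averaging f(x)/w(x)."""
    rng = random.Random(seed)
    x, acc = x0, 0.0
    for _ in range(n):
        y = x + rng.uniform(-step, step)          # symmetric proposal
        if rng.random() < min(1.0, w(y) / w(x)):  # Metropolis acceptance
            x = y
        acc += f(x) / w(x)
    return acc / n

# integrate f(x) = exp(-x^2) (exact value sqrt(pi) ~ 1.772) using a
# wider normalized Gaussian as the weight function
w = lambda x: math.exp(-x * x / 4.0) / math.sqrt(4.0 * math.pi)
f = lambda x: math.exp(-x * x)
est = metropolis_integral(f, w)
```

Choosing the weight wider than the integrand, as here, keeps the ratio f/w bounded, which is the "judiciously chosen weight function" point made in the abstract.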

  19. Adaptive Control of Nonlinear and Stochastic Systems

    DTIC Science & Technology

    1991-01-14

    Hernández-Lerma and S.I. Marcus, Nonparametric adaptive control of discrete time partially observable stochastic systems, Journal of Mathematical Analysis and Applications 137 (1989), 485-514. [19] A. Arapostathis and S.I. Marcus, Analysis of an identification algorithm

  20. On Stochastic Comparison of Random Vectors.

    DTIC Science & Technology

    1985-04-01

    That is, (1.) and (G.) satisfy Condition (2.1) of Theorem 2.1 and therefore X >t ... Now combining (3.4) with the above result one obtains ... Corollary 3.5 in component cannibalization. Consider a collection of heat sources, each cooled by its own cooling system consisting of a set of n identical pumps and a circulation system [composed of radiators, pipes, etc.]. The operation of the heat source is continued unless either the heat

  1. Symmetrical Hierarchical Stochastic Searching on the Line in Informative and Deceptive Environments.

    PubMed

    Zhang, Junqi; Wang, Yuheng; Wang, Cheng; Zhou, MengChu

    2016-03-08

    A stochastic point location (SPL) problem aims to find a target parameter on a 1-D line by operating a controlled random walk and receiving information from a stochastic environment (SE). If the target parameter changes randomly, we call the parameter dynamic; otherwise static. The SE can be 1) informative (p > 0.5, where p represents the probability of the environment providing a correct suggestion) or 2) deceptive (p < 0.5). To date, hierarchical stochastic searching on the line (HSSL) is the most efficient algorithm for catching a static or dynamic parameter in an informative environment, but it is unable to locate the target parameter in a deceptive environment or to recognize an environment's type (informative or deceptive). This paper presents a novel solution, named symmetrical HSSL, by extending the HSSL binary tree-based search structure to a symmetrical form. By means of this innovative approach, the proposed learning mechanism is able to converge to a static or dynamic target parameter not only in the range 0.618 < p < 1, but also for 0 < p < 0.382. Finally, the experimental results show that our scheme is efficient and feasible for solving the SPL problem in any SE.
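A bare-bones controlled random walk in an informative environment illustrates the SPL setting (this is a sketch of the problem only, not the HSSL or symmetrical HSSL algorithm; all names, step sizes, and parameters are assumptions):

```python
import random

def spl_search(target, p, steps=20000, resolution=1e-3, seed=1):
    """The learner holds an estimate in [0, 1]; an informative
    environment (p > 0.5) reports the true direction toward `target`
    with probability p, and the learner takes one small step that way.
    """
    rng = random.Random(seed)
    x = 0.5
    for _ in range(steps):
        truth = 1 if target > x else -1          # true direction
        hint = truth if rng.random() < p else -truth
        x = min(1.0, max(0.0, x + hint * resolution))
    return x

est = spl_search(target=0.8, p=0.9)
# with an informative environment the walk hovers near the target
```

In a deceptive environment (p < 0.5) the same walk is driven away from the target, which is why plain schemes fail there and a symmetrical search structure is needed.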

  2. Least expected time paths in stochastic, time-varying transportation networks

    SciTech Connect

    Miller-Hooks, E.D.; Mahmassani, H.S.

    1999-06-01

    The authors consider stochastic, time-varying transportation networks, where the arc weights (arc travel times) are random variables with probability distribution functions that vary with time. Efficient procedures are widely available for determining least time paths in deterministic networks. In stochastic but time-invariant networks, least expected time paths can be determined by setting each random arc weight to its expected value and solving an equivalent deterministic problem. This paper addresses the problem of determining least expected time paths in stochastic, time-varying networks. Two procedures are presented. The first procedure determines the a priori least expected time paths from all origins to a single destination for each departure time in the peak period. The second procedure determines lower bounds on the expected times of these a priori least expected time paths. This procedure determines an exact solution for the problem where the driver is permitted to react to revealed travel times on traveled links en route, i.e. in a time-adaptive route choice framework. Modifications to each of these procedures for determining least expected cost (where cost is not necessarily travel time) paths and lower bounds on the expected costs of these paths are given. Extensive numerical tests are conducted to illustrate the algorithms' computational performance as well as the properties of the solution.
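For the time-invariant special case mentioned above, where each random arc weight is replaced by its expectation and an equivalent deterministic problem is solved, a sketch with ordinary Dijkstra (the graph and function names are illustrative assumptions):

```python
import heapq

def least_expected_time(graph, source, dest):
    """Dijkstra on expected arc travel times.

    `graph` maps node -> list of (neighbor, expected_travel_time),
    where each expected time is the mean of that arc's random weight.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dest:
            return d
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# expected arc times, e.g. means of the travel-time distributions
g = {"A": [("B", 2.0), ("C", 5.0)], "B": [("C", 1.0)], "C": []}
assert least_expected_time(g, "A", "C") == 3.0
```

In the time-varying case this reduction is no longer valid, which is exactly why the paper's two specialized procedures are needed.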

  3. Solving the problem of negative populations in approximate accelerated stochastic simulations using the representative reaction approach.

    PubMed

    Kadam, Shantanu; Vanka, Kumar

    2013-02-15

    Methods based on the stochastic formulation of chemical kinetics have the potential to accurately reproduce the dynamical behavior of various biochemical systems of interest. However, the computational expense makes them impractical for the study of real systems. Attempts to render these methods practical have led to the development of accelerated methods, where the reaction numbers are modeled by Poisson random numbers. However, for certain systems, such methods give rise to physically unrealistic negative numbers for species populations. The methods which make use of binomial variables, in place of Poisson random numbers, have since become popular, and have been partially successful in addressing this problem. In this manuscript, the development of two new computational methods, based on the representative reaction approach (RRA), is discussed. The new methods endeavor to solve the problem of negative numbers by making use of tools like the stochastic simulation algorithm and the binomial method, in conjunction with the RRA. It is found that these newly developed methods perform better than other binomial methods used for stochastic simulations in resolving the problem of negative populations.
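The binomial idea referenced above, bounding the sampled reaction count by the available population, can be sketched for a simple decay reaction (a hedged illustration, not the authors' RRA-based methods):

```python
import numpy as np

def binomial_leap_decay(x0, k, tau, t_end, seed=0):
    """Binomial leaping for X -> 0 with rate k: each molecule decays
    during a leap with probability 1 - exp(-k*tau), so the sampled count
    is Binomial(x, p) and can never exceed the current population x.
    This is what structurally rules out negative populations, unlike an
    unbounded Poisson draw.
    """
    rng = np.random.default_rng(seed)
    p = 1.0 - np.exp(-k * tau)
    x, t = x0, 0.0
    while t < t_end and x > 0:
        x -= rng.binomial(x, p)   # bounded above by x
        t += tau
    return x

x_final = binomial_leap_decay(x0=1000, k=1.0, tau=0.05, t_end=2.0)
assert x_final >= 0
```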

  4. Stochastic symmetries of Wick type stochastic ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Ünal, Gazanfer

    2015-04-01

    We consider Wick type stochastic ordinary differential equations with Gaussian white noise. We define the stochastic symmetry transformations and Lie equations in the Kondratiev space (S)_{-1}^N. We derive the determining system of Wick type stochastic partial differential equations with Gaussian white noise. Stochastic symmetries for the stochastic Bernoulli, Riccati and general stochastic linear equations in (S)_{-1}^N are obtained. A stochastic version of canonical variables is also introduced.

  5. Computational singular perturbation analysis of stochastic chemical systems with stiffness

    NASA Astrophysics Data System (ADS)

    Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; Najm, Habib N.

    2017-04-01

    Computational singular perturbation (CSP) is a useful method for analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum and deterministic level. On the other hand, CSP is not directly applicable to chemical reaction systems at the micro or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and an associated algorithm, that can be used not only to construct accurately and efficiently the numerical solutions to stiff stochastic chemical reaction systems, but also to analyze the dynamics of the reduced stochastic reaction systems. The algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.

  6. Identification of non-coding RNAs with a new composite feature in the Hybrid Random Forest Ensemble algorithm

    PubMed Central

    Lertampaiporn, Supatcha; Thammarongtham, Chinae; Nukoolkit, Chakarida; Kaewkamnerdpong, Boonserm; Ruengjitchatchawalya, Marasri

    2014-01-01

    To identify non-coding RNA (ncRNA) signals within genomic regions, a classification tool was developed based on a hybrid random forest (RF) with a logistic regression model to efficiently discriminate short ncRNA sequences as well as long complex ncRNA sequences. This RF-based classifier was trained on a well-balanced dataset with a discriminative set of features and achieved an accuracy, sensitivity and specificity of 92.11%, 90.7% and 93.5%, respectively. The selected feature set includes a new proposed feature, SCORE. This feature is generated based on a logistic regression function that combines five significant features—structure, sequence, modularity, structural robustness and coding potential—to enable improved characterization of long ncRNA (lncRNA) elements. The use of SCORE improved the performance of the RF-based classifier in the identification of Rfam lncRNA families. A genome-wide ncRNA classification framework was applied to a wide variety of organisms, with an emphasis on those of economic, social, public health, environmental and agricultural significance, such as various bacteria genomes, the Arthrospira (Spirulina) genome, and rice and human genomic regions. Our framework was able to identify known ncRNAs with sensitivities of greater than 90% and 77.7% for prokaryotic and eukaryotic sequences, respectively. Our classifier is available at http://ncrna-pred.com/HLRF.htm. PMID:24771344
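The SCORE construction, a logistic-regression combination of five sub-features into one composite value, might be sketched as follows (the weights and bias here are invented for illustration and are not the trained coefficients from the paper):

```python
import math

def score_feature(structure, sequence, modularity, robustness, coding,
                  weights=(1.2, 0.8, 0.5, 0.7, -1.5), bias=-0.3):
    """Fuse five normalized sub-features into a single value in (0, 1)
    via a logistic function, mirroring the SCORE idea: high structural
    evidence raises the score, high coding potential lowers it.
    NOTE: weights and bias are hypothetical placeholders.
    """
    z = bias + sum(w * f for w, f in
                   zip(weights, (structure, sequence, modularity,
                                 robustness, coding)))
    return 1.0 / (1.0 + math.exp(-z))

s = score_feature(0.9, 0.8, 0.7, 0.6, 0.1)
assert 0.0 < s < 1.0
```

The composite value would then be appended to the feature vector fed to the random forest, which is how a single learned feature can summarize several correlated signals.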

  7. Analysis of stochastically forced quasi-periodic attractors

    SciTech Connect

    Ryashko, Lev

    2015-11-30

    The problem of analyzing stochastically forced quasi-periodic auto-oscillations of nonlinear dynamic systems is considered. A stationary distribution of random trajectories in the neighborhood of the corresponding deterministic attractor (torus) is studied. A parametric description of the quadratic approximation of the quasipotential based on the stochastic sensitivity functions (SSF) technique is given. Using this technique, we analyze the dispersion of stochastic flows near the torus. For the case of a two-torus in three-dimensional space, the stochastic sensitivity function is constructed.

  8. Investigation for improving Global Positioning System (GPS) orbits using a discrete sequential estimator and stochastic models of selected physical processes

    NASA Technical Reports Server (NTRS)

    Goad, Clyde C.; Chadwell, C. David

    1993-01-01

    GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now completed. It contains a correlated double difference range processing capability, first order Gauss Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and the data are passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor and the

  9. Investments in random environments

    NASA Astrophysics Data System (ADS)

    Navarro-Barrientos, Jesús Emeterio; Cantero-Álvarez, Rubén; Matias Rodrigues, João F.; Schweitzer, Frank

    2008-03-01

    We present analytical investigations of a multiplicative stochastic process that models a simple investor dynamics in a random environment. The dynamics of the investor's budget, x(t), depends on the stochasticity of the return on investment, r(t), for which different model assumptions are discussed. The fat-tail distribution of the budget is investigated and compared with theoretical predictions. We are mainly interested in the most probable value x_mp of the budget, which reaches a constant value over time. Based on an analytical investigation of the dynamics, we are able to predict this stationary value x_mp^stat. We find a scaling law that relates the most probable value to the characteristic parameters describing the stochastic process. Our analytical results are confirmed by stochastic computer simulations that show a very good agreement with the predictions.
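A toy simulation of such a multiplicative budget process with a constant inflow (the specific return distribution and all parameter values are assumptions for illustration, not the paper's exact model):

```python
import random
import statistics

def simulate_budgets(n_agents=2000, steps=400, inflow=1.0, seed=7):
    """Evolve an ensemble of budgets x <- r * x + inflow, with an
    independent random return factor r drawn each step. The additive
    inflow keeps budgets positive; the multiplicative noise produces
    a heavy right tail while the bulk of the distribution settles
    around a finite most probable value.
    """
    rng = random.Random(seed)
    budgets = [1.0] * n_agents
    for _ in range(steps):
        budgets = [rng.uniform(0.5, 1.4) * x + inflow for x in budgets]
    return budgets

budgets = simulate_budgets()
# the median stays finite while extreme budgets stretch the right tail
median_budget = statistics.median(budgets)
```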

  10. Automated classification of seismic sources in large database using random forest algorithm: First results at Piton de la Fournaise volcano (La Réunion).

    NASA Astrophysics Data System (ADS)

    Hibert, Clément; Provost, Floriane; Malet, Jean-Philippe; Stumpf, André; Maggi, Alessia; Ferrazzini, Valérie

    2016-04-01

    In the past decades the increasing quality of seismic sensors and capability to transfer remotely large quantities of data led to a fast densification of local, regional and global seismic networks for near real-time monitoring. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice-calving, landslides, snow and rock avalanches, geothermal fields), but also led to an ever-growing quantity of seismic data. This wealth of seismic data makes the construction of complete seismicity catalogs, which include earthquakes but also other sources of seismic waves, more challenging and very time-consuming as this critical pre-processing stage is classically done by human operators. To overcome this issue, the development of automatic methods for the processing of continuous seismic data appears to be a necessity. The classification algorithm should satisfy the need for a method that is robust, precise and versatile enough to be deployed to monitor the seismicity in very different contexts. We propose a multi-class detection method based on the Random Forests algorithm to automatically classify the source of seismic signals. Random Forests is a supervised machine learning technique based on the computation of a large number of decision trees, constructed from training sets that include examples of each target class described by a set of signal attributes. In the case of seismic signals, these attributes may encompass spectral features but also waveform characteristics, multi-station observations and other relevant information. The Random Forests classifier is used because it provides state-of-the-art performance when compared with other machine learning techniques (e.g. SVM, Neural Networks) and requires no fine tuning. Furthermore it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems.
In this work, we present the first results of the classification method applied
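    The workflow described above can be sketched as follows. This is a hedged illustration, not the study's code: the feature values, class labels and dataset here are synthetic stand-ins for the extracted spectral and waveform attributes of real events.

    ```python
    # Hypothetical sketch of multi-class seismic-source classification with
    # Random Forests; feature vectors and class names are invented for
    # illustration and stand in for real extracted signal attributes.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    classes = ["earthquake", "rockfall", "noise"]
    # Synthetic stand-in: 300 events x 8 attributes, with class-dependent
    # means so the toy problem is learnable.
    X = np.vstack([rng.normal(loc=2.0 * i, scale=1.0, size=(100, 8))
                   for i in range(len(classes))])
    y = np.repeat(classes, 100)

    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)
    # Random Forests needs little tuning; n_estimators is the main knob.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    ```

    In practice the synthetic matrix `X` would be replaced by attributes computed from continuous waveforms, and the tree votes for each class can serve as a confidence measure for the predicted source type.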

  11. Stochastic solution to quantum dynamics

    NASA Technical Reports Server (NTRS)

    John, Sarah; Wilson, John W.

    1994-01-01

    The quantum Liouville equation in the Wigner representation is solved numerically by using Monte Carlo methods. For incremental time steps, the propagation is implemented as a classical evolution in phase space modified by a quantum correction. The correction, which is a momentum jump function, is simulated in the quasi-classical approximation via a stochastic process. The technique, which is developed and validated in two- and three-dimensional momentum space, extends an earlier one-dimensional work. In addition, a new algorithm, applied to bound-state motion in an anharmonic quartic potential, shows better agreement with exact solutions in two-dimensional phase space.

  12. Resolution analysis by random probing

    NASA Astrophysics Data System (ADS)

    Simutė, S.; Fichtner, A.; van Leeuwen, T.

    2015-12-01

    We develop and apply methods for resolution analysis in tomography, based on stochastic probing of the Hessian or resolution operators. Key properties of our methods are (i) low algorithmic complexity and easy implementation, (ii) applicability to any tomographic technique, including full-waveform inversion and linearized ray tomography, (iii) applicability in any spatial dimension and to inversions with a large number of model parameters, (iv) low computational costs that are mostly a fraction of those required for synthetic recovery tests, and (v) the ability to quantify both spatial resolution and inter-parameter trade-offs. Using synthetic full-waveform inversions as benchmarks, we demonstrate that auto-correlations of random-model applications to the Hessian yield various resolution measures, including direction- and position-dependent resolution lengths, and the strength of inter-parameter mappings. We observe that the required number of random test models is around 5 in one, two and three dimensions. This means that the proposed resolution analyses are not only more meaningful than recovery tests but also computationally less expensive. We demonstrate the applicability of our method in 3D real-data full-waveform inversions for the western Mediterranean and Japan. In addition to tomographic problems, resolution analysis by random probing may be used in other inverse methods that constrain continuously distributed properties, including electromagnetic and potential-field inversions, as well as recently emerging geodynamic data assimilation.
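    The core idea of probing an operator with a handful of random test models can be sketched generically. The following is an assumption-level illustration (not the authors' implementation) using a Hutchinson-style estimator: applying a Hessian-like operator H to a few random vectors and auto-correlating input with output recovers the diagonal of H, a simple resolution proxy.

    ```python
    # Sketch: estimate diag(H) from a few random probes via
    #   diag(H) ~ mean_k( v_k * (H v_k) ),  v_k with Rademacher entries.
    # The "Hessian" here is a synthetic stand-in with a known diagonal.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    d_true = np.linspace(1.0, 10.0, n)        # known diagonal, for checking
    B = rng.normal(scale=0.05, size=(n, n))
    H = np.diag(d_true) + (B + B.T) / 2       # symmetric test operator

    def probe_diagonal(apply_H, n, n_probes=5, rng=rng):
        est = np.zeros(n)
        for _ in range(n_probes):
            v = rng.choice([-1.0, 1.0], size=n)   # random test model
            est += v * apply_H(v)                  # auto-correlate probe with H v
        return est / n_probes

    d_est = probe_diagonal(lambda v: H @ v, n, n_probes=5)
    ```

    Consistent with the abstract, around 5 random probes already give a usable estimate when off-diagonal couplings are moderate; each probe costs only one operator application, i.e. roughly one gradient-like computation in a tomographic setting.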

  13. ecode - Electron Transport Algorithm Testing v. 1.0

    SciTech Connect

    Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene; Laub, Thomas W.; Crawford, Martin J; Kenseck, Ronald P.; Prinja, Anil

    2016-10-05

    ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.

  14. Stochastic Differential Games with Asymmetric Information

    SciTech Connect

    Cardaliaguet, Pierre Rainer, Catherine

    2009-02-15

    We investigate a two-player zero-sum stochastic differential game in which the players have an asymmetric information on the random payoff. We prove that the game has a value and characterize this value in terms of dual viscosity solutions of some second order Hamilton-Jacobi equation.

  15. Time Series, Stochastic Processes and Completeness of Quantum Theory

    NASA Astrophysics Data System (ADS)

    Kupczynski, Marian

    2011-03-01

    Most physical experiments are usually described as repeated measurements of some random variables. Experimental data registered by on-line computers form time series of outcomes. The frequencies of different outcomes are compared with the probabilities provided by the algorithms of quantum theory (QT). In spite of the statistical character of QT's predictions, a claim was made that it provides the most complete description of the data and of the underlying physical phenomena. This claim could be easily rejected if some fine structures, averaged out in the standard descriptive statistical analysis, were found in time series of experimental data. To search for these structures one has to use more subtle statistical tools which were developed to study time series produced by various stochastic processes. In this talk we review some of these tools. As an example we show how the standard descriptive statistical analysis of the data is unable to reveal a fine structure in a simulated sample of an AR(2) stochastic process. We emphasize once again that the violation of Bell inequalities gives no information on the completeness or the nonlocality of QT. The appropriate way to test the completeness of quantum theory is to search for fine structures in time series of the experimental data by means of the purity tests or by studying the autocorrelation and partial autocorrelation functions.
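    The AR(2) example can be reproduced in a few lines. This is an illustrative sketch with assumed coefficients, not the talk's actual simulation: the sample mean of an AR(2) series looks like plain noise, while the lag-1 autocorrelation reveals the hidden serial structure (for an AR(2) process, rho(1) = phi1/(1 - phi2)).

    ```python
    # Simulate x[t] = phi1*x[t-1] + phi2*x[t-2] + eps[t] and compare the
    # sample autocorrelation with the theoretical value phi1/(1 - phi2).
    # Coefficient values are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 5000
    phi1, phi2 = 0.5, -0.3
    eps = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + eps[t]

    def acf(x, lag):
        """Sample autocorrelation at the given lag."""
        x = x - x.mean()
        return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

    lag1 = acf(x, 1)
    rho1_theory = phi1 / (1 - phi2)   # = 0.5 / 1.3, about 0.385
    ```

    A descriptive summary (mean near zero, roughly Gaussian histogram) would not distinguish this sample from white noise; the autocorrelation function does.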

  16. Method to describe stochastic dynamics using an optimal coordinate.

    PubMed

    Krivov, Sergei V

    2013-12-01

    A general method to describe the stochastic dynamics of Markov processes is suggested. The method aims to solve three related problems: the determination of an optimal coordinate for the description of stochastic dynamics; the reconstruction of time from an ensemble of stochastic trajectories; and the decomposition of stationary stochastic dynamics into eigenmodes which do not decay exponentially with time. The problems are solved by introducing additive eigenvectors which are transformed by a stochastic matrix in a simple way - every component is translated by a constant distance. Such solutions have peculiar properties. For example, an optimal coordinate for stochastic dynamics with detailed balance is a multivalued function. An optimal coordinate for a random walk on a line corresponds to the conventional eigenvector of the one-dimensional Dirac equation. The equation for the optimal coordinate in a slowly varying potential reduces to the Hamilton-Jacobi equation for the action function.

  17. Multi-objective reliability-based optimization with stochastic metamodels.

    PubMed

    Coelho, Rajan Filomeno; Bouillard, Philippe

    2011-01-01

    This paper addresses continuous optimization problems with multiple objectives and parameter uncertainty defined by probability distributions. First, a reliability-based formulation is proposed, defining the nondeterministic Pareto set as the minimal solutions such that user-defined probabilities of nondominance and constraint satisfaction are guaranteed. The formulation can be incorporated with minor modifications in a multiobjective evolutionary algorithm (here: the nondominated sorting genetic algorithm-II). Then, in the perspective of applying the method to large-scale structural engineering problems--for which the computational effort devoted to the optimization algorithm itself is negligible in comparison with the simulation--the second part of the study is concerned with the need to reduce the number of function evaluations while avoiding modification of the simulation code. Therefore, nonintrusive stochastic metamodels are developed in two steps. First, for a given sampling of the deterministic variables, a preliminary decomposition of the random responses (objectives and constraints) is performed through polynomial chaos expansion (PCE), allowing a representation of the responses by a limited set of coefficients. Then, a metamodel is carried out by kriging interpolation of the PCE coefficients with respect to the deterministic variables. The method has been tested successfully on seven analytical test cases and on the 10-bar truss benchmark, demonstrating the potential of the proposed approach to provide reliability-based Pareto solutions at a reasonable computational cost.

  18. Lagrangian Descriptors for Stochastic Differential Equations: A Tool for Revealing the Phase Portrait of Stochastic Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Balibrea-Iniesta, Francisco; Lopesino, Carlos; Wiggins, Stephen; Mancho, Ana M.

    2016-12-01

    In this paper, we introduce a new technique for depicting the phase portrait of stochastic differential equations. Following previous work for deterministic systems, we represent the phase space by means of a generalization of the method of Lagrangian descriptors to stochastic differential equations. Analogously to the deterministic differential equations setting, the Lagrangian descriptors graphically provide the distinguished trajectories and hyperbolic structures arising within the stochastic dynamics, such as random fixed points and their stable and unstable manifolds. We analyze the sense in which structures form barriers to transport in stochastic systems. We apply the method to several benchmark examples where the deterministic phase space structures are well-understood. In particular, we apply our method to the noisy saddle, the stochastically forced Duffing equation, and the stochastic double gyre model that is a benchmark for analyzing fluid transport.
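    A minimal sketch of the noisy-saddle example, under stated assumptions (Euler-Maruyama integration, an arc-length form of the descriptor, and illustrative parameter values; backward-time integration of the SDE is handled naively here):

    ```python
    # Stochastic Lagrangian descriptor sketch for the noisy saddle
    #   dx = x dt + sigma dW1,  dy = -y dt + sigma dW2:
    # integrate forward and backward in time and accumulate arc length.
    import numpy as np

    rng = np.random.default_rng(3)

    def lagrangian_descriptor(x0, y0, tau=2.0, dt=0.01, sigma=0.05, rng=rng):
        n = int(tau / dt)
        ld = 0.0
        for direction in (+1.0, -1.0):       # forward, then backward in time
            x, y = x0, y0
            for _ in range(n):
                dW1, dW2 = rng.normal(scale=np.sqrt(dt), size=2)
                dx = direction * x * dt + sigma * dW1
                dy = -direction * y * dt + sigma * dW2
                ld += np.hypot(dx, dy)       # arc-length increment
                x += dx
                y += dy
        return ld

    # A point near the random fixed point at the origin accumulates a
    # small descriptor value; a point off the manifolds a large one.
    ld_origin = lagrangian_descriptor(0.0, 0.0)
    ld_far = lagrangian_descriptor(1.0, 1.0)
    ```

    Evaluating the descriptor on a grid of initial conditions and plotting it would then make the stable and unstable manifolds of the random saddle appear as ridges and abrupt changes in the field, as described above.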

  19. Stochastic light-cone CTMRG: a new DMRG approach to stochastic models

    NASA Astrophysics Data System (ADS)

    Kemper, A.; Gendiar, A.; Nishino, T.; Schadschneider, A.; Zittartz, J.

    2003-01-01

    We develop a new variant of the recently introduced stochastic transfer matrix DMRG which we call stochastic light-cone corner-transfer-matrix DMRG (LCTMRG). It is a numerical method to compute dynamic properties of one-dimensional stochastic processes. As suggested by its name, the LCTMRG is a modification of the corner-transfer-matrix DMRG, adjusted by an additional causality argument. As an example, two reaction-diffusion models, the diffusion-annihilation process and the branch-fusion process, are studied and compared with exact data and Monte Carlo simulations to estimate the capability and accuracy of the new method. The number of possible Trotter steps, more than 10^5, represents a considerable improvement over the old stochastic TMRG algorithm.

  20. Three-dimensional stochastic vortex flows

    NASA Astrophysics Data System (ADS)

    Esposito, R.; Pulvirenti, M.

    1989-08-01

    It is well known that the dynamics of point vortices approximate, under suitable limits, the two-dimensional Euler flow for an ideal fluid. To find particle models for three-dimensional flows is a more intricate problem. A stochastic version of the algorithm introduced by Beale and Majda (1982) for simulating the behavior of a three-dimensional Euler flow is introduced here, and convergence to the Navier-Stokes (NS) flow in R^3 is shown. The result is based on a stochastic Lagrangian picture of the NS equations.

  1. Turbulence, Spontaneous Stochasticity and Climate

    NASA Astrophysics Data System (ADS)

    Eyink, Gregory

    Turbulence is well-recognized as important in the physics of climate. Turbulent mixing plays a crucial role in the global ocean circulation. Turbulence also provides a natural source of variability, which bedevils our ability to predict climate. I shall review here a recently discovered turbulence phenomenon, called ``spontaneous stochasticity'', which makes classical dynamical systems as intrinsically random as quantum mechanics. Turbulent dissipation and mixing of scalars (passive or active) is now understood to require Lagrangian spontaneous stochasticity, which can be expressed by an exact ``fluctuation-dissipation relation'' for scalar turbulence (joint work with Theo Drivas). Path-integral methods such as developed for quantum mechanics become necessary to the description. There can also be Eulerian spontaneous stochasticity of the flow fields themselves, which is intimately related to the work of Kraichnan and Leith on unpredictability of turbulent flows. This leads to problems similar to those encountered in quantum field theory. To quantify uncertainty in forecasts (or hindcasts), we can borrow from quantum field-theory the concept of ``effective actions'', which characterize climate averages by a variational principle and variances by functional derivatives. I discuss some work with Tom Haine (JHU) and Santha Akella (NASA-Goddard) to make this a practical predictive tool. More ambitious application of the effective action is possible using Rayleigh-Ritz schemes.

  2. Non-random structures in universal compression and the Fermi paradox

    NASA Astrophysics Data System (ADS)

    Gurzadyan, A. V.; Allahverdyan, A. E.

    2016-02-01

    We study the hypothesis of information panspermia, recently proposed among possible solutions of the Fermi paradox ("where are the aliens?"). It suggests that the expenses of alien signaling can be significantly reduced if their messages contain compressed information. To this end we consider universal compression and decoding mechanisms (e.g. the Lempel-Ziv-Welch algorithm) that can reveal non-random structures in compressed bit strings. The efficiency of the Kolmogorov stochasticity parameter for detection of non-randomness is illustrated, along with Zipf's law. The universality of these methods, i.e. independence from data details, can be principal in searching for intelligent messages.
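    The compressibility criterion is easy to demonstrate. As a toy illustration (using zlib's LZ77-based DEFLATE as a stand-in for the Lempel-Ziv-Welch algorithm mentioned above), a string with non-random structure compresses far below its raw size, while an algorithmically random string does not:

    ```python
    # Compressed size as a practical proxy for (non-)randomness:
    # structured data compresses well, random data hardly at all.
    import zlib
    import random

    random.seed(4)
    n = 10_000
    structured = b"0110" * (n // 4)                    # periodic, highly regular
    rand_bytes = bytes(random.getrandbits(8) for _ in range(n))

    ratio_structured = len(zlib.compress(structured)) / n
    ratio_random = len(zlib.compress(rand_bytes)) / n
    ```

    The structured message shrinks by orders of magnitude while the random one stays near (or slightly above) its original length, which is the signature a universal decoder could exploit to flag a non-random, potentially intelligent signal.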

  3. Mapping the distributions of C3 and C4 grasses in the mixed-grass prairies of southwest Oklahoma using the Random Forest classification algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Dong; de Beurs, Kirsten M.

    2016-05-01

    The objective of this paper is to demonstrate a new method to map the distributions of C3 and C4 grasses at 30 m resolution and over a 25-year period of time (1988-2013) by combining the Random Forest (RF) classification algorithm and patch stable areas identified using the spatial pattern analysis software FRAGSTATS. Predictor variables for RF classifications consisted of ten spectral variables, four soil edaphic variables and three topographic variables. We provided a confidence score in terms of obtaining pure land cover at each pixel location by retrieving the classification tree votes. Classification accuracy assessments and predictor variable importance evaluations were conducted based on a repeated stratified sampling approach. Results show that patch stable areas obtained from larger patches are more appropriate to be used as sample data pools to train and validate RF classifiers for historical land cover mapping purposes and it is more reasonable to use patch stable areas as sample pools to map land cover in a year closer to the present rather than years further back in time. The percentage of obtained high confidence prediction pixels across the study area ranges from 71.18% in 1988 to 73.48% in 2013. The repeated stratified sampling approach is necessary in terms of reducing the positive bias in the estimated classification accuracy caused by the possible selections of training and validation pixels from the same patch stable areas. The RF classification algorithm was able to identify the important environmental factors affecting the distributions of C3 and C4 grasses in our study area such as elevation, soil pH, soil organic matter and soil texture.

  4. Characterizing stand-level forest canopy cover and height using Landsat time series, samples of airborne LiDAR, and the Random Forest algorithm

    NASA Astrophysics Data System (ADS)

    Ahmed, Oumer S.; Franklin, Steven E.; Wulder, Michael A.; White, Joanne C.

    2015-03-01

    Many forest management activities, including the development of forest inventories, require spatially detailed forest canopy cover and height data. Among the various remote sensing technologies, LiDAR (Light Detection and Ranging) offers the most accurate and consistent means for obtaining reliable canopy structure measurements. A potential solution to reduce the cost of LiDAR data is to integrate transects (samples) of LiDAR data with frequently acquired and spatially comprehensive optical remotely sensed data. Although multiple regression is commonly used for such modeling, often it does not fully capture the complex relationships between forest structure variables. This study investigates the potential of Random Forest (RF), a machine learning technique, to estimate LiDAR measured canopy structure using a time series of Landsat imagery. The study is implemented over a 2600 ha area of industrially managed coastal temperate forests on Vancouver Island, British Columbia, Canada. We implemented a trajectory-based approach to time series analysis that generates time since disturbance (TSD) and disturbance intensity information for each pixel and we used this information to stratify the forest land base into two strata: mature forests and young forests. Canopy cover and height for three forest classes (i.e. mature, young, and mature and young combined) were modeled separately using multiple regression and Random Forest (RF) techniques. For all forest classes, the RF models provided improved estimates relative to the multiple regression models. The lowest validation error was obtained for the mature forest strata in a RF model (R2 = 0.88, RMSE = 2.39 m and bias = -0.16 for canopy height; R2 = 0.72, RMSE = 0.068% and bias = -0.0049 for canopy cover). This study demonstrates the value of using disturbance and successional history to inform estimates of canopy structure and obtain improved estimates of forest canopy cover and height using the RF algorithm.

  5. Stochastic solution of population balance equations for reactor networks

    SciTech Connect

    Menz, William J.; Akroyd, Jethro; Kraft, Markus

    2014-01-01

    This work presents a sequential modular approach to solve a generic network of reactors with a population balance model using a stochastic numerical method. Full coupling to the gas-phase is achieved through operator-splitting. The convergence of the stochastic particle algorithm in test networks is evaluated as a function of network size, recycle fraction and numerical parameters. These test cases are used to identify methods through which systematic and statistical error may be reduced, including by use of stochastic weighted algorithms. The optimal algorithm was subsequently used to solve a one-dimensional example of silicon nanoparticle synthesis using a multivariate particle model. This example demonstrated the power of stochastic methods in resolving particle structure by investigating the transient and spatial evolution of primary polydispersity, degree of sintering and TEM-style images. Highlights: • An algorithm is presented to solve reactor networks with a population balance model. • A stochastic method is used to solve the population balance equations. • The convergence and efficiency of the reported algorithms are evaluated. • The algorithm is applied to simulate silicon nanoparticle synthesis in a 1D reactor. • Particle structure is reported as a function of reactor length and time.

  6. Stochastic P-bifurcation and stochastic resonance in a noisy bistable fractional-order system

    NASA Astrophysics Data System (ADS)

    Yang, J. H.; Sanjuán, Miguel A. F.; Liu, H. G.; Litak, G.; Li, X.

    2016-12-01

    We investigate the stochastic response of a noisy bistable fractional-order system when the fractional-order lies in the interval (0, 2]. We focus mainly on the stochastic P-bifurcation and the phenomenon of stochastic resonance. We compare the generalized Euler algorithm and the predictor-corrector approach, which are commonly used for numerical calculations of fractional-order nonlinear equations. Based on the predictor-corrector approach, the stochastic P-bifurcation and the stochastic resonance are investigated. Both the fractional-order value and the noise intensity can induce a stochastic P-bifurcation. The fractional-order may lead the stationary probability density function to turn from a single-peak mode to a double-peak mode. However, the noise intensity may transform the stationary probability density function from a double-peak mode to a single-peak mode. The stochastic resonance is investigated thoroughly, according to the linear and the nonlinear response theory. In the linear response theory, the optimal stochastic resonance may occur when the value of the fractional-order is larger than one. In previous works, the fractional-order is usually limited to the interval (0, 1]. Moreover, the stochastic resonance at the subharmonic frequency and the superharmonic frequency are investigated, respectively, using the nonlinear response theory. When it occurs at the subharmonic frequency, the resonance may be strong and cannot be ignored. When it occurs at the superharmonic frequency, the resonance is weak. We believe that the results in this paper might be useful for the signal processing of nonlinear systems.
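    The kind of simulation involved can be sketched for the integer-order case (fractional order equal to one, where the generalized Euler scheme reduces to Euler-Maruyama). All parameter values below are illustrative assumptions, and the spectral amplitude at the drive frequency is used as a simple response measure:

    ```python
    # Euler-Maruyama sketch of a periodically forced noisy bistable system
    #   dx = (x - x^3 + A cos(w t)) dt + sqrt(2 D) dW,
    # with the Fourier amplitude of x at the drive frequency as the
    # response measure used in stochastic-resonance studies.
    import numpy as np

    def response_amplitude(D, A=0.3, omega=0.1, dt=0.01, n=200_000, seed=0):
        rng = np.random.default_rng(seed)
        t = np.arange(n) * dt
        noise = rng.normal(scale=np.sqrt(2 * D * dt), size=n)
        x = np.empty(n)
        x[0] = 1.0                       # start in the right-hand well
        for i in range(n - 1):
            drift = x[i] - x[i] ** 3 + A * np.cos(omega * t[i])
            x[i + 1] = x[i] + drift * dt + noise[i]
        # spectral amplitude of the response at the driving frequency
        return np.abs(np.sum(x * np.exp(-1j * omega * t))) / n

    amp = response_amplitude(D=0.05)
    ```

    Sweeping the noise intensity `D` and plotting `response_amplitude(D)` would trace out the resonance curve; in the fractional-order setting the memory kernel of the derivative changes this curve, which is what the predictor-corrector scheme is needed to resolve.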

  7. Variance-based sensitivity indices for stochastic models with correlated inputs

    SciTech Connect

    Kala, Zdeněk

    2015-03-10

    The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics.
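    For the uncorrelated baseline case, first-order Sobol' indices can be estimated by plain Monte Carlo with a pick-freeze scheme. This is a generic sketch (the test function and sample size are illustrative, and the article's correlated-input strategy would add a decorrelation step before sampling):

    ```python
    # First-order Sobol' index estimation for independent U(0,1) inputs,
    # using the pick-freeze estimator V_i ~ mean( yB * (yABi - yA) ).
    import numpy as np

    rng = np.random.default_rng(7)

    def sobol_first_order(model, d, n=100_000, rng=rng):
        A = rng.random((n, d))
        B = rng.random((n, d))
        yA, yB = model(A), model(B)
        var = yA.var()
        S = np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]          # replace only the i-th column
            S[i] = np.mean(yB * (model(ABi) - yA)) / var
        return S

    # Additive test model y = x1 + 2*x2: exact indices S1 = 0.2, S2 = 0.8.
    S = sobol_first_order(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2)
    ```

    Because the indices come from permutations of which columns are frozen, the same machinery extends naturally to evaluating all decorrelation orderings mentioned in the abstract.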

  8. Estimating Propensity Parameters Using Google PageRank and Genetic Algorithms.

    PubMed

    Murrugarra, David; Miller, Jacob; Mueller, Alex N

    2016-01-01

    Stochastic Boolean networks, or more generally, stochastic discrete networks, are an important class of computational models for molecular interaction networks. The stochasticity stems from the updating schedule. Standard updating schedules include the synchronous update, where all the nodes are updated at the same time, and the asynchronous update where a random node is updated at each time step. The former produces a deterministic dynamics while the latter a stochastic dynamics. A more general stochastic setting considers propensity parameters for updating each node. Stochastic Discrete Dynamical Systems (SDDS) are a modeling framework that considers two propensity parameters for updating each node and uses one when the update has a positive impact on the variable, that is, when the update causes the variable to increase its value, and uses the other when the update has a negative impact, that is, when the update causes it to decrease its value. This framework offers additional features for simulations but also adds a complexity in parameter estimation of the propensities. This paper presents a method for estimating the propensity parameters for SDDS. The method is based on adding noise to the system using the Google PageRank approach to make the system ergodic and thus guaranteeing the existence of a stationary distribution. Then with the use of a genetic algorithm, the propensity parameters are estimated. Approximation techniques that make the search algorithms efficient are also presented and Matlab/Octave code to test the algorithms are available at http://www.ms.uky.edu/~dmu228/GeneticAlg/Code.html.
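    The two-propensity update rule described above can be sketched concisely. The following is a hypothetical minimal illustration (the helper names and the toy two-node network are invented, not from the paper):

    ```python
    # One synchronous SDDS step: each node i has an update function and two
    # propensities, p_up[i] applied when the update would raise the node's
    # value and p_down[i] when it would lower it.
    import random

    random.seed(5)

    def sdds_step(state, funcs, p_up, p_down):
        nxt = []
        for i, f in enumerate(funcs):
            target = f(state)
            if target > state[i]:            # update has a positive impact
                use = p_up[i]
            elif target < state[i]:          # update has a negative impact
                use = p_down[i]
            else:                            # no change proposed
                nxt.append(state[i])
                continue
            nxt.append(target if random.random() < use else state[i])
        return tuple(nxt)

    # Toy 2-node Boolean network: x1 <- x2, x2 <- NOT x1.
    funcs = [lambda s: s[1], lambda s: 1 - s[0]]
    state = (0, 0)
    traj = [state]
    for _ in range(20):
        state = sdds_step(state, funcs, p_up=[0.9, 0.9], p_down=[0.5, 0.5])
        traj.append(state)
    ```

    Setting all propensities to 1 recovers the deterministic synchronous update; intermediate values produce the stochastic dynamics whose stationary distribution the PageRank-style noise injection guarantees to exist.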

  9. Estimating Propensity Parameters Using Google PageRank and Genetic Algorithms

    PubMed Central

    Murrugarra, David; Miller, Jacob; Mueller, Alex N.

    2016-01-01

    Stochastic Boolean networks, or more generally, stochastic discrete networks, are an important class of computational models for molecular interaction networks. The stochasticity stems from the updating schedule. Standard updating schedules include the synchronous update, where all the nodes are updated at the same time, and the asynchronous update where a random node is updated at each time step. The former produces a deterministic dynamics while the latter a stochastic dynamics. A more general stochastic setting considers propensity parameters for updating each node. Stochastic Discrete Dynamical Systems (SDDS) are a modeling framework that considers two propensity parameters for updating each node and uses one when the update has a positive impact on the variable, that is, when the update causes the variable to increase its value, and uses the other when the update has a negative impact, that is, when the update causes it to decrease its value. This framework offers additional features for simulations but also adds a complexity in parameter estimation of the propensities. This paper presents a method for estimating the propensity parameters for SDDS. The method is based on adding noise to the system using the Google PageRank approach to make the system ergodic and thus guaranteeing the existence of a stationary distribution. Then with the use of a genetic algorithm, the propensity parameters are estimated. Approximation techniques that make the search algorithms efficient are also presented and Matlab/Octave code to test the algorithms are available at http://www.ms.uky.edu/~dmu228/GeneticAlg/Code.html. PMID:27891072

  10. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115

  11. Stochastic reduced order models for inverse problems under uncertainty.

    PubMed

    Warner, James E; Aquino, Wilkins; Grigoriu, Mircea D

    2015-03-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well.

  12. Random walk particle tracking simulations of non-Fickian transport in heterogeneous media

    SciTech Connect

    Srinivasan, G.; Tartakovsky, D.M.; Dentz, M.; Viswanathan, H.; Berkowitz, B.; Robinson, B.A.

    2010-06-01

    Derivations of continuum nonlocal models of non-Fickian (anomalous) transport require assumptions that might limit their applicability. We present a particle-based algorithm, which obviates the need for many of these assumptions by allowing stochastic processes that represent spatial and temporal random increments to be correlated in space and time, be stationary or non-stationary, and to have arbitrary distributions. The approach treats a particle trajectory as a subordinated stochastic process that is described by a set of Langevin equations, which represent a continuous time random walk (CTRW). Convolution-based particle tracking (CBPT) is used to increase the computational efficiency and accuracy of these particle-based simulations. The combined CTRW-CBPT approach enables one to convert any particle tracking legacy code into a simulator capable of handling non-Fickian transport.
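
    A minimal illustration of the CTRW idea (not the authors' correlated, subordinated Langevin formulation): particles take Gaussian spatial jumps separated by heavy-tailed Pareto waiting times, a standard toy model of non-Fickian transport. All parameter values here are illustrative assumptions:

```python
import random

def ctrw_positions(n_particles=500, t_final=50.0, alpha=1.5, seed=1):
    """Continuous time random walk: unit-variance Gaussian jumps separated
    by Pareto(alpha) waiting times (heavy-tailed for alpha < 2).
    Returns the particle positions at time t_final."""
    rng = random.Random(seed)
    positions = []
    for _ in range(n_particles):
        t, x = 0.0, 0.0
        while True:
            t += rng.paretovariate(alpha)   # random waiting time >= 1
            if t > t_final:
                break
            x += rng.gauss(0.0, 1.0)        # random spatial increment
        positions.append(x)
    return positions

pos = ctrw_positions()
mean_x = sum(pos) / len(pos)
```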

  13. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.
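
    The dispatch approach the paper finds fast and fairly accurate can be illustrated with a minimal earliest-deadline greedy for window-constrained packing on a single machine. This is a simplified sketch; the job-tuple format and the one-machine setting are illustrative assumptions, not the paper's model:

```python
def dispatch_schedule(jobs):
    """Greedy dispatch: sort jobs by window end (earliest deadline first)
    and place each at the earliest feasible start inside its window.
    jobs: list of (duration, window_start, window_end).
    Returns [(job, start_time)] for the jobs that fit."""
    t = 0.0
    placed = []
    for dur, lo, hi in sorted(jobs, key=lambda j: j[2]):
        start = max(t, lo)          # cannot start before the machine is free
        if start + dur <= hi:       # job must finish inside its window
            placed.append(((dur, lo, hi), start))
            t = start + dur
    return placed

jobs = [(2, 0, 5), (1, 0, 2), (3, 4, 10), (2, 8, 9)]
schedule = dispatch_schedule(jobs)
```

    Here the job (2, 8, 9) is dropped because its window is shorter than its duration; the other three are packed without conflict.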

  14. Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks

    PubMed Central

    Rathinam, Muruhan; Sheppard, Patrick W.; Khammash, Mustafa

    2010-01-01

    Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie’s stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10 000 are demonstrated. PMID:20095724
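
    The variance-reduction principle behind the CRN method can be shown on a toy model (a Bernoulli counting process rather than a chemical network): reusing the same random stream for the nominal and perturbed parameters couples the two runs, so their difference fluctuates far less than with independent streams. A minimal sketch under these assumptions:

```python
import random

def simulate(p, n_steps=100, seed=None):
    """Toy stochastic model: number of successes in n_steps Bernoulli(p) trials."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n_steps) if rng.random() < p)

def fd_sensitivity(p, dp, n_runs, common=True):
    """Finite-difference estimates of d E[X]/dp. With common=True the nominal
    and perturbed runs reuse the same seed (common random numbers)."""
    est = []
    for i in range(n_runs):
        s1 = i if common else 2 * i
        s2 = i if common else 2 * i + 1
        est.append((simulate(p + dp, seed=s2) - simulate(p, seed=s1)) / dp)
    return est

crn = fd_sensitivity(0.3, 0.05, 200, common=True)
ind = fd_sensitivity(0.3, 0.05, 200, common=False)
var = lambda xs: sum((x - sum(xs) / len(xs)) ** 2 for x in xs) / len(xs)
# true sensitivity is n_steps = 100; the CRN estimator has far lower variance
```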

  15. An importance sampling algorithm for estimating extremes of perpetuity sequences

    NASA Astrophysics Data System (ADS)

    Collamore, Jeffrey F.

    2012-09-01

    In a wide class of problems in insurance and financial mathematics, it is of interest to study the extremal events of a perpetuity sequence. This paper addresses the problem of numerically evaluating these rare event probabilities. Specifically, an importance sampling algorithm is described which is efficient in the sense that it exhibits bounded relative error, and which is optimal in an appropriate asymptotic sense. The main idea of the algorithm is to use a "dual" change of measure, which is applied to an associated Markov chain over a randomly-stopped time interval. The algorithm also makes use of the so-called forward sequences generated by the given stochastic recursion, together with elements of Markov chain theory.

  16. Stochastic Convection Parameterizations

    NASA Technical Reports Server (NTRS)

    Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios

    2012-01-01

    computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection, stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts

  17. A Stochastic Employment Problem

    ERIC Educational Resources Information Center

    Wu, Teng

    2013-01-01

    The Stochastic Employment Problem (SEP) is a variation of the Stochastic Assignment Problem which analyzes the scenario in which one assigns balls to boxes. Balls arrive sequentially with each one having a binary vector X = (X[subscript 1], X[subscript 2],...,X[subscript n]) attached, with the interpretation being that if X[subscript i] = 1 the ball…

  18. Unsupervised noise removal algorithms for 3-D confocal fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Roysam, Badrinath; Bhattacharjya, Anoop K.; Srinivas, Chukka; Szarowski, Donald H.; Turner, James N.

    1992-06-01

    Fast algorithms are presented for effective removal of the noise artifact in 3-D confocal fluorescence microscopy images of extended spatial objects such as neurons. The algorithms are unsupervised in the sense that they automatically estimate and adapt to the spatially and temporally varying noise level in the microscopy data. An important feature of the algorithms is the fact that a 3-D segmentation of the field emerges jointly with the intensity estimate. The role of the segmentation is to limit any smoothing to the interiors of regions and hence avoid the blurring that is associated with conventional noise removal algorithms. Fast computation is achieved by parallel computation methods, rather than by algorithmic or modelling compromises. The noise-removal proceeds iteratively, starting from a set of approximate user-supplied, or default initial guesses of the underlying random process parameters. An expectation maximization algorithm is used to obtain a more precise characterization of these parameters, that are then input to a hierarchical estimation algorithm. This algorithm computes a joint solution of the related problems corresponding to intensity estimation, segmentation, and boundary-surface estimation subject to a combination of stochastic priors and syntactic pattern constraints. Three-dimensional stereoscopic renderings of processed 3-D images of murine hippocampal neurons are presented to demonstrate the effectiveness of the method. The processed images exhibit increased contrast and significant smoothing and reduction of the background intensity while avoiding any blurring of the neuronal structures.

  19. A heterogeneous stochastic FEM framework for elliptic PDEs

    SciTech Connect

    Hou, Thomas Y.; Liu, Pengfei

    2015-01-15

    We introduce a new concept of sparsity for the stochastic elliptic operator −div(a(x,ω)∇(⋅)), which reflects the compactness of its inverse operator in the stochastic direction and allows for spatially heterogeneous stochastic structure. This new concept of sparsity motivates a heterogeneous stochastic finite element method (HSFEM) framework for linear elliptic equations, which discretizes the equations using the heterogeneous coupling of spatial basis with local stochastic basis to exploit the local stochastic structure of the solution space. We also provide a sampling method to construct the local stochastic basis for this framework using the randomized range finding techniques. The resulting HSFEM involves two stages and suits the multi-query setting: in the offline stage, the local stochastic structure of the solution space is identified; in the online stage, the equation can be efficiently solved for multiple forcing functions. An online error estimation and correction procedure through Monte Carlo sampling is given. Numerical results for several problems with high dimensional stochastic input are presented to demonstrate the efficiency of the HSFEM in the online stage.

  20. ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.

    USGS Publications Warehouse

    Safak, Erdal; Boore, David M.

    1986-01-01

    A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white-noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternate random process, filtered shot-noise process, eliminates these errors.

  1. The Traveling Salesman and Related Stochastic Problems

    NASA Astrophysics Data System (ADS)

    Percus, A. G.

    1998-03-01

    In the traveling salesman problem, one must find the length of the shortest closed tour visiting given "cities". We study the stochastic version of the problem, taking the locations of cities and the distances separating them to be random variables drawn from an ensemble. We consider first the ensemble where cities are placed in Euclidean space. We investigate how the optimum tour length scales with number of cities and with number of spatial dimensions. We then examine the analytical theory behind the random link ensemble, where distances between cities are independent random variables. Finally, we look at the related geometric issue of nearest neighbor distances, and find some remarkable universalities.

  2. Prediction and statistics of pseudoknots in RNA structures using exactly clustered stochastic simulations

    PubMed Central

    Xayaphoummine, A.; Bucher, T.; Thalmann, F.; Isambert, H.

    2003-01-01

    Ab initio RNA secondary structure predictions have long dismissed helices interior to loops, so-called pseudoknots, despite their structural importance. Here we report that many pseudoknots can be predicted through long-time-scale RNA-folding simulations, which follow the stochastic closing and opening of individual RNA helices. The numerical efficacy of these stochastic simulations relies on an 𝒪(n²) clustering algorithm that computes time averages over a continuously updated set of n reference structures. Applying this exact stochastic clustering approach, we typically obtain a 5- to 100-fold simulation speed-up for RNA sequences up to 400 bases, while the effective acceleration can be as high as 10⁵-fold for short, multistable molecules (≤150 bases). We performed extensive folding statistics on random and natural RNA sequences and found that pseudoknots are distributed unevenly among RNA structures and account for up to 30% of base pairs in G+C-rich RNA sequences (online RNA-folding kinetics server including pseudoknots: http://kinefold.u-strasbg.fr). PMID:14676318

  3. Optimal sensor selection for noisy binary detection in stochastic pooling networks

    NASA Astrophysics Data System (ADS)

    McDonnell, Mark D.; Li, Feng; Amblard, P.-O.; Grant, Alex J.

    2013-08-01

    Stochastic Pooling Networks (SPNs) are a useful model for understanding and explaining how naturally occurring encoding of stochastic processes can occur in sensor systems ranging from macroscopic social networks to neuron populations and nanoscale electronics. Due to the interaction of nonlinearity, random noise, and redundancy, SPNs support various unexpected emergent features, such as suprathreshold stochastic resonance, but most existing mathematical results are restricted to the simplest case where all sensors in a network are identical. Nevertheless, numerical results on information transmission have shown that in the presence of independent noise, the optimal configuration of a SPN is such that there should be partial heterogeneity in sensor parameters, such that the optimal solution includes clusters of identical sensors, where each cluster has different parameter values. In this paper, we consider a SPN model of a binary hypothesis detection task and show mathematically that the optimal solution for a specific bound on detection performance is also given by clustered heterogeneity, such that measurements made by sensors with identical parameters either should all be excluded from the detection decision or all included. We also derive an algorithm for numerically finding the optimal solution and illustrate its utility with several examples, including a model of parallel sensory neurons with Poisson firing characteristics.

  4. Identification and stochastic control of helicopter dynamic modes

    NASA Technical Reports Server (NTRS)

    Molusis, J. A.; Bar-Shalom, Y.

    1983-01-01

    A general treatment of parameter identification and stochastic control for use on helicopter dynamic systems is presented. Rotor dynamic models, including specific applications to rotor blade flapping and the helicopter ground resonance problem, are emphasized. Dynamic systems which are governed by periodic coefficients as well as constant coefficient models are addressed. The dynamic systems are modeled by linear state variable equations which are used in the identification and stochastic control formulation. The pure identification problem as well as the stochastic control problem, which includes combined identification and control for dynamic systems, is addressed. The stochastic control problem includes the effect of parameter uncertainty on the solution and the concept of learning and how this is affected by the control's dual effect. The identification formulation requires algorithms suitable for on-line use, and thus recursive identification algorithms are considered. The applications presented use the recursive extended Kalman filter for parameter identification, which has excellent convergence for systems without process noise.

  5. Numerical simulations of piecewise deterministic Markov processes with an application to the stochastic Hodgkin-Huxley model.

    PubMed

    Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan

    2016-12-28

    The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, an accurate and efficient sampling of the time evolution of the hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of the H(t), as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm that is based on piecewise constant approximation is also presented.

  6. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    SciTech Connect

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-20

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A contained in R^d (d < n) … stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the

  7. StochPy: a comprehensive, user-friendly tool for simulating stochastic biological processes.

    PubMed

    Maarleveld, Timo R; Olivier, Brett G; Bruggeman, Frank J

    2013-01-01

    Single-cell and single-molecule measurements indicate the importance of stochastic phenomena in cell biology. Stochasticity creates spontaneous differences in the copy numbers of key macromolecules and the timing of reaction events between genetically-identical cells. Mathematical models are indispensable for the study of phenotypic stochasticity in cellular decision-making and cell survival. There is a demand for versatile, stochastic modeling environments with extensive, preprogrammed statistics functions and plotting capabilities that hide the mathematics from novice users and offer low-level programming access to experienced users. Here we present StochPy (Stochastic modeling in Python), which is a flexible software tool for stochastic simulation in cell biology. It provides various stochastic simulation algorithms, SBML support, analyses of the probability distributions of molecule copy numbers and event waiting times, analyses of stochastic time series, and a range of additional statistical functions and plotting facilities for stochastic simulations. We illustrate the functionality of StochPy with stochastic models of gene expression, cell division, and single-molecule enzyme kinetics. StochPy has been successfully tested against the SBML stochastic test suite, passing all tests. StochPy is a comprehensive software package for stochastic simulation of the molecular control networks of living cells. It allows novice and experienced users to study stochastic phenomena in cell biology. The integration with other Python software makes StochPy both a user-friendly and easily extendible simulation tool.
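
    For readers unfamiliar with the direct-method SSA that tools like StochPy implement, here is a minimal Gillespie simulation of a birth-death process. This is an illustrative sketch in plain Python, not StochPy's API, and the rate constants are arbitrary:

```python
import random

def gillespie_birth_death(k_birth=5.0, k_death=0.1, x0=0, t_final=100.0, seed=42):
    """Gillespie's direct method for a birth-death process
    (0 -> X at rate k_birth; X -> 0 at rate k_death * x).
    Returns the copy number at t_final; the stationary distribution is
    Poisson with mean k_birth / k_death."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while True:
        a1, a2 = k_birth, k_death * x   # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)        # exponential time to next reaction
        if t > t_final:
            return x
        if rng.random() * a0 < a1:      # choose a reaction proportional to rate
            x += 1
        else:
            x -= 1

samples = [gillespie_birth_death(seed=s) for s in range(50)]
mean_x = sum(samples) / len(samples)   # should be near 5.0 / 0.1 = 50
```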

  8. Stochastic Flow Modeling for Resin Transfer Moulding

    NASA Astrophysics Data System (ADS)

    Desplentere, Frederik; Verpoest, Ignaas; Lomov, Stepan

    2009-07-01

    Liquid moulding processes suffer from inherently present scatter in the textile reinforcement properties. This variability can lead to unwanted filling patterns within the mould resulting in bad parts. If thermoplastic resins are used with the in-situ polymerisation technique, an additional difficulty appears. The time window to inject the material is small if industrial processing parameters are used (<5 minutes). To model the stochastic nature of RTM, Darcy's description of the mould filling process has been used with the permeability distribution of the preform given as a random field. The random field of the permeability is constructed as a correlated field with an exponential correlation function. Optical microscopy and X-ray micro-CT have been used to study the stochastic parameters of the geometry for 2D and 3D woven textile preforms. The parameters describing the random permeability field (average, standard deviation and correlation length) are identified based on the stochastic parameters of the geometry for the preforms, analytical estimations and CFD modelling of the permeability. In order to implement the random field for the permeability and the variability for the resin viscosity, an add-on to the mould filling simulation software PAM-RTM™ has been developed. This analysis has been validated on case studies.

  9. Multivariable integration method for estimating sea surface salinity in coastal waters from in situ data and remotely sensed data using random forest algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Meiling; Liu, Xiangnan; Liu, Da; Ding, Chao; Jiang, Jiale

    2015-02-01

    A random forest (RF) model was created to estimate sea surface salinity (SSS) in the Hong Kong Sea, China, by integrating in situ and remotely sensed data. Optical remotely sensed data from China's HJ-1 satellite and in situ data were collected. The prediction model of salinity was developed by in situ environmental variables in the ocean, namely sea surface temperature (SST), pH, total inorganic nitrogen (TIN) and Chl-a, which are strongly related to SSS according to Pearson's correlation analysis. The large-scale SSS was estimated using the established salinity model with the same input parameters. The ordinary kriging interpolation using in situ data and the retrieval model based on remotely sensed data were developed to obtain the large-scale input parameters of the model. The different number of trees in the forest (ntree) and the number of features at each node (mtry) were adjusted in the RF model. The results showed that an optimum RF model was obtained with mtry=32 and ntree=2000, and the most important variable of the model for SSS prediction was SST, followed by TIN, Chl-a and pH. Such an RF model was successful in evaluating the temporal-spatial distribution of SSS and had a relatively low estimation error. The root mean square error (RMSE) was less than 2.0 psu, the mean absolute error (MAE) was below 1.5 psu, and the absolute percent error (APE) was lower than 5%. The final RF salinity model was then compared with a multiple linear regression model (MLR), a back-propagation artificial neural network model, and a classification and regression trees (CART) model. The RF had a lower estimation error than the other three models. In addition, the RF model remained effective across different periods, suggesting that it could be applied more generally. This demonstrated that the RF algorithm has the capability to estimate SSS in coastal waters by integrating in situ and remotely sensed data.
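
    The bagging idea underlying an RF regressor can be sketched with depth-1 trees (stumps) in place of full regression trees. This is a toy analogue on hypothetical step-shaped data, not the study's HJ-1/in situ dataset or its mtry/ntree tuning:

```python
import random

def fit_stump(data):
    """Best single-split regression stump on (x, y) pairs:
    returns (threshold, left_mean, right_mean), or None if no valid split."""
    best, best_err = None, float("inf")
    for thr, _ in data:
        left = [y for x, y in data if x <= thr]
        right = [y for x, y in data if x > thr]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - lm) ** 2 for x, y in data if x <= thr)
               + sum((y - rm) ** 2 for x, y in data if x > thr))
        if err < best_err:
            best, best_err = (thr, lm, rm), err
    return best

def random_forest(data, n_trees=50, seed=0):
    """Toy forest: bootstrap-resample the data, fit a stump to each
    resample, and predict by averaging the ensemble."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        boot = [rng.choice(data) for _ in data]
        s = fit_stump(boot)
        if s:
            stumps.append(s)
    def predict(x):
        preds = [(lm if x <= thr else rm) for thr, lm, rm in stumps]
        return sum(preds) / len(preds)
    return predict

# hypothetical salinity-vs-predictor data with a step at x = 5
data = [(x, 30.0 if x <= 5 else 34.0) for x in range(11)]
predict = random_forest(data)
```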

  10. Efficient simulation of stochastic chemical kinetics with the Stochastic Bulirsch-Stoer extrapolation method

    PubMed Central

    2014-01-01

    Background Biochemical systems with relatively low numbers of components must be simulated stochastically in order to capture their inherent noise. Although there has recently been considerable work on discrete stochastic solvers, there is still a need for numerical methods that are both fast and accurate. The Bulirsch-Stoer method is an established method for solving ordinary differential equations that possesses both of these qualities. Results In this paper, we present the Stochastic Bulirsch-Stoer method, a new numerical method for simulating discrete chemical reaction systems, inspired by its deterministic counterpart. It is able to achieve an excellent efficiency due to the fact that it is based on an approach with high deterministic order, allowing for larger stepsizes and leading to fast simulations. We compare it to the Euler τ-leap, as well as two more recent τ-leap methods, on a number of example problems, and find that as well as being very accurate, our method is the most robust, in terms of efficiency, of all the methods considered in this paper. The problems it is most suited for are those with increased populations that would be too slow to simulate using Gillespie’s stochastic simulation algorithm. For such problems, it is likely to achieve higher weak order in the moments. Conclusions The Stochastic Bulirsch-Stoer method is a novel stochastic solver that can be used for fast and accurate simulations. Crucially, compared to other similar methods, it better retains its high accuracy when the timesteps are increased. Thus the Stochastic Bulirsch-Stoer method is both computationally efficient and robust. These are key properties for any stochastic numerical method, as they must typically run many thousands of simulations. PMID:24939084
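
    The tau-leap family of methods that the Stochastic Bulirsch-Stoer solver is compared against advances time in fixed steps and fires a Poisson number of each reaction per step, rather than simulating every event. A minimal Euler tau-leap for a birth-death process (an illustrative sketch with arbitrary parameters, not the paper's method):

```python
import math
import random

def poisson_knuth(rng, lam):
    """Sample a Poisson(lam) variate via Knuth's product-of-uniforms method
    (adequate for the small per-step means used here)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap_birth_death(k_birth=5.0, k_death=0.1, x0=0, t_final=100.0,
                         tau=0.1, seed=7):
    """Euler tau-leap: over each step of length tau, fire Poisson(rate * tau)
    copies of each reaction, clamping the population at zero."""
    rng = random.Random(seed)
    x = x0
    for _ in range(int(t_final / tau)):
        births = poisson_knuth(rng, k_birth * tau)
        deaths = poisson_knuth(rng, k_death * x * tau)
        x = max(0, x + births - deaths)
    return x

vals = [tau_leap_birth_death(seed=s) for s in range(30)]
mean_x = sum(vals) / len(vals)   # stationary mean is k_birth / k_death = 50
```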

  11. Planning with Continuous Resources in Stochastic Domains

    NASA Technical Reports Server (NTRS)

    Mausam, Mausau; Benazera, Emmanuel; Brafman, Roneu; Hansen, Eric

    2005-01-01

    We consider the problem of optimal planning in stochastic domains with metric resource constraints. Our goal is to generate a policy whose expected sum of rewards is maximized for a given initial state. We consider a general formulation motivated by our application domain--planetary exploration--in which the choice of an action at each step may depend on the current resource levels. We adapt the forward search algorithm AO* to handle our continuous state space efficiently.

  12. Stochastic Prognostics for Rolling Element Bearings

    NASA Astrophysics Data System (ADS)

    Li, Y.; Kurfess, T. R.; Liang, S. Y.

    2000-09-01

    The capability to accurately predict the remaining life of a rolling element bearing is prerequisite to the optimal maintenance of rotating machinery performance in terms of cost and productivity. Due to the probabilistic nature of bearing integrity and operation condition, reliable estimation of a bearing's remaining life presents a challenging aspect in the area of maintenance optimisation and catastrophic failure avoidance. A previous study developed an adaptive prognostic methodology to estimate the rate of bearing defect growth based on a deterministic defect-propagation model. However, deterministic models are inadequate in addressing the stochastic nature of defect-propagation. In this paper, a stochastic defect-propagation model is established by instituting a lognormal random variable in a deterministic defect-propagation rate model. The resulting stochastic model is calibrated on-line by a recursive least-squares (RLS) approach without the requirement of a priori knowledge on bearing characteristics. An augmented stochastic differential equation vector is developed with the consideration of model uncertainties, parameter estimation errors, and diagnostic model inaccuracies. It involves two ordinary differential equations for the first and second moments of its random variables. Solving the two equations gives the mean path of defect propagation and its dispersion at any instant. This approach is suitable for on-line monitoring, remaining life prediction, and decision making for optimal maintenance scheduling. The methodology has been verified by numerical simulations and the experimental testing of bearing fatigue life.

  13. Stochastic Galerkin methods for the steady-state Navier–Stokes equations

    SciTech Connect

    Sousedík, Bedřich; Elman, Howard C.

    2016-07-01

    We study the steady-state Navier–Stokes equations in the context of stochastic finite element discretizations. Specifically, we assume that the viscosity is a random field given in the form of a generalized polynomial chaos expansion. For the resulting stochastic problem, we formulate the model and linearization schemes using Picard and Newton iterations in the framework of the stochastic Galerkin method, and we explore properties of the resulting stochastic solutions. We also propose a preconditioner for solving the linear systems of equations arising at each step of the stochastic (Galerkin) nonlinear iteration and demonstrate its effectiveness for solving a set of benchmark problems.

  14. A Deep Stochastic Model for Detecting Community in Complex Networks

    NASA Astrophysics Data System (ADS)

    Fu, Jingcheng; Wu, Jianliang

    2017-01-01

    Discovering community structures is an important step to understanding the structure and dynamics of real-world networks in social science, biology and technology. In this paper, we develop a deep stochastic model based on non-negative matrix factorization to identify communities, in which there are two sets of parameters. One is the community membership matrix, whose elements in a row give the probabilities that the given node belongs to each of the given number of communities in our model; the other is the community-community connection matrix, whose element in the i-th row and j-th column represents the probability of there being an edge between a randomly chosen node from the i-th community and a randomly chosen node from the j-th community. The parameters can be evaluated by an efficient updating rule, and its convergence can be guaranteed. The community-community connection matrix in our model is more precise than that in traditional non-negative matrix factorization methods. Furthermore, the method called symmetric non-negative matrix factorization is a special case of our model. Finally, experiments on both synthetic and real-world network data demonstrate that our algorithm is highly effective in detecting communities.
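
    The special case the authors mention, symmetric non-negative matrix factorization, can be sketched with a standard damped multiplicative update on a toy graph of two triangles joined by one edge. This illustrates the factorization idea only; it is not the paper's deep model or its updating rule, and the iteration count is an arbitrary choice:

```python
import numpy as np

def sym_nmf(A, k, n_iter=300, seed=0):
    """Symmetric NMF A ~ H H^T via the damped multiplicative update
    H <- H * 0.5 * (1 + (A H) / (H H^T H)); rows of H act as soft
    community memberships."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    H = rng.random((n, k)) + 0.1          # positive initialization
    for _ in range(n_iter):
        num = A @ H
        den = H @ (H.T @ H) + 1e-9        # small constant avoids div-by-zero
        H *= 0.5 * (1.0 + num / den)
    return H

# adjacency matrix: two 3-node cliques joined by the edge (2, 3)
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
H = sym_nmf(A, 2)
labels = H.argmax(axis=1)   # hard community assignment per node
```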

  15. Estimating stepwise debromination pathways of polybrominated diphenyl ethers with an analogue Markov Chain Monte Carlo algorithm.

    PubMed

    Zou, Yonghong; Christensen, Erik R; Zheng, Wei; Wei, Hua; Li, An

    2014-11-01

    A stochastic process was developed to simulate the stepwise debromination pathways for polybrominated diphenyl ethers (PBDEs). The stochastic process uses an analogue Markov Chain Monte Carlo (AMCMC) algorithm to generate PBDE debromination profiles. The acceptance or rejection of the randomly drawn stepwise debromination reactions was determined by a maximum likelihood function. The experimental observations at certain time points were used as target profiles; therefore, the stochastic processes are capable of presenting the effects of reaction conditions on the selection of debromination pathways. The application of the model is illustrated by adopting the experimental results of decabromodiphenyl ether (BDE209) in hexane exposed to sunlight. Inferences that were not obvious from experimental data were suggested by model simulations. For example, BDE206 showed much higher accumulation in the first 30 min of sunlight exposure. By contrast, model simulation suggests that BDE206 and BDE207 had comparable yields from BDE209. The reason for the higher BDE206 level is that BDE207 has the highest depletion in producing octa products. Compared to a previous version of the stochastic model based on stochastic reaction sequences (SRS), the AMCMC approach was determined to be more efficient and robust. Due to the feature of only requiring experimental observations as input, the AMCMC model is expected to be applicable to a wide range of PBDE debromination processes, e.g. microbial, photolytic, or joint effects in natural environments.
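
    The accept/reject logic at the heart of any Metropolis-type sampler, including likelihood-guided MCMC schemes like the one described above, can be shown on a generic 1-D target. This is a minimal sketch: the paper's likelihood over debromination pathways is replaced here by a standard normal density, and the proposal scale is an arbitrary assumption:

```python
import math
import random

def metropolis(logp, x0, proposal_sd=1.0, n_samples=20000, seed=3):
    """Generic Metropolis sampler: propose a random move and accept it with
    probability min(1, p(x') / p(x)); otherwise keep the current state."""
    rng = random.Random(seed)
    x = x0
    out = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, proposal_sd)            # random proposal
        if math.log(rng.random() + 1e-300) < logp(xp) - logp(x):
            x = xp                                      # accept
        out.append(x)                                   # reject keeps x
    return out

# target: standard normal, log p(x) = -x^2 / 2 up to a constant
chain = metropolis(lambda x: -0.5 * x * x, 0.0)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)  # should be near 1
```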

  16. Primal and Dual Integrated Force Methods Used for Stochastic Analysis

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.

    2005-01-01

    At the NASA Glenn Research Center, the primal and dual integrated force methods are being extended for the stochastic analysis of structures. The stochastic simulation can be used to quantify the consequence of scatter in stress and displacement response because of a specified variation in input parameters such as load (mechanical, thermal, and support settling loads), material properties (strength, modulus, density, etc.), and sizing design variables (depth, thickness, etc.). All the parameters are modeled as random variables with given probability distributions, means, and covariances. The stochastic response is formulated through a quadratic perturbation theory, and it is verified through a Monte Carlo simulation.
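The perturbation-versus-Monte-Carlo verification described above can be illustrated on a toy one-bar structure (all parameter values hypothetical; a first-order perturbation is used here rather than the quadratic theory of the paper):

```python
import numpy as np

def displacement(P, E, L=1.0, A=1e-4):
    """Tip displacement of an axial bar: delta = P * L / (A * E)."""
    return P * L / (A * E)

def perturbation_stats(muP, sP, muE, sE, L=1.0, A=1e-4):
    """First-order perturbation estimate of the mean and standard deviation
    of the displacement for independent random load P and modulus E."""
    d0 = displacement(muP, muE, L, A)
    dP = L / (A * muE)                 # sensitivity to load
    dE = -muP * L / (A * muE ** 2)     # sensitivity to modulus
    var = (dP * sP) ** 2 + (dE * sE) ** 2
    return d0, np.sqrt(var)

def monte_carlo_stats(muP, sP, muE, sE, n=200_000, seed=0):
    """Monte Carlo check: sample the inputs and evaluate the response."""
    rng = np.random.default_rng(seed)
    P = rng.normal(muP, sP, n)
    E = rng.normal(muE, sE, n)
    d = displacement(P, E)
    return d.mean(), d.std()
```

For small input coefficients of variation the two estimates agree closely, which is exactly the kind of cross-verification between perturbation theory and Monte Carlo simulation that the abstract describes.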

  17. On co-design of filter and fault estimator against randomly occurring nonlinearities and randomly occurring deception attacks

    NASA Astrophysics Data System (ADS)

    Hu, Jun; Liu, Steven; Ji, Donghai; Li, Shanqiang

    2016-07-01

    In this paper, the co-design problem of filter and fault estimator is studied for a class of time-varying non-linear stochastic systems subject to randomly occurring nonlinearities and randomly occurring deception attacks. Two mutually independent random variables obeying the Bernoulli distribution are employed to characterize the phenomena of the randomly occurring nonlinearities and randomly occurring deception attacks, respectively. By using the augmentation approach, the co-design problem of the robust filter and fault estimator is converted into the recursive filter design problem. A new compensation scheme is proposed such that, for both randomly occurring nonlinearities and randomly occurring deception attacks, an upper bound of the filtering error covariance is obtained and such an upper bound is minimized by properly designing the filter gain at each sampling instant. Moreover, the explicit form of the filter gain is given based on the solution to two Riccati-like difference equations. It is shown that the proposed co-design algorithm is of a recursive form that is suitable for online computation. Finally, a simulation example is given to illustrate the usefulness of the developed filtering approach.
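The role of the Bernoulli attack indicator and the covariance upper bound can be illustrated with a scalar analogue (not the paper's co-design scheme; the zero-mean ±d attack and the inflation of the measurement noise by its expected energy are our simplifications):

```python
import numpy as np

def simulate_filter(p_attack=0.2, n=200, seed=0):
    """Scalar sketch: x_{k+1} = a x_k + w_k, y_k = x_k + v_k + alpha_k s_k,
    where alpha_k ~ Bernoulli(p_attack) switches on a deception signal
    s_k = +/- d. The filter inflates its measurement-noise term by the
    expected attack energy, giving a Riccati-like recursion for a bound P
    on the filtering error covariance."""
    rng = np.random.default_rng(seed)
    a, q, r, d = 0.9, 0.04, 0.01, 1.0         # dynamics, noise variances, attack size
    r_eff = r + p_attack * d ** 2             # attack absorbed as extra noise
    x, xhat, P = 0.0, 0.0, 1.0
    err2 = []
    for _ in range(n):
        x = a * x + rng.normal(0.0, np.sqrt(q))
        y = x + rng.normal(0.0, np.sqrt(r))
        if rng.random() < p_attack:           # Bernoulli attack indicator
            y += d * rng.choice([-1.0, 1.0])  # zero-mean deception signal
        xhat, P = a * xhat, a * a * P + q     # prediction step
        K = P / (P + r_eff)                   # recursive gain from the P-recursion
        xhat += K * (y - xhat)
        P *= (1.0 - K)                        # update the covariance bound
        err2.append((x - xhat) ** 2)
    return float(np.mean(err2)), P
```

Because the inflated noise term matches the attack's second moment, the empirical mean-squared error tracks the recursion's steady-state P, mirroring the bounded-covariance guarantee described above.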

  18. Research in Stochastic Processes.

    DTIC Science & Technology

    1985-09-01

    appear. G. Kallianpur, Finitely additive approach to nonlinear filtering, Proc. Bernoulli Soc. Conf. on Stochastic Processes, T. Hida, ed., Springer, to...Nov. 85, in Proc. Bernoulli Soc. Conf. on Stochastic Processes, T. Hida, ed., Springer, to appear. In preparation: T. Hsing, Extreme value theory for...1507 Carroll, R.J., Spiegelman, C.H., Lan, K.K.G., Bailey, K.T. and Abbott, R.D., Errors-in-variables for binary regression models, Aug. 82. 1508

  19. Exact event-driven implementation for recurrent networks of stochastic perfect integrate-and-fire neurons.

    PubMed

    Taillefumier, Thibaud; Touboul, Jonathan; Magnasco, Marcelo

    2012-12-01

    In vivo cortical recordings reveal that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that, despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models provide a natural theoretical framework for studying the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their network dynamics in silico with standard numerical discretization schemes: the well-posedness of the network evolution requires temporally ordering every neuronal interaction, whereas that order is highly sensitive to the random variability of spiking times. Here, we resolve these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks with delayed Dirac-like interactions. In addition to being exact from the mathematical standpoint, the proposed method is highly efficient numerically. We envision that our algorithm will be especially useful for studying the emergence of polychronized motifs in networks evolving under spike-timing-dependent plasticity with intrinsic noise.
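The event-driven idea can be sketched as follows. For a perfect integrate-and-fire neuron whose voltage is Brownian motion with drift, the first-passage time to threshold is inverse-Gaussian distributed, so spike times can be drawn directly and kept in a priority queue together with delayed synaptic deliveries. This is a simplified sketch, not the paper's exact algorithm: it tracks only the mean voltage between events and resamples passage times without conditioning on "no crossing yet," which is precisely the subtlety the exact method handles. All parameter values are illustrative:

```python
import heapq
import numpy as np

def event_driven_pif(n_neurons=5, w=0.1, delay=0.5, t_max=50.0,
                     mu=0.2, sigma=0.3, theta=1.0, seed=0):
    rng = np.random.default_rng(seed)
    v = np.zeros(n_neurons)               # tracked mean voltages since last reset
    t_last = np.zeros(n_neurons)          # time of each neuron's last state change
    gen = np.zeros(n_neurons, dtype=int)  # generation counter invalidating stale spikes
    queue = []                            # priority queue of (time, kind, neuron, generation)

    def schedule_spike(i, now):
        # first-passage time of drifted Brownian motion: inverse Gaussian
        gap = theta - v[i]
        heapq.heappush(queue, (now + rng.wald(gap / mu, (gap / sigma) ** 2),
                               'spike', i, gen[i]))

    for i in range(n_neurons):
        schedule_spike(i, 0.0)
    spikes = []
    while queue:
        t, kind, i, g = heapq.heappop(queue)
        if t > t_max:
            break
        if kind == 'spike':
            if g != gen[i]:
                continue                  # stale event: neuron was perturbed since
            spikes.append((t, i))
            v[i], t_last[i] = 0.0, t      # reset after spike
            gen[i] += 1
            schedule_spike(i, t)
            for j in range(n_neurons):    # all-to-all delayed Dirac-like interactions
                if j != i:
                    heapq.heappush(queue, (t + delay, 'input', j, -1))
        else:
            # advance the mean voltage by the drift, then apply the synaptic jump
            v[i] = min(v[i] + mu * (t - t_last[i]) + w, theta - 1e-9)
            t_last[i] = t
            gen[i] += 1                   # invalidate the previously scheduled spike
            schedule_spike(i, t)
    return spikes
```

The queue guarantees that interactions are processed in temporal order, which is the ordering requirement the abstract identifies as the core difficulty for discretized simulations.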

  20. Stochastic behavior of nanoscale dielectric wall buckling

    PubMed Central

    Friedman, Lawrence H.; Levin, Igor; Cook, Robert F.

    2016-01-01

    The random buckling patterns of nanoscale dielectric walls are analyzed using a nonlinear multiscale stochastic method that combines experimental measurements with simulations. The dielectric walls, approximately 200 nm tall and 20 nm wide, consist of compliant, low dielectric constant (low-k) fins capped with stiff, compressively stressed TiN lines that provide the driving force for buckling. The deflections of the buckled lines exhibit sinusoidal pseudoperiodicity with amplitude fluctuation and phase decorrelation arising from stochastic variations in wall geometry, properties, and stress state at length scales shorter than the characteristic deflection wavelength of about 1000 nm. The buckling patterns are analyzed and modeled at two length scales: a longer scale (up to 5000 nm) that treats randomness as a measurable quantity, and a shorter scale (down to 20 nm) that treats buckling as a deterministic phenomenon. Statistical simulation is used to join the two length scales. Through this approach, the buckling model is validated and material properties and stress states are inferred. In particular, the stress state of TiN lines in three different systems is determined, along with the elastic moduli of low-k fins and the amplitudes of the small-scale random fluctuations in wall properties, all in the as-processed state. The important case of stochastic effects giving rise to buckling in a deterministically sub-critical buckling state is demonstrated. The nonlinear multiscale stochastic analysis provides guidance for design of low-k structures with acceptable buckling behavior and serves as a template for how randomness that is common to nanoscale phenomena might be measured and analyzed in other contexts. PMID:27330220
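The "sinusoidal pseudoperiodicity with amplitude fluctuation and phase decorrelation" described above can be mimicked by a sinusoid whose amplitude and phase random-walk along the wall. This is a purely illustrative generator (all parameter values hypothetical), useful for testing the kind of statistical analysis the abstract describes:

```python
import numpy as np

def pseudoperiodic_buckle(n=5000, dx=1.0, wavelength=1000.0,
                          amp=10.0, sigma_a=0.2, sigma_p=0.01, seed=0):
    """Generate a pseudoperiodic deflection profile y(x), x in nm:
    a sinusoid of nominal wavelength ~1000 nm whose log-amplitude and
    phase each follow a slow random walk, producing amplitude fluctuation
    and gradual phase decorrelation along the wall."""
    rng = np.random.default_rng(seed)
    x = np.arange(n) * dx
    # slowly varying random-walk perturbations of log-amplitude and phase
    a = amp * np.exp(np.cumsum(rng.normal(0.0, sigma_a * np.sqrt(dx / wavelength), n)))
    phi = np.cumsum(rng.normal(0.0, sigma_p * np.sqrt(dx), n))
    return x, a * np.sin(2 * np.pi * x / wavelength + phi)
```

Profiles drawn this way stay locally sinusoidal but lose phase coherence over distances of several wavelengths, matching the qualitative picture in the abstract.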