NASA Astrophysics Data System (ADS)
Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.
2016-12-01
This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the εk-global minimization of a bound constrained optimization subproblem, where εk → ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.
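For intuition, a smooth hyperbolic penalty of the kind described above can be sketched in a few lines of Python (a toy one-dimensional illustration with assumed parameter values, not the paper's algorithm):

```python
import numpy as np

def hyperbolic_penalty(g, lam, tau):
    # Smooth hyperbolic penalty for a constraint g(x) <= 0:
    # roughly 0 when g < 0, roughly 2*lam*g when g > 0, smoothed by tau.
    return lam * g + np.sqrt((lam * g) ** 2 + tau ** 2)

def penalized(x, f, g, lam, tau):
    # Penalized objective: original function plus the penalty term
    return f(x) + hyperbolic_penalty(g(x), lam, tau)

# Toy problem: minimize (x - 2)^2 subject to x <= 1 (solution x* = 1)
f = lambda x: (x - 2.0) ** 2
g = lambda x: x - 1.0

grid = np.linspace(-2.0, 3.0, 5001)
best = min(grid, key=lambda x: penalized(x, f, g, lam=1.0, tau=1e-3))
```

The grid search stands in for the ε-global minimization of the bound constrained subproblem; in the paper that step is performed by the artificial fish swarm metaheuristic.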
Essays on variational approximation techniques for stochastic optimization problems
NASA Astrophysics Data System (ADS)
Deride Silva, Julio A.
This dissertation presents five essays on approximation and modeling techniques, based on variational analysis, applied to stochastic optimization problems. It is divided into two parts, where the first is devoted to equilibrium problems and maxinf optimization, and the second corresponds to two essays in statistics and uncertainty modeling. Stochastic optimization lies at the core of this research as we were interested in relevant equilibrium applications that contain an uncertain component, and the design of a solution strategy. In addition, every stochastic optimization problem relies heavily on the underlying probability distribution that models the uncertainty. We studied these distributions, in particular, their design process and theoretical properties such as their convergence. Finally, the last aspect of stochastic optimization that we covered is the scenario creation problem, in which we described a procedure based on a probabilistic model to create scenarios for the applied problem of power estimation of renewable energies. In the first part, Equilibrium problems and maxinf optimization, we considered three Walrasian equilibrium problems: from economics, we studied a stochastic general equilibrium problem in a pure exchange economy, described in Chapter 3, and a stochastic general equilibrium with financial contracts, in Chapter 4; finally from engineering, we studied an infrastructure planning problem in Chapter 5. We stated these problems as belonging to the maxinf optimization class and, in each instance, we provided an approximation scheme based on the notion of lopsided convergence and non-concave duality. This strategy is the foundation of the augmented Walrasian algorithm, whose convergence is guaranteed by lopsided convergence, that was implemented computationally, obtaining numerical results for relevant examples. 
The second part, Essays about statistics and uncertainty modeling, contains two essays covering a convergence problem for a sequence of estimators, and a problem of creating probabilistic scenarios for renewable energy estimation. In Chapter 7 we revisited one of the "folk theorems" in statistics, in which a family of Bayes estimators under 0-1 loss functions is claimed to converge to the maximum a posteriori estimator. This assertion is studied within the scope of hypo-convergence theory, with the density functions included in the class of upper semicontinuous functions. We conclude this chapter with an example in which the convergence does not hold, and we provide sufficient conditions that guarantee convergence. The last chapter, Chapter 8, addresses the important topic of creating probabilistic scenarios for solar power generation. Scenarios are a fundamental input for the stochastic optimization problem of energy dispatch, especially when incorporating renewables. We proposed a model designed to capture the constraints induced by physical characteristics of the variables, based on the application of epi-spline density estimation along with a copula estimation to account for partial correlations between variables.
Asymptotic problems for stochastic partial differential equations
NASA Astrophysics Data System (ADS)
Salins, Michael
Stochastic partial differential equations (SPDEs) can be used to model systems in a wide variety of fields including physics, chemistry, and engineering. The main SPDEs of interest in this dissertation are the semilinear stochastic wave equations which model the movement of a material with constant mass density that is exposed to both deterministic and random forcing. Cerrai and Freidlin have shown that on fixed time intervals, as the mass density of the material approaches zero, the solutions of the stochastic wave equation converge uniformly to the solutions of a stochastic heat equation, in probability. This is called the Smoluchowski-Kramers approximation. In Chapter 2, we investigate some of the multi-scale behaviors that these wave equations exhibit. In particular, we show that the Freidlin-Wentzell exit place and exit time asymptotics for the stochastic wave equation in the small noise regime can be approximated by the exit place and exit time asymptotics for the stochastic heat equation. We prove that the exit time and exit place asymptotics are characterized by quantities called quasipotentials and we prove that the quasipotentials converge. We then investigate the special case where the equation has a gradient structure and show that we can explicitly solve for the quasipotentials, and that the quasipotentials for the heat equation and wave equation are equal. In Chapter 3, we study the Smoluchowski-Kramers approximation in the case where the material is electrically charged and exposed to a magnetic field. Interestingly, if the system is frictionless, then the Smoluchowski-Kramers approximation does not hold. We prove that the Smoluchowski-Kramers approximation is valid for systems exposed to both a magnetic field and friction. Notably, we prove that the solutions to the second-order equations converge to the solutions of the first-order equation in an Lp sense. This strengthens previous results where convergence was proved in probability.
RES: Regularized Stochastic BFGS Algorithm
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
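A minimal sketch of a regularized stochastic BFGS iteration in the spirit described above (the quadratic objective, noise level, step sizes and regularization constants are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stochastic quadratic objective f(x) = E[0.5 x'Ax - b'x], observed
# only through noisy gradients (hypothetical test problem).
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])

def stoch_grad(x):
    return A @ x - b + 0.01 * rng.standard_normal(2)

def res_sbfgs(x, steps=500, delta=1e-3, gamma=1e-3, eps=0.05):
    n = len(x)
    B = np.eye(n)                               # Hessian approximation
    g = stoch_grad(x)
    for t in range(steps):
        # Regularized descent direction: quasi-Newton plus a small gradient term
        d = -np.linalg.solve(B, g) - gamma * g
        x_new = x + eps / (1.0 + 0.01 * t) * d  # decaying step size
        g_new = stoch_grad(x_new)
        s, y = x_new - x, g_new - g
        y_hat = y - delta * s                   # regularized curvature pair
        if s @ y_hat > 1e-12:
            Bs = B @ s
            B = (B - np.outer(Bs, Bs) / (s @ Bs)
                 + np.outer(y_hat, y_hat) / (s @ y_hat) + delta * np.eye(n))
        x, g = x_new, g_new
    return x

x_star = np.linalg.solve(A, b)   # exact minimizer of the mean objective
x_hat = res_sbfgs(np.zeros(2))
```

The δ-shift of the curvature pair and the added δI keep the eigenvalues of the Hessian approximation bounded away from zero, which is the kind of condition the convergence analysis relies on.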
Stochastic model of cell rearrangements in convergent extension of ascidian notochord
NASA Astrophysics Data System (ADS)
Lubkin, Sharon; Backes, Tracy; Latterman, Russell; Small, Stephen
2007-03-01
We present a discrete, stochastic, cell-based model of convergent extension of the ascidian notochord. Our work derives from research that clarifies the coupling of invagination and convergent extension in ascidian notochord morphogenesis (Odell and Munro, 2002). We have tested the roles of cell-cell adhesion, cell-extracellular matrix adhesion, random motion, and extension of individual cells, as well as the presence or absence of various tissue types, and determined which factors are necessary and/or sufficient for convergent extension.
Projection scheme for a reflected stochastic heat equation with additive noise
NASA Astrophysics Data System (ADS)
Higa, Arturo Kohatsu; Pettersson, Roger
2005-02-01
We consider a projection scheme as a numerical solution of a reflected stochastic heat equation driven by a space-time white noise. Convergence is obtained via a discrete contraction principle and known convergence results for numerical solutions of parabolic variational inequalities.
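The projection idea can be illustrated with a simple finite-difference scheme in one space dimension (the paper analyzes a projection scheme abstractly; the explicit discretization below and its parameters are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Explicit Euler finite differences for du = u_xx dt + dW on (0,1),
# with a projection step enforcing the reflection constraint u >= 0.
N = 50
dx = 1.0 / N
dt = 0.4 * dx * dx               # CFL-stable step for the explicit scheme
steps = int(0.1 / dt)
x = np.linspace(0.0, 1.0, N + 1)
u = np.sin(np.pi * x)            # initial profile, zero at the boundary

for _ in range(steps):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    noise = rng.standard_normal(N + 1) * np.sqrt(dt / dx)  # space-time white noise
    u = u + dt * lap + noise
    u[0] = u[-1] = 0.0           # Dirichlet boundary conditions
    u = np.maximum(u, 0.0)       # projection onto the constraint set {u >= 0}
```

The projection after each noisy diffusion step is the discrete analogue of the reflection term in the continuous equation.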
On convergence of the unscented Kalman-Bucy filter using contraction theory
NASA Astrophysics Data System (ADS)
Maree, J. P.; Imsland, L.; Jouffroy, J.
2016-06-01
Contraction theory entails a theoretical framework in which convergence of a nonlinear system can be analysed differentially in an appropriate contraction metric. This paper is concerned with utilising stochastic contraction theory to establish exponential convergence of the unscented Kalman-Bucy filter. The underlying process and measurement models of interest are Itô-type stochastic differential equations. In particular, statistical linearisation techniques are employed in a virtual-actual systems framework to establish deterministic contraction of the estimated expected mean of process values. Under mild conditions of bounded process noise, we extend the results on deterministic contraction to stochastic contraction of the estimated expected mean of the process state. It follows that for the regions of contraction, a result on convergence, and thereby incremental stability, is obtained for the unscented Kalman-Bucy filter. The theoretical concepts are illustrated in two case studies.
Gerencsér, Máté; Jentzen, Arnulf; Salimova, Diyora
2017-11-01
In a recent article (Jentzen et al. 2016 Commun. Math. Sci. 14, 1477-1500 (doi:10.4310/CMS.2016.v14.n6.a1)), it has been established that, for every arbitrarily slow convergence speed and every natural number d ∈ {4,5,…}, there exist d-dimensional stochastic differential equations with infinitely often differentiable and globally bounded coefficients such that no approximation method based on finitely many observations of the driving Brownian motion can converge in absolute mean to the solution faster than the given speed of convergence. In this paper, we strengthen the above result by proving that this slow convergence phenomenon also arises in two (d = 2) and three (d = 3) space dimensions.
NASA Astrophysics Data System (ADS)
Wan, Li; Zhou, Qinghua
2007-10-01
The stability property of stochastic hybrid bidirectional associate memory (BAM) neural networks with discrete delays is considered. Without assuming the symmetry of synaptic connection weights and the monotonicity and differentiability of activation functions, the delay-independent sufficient conditions to guarantee the exponential stability of the equilibrium solution for such networks are given by using the nonnegative semimartingale convergence theorem.
Laws of Large Numbers and Langevin Approximations for Stochastic Neural Field Equations
2013-01-01
In this study, we consider limit theorems for microscopic stochastic models of neural fields. We show that the Wilson–Cowan equation can be obtained as the limit in uniform convergence on compacts in probability for a sequence of microscopic models when the number of neuron populations distributed in space and the number of neurons per population tend to infinity. This result also allows us to obtain limits for qualitatively different stochastic convergence concepts, e.g., convergence in the mean. Further, we present a central limit theorem for the martingale part of the microscopic models which, suitably re-scaled, converges to a centred Gaussian process with independent increments. These two results provide the basis for presenting the neural field Langevin equation, a stochastic differential equation taking values in a Hilbert space, which is the infinite-dimensional analogue of the chemical Langevin equation in the present setting. On a technical level, we apply recently developed laws of large numbers and central limit theorems for piecewise deterministic processes taking values in Hilbert spaces to a master equation formulation of stochastic neuronal network models. These theorems are valid for processes taking values in Hilbert spaces, and are thereby able to incorporate spatial structures of the underlying model. Mathematics Subject Classification (2000): 60F05, 60J25, 60J75, 92C20. PMID:23343328
Stochastic optimization algorithms for barrier dividend strategies
NASA Astrophysics Data System (ADS)
Yin, G.; Song, Q. S.; Yang, H.
2009-01-01
This work focuses on finding an optimal barrier policy for an insurance risk model when the dividends are paid to the shareholders according to a barrier strategy. A new approach based on stochastic optimization methods is developed. Compared with the existing results in the literature, more general surplus processes are considered. Precise models of the surplus need not be known; only noise-corrupted observations of the dividends are used. Using barrier-type strategies, a class of stochastic optimization algorithms is developed. Convergence of the algorithm is analyzed; the rate of convergence is also provided. Numerical results are reported to demonstrate the performance of the algorithm.
Barnett, Jason; Watson, Jean -Paul; Woodruff, David L.
2016-11-27
Progressive hedging (PH), though an effective heuristic for solving stochastic mixed integer programs (SMIPs), is not guaranteed to converge in this case. Here, we describe BBPH, a branch-and-bound algorithm that uses PH at each node in the search tree such that, given sufficient time, it will always converge to a globally optimal solution. In addition to providing a theoretically convergent "wrapper" for PH applied to SMIPs, computational results demonstrate that for some difficult problem instances branch and bound can find improved solutions after exploring only a few nodes.
Study on the threshold of a stochastic SIR epidemic model and its extensions
NASA Astrophysics Data System (ADS)
Zhao, Dianli
2016-09-01
This paper provides a simple but effective method for estimating the threshold of a class of stochastic epidemic models by use of the nonnegative semimartingale convergence theorem. Firstly, the threshold R0SIR is obtained for the stochastic SIR model with a saturated incidence rate; whether its value is below or above 1 completely determines whether the disease goes extinct or prevails, for any intensity of the white noise. Besides, when R0SIR > 1, the system is proved to be convergent in time mean. Then, the thresholds of the stochastic SIVS models with or without a saturated incidence rate are also established by the same method. Compared with the existing literature, the related results are improved, and the method is simpler than before.
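As an illustration of the kind of model analyzed above, here is an Euler-Maruyama simulation of a stochastic SIR model with saturated incidence. The parameter values and the displayed threshold expression are illustrative assumptions; the paper's exact R0SIR may differ:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameter values (not taken from the paper)
Lam, beta, mu, gam, alpha, sigma = 0.5, 0.8, 0.1, 0.2, 0.5, 0.3

# A threshold of the general form arising in such analyses: the
# deterministic R0 minus a white-noise correction, with S0 = Lam/mu
# the disease-free equilibrium level of susceptibles.
S0 = Lam / mu
R0 = beta * S0 / (mu + gam) - sigma**2 * S0**2 / (2 * (mu + gam))

def simulate(T=100.0, dt=0.01):
    S, I, R = S0, 0.1, 0.0
    for _ in range(int(T / dt)):
        inc = beta * S * I / (1 + alpha * I)      # saturated incidence
        dB = np.sqrt(dt) * rng.standard_normal()  # Brownian increment
        S += (Lam - inc - mu * S) * dt - sigma * S * I / (1 + alpha * I) * dB
        I += (inc - (mu + gam) * I) * dt + sigma * S * I / (1 + alpha * I) * dB
        R += (gam * I - mu * R) * dt
        S, I, R = max(S, 0.0), max(I, 0.0), max(R, 0.0)  # keep states nonnegative
    return S, I, R

S_end, I_end, R_end = simulate()
```

With these values the threshold exceeds 1, the persistence regime; shrinking beta or enlarging sigma pushes it below 1, the extinction regime.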
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashyralyev, Allaberen; Okur, Ulker
In the present paper, the Crank-Nicolson difference scheme for the numerical solution of the stochastic parabolic equation with the dependent operator coefficient is considered. A theorem on convergence estimates for the solution of this difference scheme is established. In applications, convergence estimates for the solutions of difference schemes for the numerical solution of three mixed problems for parabolic equations are obtained. The numerical results are given.
Coron, Camille
2016-01-01
We are interested in the long-time behavior of a diploid population with sexual reproduction and randomly varying population size, characterized by its genotype composition at one bi-allelic locus. The population is modeled by a 3-dimensional birth-and-death process with competition, weak cooperation and Mendelian reproduction. This stochastic process is indexed by a scaling parameter K that goes to infinity, following a large population assumption. When the individual birth and natural death rates are of order K, the sequence of stochastic processes indexed by K converges toward a new slow-fast dynamics with variable population size. We indeed prove the convergence toward 0 of a fast variable giving the deviation of the population from quasi-Hardy-Weinberg equilibrium, while the sequence of slow variables giving the respective numbers of occurrences of each allele converges toward a 2-dimensional diffusion process that reaches (0,0) almost surely in finite time. The population size and the proportion of a given allele converge toward a Wright-Fisher diffusion with stochastically varying population size and diploid selection. We insist on differences between haploid and diploid populations due to the stochastic variability of the population size. Using a non-trivial change of variables, we study the absorption of this diffusion and its long-time behavior conditioned on non-extinction. In particular, we prove that this diffusion starting from any non-trivial state and conditioned on not hitting (0,0) admits a unique quasi-stationary distribution. We give numerical approximations of this quasi-stationary behavior in three biologically relevant cases: neutrality, overdominance, and separate niches.
Semenov, Mikhail A; Terkel, Dmitri A
2003-01-01
This paper analyses the convergence of evolutionary algorithms using a technique which is based on a stochastic Lyapunov function and developed within the martingale theory. This technique is used to investigate the convergence of a simple evolutionary algorithm with self-adaptation, which contains two types of parameters: fitness parameters, belonging to the domain of the objective function; and control parameters, responsible for the variation of fitness parameters. Although both parameters mutate randomly and independently, they converge to the "optimum" due to the direct (for fitness parameters) and indirect (for control parameters) selection. We show that the convergence velocity of the evolutionary algorithm with self-adaptation is asymptotically exponential, similar to the velocity of the optimal deterministic algorithm on the class of unimodal functions. Although some martingale inequalities have not been proved analytically, they have been numerically validated with 0.999 confidence using Monte Carlo simulations.
On Two-Scale Modelling of Heat and Mass Transfer
NASA Astrophysics Data System (ADS)
Vala, J.; Št'astník, S.
2008-09-01
Modelling of macroscopic behaviour of materials, consisting of several layers or components, whose microscopic (at least stochastic) analysis is available, as well as (more general) simulation of non-local phenomena, complicated coupled processes, etc., requires both deeper understanding of physical principles and development of mathematical theories and software algorithms. Starting from the (relatively simple) example of phase transformation in substitutional alloys, this paper sketches the general formulation of a nonlinear system of partial differential equations of evolution for the heat and mass transfer (useful in mechanical and civil engineering, etc.), corresponding to conservation principles of thermodynamics, both at the micro- and at the macroscopic level, and suggests an algorithm for scale-bridging, based on the robust finite element techniques. Some existence and convergence questions, namely those based on the construction of sequences of Rothe and on the mathematical theory of two-scale convergence, are discussed together with references to useful generalizations, required by new technologies.
Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation
NASA Astrophysics Data System (ADS)
Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.
2017-06-01
Existing approaches to resource allocation for today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages online learning advances to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol combines the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
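The idea of learning Lagrange multipliers online can be sketched with a toy single-resource allocation (the random utility/usage model, budget and step size below are assumptions, not the paper's protocol):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy online allocation: each slot, pick x in {0,1} to collect utility a*x
# while keeping the long-run average resource usage E[b*x] below a budget c.
# The distribution of (a, b) is a hypothetical stationary model.
c = 0.3
lam = 0.0            # Lagrange multiplier, learned online
step = 0.05          # constant learning rate
usage = []

for t in range(20000):
    a = 1.0 + 0.2 * rng.standard_normal()   # observed utility coefficient
    b = 1.0 + 0.1 * rng.standard_normal()   # observed resource coefficient
    # Primal step: maximize the instantaneous Lagrangian a*x - lam*b*x over {0,1}
    x = 1.0 if a - lam * b > 0 else 0.0
    usage.append(b * x)
    # Dual stochastic gradient step on the multiplier (projected to >= 0)
    lam = max(0.0, lam + step * (b * x - c))

avg_usage = float(np.mean(usage[-5000:]))
```

Because the multiplier integrates the constraint violation, a bounded multiplier forces the time-averaged usage toward the budget c, which is the queue-stability mechanism mentioned in the abstract.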
K-Minimax Stochastic Programming Problems
NASA Astrophysics Data System (ADS)
Nedeva, C.
2007-10-01
The purpose of this paper is a discussion of a numerical procedure based on the simplex method for stochastic optimization problems with partially known distribution functions. The convergence of this procedure is proved under conditions on the dual problems.
Stochastic Evolution Equations Driven by Fractional Noises
2016-11-28
We study the rate of convergence to zero of the error and the limit in distribution of the error fluctuations. We have studied time-discrete numerical schemes based on Taylor expansions for rough differential equations and for stochastic differential equations driven by fractional Brownian motion, as well as variations of these time-discrete Taylor schemes.
Numerical solution of the stochastic parabolic equation with the dependent operator coefficient
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashyralyev, Allaberen; Department of Mathematics, ITTU, Ashgabat; Okur, Ulker
2015-09-18
In the present paper, a single step implicit difference scheme for the numerical solution of the stochastic parabolic equation with the dependent operator coefficient is presented. A theorem on convergence estimates for the solution of this difference scheme is established. In applications, this abstract result permits us to obtain convergence estimates for the solutions of difference schemes for the numerical solution of initial boundary value problems for parabolic equations. The theoretical statements for the solution of this difference scheme are supported by the results of numerical experiments.
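A concrete instance of a single-step implicit scheme, for the stochastic heat equation with additive space-time white noise (the discretization choices below are illustrative assumptions; the paper treats an abstract dependent operator coefficient):

```python
import numpy as np

rng = np.random.default_rng(4)

# Backward Euler in time, central differences in space, for
# du = u_xx dt + dW on (0,1) with zero Dirichlet boundary conditions.
N = 40                         # number of interior grid points
dx = 1.0 / (N + 1)
dt = 1e-3

# Discrete Laplacian and the matrix of the implicit step (I - dt*L)
L = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx**2
M = np.eye(N) - dt * L

x = np.linspace(dx, 1.0 - dx, N)
u = np.sin(np.pi * x)          # initial condition

for _ in range(100):
    noise = np.sqrt(dt / dx) * rng.standard_normal(N)  # discretized white noise
    u = np.linalg.solve(M, u + noise)                  # implicit, unconditionally stable step
```

Unlike an explicit scheme, the implicit step imposes no CFL restriction tying dt to dx², which is the usual motivation for implicit discretizations of parabolic problems.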
Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn; Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk
2017-06-15
In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.
Stochastic Spectral Descent for Discrete Graphical Models
Carlson, David; Hsieh, Ya-Ping; Collins, Edo; ...
2015-12-14
Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten-∞ norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We both provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.
Convergence Rates of Finite Difference Stochastic Approximation Algorithms
2016-06-01
We study the convergence rates of the Kiefer-Wolfowitz algorithm and the mirror descent algorithm under various updating schemes using finite differences as gradient approximations. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences.
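A minimal Kiefer-Wolfowitz iteration using central finite differences of noisy function values (the objective, noise level, and gain sequences below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

def noisy_f(x):
    # Only noisy function values of f(x) = (x - 2)^2 are observable
    return (x - 2.0) ** 2 + 0.1 * rng.standard_normal()

x = 0.0
for n in range(1, 5001):
    a_n = 1.0 / n              # step-size gain (sums to infinity)
    c_n = n ** -0.25           # finite-difference width (shrinks slowly)
    # Central finite difference of noisy values as the gradient estimate
    grad_est = (noisy_f(x + c_n) - noisy_f(x - c_n)) / (2.0 * c_n)
    x -= a_n * grad_est
```

The interplay between the gains a_n and the difference widths c_n is exactly the "implementation of the finite differences" whose control governs the achievable convergence rate.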
Ghosh, Sayan; Das, Swagatam; Vasilakos, Athanasios V; Suresh, Kaushik
2012-02-01
Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. Since its inception in the mid-1990s, DE has been finding many successful applications in real-world optimization problems from diverse domains of science and engineering. This paper takes a first significant step toward the convergence analysis of a canonical DE (DE/rand/1/bin) algorithm. It first deduces a time-recursive relationship for the probability density function (PDF) of the trial solutions, taking into consideration the DE-type mutation, crossover, and selection mechanisms. Then, by applying the concepts of Lyapunov stability theorems, it shows that as time approaches infinity, the PDF of the trial solutions concentrates narrowly around the global optimum of the objective function, assuming the shape of a Dirac delta distribution. Asymptotic convergence behavior of the population PDF is established by constructing a Lyapunov functional based on the PDF and showing that it monotonically decreases with time. The analysis is applicable to a class of continuous and real-valued objective functions that possess a unique global optimum (but may have multiple local optima). Theoretical results have been substantiated with relevant computer simulations.
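For reference, the canonical DE/rand/1/bin scheme analyzed above can be sketched as follows (standard textbook form; the population size, F, and CR values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)

def sphere(x):
    # Unimodal test objective with a unique global optimum at the origin
    return float(np.sum(x ** 2))

def de_rand_1_bin(f, dim=5, pop_size=30, F=0.5, CR=0.9, gens=200):
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([f(ind) for ind in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation (DE/rand/1): base vector plus a scaled difference
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])
            # Binomial crossover with a guaranteed component from the mutant
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], float(fit.min())

best_x, best_f = de_rand_1_bin(sphere)
```

On this unimodal objective the population collapses toward the global optimum, the behavior the paper's Lyapunov analysis formalizes for the population PDF.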
Multidimensional stochastic approximation using locally contractive functions
NASA Technical Reports Server (NTRS)
Lawton, W. M.
1975-01-01
A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
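The Robbins-Monro recursion for the fixed point of a contractive regression function can be sketched as follows (the particular contraction, noise model, and gains are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

# A locally contractive regression function M in R^2 with fixed point
# (1, -1) and contraction constant 0.5 (hypothetical example).
target = np.array([1.0, -1.0])

def M(x):
    return 0.5 * (x - target) + target

x = np.zeros(2)
for n in range(1, 3001):
    y = M(x) + 0.1 * rng.standard_normal(2)  # noisy observation of M(x_n)
    x = x + (1.0 / n) * (y - x)              # Robbins-Monro correction toward the fixed point
```

The decreasing 1/n gains average out the observation noise while the contraction pulls the iterates toward the fixed point, the two ingredients behind mean-square and almost-sure convergence.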
A Functional Central Limit Theorem for the Becker-Döring Model
NASA Astrophysics Data System (ADS)
Sun, Wen
2018-04-01
We investigate the fluctuations of the stochastic Becker-Döring model of polymerization when the initial size of the system converges to infinity. A functional central limit theorem is proved for the vector of the number of polymers of a given size. It is shown that the stochastic process associated with the fluctuations converges to the strong solution of an infinite dimensional stochastic differential equation (SDE) in a Hilbert space. We also prove that, at equilibrium, the solution of this SDE is a Gaussian process. The proofs are based on a specific representation of the evolution equations, the introduction of a convenient Hilbert space and several technical estimates to control the fluctuations, especially of the first coordinate which interacts with all components of the infinite dimensional vector representing the state of the process.
NASA Astrophysics Data System (ADS)
Camargo, F. R.; Henson, B.
2015-02-01
The notion that more or less of a physical feature affects, to different degrees, the users' impression of an underlying attribute of a product has frequently been applied in affective engineering. However, those attributes exist only as a premise that cannot be measured directly and, therefore, inferences based on their assessment are error-prone. To establish and improve measurement of latent attributes, this paper presents the concept of a stochastic framework using the Rasch model for a wide range of independent variables referred to as an item bank. Based on an item bank, computerized adaptive testing (CAT) can be developed. A CAT system can converge on a sequence of items bracketed to convey information at a user's particular endorsement level. It is through item banking and CAT that the financial benefits of using the Rasch model in affective engineering can be realised.
NASA Astrophysics Data System (ADS)
Yu, Qian; Fang, Debin; Zhang, Xiaoling; Jin, Chen; Ren, Qiyu
2016-06-01
Stochasticity plays an important role in the evolutionary dynamics of cyclic dominance within a finite population. To investigate the stochastic evolution of the behaviour of bounded rational individuals, we model the Rock-Scissors-Paper (RSP) game as a finite, state dependent Quasi Birth and Death (QBD) process. We assume that bounded rational players can adjust their strategies by imitating the successful strategy according to the payoffs of the last round of the game, and then analyse the limiting distribution of the QBD process for the stochastic evolutionary dynamics of the game. The numerical results are exhibited as pseudo-colour ternary heat maps. Comparison of these diagrams shows that the convergence of the long-run equilibrium of the RSP game in populations depends on the population size, the parameters of the payoff matrix and the noise factor. The long-run equilibrium is asymptotically stable, neutrally stable or unstable according to the normalised parameters in the payoff matrix. Moreover, the results show that the distribution probability becomes more concentrated with a larger population size. This indicates that increasing the population size increases the convergence speed of the stochastic evolution process while simultaneously reducing the influence of the noise factor.
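The imitation dynamic for the RSP game can be sketched as a simple stochastic simulation (the update rule, payoff normalisation, and noise mechanism below are simplified assumptions, not the paper's QBD formulation):

```python
import numpy as np

rng = np.random.default_rng(8)

# RSP payoff matrix, strategies ordered (Rock, Scissors, Paper):
# win = 1, loss = -1, tie = 0 (an illustrative normalisation).
payoff = np.array([[0, 1, -1],
                   [-1, 0, 1],
                   [1, -1, 0]])

def step(counts, noise=0.05):
    """One revision event: a random player imitates a better-scoring
    strategy, or explores at random with probability `noise`."""
    probs = counts / counts.sum()
    scores = payoff @ probs           # expected payoff of each strategy
    i = rng.choice(3, p=probs)        # revising player's current strategy
    if rng.random() < noise:
        j = rng.integers(3)           # noisy exploration
    else:
        j = rng.choice(3, p=probs)    # strategy of a randomly met player
        if scores[j] <= scores[i]:
            j = i                     # imitate only if it scores better
    counts[i] -= 1
    counts[j] += 1
    return counts

counts = np.array([40, 30, 30])
for _ in range(20000):
    counts = step(counts)
```

Tracking the empirical distribution of `counts` over many runs would approximate the limiting distribution of the underlying birth-and-death dynamics.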
NASA Astrophysics Data System (ADS)
Menezes, G.; Svaiter, N. F.
2006-07-01
We use the method of stochastic quantization in a topological field theory defined in a Euclidean space, assuming a Langevin equation with a memory kernel. We show that our procedure for the Abelian Chern-Simons theory converges regardless of the nature of the Chern-Simons coefficient.
Newton's method for nonlinear stochastic wave equations driven by one-dimensional Brownian motion.
Leszczynski, Henryk; Wrzosek, Monika
2017-02-01
We consider nonlinear stochastic wave equations driven by one-dimensional white noise with respect to time. The existence of solutions is proved by means of Picard iterations. Next, we apply Newton's method and demonstrate its second-order convergence in a probabilistic sense.
Estimation of stochastic volatility by using Ornstein-Uhlenbeck type models
NASA Astrophysics Data System (ADS)
Mariani, Maria C.; Bhuiyan, Md Al Masum; Tweneboah, Osei K.
2018-02-01
In this study, we develop a technique for estimating the stochastic volatility (SV) of a financial time series by using Ornstein-Uhlenbeck type models. Using the daily closing prices from developed and emergent stock markets, we conclude that incorporating stochastic volatility into the time-varying parameter estimation significantly improves the forecasting performance via maximum likelihood estimation. Furthermore, our estimation algorithm is feasible with large data sets and has good convergence properties.
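One standard route to such estimation, sketched below under the assumption of a one-dimensional Ornstein-Uhlenbeck model with known exact discretisation (not necessarily the authors' procedure), is to fit the implied AR(1) by least squares and map the coefficients back to the continuous-time parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an OU process dX = theta*(mu - X) dt + sigma dW with known
# parameters, using its exact one-step transition.
theta, mu, sigma, dt, n = 2.0, 0.5, 0.3, 0.01, 100_000
b_true = np.exp(-theta * dt)
sd = sigma * np.sqrt((1 - b_true**2) / (2 * theta))
x = np.empty(n)
x[0] = mu
for t in range(n - 1):
    x[t + 1] = mu + (x[t] - mu) * b_true + sd * rng.standard_normal()

# Estimate the parameters from the implied AR(1) regression
#   x[t+1] = c + b*x[t] + eps,  with b = exp(-theta*dt), c = mu*(1 - b).
b, c = np.polyfit(x[:-1], x[1:], 1)
theta_hat = -np.log(b) / dt
mu_hat = c / (1 - b)
resid = x[1:] - (c + b * x[:-1])
sigma_hat = np.sqrt(resid.var() * 2 * theta_hat / (1 - b**2))
```

With a long enough series the recovered (theta_hat, mu_hat, sigma_hat) land close to the true values, which is the sense in which such estimators scale to large data sets.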
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Jianbo, E-mail: jianbocui@lsec.cc.ac.cn; Hong, Jialin, E-mail: hjl@lsec.cc.ac.cn; Liu, Zhihui, E-mail: liuzhihui@lsec.cc.ac.cn
We indicate that the nonlinear Schrödinger equation with white noise dispersion possesses stochastic symplectic and multi-symplectic structures. Based on these structures, we propose the stochastic symplectic and multi-symplectic methods, which preserve the continuous and discrete charge conservation laws, respectively. Moreover, we show that the proposed methods are convergent with temporal order one in probability. Numerical experiments are presented to verify our theoretical results.
Samant, Asawari; Ogunnaike, Babatunde A; Vlachos, Dionisios G
2007-05-24
The fundamental role that intrinsic stochasticity plays in cellular functions has been shown via numerous computational and experimental studies. In the face of such evidence, it is important that intracellular networks are simulated with stochastic algorithms that can capture molecular fluctuations. However, separation of time scales and disparity in species population, two common features of intracellular networks, make stochastic simulation of such networks computationally prohibitive. While recent work has addressed each of these challenges separately, a generic algorithm that can simultaneously tackle disparity in time scales and population scales in stochastic systems is currently lacking. In this paper, we propose the hybrid, multiscale Monte Carlo (HyMSMC) method that fills this void. The proposed HyMSMC method blends stochastic singular perturbation concepts, to deal with potential stiffness, with a hybrid of exact and coarse-grained stochastic algorithms, to cope with separation in population sizes. In addition, we introduce the computational singular perturbation (CSP) method as a means of systematically partitioning fast and slow networks and computing relaxation times for convergence. We also propose a new criterion for the convergence of fast networks to stochastic low-dimensional manifolds, which further accelerates the algorithm. We use several prototype and biological examples, including a gene expression model displaying bistability, to demonstrate the efficiency, accuracy and applicability of the HyMSMC method. Bistable models serve as stringent tests for the success of multiscale MC methods and illustrate limitations of some literature methods.
Decentralized Network Interdiction Games
2015-12-31
approach is termed the sample average approximation (SAA) method, and theories on the asymptotic convergence to the original problem's optimal...used in the SAA method's convergence. While we provided a detailed proof of such convergence in [P3], a side benefit of the proof is that it weakens the...conditions required when applying the general SAA approach to the block-structured stochastic programming problem 17. As the conditions known in the
The phenotypic equilibrium of cancer cells: From average-level stability to path-wise convergence.
Niu, Yuanling; Wang, Yue; Zhou, Da
2015-12-07
The phenotypic equilibrium, i.e. a heterogeneous population of cancer cells tending to a fixed equilibrium of phenotypic proportions, has received much attention in cancer biology very recently. In the previous literature, some theoretical models were used to predict the experimental phenomena of the phenotypic equilibrium, which were often explained by different concepts of stability of the models. Here we present a stochastic multi-phenotype branching model by integrating the conventional cellular hierarchy with phenotypic plasticity mechanisms of cancer cells. Based on our model, it is shown that: (i) our model can serve as a framework to unify the previous models for the phenotypic equilibrium, and it thereby harmonizes the different kinds of average-level stability proposed in these models; and (ii) path-wise convergence of our model provides a deeper understanding of the phenotypic equilibrium from a stochastic point of view. That is, the emergence of the phenotypic equilibrium is rooted in the stochastic nature of (almost) every sample path; the average-level stability simply follows from it by averaging stochastic samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
The Convergence Coefficient across Political Systems
Schofield, Norman
2013-01-01
Formal work on the electoral model often suggests that parties or candidates should locate themselves at the electoral mean. Recent research has found no evidence of such convergence. In order to explain nonconvergence, the stochastic electoral model is extended by including estimates of electoral valence. We introduce the notion of a convergence coefficient, c. It has been shown that high values of c imply that there is a significant centrifugal tendency acting on parties. We used electoral surveys to construct a stochastic valence model of the elections in various countries. We find that the convergence coefficient varies across elections in a country, across countries with similar regimes, and across political regimes. In some countries, the centripetal tendency leads parties to converge to the electoral mean. In others the centrifugal tendency dominates and some parties locate far from the electoral mean. In particular, for countries with proportional electoral systems, namely, Israel, Turkey, and Poland, the centrifugal tendency is very high. In the majoritarian polities of the United States and Great Britain, the centrifugal tendency is very low. In anocracies, the autocrat imposes limitations on how far from the origin the opposition parties can move. PMID:24385886
The convergence coefficient across political systems.
Gallego, Maria; Schofield, Norman
2013-01-01
Formal work on the electoral model often suggests that parties or candidates should locate themselves at the electoral mean. Recent research has found no evidence of such convergence. In order to explain nonconvergence, the stochastic electoral model is extended by including estimates of electoral valence. We introduce the notion of a convergence coefficient, c. It has been shown that high values of c imply that there is a significant centrifugal tendency acting on parties. We used electoral surveys to construct a stochastic valence model of the elections in various countries. We find that the convergence coefficient varies across elections in a country, across countries with similar regimes, and across political regimes. In some countries, the centripetal tendency leads parties to converge to the electoral mean. In others the centrifugal tendency dominates and some parties locate far from the electoral mean. In particular, for countries with proportional electoral systems, namely, Israel, Turkey, and Poland, the centrifugal tendency is very high. In the majoritarian polities of the United States and Great Britain, the centrifugal tendency is very low. In anocracies, the autocrat imposes limitations on how far from the origin the opposition parties can move.
Inversion of Robin coefficient by a spectral stochastic finite element approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin Bangti; Zou Jun
2008-03-01
This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for steady-state heat conduction. The problem is formulated as an optimization problem, and mathematical properties relevant to its numerical computation are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.
Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique
Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep
2015-01-01
In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, so it is more prone to converging at suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performance to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates superior convergence rate and quality of solution for both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032
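A condensed sketch of the stochastic-leader idea, reconstructed from the abstract's description rather than the authors' code: the attracting set of k leaders is drawn at random and shrinks over the iterations. The schedule for k, the constants g0 and alpha, and the sphere test function are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(pop):
    return np.sum(pop**2, axis=1)

def sl_gsa(f, dim=5, n_agents=30, iters=200, g0=100.0, alpha=20.0):
    x = rng.uniform(-5.0, 5.0, size=(n_agents, dim))
    v = np.zeros_like(x)
    best_f = np.inf
    for t in range(iters):
        fit = f(x)
        best_f = min(best_f, fit.min())
        lo, hi = fit.min(), fit.max()
        m = (hi - fit + 1e-12) / (hi - lo + 1e-12)   # heavier mass = better agent
        mass = m / m.sum()
        g = g0 * np.exp(-alpha * t / iters)          # decaying gravitational constant
        k = max(1, round(n_agents * (1 - t / iters)))  # shrinking leader set
        leaders = rng.choice(n_agents, size=k, replace=False)  # random k leaders
        acc = np.zeros_like(x)
        for j in leaders:
            diff = x[j] - x
            dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-12
            acc += rng.random((n_agents, 1)) * g * mass[j] * diff / dist
        v = rng.random((n_agents, 1)) * v + acc
        x = x + v
    return min(best_f, f(x).min())

best = sl_gsa(sphere)
```

The random leader draw early on plays the exploration role described in the abstract, while the shrinking k concentrates attraction on the fittest agents late in the run.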
Sparse Learning with Stochastic Composite Optimization.
Zhang, Weizhong; Zhang, Lijun; Jin, Zhongming; Jin, Rong; Cai, Deng; Li, Xuelong; Liang, Ronghua; He, Xiaofei
2017-06-01
In this paper, we study Stochastic Composite Optimization (SCO) for sparse learning that aims to learn a sparse solution from a composite function. Most of the recent SCO algorithms have already reached the optimal expected convergence rate O(1/λT), but they often fail to deliver sparse solutions at the end either due to the limited sparsity regularization during stochastic optimization (SO) or due to the limitation in online-to-batch conversion. Even when the objective function is strongly convex, their high probability bounds can only attain O(√{log(1/δ)/T}), where δ is the failure probability, which is much worse than the expected convergence rate. To address these limitations, we propose a simple yet effective two-phase Stochastic Composite Optimization scheme by adding a novel powerful sparse online-to-batch conversion to general Stochastic Optimization algorithms. We further develop three concrete algorithms, OptimalSL, LastSL and AverageSL, directly under our scheme to prove the effectiveness of the proposed scheme. Both the theoretical analysis and the experimental results show that our methods outperform the existing methods in sparse learning while improving the high probability bound to approximately O(log(log(T)/δ)/λT).
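The sparsity mechanism at issue can be illustrated with a generic proximal stochastic gradient step, where the ℓ1 penalty enters through soft-thresholding. This is a minimal sketch of the general SCO setting, not one of the OptimalSL/LastSL/AverageSL algorithms; the problem sizes, step size and penalty weight are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic sparse regression: only the first two of ten features matter.
n, d = 2000, 10
w_true = np.zeros(d)
w_true[0], w_true[1] = 2.0, -1.5
X = rng.standard_normal((n, d))
y = X @ w_true + 0.1 * rng.standard_normal(n)

def soft_threshold(w, t):
    """Proximal operator of t*||w||_1: shrinks toward zero, yielding sparsity."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

lam, eta = 0.05, 0.01
w = np.zeros(d)
for epoch in range(5):
    for i in rng.permutation(n):
        grad = (X[i] @ w - y[i]) * X[i]                # stochastic loss gradient
        w = soft_threshold(w - eta * grad, eta * lam)  # composite (prox) step
```

The thresholding applied at every step is what keeps the irrelevant coordinates near zero; the online-to-batch conversion discussed in the abstract addresses how to turn such an iterate stream into a final solution that is actually sparse.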
A moment-convergence method for stochastic analysis of biochemical reaction networks.
Zhang, Jiajun; Nie, Qing; Zhou, Tianshou
2016-05-21
Traditional moment-closure methods need to assume that high-order cumulants of a probability distribution are approximately zero. However, this strong assumption is not satisfied for many biochemical reaction networks. Here, we introduce convergent moments (defined in mathematics as the coefficients in the Taylor expansion of the probability-generating function at some point) to overcome this drawback of the moment-closure methods. As such, we develop a new analysis method for stochastic chemical kinetics. This method provides an accurate approximation of the master probability equation (MPE). In particular, the connection between low-order convergent moments and rate constants can be more easily derived in terms of explicit and analytical forms, allowing insights that would be difficult to obtain through direct simulation or manipulation of the MPE. In addition, it provides an accurate and efficient way to compute the steady-state or transient probability distribution, avoiding the algorithmic difficulty associated with stiffness of the MPE due to large differences in the sizes of rate constants. Applications of the method to several systems reveal nontrivial stochastic mechanisms of gene expression dynamics, e.g., intrinsic fluctuations can induce transient bimodality and amplify transient signals, and slow switching between promoter states can increase fluctuations in spatially heterogeneous signals. The overall approach has broad applications in modeling, analysis, and computation of complex biochemical networks with intrinsic noise.
A Hybrid Stochastic-Neuro-Fuzzy Model-Based System for In-Flight Gas Turbine Engine Diagnostics
2001-04-05
Margin (ADM) and (ii) Fault Detection Margin (FDM). Key Words: ANFIS, Engine Health Monitoring, Gas Path Analysis, and Stochastic Analysis Adaptive Network...The paper illustrates the application of a hybrid Stochastic-Fuzzy-Inference Model-Based System (StoFIS) to fault diagnostics and prognostics for both...operational history monitored on-line by the engine health management (EHM) system. To capture the complex functional relationships between different
Stochastic quantization of (λϕ4)d scalar theory: Generalized Langevin equation with memory kernel
NASA Astrophysics Data System (ADS)
Menezes, G.; Svaiter, N. F.
2007-02-01
The method of stochastic quantization for a scalar field theory is reviewed. A brief survey of the case of a self-interacting scalar field, implementing stochastic perturbation theory up to the one-loop level, is presented. Then a colored random noise is introduced in the Einstein relations, a common prescription employed by one of the stochastic regularizations, to control the ultraviolet divergences of the theory. This formalism is extended to the case where a Langevin equation with a memory kernel is used. It is shown that, maintaining the Einstein relations with a colored noise, there is convergence to a non-regularized theory.
Fast smooth second-order sliding mode control for stochastic systems with enumerable coloured noises
NASA Astrophysics Data System (ADS)
Yang, Peng-fei; Fang, Yang-wang; Wu, You-li; Zhang, Dan-xu; Xu, Yang
2018-01-01
A fast smooth second-order sliding mode control is presented for a class of stochastic systems driven by enumerable Ornstein-Uhlenbeck coloured noises with time-varying coefficients. Instead of treating the noise as a bounded disturbance, stochastic control techniques are incorporated into the design of the control. The finite-time mean-square practical stability and finite-time mean-square practical reachability are first introduced. Then the prescribed sliding variable dynamic is presented. The sufficient condition guaranteeing its finite-time convergence is given and proved using stochastic Lyapunov-like techniques. The proposed sliding mode controller is applied to a second-order nonlinear stochastic system. Simulation results, compared with smooth second-order sliding mode control, are given to validate the analysis.
FINITE-STATE APPROXIMATIONS TO DENUMERABLE-STATE DYNAMIC PROGRAMS,
AIR FORCE OPERATIONS, LOGISTICS), (*INVENTORY CONTROL, DYNAMIC PROGRAMMING), (*DYNAMIC PROGRAMMING, APPROXIMATION(MATHEMATICS)), INVENTORY CONTROL, DECISION MAKING, STOCHASTIC PROCESSES, GAME THEORY, ALGORITHMS, CONVERGENCE
Itô and Stratonovich integrals on compound renewal processes: the normal/Poisson case
NASA Astrophysics Data System (ADS)
Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L.
2010-06-01
Continuous-time random walks, or compound renewal processes, are pure-jump stochastic processes with several applications in insurance, finance, economics and physics. Based on heuristic considerations, a definition is given for stochastic integrals driven by continuous-time random walks, which includes the Itô and Stratonovich cases. It is then shown how the definition can be used to compute these two stochastic integrals by means of Monte Carlo simulations. Our example is based on the normal compound Poisson process, which in the diffusive limit converges to the Wiener process.
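For a pure-jump path, the two integral definitions can be computed directly from the jump chain. The sketch below (hypothetical rate and jump distribution) evaluates ∫X dX with the pre-jump point (Itô) and the midpoint (Stratonovich), and checks two exact pathwise identities: the Stratonovich sum telescopes to (X_T² − X_0²)/2, and the Itô sum differs from it by half the sum of squared jumps.

```python
import numpy as np

rng = np.random.default_rng(4)

# Compound Poisson path: Poisson number of jumps on [0, T], normal jump sizes.
T, rate = 10.0, 5.0
n_jumps = rng.poisson(rate * T)
x = np.concatenate(([0.0], np.cumsum(rng.standard_normal(n_jumps))))

# Stochastic integral of X against its own increments, evaluated with the
# pre-jump value (Ito) and the midpoint value (Stratonovich).
dx = np.diff(x)
ito = np.sum(x[:-1] * dx)
strat = np.sum(0.5 * (x[:-1] + x[1:]) * dx)
```

Averaging such path functionals over many simulated paths is the Monte Carlo procedure the abstract refers to; the pathwise identities above hold exactly on every realisation.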
Monte-Carlo simulation of a stochastic differential equation
NASA Astrophysics Data System (ADS)
Arif, ULLAH; Majid, KHAN; M, KAMRAN; R, KHAN; Zhengmao, SHENG
2017-12-01
For solving higher-dimensional diffusion equations with an inhomogeneous diffusion coefficient, Monte Carlo (MC) techniques are considered to be more effective than other algorithms, such as the finite element method or the finite difference method. The inhomogeneity of the diffusion coefficient strongly limits the use of different numerical techniques. For better convergence, higher-order methods have been put forward to allow MC codes to use a large step size. The main focus of this work is to look for operators that can produce converging results for large step sizes. As a first step, our comparative analysis is applied to a general stochastic problem. Subsequently, our formulation is applied to the problem of pitch-angle scattering resulting from Coulomb collisions of charged particles in toroidal devices.
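The step-size effect can be seen on a test problem with a known solution. The sketch below uses geometric Brownian motion with assumed parameters (not the pitch-angle scattering problem) to compare the strong error of the order-1/2 Euler-Maruyama scheme against the higher-order Milstein scheme at two step sizes.

```python
import numpy as np

rng = np.random.default_rng(5)

# Geometric Brownian motion dX = mu*X dt + sigma*X dW has a closed-form
# solution, so strong errors of a scheme can be measured directly.
mu, sigma, x0, T = 0.5, 0.4, 1.0, 1.0

def strong_errors(n_steps, n_paths=20_000):
    dt = T / n_steps
    dw = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    x_em = np.full(n_paths, x0)   # Euler-Maruyama, strong order 1/2
    x_mil = np.full(n_paths, x0)  # Milstein, strong order 1
    for k in range(n_steps):
        x_em = x_em + mu * x_em * dt + sigma * x_em * dw[:, k]
        x_mil = (x_mil + mu * x_mil * dt + sigma * x_mil * dw[:, k]
                 + 0.5 * sigma**2 * x_mil * (dw[:, k]**2 - dt))
    exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dw.sum(axis=1))
    return np.mean(np.abs(x_em - exact)), np.mean(np.abs(x_mil - exact))

err_em_coarse, err_mil_coarse = strong_errors(16)
err_em_fine, err_mil_fine = strong_errors(64)
```

At the same coarse step, the higher-order scheme is markedly more accurate, which is the motivation the abstract gives for seeking operators that tolerate large step sizes.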
Mechanical Autonomous Stochastic Heat Engine
NASA Astrophysics Data System (ADS)
Serra-Garcia, Marc; Foehr, André; Molerón, Miguel; Lydon, Joseph; Chong, Christopher; Daraio, Chiara
2016-07-01
Stochastic heat engines are devices that generate work from random thermal motion using a small number of highly fluctuating degrees of freedom. Proposals for such devices have existed for more than a century and include the Maxwell demon and the Feynman ratchet. Only recently have they been demonstrated experimentally, using, e.g., thermal cycles implemented in optical traps. However, recent experimental demonstrations of classical stochastic heat engines are nonautonomous, since they require an external control system that prescribes a heating and cooling cycle and consume more energy than they produce. We present a heat engine consisting of three coupled mechanical resonators (two ribbons and a cantilever) subject to a stochastic drive. The engine uses geometric nonlinearities in the resonating ribbons to autonomously convert a random excitation into a low-entropy, nonpassive oscillation of the cantilever. The engine presents the anomalous heat transport property of negative thermal conductivity, consisting in the ability to passively transfer energy from a cold reservoir to a hot reservoir.
Mechanical Autonomous Stochastic Heat Engine.
Serra-Garcia, Marc; Foehr, André; Molerón, Miguel; Lydon, Joseph; Chong, Christopher; Daraio, Chiara
2016-07-01
Stochastic heat engines are devices that generate work from random thermal motion using a small number of highly fluctuating degrees of freedom. Proposals for such devices have existed for more than a century and include the Maxwell demon and the Feynman ratchet. Only recently have they been demonstrated experimentally, using, e.g., thermal cycles implemented in optical traps. However, recent experimental demonstrations of classical stochastic heat engines are nonautonomous, since they require an external control system that prescribes a heating and cooling cycle and consume more energy than they produce. We present a heat engine consisting of three coupled mechanical resonators (two ribbons and a cantilever) subject to a stochastic drive. The engine uses geometric nonlinearities in the resonating ribbons to autonomously convert a random excitation into a low-entropy, nonpassive oscillation of the cantilever. The engine presents the anomalous heat transport property of negative thermal conductivity, consisting in the ability to passively transfer energy from a cold reservoir to a hot reservoir.
Fractional Stochastic Differential Equations Satisfying Fluctuation-Dissipation Theorem
NASA Astrophysics Data System (ADS)
Li, Lei; Liu, Jian-Guo; Lu, Jianfeng
2017-10-01
We propose in this work a fractional stochastic differential equation (FSDE) model consistent with the over-damped limit of the generalized Langevin equation model. As a result of the `fluctuation-dissipation theorem', the differential equations driven by fractional Brownian noise to model memory effects should be paired with Caputo derivatives, and this FSDE model should be understood in an integral form. We establish the existence of strong solutions for such equations and discuss the ergodicity and convergence to Gibbs measure. In the linear forcing regime, we show rigorously the algebraic convergence to Gibbs measure when the `fluctuation-dissipation theorem' is satisfied, and this verifies that satisfying `fluctuation-dissipation theorem' indeed leads to the correct physical behavior. We further discuss possible approaches to analyze the ergodicity and convergence to Gibbs measure in the nonlinear forcing regime, while leave the rigorous analysis for future works. The FSDE model proposed is suitable for systems in contact with heat bath with power-law kernel and subdiffusion behaviors.
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The stochastic, infinite time, discrete output feedback problem for time invariant linear systems is examined. Two sets of sufficient conditions for the existence of a stable, globally optimal solution are presented. An expression for the total change in the cost function due to a change in the feedback gain is obtained. This expression is used to show that a sequence of gains can be obtained by an algorithm, so that the corresponding cost sequence is monotonically decreasing and the corresponding sequence of the cost gradient converges to zero. The algorithm is guaranteed to obtain a critical point of the cost function. The computational steps necessary to implement the algorithm on a computer are presented. The results are applied to a digital outer loop flight control problem. The numerical results for this 13th order problem indicate a rate of convergence considerably faster than two other algorithms used for comparison.
Zhang, Ling
2017-01-01
The main purpose of this paper is to investigate the strong convergence and the exponential stability in mean square of the exponential Euler method for semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation converges to the analytic solution with strong order [Formula: see text] for SLSDDEs. On the one hand, the classical stability theorem for SLSDDEs is given by Lyapunov functions; in this paper, however, we study the exponential stability in mean square of the exact solution to SLSDDEs by using the definition of the logarithmic norm. On the other hand, the implicit Euler scheme for SLSDDEs is known to be exponentially stable in mean square for any step size; in this article we show that the explicit exponential Euler method for SLSDDEs shares the same stability for any step size, by the property of the logarithmic norm.
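A scalar sketch of an exponential Euler step for a semi-linear delay equation follows; the drift, diffusion, delay and step size are illustrative assumptions, not the paper's test problem. The defining feature is that the stiff linear part a·X is propagated exactly through exp(a·h), with an Euler treatment of the remaining terms.

```python
import numpy as np

rng = np.random.default_rng(6)

# Scalar semi-linear stochastic delay equation (illustrative):
#   dX(t) = (a*X(t) + f(X(t - tau))) dt + g(X(t - tau)) dW(t),
# with constant history X(t) = 1 on [-tau, 0].
a, tau, T, h = -2.0, 1.0, 10.0, 0.01
f = lambda u: 0.5 * np.sin(u)
g = lambda u: 0.2 * u

lag, n = int(round(tau / h)), int(round(T / h))
x = np.empty(n + 1)
x[0] = 1.0
e_ah = np.exp(a * h)
for k in range(n):
    xd = 1.0 if k < lag else x[k - lag]                # delayed state X(t_k - tau)
    dw = np.sqrt(h) * rng.standard_normal()
    x[k + 1] = e_ah * (x[k] + h * f(xd) + g(xd) * dw)  # exponential Euler step
```

Because the linear part is handled exactly, the factor exp(a·h) < 1 damps every step whenever a < 0, which is the intuition behind the unconditional mean-square stability result discussed in the abstract.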
Fast smooth second-order sliding mode control for systems with additive colored noises.
Yang, Pengfei; Fang, Yangwang; Wu, Youli; Liu, Yunxia; Zhang, Danxu
2017-01-01
In this paper, a fast smooth second-order sliding mode control is presented for a class of stochastic systems with enumerable Ornstein-Uhlenbeck colored noises. The finite-time mean-square practical stability and finite-time mean-square practical reachability are first introduced. Instead of treating the noise as a bounded disturbance, stochastic control techniques are incorporated into the design of the controller. The finite-time convergence of the prescribed sliding variable dynamics system is proved by using stochastic Lyapunov-like techniques. Then the proposed sliding mode controller is applied to a second-order nonlinear stochastic system. Simulation results, compared with smooth second-order sliding mode control, are presented to validate the analysis.
A moment-convergence method for stochastic analysis of biochemical reaction networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiajun; Nie, Qing; Zhou, Tianshou, E-mail: mcszhtsh@mail.sysu.edu.cn
Traditional moment-closure methods need to assume that high-order cumulants of a probability distribution are approximately zero. However, this strong assumption is not satisfied for many biochemical reaction networks. Here, we introduce convergent moments (defined in mathematics as the coefficients in the Taylor expansion of the probability-generating function at some point) to overcome this drawback of the moment-closure methods. As such, we develop a new analysis method for stochastic chemical kinetics. This method provides an accurate approximation of the master probability equation (MPE). In particular, the connection between low-order convergent moments and rate constants can be more easily derived in terms of explicit and analytical forms, allowing insights that would be difficult to obtain through direct simulation or manipulation of the MPE. In addition, it provides an accurate and efficient way to compute the steady-state or transient probability distribution, avoiding the algorithmic difficulty associated with stiffness of the MPE due to large differences in the sizes of rate constants. Applications of the method to several systems reveal nontrivial stochastic mechanisms of gene expression dynamics, e.g., intrinsic fluctuations can induce transient bimodality and amplify transient signals, and slow switching between promoter states can increase fluctuations in spatially heterogeneous signals. The overall approach has broad applications in modeling, analysis, and computation of complex biochemical networks with intrinsic noise.
OpenMC In Situ Source Convergence Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldrich, Garrett Allen; Dutta, Soumya; Woodring, Jonathan Lee
2016-05-07
We designed and implemented an in situ version of particle source convergence for the OpenMC particle transport simulator. OpenMC is a Monte Carlo-based particle simulator for neutron criticality calculations. For the transport simulation to be accurate, source particles must converge on a spatial distribution. Typically, convergence is obtained by iterating the simulation by a user-settable, fixed number of steps, and it is assumed that convergence is achieved. We instead implement a method to detect convergence, using the stochastic oscillator for identifying convergence of source particles based on their accumulated Shannon Entropy. Using our in situ convergence detection, we are able to detect and begin tallying results for the full simulation once the proper source distribution has been confirmed. Our method ensures that the simulation is not started too early, by a user setting too optimistic parameters, or too late, by setting too conservative a parameter.
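The entropy-based idea can be sketched as follows: bin source sites on a mesh, track the Shannon entropy of the binned distribution per batch, and declare convergence once a trailing window of entropies is flat. The synthetic batches, mesh size and flatness threshold below are assumptions, not OpenMC's implementation (which uses a stochastic-oscillator indicator on the accumulated entropy).

```python
import numpy as np

rng = np.random.default_rng(7)

def shannon_entropy(points, bins=8):
    """Shannon entropy (bits) of particle sites binned on a 3-D mesh."""
    hist, _ = np.histogramdd(points, bins=(bins,) * 3, range=[(0.0, 1.0)] * 3)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Synthetic batches: early ones concentrated, later ones spread out,
# imitating source sites relaxing toward their stationary distribution.
entropies = []
for batch in range(40):
    spread = min(1.0, 0.2 + 0.05 * batch)
    pts = 0.5 + spread * (rng.random((5000, 3)) - 0.5)
    entropies.append(shannon_entropy(pts))

# Declare source convergence when a trailing window of entropies is flat.
tail = np.array(entropies[-8:])
converged = tail.std() < 0.01 * abs(tail.mean())
```

Once `converged` becomes true, tallying for the full simulation can safely begin, which is the decision the in situ detector automates.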
Particle Swarm Optimization algorithms for geophysical inversion, practical hints
NASA Astrophysics Data System (ADS)
Garcia Gonzalo, E.; Fernandez Martinez, J.; Fernandez Alvarez, J.; Kuzma, H.; Menendez Perez, C.
2008-12-01
PSO is a stochastic optimization technique that has been successfully used in many different engineering fields. The PSO algorithm can be physically interpreted as a stochastic damped mass-spring system (Fernandez Martinez and Garcia Gonzalo 2008). Based on this analogy, we present a whole family of PSO algorithms and their respective first-order and second-order stability regions. Their performance is also checked using synthetic functions (Rosenbrock and Griewank) showing a degree of ill-posedness similar to that found in many geophysical inverse problems. Finally, we present the application of these algorithms to the analysis of a Vertical Electrical Sounding inverse problem associated with a seawater intrusion in a coastal aquifer in southern Spain. We analyze the role of the PSO parameters (inertia, local and global accelerations, and discretization step), both in the convergence curves and in the a posteriori sampling of the depth of the intrusion. Comparison is made with binary genetic algorithms and simulated annealing. As a result of this analysis, practical hints are given to select the correct algorithm and to tune the corresponding PSO parameters. Fernandez Martinez, J.L., Garcia Gonzalo, E., 2008a. The generalized PSO: a new door to PSO evolution. Journal of Artificial Evolution and Applications. DOI:10.1155/2008/861275.
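A minimal global-best PSO exposing the three parameters discussed (inertia, local and global accelerations) is sketched below on the Rosenbrock function mentioned in the abstract; the search bounds, swarm size and iteration budget are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def rosenbrock(pop):
    return np.sum(100.0 * (pop[:, 1:] - pop[:, :-1]**2)**2
                  + (1.0 - pop[:, :-1])**2, axis=1)

def pso(f, dim=2, n_particles=40, iters=400, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO: w is the inertia weight, c1/c2 the local and
    global acceleration constants."""
    x = rng.uniform(-2.0, 2.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), f(x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = f(x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

gbest, fbest = pso(rosenbrock)
```

The choice w = 0.7, c1 = c2 = 1.5 lies inside the commonly cited stability region of the damped mass-spring interpretation; sweeping these parameters and plotting convergence curves is exactly the tuning exercise the abstract describes.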
Approximation methods for stochastic petri nets
NASA Technical Reports Server (NTRS)
Jungnitz, Hauke Joerg
1992-01-01
Stochastic Marked Graphs are a concurrent decision free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT-nets is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively, leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better. Delay equivalence often fails to converge, while flow equivalent aggregation can lead to potentially bad results if the mean completion time depends strongly on the interarrival process.
Deterministic analysis of extrinsic and intrinsic noise in an epidemiological model.
Bayati, Basil S
2016-05-01
We couple a stochastic collocation method with an analytical expansion of the canonical epidemiological master equation to analyze the effects of both extrinsic and intrinsic noise. It is shown that, depending on the distribution of the extrinsic noise, the master equation yields quantitatively different results compared to using the expectation of the distribution for the stochastic parameter. This difference arises from the nonlinear terms in the master equation, and we show that the deviation away from the expectation of the extrinsic noise scales nonlinearly with the variance of the distribution. The method presented here converges linearly with respect to the number of particles in the system and exponentially with respect to the order of the polynomials used in the stochastic collocation calculation. This makes the method presented here more accurate than standard Monte Carlo methods, which suffer from slow, nonmonotonic convergence. In epidemiological terms, the results show that extrinsic fluctuations should be taken into account since they affect the speed of disease outbreaks, and that the gamma distribution should be used to model the basic reproductive number.
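The abstract's central point, that a nonlinear model driven by extrinsic parameter noise is not equivalent to the same model run at the mean parameter, can be illustrated with a minimal sketch (a hypothetical quadratic nonlinearity and an ad hoc discrete noise distribution, not the paper's master-equation expansion):

```python
# Minimal sketch: for a nonlinear term f, averaging f over extrinsic
# noise in a parameter theta differs from evaluating f at the mean
# parameter; for quadratic f the gap is exactly (f''/2) * variance.
def f(theta):
    return theta * theta  # hypothetical nonlinear (quadratic) term

# Ad hoc discrete extrinsic-noise distribution with mean 1.0
thetas = [0.5, 1.0, 1.5]
probs = [0.25, 0.5, 0.25]

mu = sum(p * t for t, p in zip(thetas, probs))                # 1.0
var = sum(p * (t - mu) ** 2 for t, p in zip(thetas, probs))   # 0.125
mean_f = sum(p * f(t) for t, p in zip(thetas, probs))         # 1.125

gap = mean_f - f(mu)
print(gap, var)  # gap equals var here, i.e. 0.125
```

The gap grows with the variance of the extrinsic noise, mirroring the scaling result stated in the abstract.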
Investigation of advanced UQ for CRUD prediction with VIPRE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldred, Michael Scott
2011-09-01
This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L{sup 2} (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10{sup 0})-O(10{sup 1}) random variables to O(10{sup 2}) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)).
Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.
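The nonintrusive PCE construction described above, computing coefficients for a known orthogonal basis, can be sketched in one Gaussian dimension (illustrative only; the model response f and the 3-point quadrature level are hypothetical choices, far simpler than the adaptive VIPRE study):

```python
import math

# Hedged sketch of nonintrusive PCE in one Gaussian variable xi:
# c_k = E[f(xi) He_k(xi)] / E[He_k(xi)^2], estimated with 3-point
# probabilists' Gauss-Hermite quadrature (nodes 0, +/-sqrt(3)).
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def He(k, x):  # probabilists' Hermite polynomials via recurrence
    if k == 0:
        return 1.0
    if k == 1:
        return x
    return x * He(k - 1, x) - (k - 1) * He(k - 2, x)

def f(x):      # hypothetical model response
    return x * x

coeffs = []
for k in range(3):
    num = sum(w * f(x) * He(k, x) for x, w in zip(nodes, weights))
    coeffs.append(num / math.factorial(k))   # E[He_k^2] = k!

# Statistics follow directly: mean = c0, variance = sum_{k>=1} c_k^2 * k!
mean = coeffs[0]
variance = sum(c * c * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)
print(coeffs, mean, variance)  # f = He0 + He2 recovered exactly
```

For this polynomial response the quadrature is exact, which is the degenerate case of the exponential convergence under order refinement noted in the abstract.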
Some functional limit theorems for compound Cox processes
NASA Astrophysics Data System (ADS)
Korolev, Victor Yu.; Chertok, A. V.; Korchagin, A. Yu.; Kossova, E. V.; Zeifman, Alexander I.
2016-06-01
An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.
Hybrid Differential Dynamic Programming with Stochastic Search
NASA Technical Reports Server (NTRS)
Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob
2016-01-01
Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, namely with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static/Dynamic Optimal Control algorithm used in the Mystic software. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP), is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient-based method and will converge to a solution near an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation, by augmenting the HDDP algorithm for a wider search of the solution space.
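The MBH outer loop is simple enough to sketch (hedged illustration: a 1-D multimodal test objective and a crude step-halving local solver stand in for the trajectory problem and HDDP):

```python
import random, math

# Sketch of monotonic basin hopping (MBH): repeatedly perturb the
# incumbent, run a local solver, and accept only improvements. The
# local solver here is a crude step-halving descent, not HDDP.
def f(x):  # hypothetical multimodal objective, global minimum at x = 0
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def local_search(x, step=0.5, tol=1e-8):
    while step > tol:
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
                break
        else:
            step *= 0.5   # no improvement: refine the step
    return x

random.seed(1)
best = local_search(random.uniform(-5.0, 5.0))
for _ in range(200):                       # MBH outer loop
    trial = local_search(best + random.gauss(0.0, 1.0))
    if f(trial) < f(best):                 # monotonic acceptance rule
        best = trial
print(best, f(best))   # expect a point near the global minimum at 0
```

The local solver alone stalls in whichever basin it starts in; the random hops plus monotonic acceptance are what let the search escape, which is exactly the limitation of gradient-based DDP that MBH addresses.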
Evolutionary stability concepts in a stochastic environment
NASA Astrophysics Data System (ADS)
Zheng, Xiu-Deng; Li, Cong; Lessard, Sabin; Tao, Yi
2017-09-01
Over the past 30 years, evolutionary game theory and the concept of an evolutionarily stable strategy have been not only extensively developed and successfully applied to explain the evolution of animal behaviors, but also widely used in economics and social sciences. Nonetheless, the stochastic dynamical properties of evolutionary games in randomly fluctuating environments are still unclear. In this study, we investigate conditions for stochastic local stability of fixation states and constant interior equilibria in a two-phenotype model with random payoffs following pairwise interactions. Based on this model, we develop the concepts of stochastic evolutionary stability (SES) and stochastic convergence stability (SCS). We show that the condition for a pure strategy to be SES and SCS is more stringent than in a constant environment, while the condition for a constant mixed strategy to be SES is less stringent than the condition to be SCS, which is less stringent than the condition in a constant environment.
Identification and stochastic control of helicopter dynamic modes
NASA Technical Reports Server (NTRS)
Molusis, J. A.; Bar-Shalom, Y.
1983-01-01
A general treatment of parameter identification and stochastic control for use on helicopter dynamic systems is presented. Rotor dynamic models, including specific applications to rotor blade flapping and the helicopter ground resonance problem are emphasized. Dynamic systems which are governed by periodic coefficients as well as constant coefficient models are addressed. The dynamic systems are modeled by linear state variable equations which are used in the identification and stochastic control formulation. The pure identification problem is addressed, as well as the stochastic control problem, which includes combined identification and control for dynamic systems. The stochastic control problem includes the effect of parameter uncertainty on the solution and the concept of learning, and how this is affected by the control's dual effect. The identification formulation requires algorithms suitable for on-line use, and thus recursive identification algorithms are considered. The applications presented use the recursive extended Kalman filter for parameter identification, which has excellent convergence for systems without process noise.
Evolution of probability densities in stochastic coupled map lattices
NASA Astrophysics Data System (ADS)
Losson, Jérôme; Mackey, Michael C.
1995-08-01
This paper describes the statistical properties of coupled map lattices subjected to the influence of stochastic perturbations. The stochastic analog of the Perron-Frobenius operator is derived for various types of noise. When the local dynamics satisfy rather mild conditions, this equation is shown to possess either stable, steady state solutions (i.e., a stable invariant density) or density limit cycles. Convergence of the phase space densities to these limit cycle solutions explains the nonstationary behavior of statistical quantifiers at equilibrium. Numerical experiments performed on various lattices of tent, logistic, and shift maps with diffusive-like interelement couplings are examined in light of these theoretical results.
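A minimal stochastic coupled map lattice of the kind studied here can be simulated directly (sketch with ad hoc parameters; the paper's Perron-Frobenius analysis is not reproduced):

```python
import random

# Minimal sketch of a stochastic coupled map lattice: tent maps with
# diffusive nearest-neighbour coupling and additive noise, clipped
# back into [0, 1]. Parameter choices are illustrative only.
def tent(x, a=1.9):
    return a * min(x, 1.0 - x)

def step(lattice, eps=0.1, noise=0.01):
    n = len(lattice)
    fx = [tent(x) for x in lattice]
    new = []
    for i in range(n):
        left, right = fx[(i - 1) % n], fx[(i + 1) % n]  # periodic ends
        y = (1.0 - eps) * fx[i] + 0.5 * eps * (left + right)
        y += random.uniform(-noise, noise)              # stochastic kick
        new.append(min(max(y, 0.0), 1.0))
    return new

random.seed(0)
state = [random.random() for _ in range(64)]
for _ in range(500):
    state = step(state)
print(min(state), max(state))  # trajectories stay inside [0, 1]
```

Histogramming many such states over time approximates the phase-space density whose evolution the paper studies analytically.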
Graph Theory-Based Pinning Synchronization of Stochastic Complex Dynamical Networks.
Li, Xiao-Jian; Yang, Guang-Hong
2017-02-01
This paper is concerned with the adaptive pinning synchronization problem of stochastic complex dynamical networks (CDNs). Based on algebraic graph theory and Lyapunov theory, pinning controller design conditions are derived, and the rigorous convergence analysis of synchronization errors in the probability sense is also conducted. Compared with the existing results, the topology structures of stochastic CDNs are allowed to be unknown due to the use of graph theory. In particular, it is shown that the selection of nodes for pinning depends on the unknown lower bounds of coupling strengths. Finally, an example on a Chua's circuit network is given to validate the effectiveness of the theoretical results.
An implicit iterative algorithm with a tuning parameter for Itô Lyapunov matrix equations
NASA Astrophysics Data System (ADS)
Zhang, Ying; Wu, Ai-Guo; Sun, Hui-Jie
2018-01-01
In this paper, an implicit iterative algorithm is proposed for solving a class of Lyapunov matrix equations arising in Itô stochastic linear systems. A tuning parameter is introduced in this algorithm, and thus the convergence rate of the algorithm can be changed. Some conditions are presented such that the developed algorithm is convergent. In addition, an explicit expression is also derived for the optimal tuning parameter, which guarantees that the obtained algorithm achieves its fastest convergence rate. Finally, numerical examples are employed to illustrate the effectiveness of the given algorithm.
Analytical solution of a stochastic model of risk spreading with global coupling
NASA Astrophysics Data System (ADS)
Morita, Satoru; Yoshimura, Jin
2013-11-01
We study a stochastic matrix model to understand the mechanics of risk spreading (or bet hedging) by dispersion. Up to now, this model has been mostly dealt with numerically, except for the well-mixed case. Here, we present an analytical result that shows that optimal dispersion leads to Zipf's law. Moreover, we found that the arithmetic ensemble average of the total growth rate converges to the geometric one, because the sample size is finite.
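The finite-sample effect noted in the last sentence is easy to demonstrate numerically (sketch with hypothetical growth factors; working in log space avoids overflow):

```python
import math, random

# Sketch: with a finite ensemble of sample paths, the growth rate of
# the arithmetic ensemble average of multiplicative growth approaches
# the geometric (log-mean) rate, not log E[r].
random.seed(42)
N, T = 200, 4000
rates = (0.6, 1.5)                                  # equiprobable factors
geo_rate = 0.5 * (math.log(0.6) + math.log(1.5))    # E[log r], negative
naive_rate = math.log(0.5 * (0.6 + 1.5))            # log E[r], positive

# S[i] = log of path i's total growth over T steps (log space!)
S = [sum(math.log(random.choice(rates)) for _ in range(T)) for _ in range(N)]
m = max(S)
log_avg = m + math.log(sum(math.exp(s - m) for s in S) / N)  # log-sum-exp
rate = log_avg / T              # growth rate of the ensemble average
print(rate, geo_rate, naive_rate)
```

The finite-ensemble rate comes out negative and close to the geometric rate, even though the naive expectation of the growth factor would predict growth.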
Noise-enhanced clustering and competitive learning algorithms.
Osoba, Osonde; Kosko, Bart
2013-01-01
Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning.
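The flavour of the noise benefit can be sketched with k-means plus an annealed noise injection into the centroid updates (an illustrative stand-in, not the authors' noisy expectation-maximization construction; all parameters are ad hoc):

```python
import random

# Hedged sketch: 1-D k-means where small zero-mean noise, decaying
# over iterations, is added to each centroid update. Illustrative
# only; this is not the paper's exact noisy-EM algorithm.
def kmeans_noisy(data, k, iters=50, noise0=0.5, seed=0):
    rng = random.Random(seed)
    cents = rng.sample(data, k)
    for t in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:                       # assign to nearest centroid
            j = min(range(k), key=lambda j: (x - cents[j]) ** 2)
            clusters[j].append(x)
        noise = noise0 / (t + 1)             # annealed noise schedule
        for j, c in enumerate(clusters):
            if c:                            # noisy centroid update
                cents[j] = sum(c) / len(c) + rng.gauss(0.0, noise)
    return sorted(cents)

rng = random.Random(7)
data = [rng.gauss(0.0, 0.3) for _ in range(100)] + \
       [rng.gauss(5.0, 0.3) for _ in range(100)]
cents = kmeans_noisy(data, 2)
print(cents)  # expect centroids near the true centers 0 and 5
```

Because the injected noise decays to zero, the final centroids still settle on the cluster means; the early, larger kicks are what can shake a poor initialization loose.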
Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.
Beentjes, Casper H L; Baker, Ruth E
2018-05-25
Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from typical slow O(N^(-1/2)) convergence rates as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely tau-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
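A minimal tau-leaping simulator for a birth-death network shows the sample-path machinery the paper builds on (plain pseudo-random input here; the paper's contribution is to replace this stream with a randomised low-discrepancy one):

```python
import math, random

# Minimal tau-leaping sketch for a birth-death process
# (0 --k1--> X, X --k2--> 0). All parameters are illustrative.
def poisson(lam, rng):          # Knuth's sampler, fine for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap_path(x0, k1, k2, tau, steps, rng):
    x = x0
    for _ in range(steps):
        births = poisson(k1 * tau, rng)      # firings in one leap
        deaths = poisson(k2 * x * tau, rng)
        x = max(x + births - deaths, 0)      # guard against negatives
    return x

rng = random.Random(3)
samples = [tau_leap_path(0, k1=10.0, k2=0.1, tau=0.1, steps=500, rng=rng)
           for _ in range(200)]
mean = sum(samples) / len(samples)
print(mean)   # stationary mean of this process is k1/k2 = 100
```

Averaging over N such independent paths is exactly the Monte Carlo estimator whose O(N^(-1/2)) error decay the quasi-Monte Carlo variant aims to beat.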
Stochasticity in numerical solutions of the nonlinear Schroedinger equation
NASA Technical Reports Server (NTRS)
Shen, Mei-Mei; Nicholson, D. R.
1987-01-01
The cubically nonlinear Schroedinger equation is an important model of nonlinear phenomena in fluids and plasmas. Numerical solutions in a spatially periodic system commonly involve truncation to a finite number of Fourier modes. These solutions are found to be stochastic in the sense that the largest Liapunov exponent is positive. As the number of modes is increased, the size of this exponent appears to converge to zero, in agreement with the recent demonstration of the integrability of the spatially periodic case.
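The stochasticity criterion used here, a positive largest Lyapunov exponent, can be illustrated on a one-dimensional map where the exponent is known in closed form (the logistic map at r = 4 has exponent ln 2; the paper applies the same diagnostic to truncated Fourier modes):

```python
import math

# Sketch of the diagnostic behind "stochastic in the sense that the
# largest Lyapunov exponent is positive": average the log-derivative
# along an orbit of the logistic map x -> 4x(1-x).
def lyapunov_logistic(x0=0.3, n=200000, burn=1000):
    x = x0
    for _ in range(burn):                     # discard transient
        x = 4.0 * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(4.0 - 8.0 * x))   # |f'(x)| = |4 - 8x|
        x = 4.0 * x * (1.0 - x)
    return acc / n

lam = lyapunov_logistic()
print(lam, math.log(2.0))   # estimate should approach ln 2
```

A positive average, as here, signals exponential divergence of nearby orbits; the paper's observation is that this exponent shrinks toward zero as more Fourier modes are retained.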
Multi-fidelity stochastic collocation method for computation of statistical moments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Xueyu, E-mail: xueyu-zhu@uiowa.edu; Linebarger, Erin M., E-mail: aerinline@sci.utah.edu; Xiu, Dongbin, E-mail: xiu.16@osu.edu
We present an efficient numerical algorithm to approximate the statistical moments of stochastic problems, in the presence of models with different fidelities. The method extends the multi-fidelity approximation method developed in . By combining the efficiency of low-fidelity models and the accuracy of high-fidelity models, our method exhibits fast convergence with a limited number of high-fidelity simulations. We establish an error bound of the method and present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.
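The general multi-fidelity idea, correcting a small high-fidelity sample with a large low-fidelity one, can be sketched with a control-variate-style estimator (generic illustration with hypothetical models; this is not the specific algorithm of the paper):

```python
import math, random

# Generic multi-fidelity sketch: many cheap low-fidelity samples
# correct a few expensive high-fidelity ones via a control-variate
# style estimator. Both models below are hypothetical stand-ins.
def high(x):   # "expensive" model
    return math.sin(x) + 0.05 * x * x

def low(x):    # "cheap" surrogate that captures the dominant behaviour
    return math.sin(x)

rng = random.Random(11)
xs_few = [rng.uniform(0.0, 1.0) for _ in range(20)]      # high-fi budget
xs_many = [rng.uniform(0.0, 1.0) for _ in range(20000)]  # low-fi budget

mean_h = sum(high(x) for x in xs_few) / len(xs_few)
mean_l_few = sum(low(x) for x in xs_few) / len(xs_few)
mean_l_many = sum(low(x) for x in xs_many) / len(xs_many)

est = mean_h + (mean_l_many - mean_l_few)   # low-fi correction term
truth = (1.0 - math.cos(1.0)) + 0.05 / 3.0  # exact mean on U(0,1)
print(est, truth)
```

Only 20 expensive evaluations are used, yet the estimate inherits much of the accuracy of the 20000-sample low-fidelity mean, which is the efficiency trade-off the abstract describes.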
Integration of progressive hedging and dual decomposition in stochastic integer programs
Watson, Jean -Paul; Guo, Ge; Hackebeil, Gabriel; ...
2015-04-07
We present a method for integrating the Progressive Hedging (PH) algorithm and the Dual Decomposition (DD) algorithm of Carøe and Schultz for stochastic mixed-integer programs. Based on the correspondence between lower bounds obtained with PH and DD, a method to transform weights from PH to Lagrange multipliers in DD is found. Fast progress in early iterations of PH speeds up convergence of DD to an exact solution. Finally, we report computational results on server location and unit commitment instances.
A Numerical Approximation Framework for the Stochastic Linear Quadratic Regulator on Hilbert Spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levajković, Tijana, E-mail: tijana.levajkovic@uibk.ac.at, E-mail: t.levajkovic@sf.bg.ac.rs; Mena, Hermann, E-mail: hermann.mena@uibk.ac.at; Tuffaha, Amjad, E-mail: atufaha@aus.edu
We present an approximation framework for computing the solution of the stochastic linear quadratic control problem on Hilbert spaces. We focus on the finite horizon case and the related differential Riccati equations (DREs). Our approximation framework is concerned with the so-called “singular estimate control systems” (Lasiecka in Optimal control problems and Riccati equations for systems with unbounded controls and partially analytic generators: applications to boundary and point control problems, 2004) which model certain coupled systems of parabolic/hyperbolic mixed partial differential equations with boundary or point control. We prove that the solutions of the approximate finite-dimensional DREs converge to the solution of the infinite-dimensional DRE. In addition, we prove that the optimal state and control of the approximate finite-dimensional problem converge to the optimal state and control of the corresponding infinite-dimensional problem.
Emerging interdisciplinary fields in the coming intelligence/convergence era
NASA Astrophysics Data System (ADS)
Noor, Ahmed K.
2012-09-01
Dramatic advances are on the horizon, resulting from the rapid pace of development of several technologies, including computing, communication, mobile, robotic, and interactive technologies. These advances, along with the trend towards convergence of traditional engineering disciplines with physical, life and other science disciplines, will result in the development of new interdisciplinary fields, as well as in new paradigms for engineering practice in the coming intelligence/convergence era (post-information age). The interdisciplinary fields include Cyber Engineering, Living Systems Engineering, Biomechatronics/Robotics Engineering, Knowledge Engineering, Emergent/Complexity Engineering, and Multiscale Systems Engineering. The paper identifies some of the characteristics of the intelligence/convergence era, gives a broad definition of convergence, describes some of the emerging interdisciplinary fields, and lists some of the academic and other organizations working in these disciplines. The need is described for establishing a Hierarchical Cyber-Physical Ecosystem for facilitating interdisciplinary collaborations and accelerating the development of a skilled workforce in the new fields. The major components of the ecosystem are listed. The new interdisciplinary fields will yield critical advances in engineering practice and help in addressing future challenges in a broad array of sectors, from manufacturing to energy, transportation, climate, and healthcare. They will also enable building large future complex adaptive systems-of-systems, such as intelligent multimodal transportation systems, optimized multi-energy systems, intelligent disaster prevention systems, and smart cities.
Distributed Adaptive Neural Control for Stochastic Nonlinear Multiagent Systems.
Wang, Fang; Chen, Bing; Lin, Chong; Li, Xuehua
2016-11-14
In this paper, a consensus tracking problem of nonlinear multiagent systems is investigated under a directed communication topology. All the followers are modeled by stochastic nonlinear systems in nonstrict feedback form, where nonlinearities and stochastic disturbance terms are totally unknown. Based on the structural characteristic of neural networks (in Lemma 4), a novel distributed adaptive neural control scheme is put forward. The proposed control method not only effectively handles unknown nonlinearities in nonstrict feedback systems, but also copes with the interactions among agents and coupling terms. Based on the stochastic Lyapunov functional method, it is shown that all the signals of the closed-loop system are bounded in probability and all followers' outputs converge to a neighborhood of the output of the leader. Finally, the effectiveness of the control method is demonstrated by a numerical example.
NASA Astrophysics Data System (ADS)
McCarthy, S.; Rachinskii, D.
2011-01-01
We describe two Euler type numerical schemes obtained by discretisation of a stochastic differential equation which contains the Preisach memory operator. Equations of this type are of interest in areas such as macroeconomics and terrestrial hydrology where deterministic models containing the Preisach operator have been developed but do not fully encapsulate stochastic aspects of the area. A simple price dynamics model is presented as one motivating example for our studies. Some numerical evidence is given that the two numerical schemes converge to the same limit as the time step decreases. We show that the Preisach term introduces a damping effect which increases on the parts of the trajectory demonstrating a stronger upwards or downwards trend. The results are preliminary to a broader programme of research of stochastic differential equations with the Preisach hysteresis operator.
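An explicit Euler-Maruyama scheme for an SDE containing a hysteresis operator can be sketched with a toy Preisach operator built from a few weighted relay hysterons (all thresholds, weights and coefficients are illustrative; the paper's schemes and the price-dynamics model are more involved):

```python
import math, random

# Sketch of an explicit Euler scheme for dX = -P[X] dt + sigma dW,
# where P is a toy Preisach operator: a weighted sum of relay
# hysterons with thresholds beta_i < alpha_i. Parameters are ad hoc.
class Relay:
    def __init__(self, beta, alpha, state=-1):
        self.beta, self.alpha, self.state = beta, alpha, state
    def update(self, x):
        if x >= self.alpha:       # switch up
            self.state = 1
        elif x <= self.beta:      # switch down
            self.state = -1
        return self.state         # otherwise: memory (no switch)

relays = [Relay(-0.5, 0.5), Relay(-1.0, 1.0), Relay(-1.5, 1.5)]
weights = [0.5, 0.3, 0.2]

def preisach(x):
    return sum(w * r.update(x) for w, r in zip(weights, relays))

random.seed(2)
x, dt, sigma = 0.0, 1e-3, 0.3
path = []
for _ in range(20000):
    dw = random.gauss(0.0, math.sqrt(dt))
    x += -preisach(x) * dt + sigma * dw   # Euler-Maruyama step
    path.append(x)
print(min(path), max(path))
```

The relay states carry the memory between time steps, so the hysteretic drift opposes sustained trends, which is the damping effect described in the abstract.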
Data Analysis and Non-local Parametrization Strategies for Organized Atmospheric Convection
NASA Astrophysics Data System (ADS)
Brenowitz, Noah D.
The intrinsically multiscale nature of moist convective processes in the atmosphere complicates scientific understanding, and, as a result, current coarse-resolution climate models poorly represent convective variability in the tropics. This dissertation addresses this problem by 1) studying new cumulus convective closures in a pair of idealized models for tropical moist convection, and 2) developing innovative strategies for analyzing high-resolution numerical simulations of organized convection. The first two chapters of this dissertation revisit a historical controversy about the use of convective closures based on the large-scale wind field or moisture convergence. In the first chapter, a simple coarse-resolution stochastic model for convective inhibition is designed which includes the non-local effects of wind-convergence on convective activity. This model is designed to replicate the convective dynamics of a typical coarse-resolution climate prediction model. The non-local convergence coupling is motivated by the phenomenon of gregarious convection, whereby mesoscale convective systems emit gravity waves which can promote convection at distant locations. Linearized analysis and nonlinear simulations show that this convergence coupling allows for increased interaction between cumulus convection and the large-scale circulation, but does not suffer from the deleterious behavior of traditional moisture-convergence closures. In the second chapter, the non-local convergence coupling idea is extended to an idealized stochastic multicloud model. This model allows for stochastic transitions between three distinct cloud types, and non-local convergence coupling is most beneficial when applied to the transition from shallow to deep convection. This is consistent with recent observational and numerical modeling evidence, and there is a growing body of work highlighting the importance of this transition in tropical meteorology.
In a series of idealized Walker cell simulations, convergence coupling enhances the persistence of Kelvin wave analogs in dry regions of the domain while leaving the dynamics in moist regions largely unaltered. The final chapter of this dissertation presents a technique for analyzing the variability of a direct numerical simulation of Rayleigh-Bénard convection at large aspect ratio, which is a basic prototype of convective organization. High resolution numerical models are an invaluable tool for studying atmospheric dynamics, but modern data analysis techniques struggle with the extreme size of the model outputs and the trivial symmetries of the underlying dynamical systems (e.g. shift-invariance). A new data analysis approach which is invariant to spatial symmetries is derived by combining a quasi-Lagrangian description of the data, time-lagged embedding, and manifold learning techniques. The quasi-Lagrangian description is obtained by a straightforward isothermal binning procedure, which compresses the data in a dynamically-aware fashion. A small number of orthogonal modes returned by this algorithm are able to explain the highly intermittent dynamics of the bulk heat transfer, as quantified by the Nusselt number.
Hu, Cong; Li, Zhi; Zhou, Tian; Zhu, Aijun; Xu, Chuanpei
2016-01-01
We propose a new meta-heuristic algorithm named Levy flights multi-verse optimizer (LFMVO), which incorporates Levy flights into the multi-verse optimizer (MVO) algorithm to solve numerical and engineering optimization problems. The original MVO easily falls into stagnation when wormholes stochastically re-span a number of universes (solutions) around the best universe achieved over the course of iterations. Since Levy flights are superior in exploring unknown, large-scale search spaces, they are integrated into the previous best universe to force MVO out of stagnation. We test this method on three sets of 23 well-known benchmark test functions and an NP-complete problem of test scheduling for Network-on-Chip (NoC). Experimental results prove that the proposed LFMVO is more competitive than its peers in both the quality of the resulting solutions and convergence speed.
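The Levy-flight ingredient is commonly generated with Mantegna's algorithm, sketched below (generic recipe parameterized by the stability index beta; the MVO-specific universe updates are omitted):

```python
import math, random

# Sketch of Levy-flight step generation via Mantegna's algorithm,
# the usual recipe in Levy-enhanced metaheuristics. beta is the
# stability index; smaller beta gives heavier tails.
def levy_step(beta, rng):
    num = math.gamma(1.0 + beta) * math.sin(math.pi * beta / 2.0)
    den = math.gamma((1.0 + beta) / 2.0) * beta * 2.0 ** ((beta - 1.0) / 2.0)
    sigma_u = (num / den) ** (1.0 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1.0 / beta)   # heavy-tailed step length

rng = random.Random(5)
steps = [levy_step(1.5, rng) for _ in range(10000)]
big = sum(1 for s in steps if abs(s) > 3.0)
print(big)   # heavy tail: a noticeable fraction of very large jumps
```

Most steps are small (local refinement) but occasional very large jumps occur, which is the escape-from-stagnation mechanism the abstract relies on.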
Stayton, C. Tristan
2015-01-01
Convergent evolution is central to the study of life's evolutionary history. Researchers have documented the ubiquity of convergence and have used this ubiquity to make inferences about the nature of limits on evolution. However, these inferences are compromised by unrecognized inconsistencies in the definitions, measures, significance tests and inferred causes of convergent evolution. I review these inconsistencies and provide recommendations for standardizing studies of convergence. A fundamental dichotomy exists between definitions that describe convergence as a pattern and those that describe it as a pattern caused by a particular process. When this distinction is not acknowledged it becomes easy to assume that a pattern of convergence indicates that a particular process has been active, leading researchers away from alternative explanations. Convergence is not necessarily caused by limits to evolution, either adaptation or constraint; even substantial amounts of convergent evolution can be generated by a purely stochastic process. In the absence of null models, long lists of examples of convergent events do not necessarily indicate that convergence or any evolutionary process is ubiquitous throughout the history of life. Pattern-based definitions of convergence, coupled with quantitative measures and null models, must be applied before drawing inferences regarding large-scale limits to evolution. PMID:26640646
NASA Astrophysics Data System (ADS)
Herath, Narmada; Del Vecchio, Domitilla
2018-03-01
Biochemical reaction networks often involve reactions that take place on different time scales, giving rise to "slow" and "fast" system variables. This property is widely used in the analysis of systems to obtain dynamical models with reduced dimensions. In this paper, we consider stochastic dynamics of biochemical reaction networks modeled using the Linear Noise Approximation (LNA). Under time-scale separation conditions, we obtain a reduced-order LNA that approximates both the slow and fast variables in the system. We mathematically prove that the first and second moments of this reduced-order model converge to those of the full system as the time-scale separation becomes large. These mathematical results, in particular, provide a rigorous justification to the accuracy of LNA models derived using the stochastic total quasi-steady state approximation (tQSSA). Since, in contrast to the stochastic tQSSA, our reduced-order model also provides approximations for the fast variable stochastic properties, we term our method the "stochastic tQSSA+". Finally, we demonstrate the application of our approach on two biochemical network motifs found in gene-regulatory and signal transduction networks.
Single-particle stochastic heat engine.
Rana, Shubhashis; Pal, P S; Saha, Arnab; Jayannavar, A M
2014-10-01
We have performed an extensive analysis of a single-particle stochastic heat engine constructed by manipulating a Brownian particle in a time-dependent harmonic potential. The cycle consists of two isothermal steps at different temperatures and two adiabatic steps similar to that of a Carnot engine. The engine shows qualitative differences in inertial and overdamped regimes. All the thermodynamic quantities, including efficiency, exhibit strong fluctuations in a time periodic steady state. The fluctuations of stochastic efficiency dominate over the mean values even in the quasistatic regime. Interestingly, our system acts as an engine provided the temperature difference between the two reservoirs is greater than a finite critical value which in turn depends on the cycle time and other system parameters. This is supported by our analytical results carried out in the quasistatic regime. Our system works more reliably as an engine for large cycle times. By studying various model systems, we observe that the operational characteristics are model dependent. Our results clearly rule out any universal relation between efficiency at maximum power and temperature of the baths. We have also verified fluctuation relations for heat engines in time periodic steady state.
NASA Astrophysics Data System (ADS)
Gottwald, Georg; Melbourne, Ian
2013-04-01
Whereas diffusion limits of stochastic multi-scale systems have a long and successful history, the case of constructing stochastic parametrizations of chaotic deterministic systems has been much less studied. We present rigorous results of convergence of a chaotic slow-fast system to a stochastic differential equation with multiplicative noise. Furthermore we present rigorous results for chaotic slow-fast maps, occurring as numerical discretizations of continuous time systems. This raises the issue of how to interpret certain stochastic integrals; surprisingly the resulting integrals of the stochastic limit system are generically neither of Stratonovich nor of Ito type in the case of maps. It is shown that the limit system of a numerical discretisation is different from the associated continuous time system. This has important consequences when interpreting the statistics of long time simulations of multi-scale systems - they may be very different from those of the original continuous time system which we set out to study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir
2014-08-01
In this paper, a new computational method based on the generalized hat basis functions is proposed for solving stochastic Itô–Volterra integral equations. In this way, a new stochastic operational matrix for generalized hat functions on the finite interval [0,T] is obtained. By using these basis functions and their stochastic operational matrix, such problems can be transformed into linear lower triangular systems of algebraic equations which can be directly solved by forward substitution. The rate of convergence of the proposed method is also analyzed and shown to be O(1/n^2). Further, in order to show the accuracy and reliability of the proposed method, the new approach is compared with the block pulse functions method on several examples. The obtained results reveal that the proposed method is more accurate and efficient than the block pulse functions method.
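The forward-substitution solve that the method reduces to can be sketched directly. This is a generic lower-triangular solve, not the paper's operational-matrix construction itself; the matrix and right-hand side below are arbitrary illustrations:

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L x = b row by row for a lower triangular matrix L:
    each unknown x[i] depends only on the already-computed x[0..i-1]."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

L = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [4.0, 1.0, 5.0]])
b = np.array([2.0, 5.0, 14.0])
x = forward_substitution(L, b)
```

Because no elimination step is needed, the solve costs only O(n^2) operations, which is part of what makes the triangular reduction attractive.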
Development of the Scale for "Convergence Thinking" in Engineering
ERIC Educational Resources Information Center
Park, Sungmi
2016-01-01
Purpose: The purpose of this paper is to define the concept of "convergence thinking" as a trading zone for knowledge fusion in the engineering field, and to develop a scale for measuring it. Design/Methodology/Approach: Based on results from a literature review, this study clarifies the theoretical ground for "convergence thinking."…
Solarin, Sakiru Adebola; Gil-Alana, Luis Alberiko; Al-Mulali, Usama
2018-04-13
In this article, we have examined the hypothesis of convergence of renewable energy consumption in 27 OECD countries. However, instead of relying on classical techniques, which are based on the dichotomy between stationarity I(0) and nonstationarity I(1), we consider a more flexible approach based on fractional integration. We employ both parametric and semiparametric techniques. Using parametric methods, evidence of convergence is found in the cases of Mexico, Switzerland and Sweden along with the USA, Portugal, the Czech Republic, South Korea and Spain, and employing semiparametric approaches, we found evidence of convergence in all these eight countries along with Australia, France, Japan, Greece, Italy and Poland. For the remaining 13 countries, even though the orders of integration of the series are smaller than one in all cases except Germany, the confidence intervals are so wide that we cannot reject the hypothesis of unit roots thus not finding support for the hypothesis of convergence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durran, Richard; Neate, Andrew; Truman, Aubrey
2008-03-15
We consider the Bohr correspondence limit of the Schrödinger wave function for an atomic elliptic state. We analyze this limit in the context of Nelson's stochastic mechanics, exposing an underlying deterministic dynamical system in which trajectories converge to Keplerian motion on an ellipse. This solves the long-standing problem of obtaining Kepler's laws of planetary motion in a quantum mechanical setting. In this quantum mechanical setting, local mild instabilities occur in the Keplerian orbit for eccentricities greater than 1/√2 which do not occur classically.
Issues and Strategies in Solving Multidisciplinary Optimization Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya
2013-01-01
Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. The accumulated multidisciplinary design activity is collected under a testbed entitled COMETBOARDS. Several issues were encountered during the solution of the problems. Four issues and the strategies adapted for their resolution are discussed. This is followed by a discussion of analytical methods that is limited to structural design applications. An optimization process can lead to an inefficient local solution. This deficiency was encountered during the design of an engine component. The limitation was overcome through an augmentation of animation into optimization. Optimum solutions obtained were infeasible for the aircraft and airbreathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through a set of problems: design of an engine component, synthesis of a subsonic aircraft, operation optimization of a supersonic engine, design of a wave-rotor-topping device, profile optimization of a cantilever beam, and design of a cylindrical shell. This chapter provides a cursory account of the issues; cited references provide detailed discussion of the topics. The design of a structure can also be generated by the traditional method and by the stochastic design concept. Merits and limitations of the three methods (traditional method, optimization method and stochastic concept) are illustrated. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated.
In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions can be produced by all three methods. The variation in the weight calculated by the methods was found to be modest. Some variation was noticed in the designs calculated by the methods; the variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.
Control of stochastic sensitivity in a stabilization problem for gas discharge system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bashkirtseva, Irina
2015-11-30
We consider a nonlinear dynamic stochastic system with control. A problem of stochastic sensitivity synthesis of the equilibrium is studied. A mathematical technique of the solution of this problem is discussed. This technique is applied to the problem of the stabilization of the operating mode for the stochastic gas discharge system. We construct a feedback regulator that reduces the stochastic sensitivity of the equilibrium, suppresses large-amplitude oscillations, and provides a proper operation of this engineering device.
NASA Astrophysics Data System (ADS)
Ullah, Rahman; Faizullah, Faiz
2017-10-01
This investigation studies a Euler-Maruyama (EM) approximation scheme for stochastic differential equations (SDEs) in the framework of G-Brownian motion. Subject to the growth condition, it is shown that the EM solutions Z^q(t) are bounded; in particular, Z^q(t) ∈ M_G^2([t_0,T];R^n). Letting Z(t) be the unique solution to the SDE in the G-framework and utilizing the growth and Lipschitz conditions, the convergence of Z^q(t) to Z(t) is established. The Burkholder-Davis-Gundy (BDG) inequalities, Hölder's inequality, Gronwall's inequality and Doob's martingale inequality are used to derive the results. In addition, without assuming existence of a solution of the stated SDE, we show that the Euler-Maruyama approximation sequence {Z^q(t)} is Cauchy in M_G^2([t_0,T];R^n) and thus converges to a limit which is the unique solution to the SDE in the G-framework.
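The Euler-Maruyama scheme itself is easy to sketch in the classical (non-G-framework) setting. The drift, diffusion and test process below are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def euler_maruyama(f, g, x0, t0, T, n, rng=None):
    """Euler-Maruyama approximation of dX = f(t,X) dt + g(t,X) dB on [t0, T],
    using n equal time steps; returns the sampled path."""
    rng = np.random.default_rng(rng)
    dt = (T - t0) / n
    t, x = t0, x0
    path = [x0]
    for _ in range(n):
        dB = np.sqrt(dt) * rng.normal()      # Brownian increment ~ N(0, dt)
        x = x + f(t, x) * dt + g(t, x) * dB
        t += dt
        path.append(x)
    return np.array(path)

# Geometric Brownian motion dX = mu*X dt + sigma*X dB: the terminal values
# of the scheme reproduce the exact mean E[X_T] = x0 * exp(mu * T).
finals = [euler_maruyama(lambda t, x: 0.05 * x, lambda t, x: 0.2 * x,
                         1.0, 0.0, 1.0, 200, rng=s)[-1] for s in range(2000)]
```

Averaging the simulated terminal values recovers exp(0.05) ≈ 1.051 up to Monte Carlo error, a quick sanity check on the scheme's convergence.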
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Nunno, Giulia, E-mail: giulian@math.uio.no; Khedher, Asma, E-mail: asma.khedher@tum.de; Vanmaele, Michèle, E-mail: michele.vanmaele@ugent.be
We consider a backward stochastic differential equation with jumps (BSDEJ) which is driven by a Brownian motion and a Poisson random measure. We present two candidate-approximations to this BSDEJ and we prove that the solution of each candidate-approximation converges to the solution of the original BSDEJ in a space which we specify. We use this result to investigate in further detail the consequences of the choice of the model to (partial) hedging in incomplete markets in finance. As an application, we consider models in which the small variations in the price dynamics are modeled with a Poisson random measure with infinite activity and models in which these small variations are modeled with a Brownian motion or are cut off. Using the convergence results on BSDEJs, we show that quadratic hedging strategies are robust towards the approximation of the market prices and we derive an estimation of the model risk.
A fuzzy reinforcement learning approach to power control in wireless transmitters.
Vengerov, David; Bambos, Nicholas; Berenji, Hamid R
2005-08-01
We address the issue of power-controlled shared channel access in wireless networks supporting packetized data traffic. We formulate this problem using the dynamic programming framework and present a new distributed fuzzy reinforcement learning algorithm (ACFRL-2) capable of adequately solving a class of problems to which the power control problem belongs. Our experimental results show that the algorithm converges almost deterministically to a neighborhood of optimal parameter values, as opposed to a very noisy stochastic convergence of earlier algorithms. The main tradeoff facing a transmitter is to balance its current power level with future backlog in the presence of stochastically changing interference. Simulation experiments demonstrate that the ACFRL-2 algorithm achieves significant performance gains over the standard power control approach used in CDMA2000. Such a large improvement is explained by the fact that ACFRL-2 allows transmitters to learn implicit coordination policies, which back off under stressful channel conditions as opposed to engaging in escalating "power wars."
NASA Astrophysics Data System (ADS)
Magnen, Jacques; Unterberger, Jérémie
2012-03-01
Let B=(B_1(t),...,B_d(t)) be a d-dimensional fractional Brownian motion with Hurst index α<1/4, or more generally a Gaussian process whose paths have the same local regularity. Defining properly iterated integrals of B is a difficult task because of the low Hölder regularity index of its paths. Yet rough path theory shows it is the key to the construction of a stochastic calculus with respect to B, or to solving differential equations driven by B. We intend to show in a series of papers how to desingularize iterated integrals by a weak, singular non-Gaussian perturbation of the Gaussian measure defined by a limit-in-law procedure. Convergence is proved by using "standard" tools of constructive field theory, in particular cluster expansions and renormalization. These powerful tools allow optimal estimates, and call for an extension of Gaussian tools such as, for instance, the Malliavin calculus. After a first introductory paper [MagUnt1], this one concentrates on the details of the constructive proof of convergence for second-order iterated integrals, also known as the Lévy area.
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bokanowski, Olivier, E-mail: boka@math.jussieu.fr; Picarelli, Athena, E-mail: athena.picarelli@inria.fr; Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr
2015-02-15
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied, leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachability analysis are included to illustrate our approach.
NASA Technical Reports Server (NTRS)
Zak, M.
1998-01-01
Quantum analog computing is based upon the similarity between the mathematical formalism of quantum mechanics and the phenomena to be computed. It exploits the dynamical convergence of several competing phenomena to an attractor which can represent an extremum of a function, an image, a solution to a system of ODEs, or a stochastic process.
Stochastic Stirling Engine Operating in Contact with Active Baths
NASA Astrophysics Data System (ADS)
Zakine, Ruben; Solon, Alexandre; Gingrich, Todd; van Wijland, Frédéric
2017-04-01
A Stirling engine made of a colloidal particle in contact with a nonequilibrium bath is considered and analyzed with the tools of stochastic energetics. We model the bath by non-Gaussian persistent noise acting on the colloidal particle. Depending on the chosen definition of an isothermal transformation in this nonequilibrium setting, we find that either the energetics of the engine parallels that of its equilibrium counterpart or, in the simplest case, that it ends up being less efficient. Persistence, more than non-Gaussian effects, is responsible for this result.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiebe, David J.; Carlson, Andrew; Stoker, Kyle C.
A transition duct system for routing a gas flow in a combustion turbine engine is provided. The transition duct system includes one or more converging flow joint inserts forming a trailing edge at an intersection between adjacent transition ducts. The converging flow joint insert may be contained within a converging flow joint insert receiver and may be disconnected from the transition duct bodies by which the converging flow joint insert is positioned. Being disconnected eliminates stress formation within the converging flow joint insert, thereby enhancing the life of the insert. The converging flow joint insert may be removable such that the insert can be replaced once worn beyond design limits.
Stochastic Kuramoto oscillators with discrete phase states.
Jörg, David J
2017-09-01
We present a generalization of the Kuramoto phase oscillator model in which phases advance in discrete phase increments through Poisson processes, rendering both intrinsic oscillations and coupling inherently stochastic. We study the effects of phase discretization on the synchronization and precision properties of the coupled system both analytically and numerically. Remarkably, many key observables such as the steady-state synchrony and the quality of oscillations show distinct extrema while converging to the classical Kuramoto model in the limit of a continuous phase. The phase-discretized model provides a general framework for coupled oscillations in a Markov chain setting.
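The precision property of discrete-phase oscillation can be illustrated with a single uncoupled oscillator whose phase advances through a Poisson process. The parameter choices below are ours; the coupled model in the paper is richer:

```python
import numpy as np

def cycle_periods(n_states, omega, n_cycles, rng=None):
    """Discrete-phase oscillator: the phase advances in increments of
    2*pi/n_states through a Poisson process with rate n_states*omega/(2*pi),
    giving a mean angular velocity of omega.  Each full cycle is the sum of
    n_states exponential waiting times; returns n_cycles cycle durations."""
    rng = np.random.default_rng(rng)
    rate = n_states * omega / (2 * np.pi)
    waits = rng.exponential(1.0 / rate, size=(n_cycles, n_states))
    return waits.sum(axis=1)

# Finer phase discretization sharpens the oscillation: the coefficient of
# variation of the period scales as 1/sqrt(n_states), recovering a
# deterministic continuous phase in the limit n_states -> infinity.
periods = cycle_periods(n_states=50, omega=2 * np.pi, n_cycles=5000, rng=0)
```

With omega = 2π the mean period is 1, and with 50 phase states the relative period jitter is about 1/√50 ≈ 14%.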
A study of parameter identification
NASA Technical Reports Server (NTRS)
Herget, C. J.; Patterson, R. E., III
1978-01-01
A set of definitions for deterministic parameter identifiability was proposed. Deterministic parameter identifiability properties are presented based on four system characteristics: direct parameter recoverability, properties of the system transfer function, properties of output distinguishability, and uniqueness properties of a quadratic cost functional. Stochastic parameter identifiability was defined in terms of the existence of an estimation sequence for the unknown parameters which is consistent in probability. Stochastic parameter identifiability properties are presented based on the following characteristics: convergence properties of the maximum likelihood estimate, properties of the joint probability density functions of the observations, and properties of the information matrix.
Adaptive tracking control for a class of stochastic switched systems
NASA Astrophysics Data System (ADS)
Zhang, Hui; Xia, Yuanqing
2018-02-01
The problem of adaptive tracking for a class of stochastic switched systems is considered in this paper. As preliminaries, the criterion of global asymptotical practical stability in probability is first presented with the aid of the common Lyapunov function method. Based on the Lyapunov stability criterion, adaptive backstepping controllers are designed to guarantee that the closed-loop system has a unique global solution, which is globally asymptotically practically stable in probability, and that the tracking error in the fourth moment converges to an arbitrarily small neighbourhood of zero. Simulation examples are given to demonstrate the efficiency of the proposed schemes.
Li, Shao-Peng; Cadotte, Marc W; Meiners, Scott J; Pu, Zhichao; Fukami, Tadashi; Jiang, Lin
2016-09-01
Whether plant communities in a given region converge towards a particular stable state during succession has long been debated, but rarely tested at a sufficiently long time scale. By analysing a 50-year continuous study of post-agricultural secondary succession in New Jersey, USA, we show that the extent of community convergence varies with the spatial scale and species abundance classes. At the larger field scale, abundance-based dissimilarities among communities decreased over time, indicating convergence of dominant species, whereas incidence-based dissimilarities showed little temporal trend, indicating no sign of convergence. In contrast, plots within each field diverged in both species composition and abundance. Abundance-based successional rates decreased over time, whereas rare species and herbaceous plants showed little change in temporal turnover rates. Initial abandonment conditions only influenced community structure early in succession. Overall, our findings provide strong evidence for scale and abundance dependence of stochastic and deterministic processes over old-field succession.
Stochastic Multiscale Analysis and Design of Engine Disks
2010-07-28
A statistical approach to quasi-extinction forecasting.
Holmes, Elizabeth Eli; Sabo, John L; Viscido, Steven Vincent; Fagan, William Fredric
2007-12-01
Forecasting population decline to a certain critical threshold (the quasi-extinction risk) is one of the central objectives of population viability analysis (PVA), and such predictions figure prominently in the decisions of major conservation organizations. In this paper, we argue that accurate forecasting of a population's quasi-extinction risk does not necessarily require knowledge of the underlying biological mechanisms. Because of the stochastic and multiplicative nature of population growth, the ensemble behaviour of population trajectories converges to common statistical forms across a wide variety of stochastic population processes. This paper provides a theoretical basis for this argument. We show that the quasi-extinction surfaces of a variety of complex stochastic population processes (including age-structured, density-dependent and spatially structured populations) can be modelled by a simple stochastic approximation: the stochastic exponential growth process overlaid with Gaussian errors. Using simulated and real data, we show that this model can be estimated with 20-30 years of data and can provide relatively unbiased quasi-extinction risk estimates with confidence intervals considerably smaller than (0,1). This was found to be true even for simulated data derived from some of the noisiest population processes (density-dependent feedback, species interactions and strong age-structure cycling). A key advantage of statistical models is that their parameters and the uncertainty of those parameters can be estimated from time series data using standard statistical methods. In contrast, for most species of conservation concern, biologically realistic models must often be specified rather than estimated because of the limited data available for all the various parameters.
Biologically realistic models will always have a prominent place in PVA for evaluating specific management options which affect a single segment of a population, a single demographic rate, or different geographic areas. However, for forecasting quasi-extinction risk, statistical models that are based on the convergent statistical properties of population processes offer many advantages over biologically realistic models.
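The stochastic exponential growth approximation the authors advocate, a Gaussian random walk on log abundance, lends itself to a short Monte Carlo forecast. All parameter values below (initial abundance, threshold, drift, variance, horizon) are hypothetical illustrations:

```python
import numpy as np

def quasi_extinction_prob(log_n0, log_thresh, mu, sigma, years,
                          n_sims=20_000, rng=None):
    """Monte Carlo estimate of the probability that stochastic exponential
    growth (log abundance performing a Gaussian random walk with per-year
    drift mu and standard deviation sigma) falls below log_thresh at least
    once within `years` annual steps."""
    rng = np.random.default_rng(rng)
    steps = rng.normal(mu, sigma, size=(n_sims, years))
    trajectories = log_n0 + np.cumsum(steps, axis=1)
    return float(np.mean(trajectories.min(axis=1) <= log_thresh))

# Risk of a 90% decline within 30 years for a slowly declining population.
p = quasi_extinction_prob(np.log(100), np.log(10),
                          mu=-0.02, sigma=0.1, years=30, rng=0)
```

Raising the magnitude of the negative drift (e.g. mu = -0.1) pushes the estimated risk close to 1, matching the intuition that quasi-extinction probability is governed by the drift-to-noise balance rather than by mechanistic detail.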
Recursive stochastic effects in valley hybrid inflation
NASA Astrophysics Data System (ADS)
Levasseur, Laurence Perreault; Vennin, Vincent; Brandenberger, Robert
2013-10-01
Hybrid inflation is a two-field model where inflation ends because of a tachyonic instability, the duration of which is determined by stochastic effects and has important observational implications. Making use of the recursive approach to the stochastic formalism presented in [L. P. Levasseur, preceding article, Phys. Rev. D 88, 083537 (2013)], these effects are consistently computed. Through an analysis of backreaction, this method is shown to converge in the valley but points toward an (expected) instability in the waterfall. It is further shown that the quasistationarity of the auxiliary field distribution breaks down in the case of a short-lived waterfall. We find that the typical dispersion of the waterfall field at the critical point is then diminished, thus increasing the duration of the waterfall phase and jeopardizing the possibility of a short transition. Finally, we find that stochastic effects worsen the blue tilt of the curvature perturbations by an O(1) factor when compared with the usual slow-roll contribution.
Precursor processes of human self-initiated action.
Khalighinejad, Nima; Schurger, Aaron; Desantis, Andrea; Zmigrod, Leor; Haggard, Patrick
2018-01-15
A gradual buildup of electrical potential over motor areas precedes self-initiated movements. Recently, such "readiness potentials" (RPs) were attributed to stochastic fluctuations in neural activity. We developed a new experimental paradigm that operationalized self-initiated actions as endogenous 'skip' responses while waiting for target stimuli in a perceptual decision task. We compared these to a block of trials where participants could not choose when to skip, but were instead instructed to skip. Frequency and timing of motor action were therefore balanced across blocks, so that conditions differed only in how the timing of skip decisions was generated. We reasoned that across-trial variability of EEG could carry as much information about the source of skip decisions as the mean RP. EEG variability decreased more markedly prior to self-initiated compared to externally-triggered skip actions. This convergence suggests a consistent preparatory process prior to self-initiated action. A leaky stochastic accumulator model could reproduce this convergence given the additional assumption of a systematic decrease in input noise prior to self-initiated actions. Our results may provide a novel neurophysiological perspective on the topical debate regarding whether self-initiated actions arise from a deterministic neurocognitive process, or from neural stochasticity. We suggest that the key precursor of self-initiated action may manifest as a reduction in neural noise.
Stochastic Computations in Cortical Microcircuit Models
Maass, Wolfgang
2013-01-01
Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving. PMID:24244126
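The exponential convergence claim has the same flavor as a finite Markov chain mixing to its stationary distribution, which can be shown directly. The three-state chain below is a toy stand-in of ours, not a cortical microcircuit model:

```python
import numpy as np

# A small irreducible, aperiodic Markov chain over three "network states".
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution: the left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

# Total-variation distance to stationarity decays geometrically from any
# initial state, mirroring the exponentially fast convergence in the text.
dist = np.array([1.0, 0.0, 0.0])   # start concentrated on state 0
tv = []
for _ in range(20):
    tv.append(0.5 * np.abs(dist - pi).sum())
    dist = dist @ P
```

The decay rate is set by the second-largest eigenvalue modulus of P; after 20 steps the chain is indistinguishable from its stationary distribution to high precision.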
Application of stochastic weighted algorithms to a multidimensional silica particle model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menz, William J.; Patterson, Robert I.A.; Wagner, Wolfgang
2013-09-01
Highlights: •Stochastic weighted algorithms (SWAs) are developed for a detailed silica model. •An implementation of SWAs with the transition kernel is presented. •The SWAs' solutions converge to the direct simulation algorithm's (DSA) solution. •The efficiency of SWAs is evaluated for this multidimensional particle model. •It is shown that SWAs can be used for coagulation problems in industrial systems. -- Abstract: This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83–98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using a large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high dimensional particle models to simulate real-world systems.
Faugeras, Olivier; Touboul, Jonathan; Cessac, Bruno
2008-01-01
We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean-field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as the one of Jansen and Rit (1995): their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales. PMID:19255631
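The constructive method described above, iterating a well-posed map until it contracts onto the unique solution, has the flavor of a Picard fixed-point iteration, which can be sketched in the scalar case. The map cos below is just a toy contraction, not the mean-field operator of the paper:

```python
import numpy as np

def picard_iterate(T, x0, tol=1e-12, max_iter=200):
    """Iterate a contraction mapping T until successive iterates agree to
    within tol; returns the approximate fixed point and the step count.
    For a contraction with factor q < 1 the error shrinks like q**k."""
    x = x0
    for k in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next, k + 1
        x = x_next
    return x, max_iter

# cos is a contraction near its unique real fixed point x = cos(x) ~ 0.739,
# so the iteration converges at a geometric rate set by |sin(0.739)| ~ 0.67.
root, n_iter = picard_iterate(np.cos, 1.0)
```

The geometric convergence rate here plays the role of the convergence-rate characterization mentioned in the abstract: the contraction factor directly bounds the number of iterations needed for a given accuracy.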
The influence of Stochastic perturbation of Geotechnical media On Electromagnetic tomography
NASA Astrophysics Data System (ADS)
Song, Lei; Yang, Weihao; Huangsonglei, Jiahui; Li, HaiPeng
2015-04-01
Electromagnetic tomography (CT) is commonly utilized in civil engineering to detect structural defects or geological anomalies. CT is generally recognized as a high-precision geophysical method, with an expected accuracy of several centimeters or even several millimeters. Accordingly, high-frequency antennas with short wavelengths are commonly used in civil engineering. In geotechnical media, stochastic perturbations of the EM parameters inevitably exist at geological, structural and local scales. In such cases, the geometric dimensions of the target body, the EM wavelength and the expected accuracy may be of the same order. When a high-frequency EM wave propagates in a stochastic geotechnical medium, the GPR signal is reflected not only from the target bodies but also from the stochastic perturbations of the background medium. To detect karst caves in dissolution-fractured rock, one needs to assess the influence of stochastically distributed dissolution holes and fractures; to detect a void in a concrete structure, one must account for the influence of stochastically distributed stones. In this paper, on the basis of discrete realizations of stochastic media, the authors quantitatively evaluate the influence of stochastic perturbations of geotechnical media via the Radon/inverse Radon transform through fully combined Monte Carlo numerical simulation. The stochastic noise is found to be related to the transfer angle, perturbation strength, angle interval and autocorrelation length, among other factors. A quantitative formula for the accuracy of electromagnetic tomography is also established, which can help estimate the precision of GPR tomography in stochastically perturbed geotechnical media. Key words: Stochastic Geotechnical Media; Electromagnetic Tomography; Radon/Inverse Radon Transform.
Stochastic 3D modeling of Ostwald ripening at ultra-high volume fractions of the coarsening phase
NASA Astrophysics Data System (ADS)
Spettl, A.; Wimmer, R.; Werz, T.; Heinze, M.; Odenbach, S.; Krill, C. E., III; Schmidt, V.
2015-09-01
We present a (dynamic) stochastic simulation model for 3D grain morphologies undergoing a grain coarsening phenomenon known as Ostwald ripening. For low volume fractions of the coarsening phase, the classical LSW theory predicts a power-law evolution of the mean particle size and convergence toward self-similarity of the particle size distribution; experiments suggest that this behavior holds also for high volume fractions. In the present work, we have analyzed 3D images that were recorded in situ over time in semisolid Al-Cu alloys manifesting ultra-high volume fractions of the coarsening (solid) phase. Using this information we developed a stochastic simulation model for the 3D morphology of the coarsening grains at arbitrary time steps. Our stochastic model is based on random Laguerre tessellations and is by definition self-similar—i.e. it depends only on the mean particle diameter, which in turn can be estimated at each point in time. For a given mean diameter, the stochastic model requires only three additional scalar parameters, which influence the distribution of particle sizes and their shapes. An evaluation shows that even with this minimal information the stochastic model yields an excellent representation of the statistical properties of the experimental data.
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems, and we describe improvements to the current techniques in both areas. We studied different ways of using importance sampling in the context of stochastic programming, by varying the choice of approximation functions used in the method. We concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient: it reduces the problem from finding the mean of a computationally expensive function to finding that of an inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected-value problem is usually solved before the stochastic one, both to provide a starting point and to speed up the algorithm by making use of the information obtained from its solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
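The idea of replacing an expensive recourse function with a cheap piecewise-linear approximation can be sketched as a control variate (a generic illustration with a made-up "recourse" function, not the dissertation's actual two-stage LP):

```python
import random, math
random.seed(4)

def recourse(x):
    # stand-in for an expensive second-stage (recourse) function value
    return math.exp(0.5 * x)

# cheap piecewise-linear approximation built from three support points
PTS = [(-3.0, recourse(-3.0)), (0.0, recourse(0.0)), (3.0, recourse(3.0))]

def pl_approx(x):
    for (x0, y0), (x1, y1) in zip(PTS[:-1], PTS[1:]):
        if x <= x1 or (x1, y1) == PTS[-1]:
            return y0 + (x - x0) / (x1 - x0) * (y1 - y0)

# the mean of the piecewise-linear function over x ~ U(-3, 3) is known in closed form
exact_pl_mean = ((PTS[0][1] + PTS[1][1]) / 2 * 3 + (PTS[1][1] + PTS[2][1]) / 2 * 3) / 6

n = 2000
samples = [random.uniform(-3, 3) for _ in range(n)]
# control-variate estimator: only the (small) difference f - g is sampled
est = exact_pl_mean + sum(recourse(x) - pl_approx(x) for x in samples) / n
naive = sum(recourse(x) for x in samples) / n
print(est, naive)  # both estimate E[recourse]; est has much lower variance
```

The expensive function is evaluated only through its difference from the cheap approximation, which is exactly the variance-reduction mechanism the dissertation exploits.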
Colloidal heat engines: a review.
Martínez, Ignacio A; Roldán, Édgar; Dinis, Luis; Rica, Raúl A
2016-12-21
Stochastic heat engines can be built using colloidal particles trapped using optical tweezers. Here we review recent experimental realizations of microscopic heat engines. We first revisit the theoretical framework of stochastic thermodynamics that allows one to describe the fluctuating behavior of the energy fluxes that occur at mesoscopic scales, and then discuss recent implementations of the colloidal equivalents to the macroscopic Stirling, Carnot and steam engines. These small-scale motors exhibit unique features in terms of power and efficiency fluctuations that have no equivalent in the macroscopic world. We also consider a second pathway for work extraction from colloidal engines operating between active bacterial reservoirs at different temperatures, which could significantly boost the performance of passive heat engines at the mesoscale. Finally, we provide some guidance on how the work extracted from colloidal heat engines can be used to generate net particle or energy currents, proposing a new generation of experiments with colloidal systems.
A Stochastic Total Least Squares Solution of Adaptive Filtering Problem
Ahmad, Noor Atinah
2014-01-01
An efficient, computationally linear algorithm is derived for the total least squares solution of the adaptive filtering problem, where both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm is designed by recursively computing an optimal solution of the adaptive TLS problem by minimizing the instantaneous value of a weighted cost function. A convergence analysis of the algorithm is given, showing the global convergence of the proposed algorithm provided that the step-size parameter is appropriately chosen. The TLMS algorithm is computationally simpler than other TLS algorithms and demonstrates better performance than the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It provides minimum mean square deviation and exhibits better convergence in misalignment for unknown system identification under noisy inputs. PMID:24688412
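For reference, here is a minimal LMS system-identification loop, the standard baseline the TLMS algorithm is compared against (the FIR system, noise level and step size are illustrative; this sketch does not implement the TLMS recursion itself):

```python
import random
random.seed(5)

h_true = [0.8, -0.3, 0.1]          # unknown FIR system to identify
w = [0.0, 0.0, 0.0]                # adaptive filter weights
mu = 0.05                          # step size
x_hist = [0.0, 0.0, 0.0]           # input tap-delay line

for _ in range(5000):
    x = random.gauss(0, 1)
    x_hist = [x] + x_hist[:-1]
    # desired signal: system output plus a little observation noise
    d = sum(h * xi for h, xi in zip(h_true, x_hist)) + random.gauss(0, 0.01)
    y = sum(wi * xi for wi, xi in zip(w, x_hist))
    e = d - y
    # LMS update: w <- w + mu * e * x
    w = [wi + mu * e * xi for wi, xi in zip(w, x_hist)]

print(w)  # approaches h_true = [0.8, -0.3, 0.1]
```

TLMS modifies this recursion to stay unbiased when the input taps themselves are noisy, which plain LMS does not handle.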
A Discrete Probability Function Method for the Equation of Radiative Transfer
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths, including turbulence-radiation interactions, without the use of computationally intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation, unlike the stochastic method, where the number of realizations required for convergence of higher-order moments increases rapidly. The DPF method is described for a representative path with spatial discretization of approximately the integral length scale. The results show good agreement with measurements in a propylene/air flame, except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.
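The core DPF construction, bin probabilities obtained by integrating a PDF over discrete intervals, from which moments follow without stochastic sampling, can be sketched as follows (the Gaussian intensity PDF is a stand-in, not the flame data of the paper):

```python
import math

def pdf(i):
    # illustrative intensity PDF: Gaussian with mean 5, std 1
    return math.exp(-(i - 5.0) ** 2 / 2.0) / math.sqrt(2 * math.pi)

edges = [-1.0 + 12.0 * k / 200 for k in range(201)]   # 200 bins on [-1, 11]
dpf = []
for a, b in zip(edges[:-1], edges[1:]):
    # DPF value = integral of the PDF over the bin (midpoint rule)
    m = 0.5 * (a + b)
    dpf.append(pdf(m) * (b - a))

total = sum(dpf)
mean = sum(0.5 * (a + b) * p for a, b, p in zip(edges[:-1], edges[1:], dpf))
print(total, mean)  # ≈ 1.0 and ≈ 5.0: moments follow directly, no sampling needed
```

All moments come from the same deterministic sum, which is why their convergence is guaranteed at the basic level of discretization, in contrast to Monte Carlo realizations.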
Reach a nonlinear consensus for MAS via doubly stochastic quadratic operators
NASA Astrophysics Data System (ADS)
Abdulghafor, Rawad; Turaev, Sherzod; Zeki, Akram; Al-Shaikhli, Imad
2018-06-01
This technical note addresses a new nonlinear protocol class of doubly stochastic quadratic operators (DSQOs) for coordination of the consensus problem in multi-agent systems (MAS). We derive conditions ensuring that every agent reaches consensus on a desired rate of the group's decision, where the group decision value varies with the agents' initial statuses. We also investigate a nonlinear protocol sub-class of extreme DSQOs (EDSQOs) that reaches consensus on a common value with low-complexity nonlinear rules and fast convergence, provided the interactions of each agent are not selfish. In addition, to extend the results and avoid the selfish case, we specify a general class of DSQOs that reaches consensus under any given initial states: MAS reach consensus by DSQO if each member of the agent group has positive DSQO interactions (PDSQO) with the others. The convergence of both the EDSQO and PDSQO classes is found to be directed towards the centre point. Finally, experimental simulations are given to support the theoretical analysis.
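For intuition, the linear analogue of such protocols, iterating a doubly stochastic mixing matrix under which agents provably converge to the average (centre point) of their initial states, can be sketched as follows (the quadratic DSQO update itself is more involved; the matrix and states below are illustrative):

```python
def consensus_step(x, W):
    # one synchronous update: x_i <- sum_j W[i][j] * x_j
    n = len(x)
    return [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]

# doubly stochastic mixing matrix: every row AND every column sums to 1
W = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]

x = [1.0, 5.0, 9.0]            # initial agent states
for _ in range(100):
    x = consensus_step(x, W)
print(x)  # each entry approaches the initial average 5.0
```

Double stochasticity is what preserves the average at every step, so the only fixed point reachable by all agents is the centre point, mirroring the convergence result stated for EDSQO and PDSQO.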
SMD-based numerical stochastic perturbation theory
NASA Astrophysics Data System (ADS)
Dalla Brida, Mattia; Lüscher, Martin
2017-05-01
The viability of a variant of numerical stochastic perturbation theory, where the Langevin equation is replaced by the SMD algorithm, is examined. In particular, the convergence of the process to a unique stationary state is rigorously established and the use of higher-order symplectic integration schemes is shown to be highly profitable in this context. For illustration, the gradient-flow coupling in finite volume with Schrödinger functional boundary conditions is computed to two-loop (i.e. NNL) order in the SU(3) gauge theory. The scaling behaviour of the algorithm turns out to be rather favourable in this case, which allows the computations to be driven close to the continuum limit.
Conservative bin-to-bin fractional collisions
NASA Astrophysics Data System (ADS)
Martin, Robert
2016-11-01
Particle methods such as direct simulation Monte Carlo (DSMC) and particle-in-cell (PIC) are commonly used to model rarefied kinetic flows for engineering applications because of their ability to efficiently capture non-equilibrium behavior. The primary drawback of these methods relates to their poor convergence properties, due to the stochastic nature of methods that typically rely heavily on high degrees of non-equilibrium and time averaging to compensate for poor signal-to-noise ratios. For standard implementations, each computational particle represents many physical particles, which further exacerbates statistical noise problems for flows with large species density variation, such as those encountered in flow expansions and chemical reactions. The stochastic weighted particle method (SWPM) introduced by Rjasanow and Wagner overcomes this difficulty by allowing the ratio of real to computational particles to vary on a per-particle basis throughout the flow. The DSMC procedure must also be slightly modified to properly sample the Boltzmann collision integral, accounting for the variable particle weights and avoiding the creation of additional particles with negative weight. In this work, the SWPM, with the modifications necessary to incorporate the variable hard sphere (VHS) collision cross-section model commonly used in engineering applications, is first incorporated into an existing engineering code, the Thermophysics Universal Research Framework. The results and computational efficiency are compared for a few simple test cases against a standard validated implementation of the DSMC method, along with the adapted SWPM/VHS collisions using an octree-based conservative phase-space reconstruction. The SWPM is then further extended to combine the collision and phase-space reconstruction into a single step, which avoids the need to create additional computational particles only to destroy them again during the particle merge.
This is particularly helpful when oversampling the collision integral compared to the standard DSMC method. However, it is found that the more frequent phase-space reconstructions can cause added numerical thermalization at low particle-per-cell counts, due to the coarseness of the octree used. Nevertheless, the methods are expected to be of much greater utility in transient expansion flows and chemical reactions in the future.
NASA Astrophysics Data System (ADS)
Zhang, D.; Liao, Q.
2016-12-01
Bayesian inference provides a convenient framework for solving statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated in the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool for generating samples from the posterior PDF. However, since MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials via the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids and take into account the different importance of the parameters, to cope with high random dimensionality of the stochastic space. Furthermore, in the case of low regularity, such as a discontinuous or unsmooth relation between the input parameters and the output responses, we introduce an additional transform process to improve the accuracy of the surrogate model. Once the surrogate system is built, we may evaluate the likelihood at very little computational cost. We analyzed the convergence rate of the forward solution and the surrogate posterior using the Kullback-Leibler divergence, which quantifies the difference between probability distributions. Fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also tested the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate.
The posterior PDF calculated using the surrogate model is in reasonable agreement with the reference, revealing a great improvement in terms of computational efficiency.
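A minimal sketch of surrogate-accelerated Metropolis sampling (generic 1-D stand-ins for the forward model and its polynomial surrogate; the collocation-based surrogate and reservoir model of the paper are far richer):

```python
import random, math
random.seed(2)

def forward_expensive(theta):
    # stand-in for an expensive flow simulation
    return math.sin(theta) + 0.5 * theta

def forward_surrogate(theta):
    # cheap polynomial surrogate (Taylor-like stand-in, accurate near theta = 0.8)
    return theta - theta ** 3 / 6 + 0.5 * theta

obs = forward_expensive(0.8)       # synthetic observation at theta_true = 0.8
sigma2 = 0.05 ** 2                 # measurement-noise variance

def log_post(theta, model):
    # Gaussian likelihood plus standard normal prior
    return -(model(theta) - obs) ** 2 / (2 * sigma2) - theta ** 2 / 2

# random-walk Metropolis: every likelihood call uses the cheap surrogate
theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + random.gauss(0, 0.3)
    if math.log(random.random()) < log_post(prop, forward_surrogate) - log_post(theta, forward_surrogate):
        theta = prop
    chain.append(theta)

post_mean = sum(chain[5000:]) / len(chain[5000:])
print(post_mean)  # close to 0.8 when the surrogate is accurate near the posterior mode
```

The expensive model appears only once (to generate the observation here; in practice, to fit the surrogate offline), so the chain itself costs almost nothing per step.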
Stochastic models for inferring genetic regulation from microarray gene expression data.
Tian, Tianhai
2010-03-01
Microarray expression profiles are inherently noisy, and many different sources of variation exist in microarray experiments. It remains a significant challenge to develop stochastic models that capture the noise in microarray expression profiles, which has a profound influence on the reverse engineering of genetic regulation. Using the target genes of the tumour suppressor gene p53 as the test problem, we developed stochastic differential equation models and established the relationship between the noise strength of the stochastic models and the parameters of an error model describing the distribution of the microarray measurements. Numerical results indicate that the simulated variance from stochastic models with a stochastic degradation process can be represented by a monomial in terms of the hybridization intensity, where the order of the monomial depends on the type of stochastic process. The developed stochastic models with multiple stochastic processes generated simulations whose variance is consistent with the prediction of the error model. This work also establishes a general method for developing stochastic models from experimental information. 2009 Elsevier Ireland Ltd. All rights reserved.
Statistical characterization of planar two-dimensional Rayleigh-Taylor mixing layers
NASA Astrophysics Data System (ADS)
Sendersky, Dmitry
2000-10-01
The statistical evolution of a planar, randomly perturbed fluid interface subject to Rayleigh-Taylor instability is explored through numerical simulation in two space dimensions. The data set, generated by the front-tracking code FronTier, is highly resolved and covers a large ensemble of initial perturbations, allowing a more refined analysis of closure issues pertinent to the stochastic modeling of chaotic fluid mixing. We closely approach a two-fold convergence of the mean two-phase flow: convergence of the numerical solution under computational mesh refinement, and statistical convergence under increasing ensemble size. Quantities that appear in the two-phase averaged Euler equations are computed directly and analyzed for numerical and statistical convergence. Bulk averages show a high degree of convergence, while interfacial averages are convergent only in the outer portions of the mixing zone, where there is a coherent array of bubble and spike tips. Comparison with the familiar bubble/spike penetration law h = αAgt² is complicated by the lack of scale invariance, inability to carry the simulations to late time, the increasing Mach numbers of the bubble/spike tips, and sensitivity to the method of data analysis. Finally, we use the simulation data to analyze some constitutive properties of the mixing process.
Spiral bacterial foraging optimization method: Algorithm, evaluation and convergence analysis
NASA Astrophysics Data System (ADS)
Kasaiezadeh, Alireza; Khajepour, Amir; Waslander, Steven L.
2014-04-01
A biologically-inspired algorithm called Spiral Bacterial Foraging Optimization (SBFO) is investigated in this article. SBFO, previously proposed by the same authors, is a multi-agent, gradient-based algorithm that minimizes both the main objective function (local cost) and the distance between each agent and a temporary central point (global cost). A random jump is included normal to the line connecting each agent to the central point, which produces a vortex around the temporary central point. This random jump is also suitable for coping with premature convergence, a known weakness of swarm-based optimization methods. The most important advantages of this algorithm are as follows: first, the algorithm involves a stochastic type of search with deterministic convergence. Second, as gradient-based methods are employed, faster convergence is demonstrated over GA, DE, BFO, etc. Third, the algorithm can be implemented in a parallel fashion in order to decentralize large-scale computation. Fourth, the algorithm has a limited number of tunable parameters, and finally SBFO has a strong certainty of convergence, which is rare in existing global optimization algorithms. A detailed convergence analysis of SBFO for continuously differentiable objective functions is also presented.
Stochastic inference with spiking neurons in the high-conductance state
NASA Astrophysics Data System (ADS)
Petrovici, Mihai A.; Bill, Johannes; Bytschok, Ilja; Schemmel, Johannes; Meier, Karlheinz
2016-10-01
The highly variable dynamics of neocortical circuits observed in vivo have been hypothesized to represent a signature of ongoing stochastic inference but stand in apparent contrast to the deterministic response of neurons measured in vitro. Based on a propagation of the membrane autocorrelation across spike bursts, we provide an analytical derivation of the neural activation function that holds for a large parameter space, including the high-conductance state. On this basis, we show how an ensemble of leaky integrate-and-fire neurons with conductance-based synapses embedded in a spiking environment can attain the correct firing statistics for sampling from a well-defined target distribution. For recurrent networks, we examine convergence toward stationarity in computer simulations and demonstrate sample-based Bayesian inference in a mixed graphical model. This points to a new computational role of high-conductance states and establishes a rigorous link between deterministic neuron models and functional stochastic dynamics on the network level.
Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation
NASA Astrophysics Data System (ADS)
Bedi, Amrit Singh; Rajawat, Ketan
2018-05-01
Stochastic network optimization problems entail finding resource allocation policies that are optimal on average but must be designed in an online fashion. Such problems are ubiquitous in communication networks, where resources such as energy and bandwidth are divided among nodes to satisfy certain long-term objectives. This paper proposes an asynchronous incremental dual descent resource allocation algorithm that utilizes delayed stochastic gradients for carrying out its updates. The proposed algorithm is well-suited to heterogeneous networks as it allows the computationally-challenged or energy-starved nodes to, at times, postpone the updates. The asymptotic analysis of the proposed algorithm is carried out, establishing dual convergence under both constant and diminishing step sizes. It is also shown that with a constant step size, the proposed resource allocation policy is asymptotically near-optimal. An application involving multi-cell coordinated beamforming is detailed, demonstrating the usefulness of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Lorenzen, F.; de Ponte, M. A.; Moussa, M. H. Y.
2009-09-01
In this paper, employing the Itô stochastic Schrödinger equation, we extend Bell’s beable interpretation of quantum mechanics to encompass dissipation, decoherence, and the quantum-to-classical transition through quantum trajectories. For a particular choice of the source of stochasticity, the one leading to a dissipative Lindblad-type correction to the Hamiltonian dynamics, we find that the diffusive terms in Nelson’s stochastic trajectories are naturally incorporated into Bohm’s causal dynamics, yielding a unified Bohm-Nelson theory. In particular, by analyzing the interference between quantum trajectories, we clearly identify the decoherence time, as estimated from the quantum formalism. We also observe the quantum-to-classical transition in the convergence of the infinite ensemble of quantum trajectories to their classical counterparts. Finally, we show that our extended beables circumvent the problems in Bohm’s causal dynamics regarding stationary states in quantum mechanics.
Online learning in optical tomography: a stochastic approach
NASA Astrophysics Data System (ADS)
Chen, Ke; Li, Qin; Liu, Jian-Guo
2018-07-01
We study the inverse problem of the radiative transfer equation (RTE) using the stochastic gradient descent (SGD) method. Mathematically, optical tomography amounts to recovering the optical parameters in the RTE using incoming-outgoing pairs of light intensity. We formulate it as a PDE-constrained optimization problem, where the mismatch of computed and measured outgoing data is minimized with the same initial data under the RTE constraint. The memory and computation cost this requires, however, is typically prohibitive, especially in high-dimensional spaces. Smart iterative solvers that use only partial information in each step are therefore called for. Stochastic gradient descent is an online learning algorithm that randomly selects data for minimizing the mismatch; it requires minimal memory and computation and advances quickly, and therefore perfectly serves the purpose. In this paper we formulate the problem in both the nonlinear and the linearized setting, apply the SGD algorithm and analyze its convergence performance.
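The SGD loop itself, randomly selecting one measurement per step to update the unknown parameters, can be sketched on a linear stand-in for the tomography mismatch (the vectors a_i below stand in for the incoming-outgoing measurement operators; the real problem is PDE-constrained):

```python
import random
random.seed(1)

# synthetic noise-free "measurements": b_i = <a_i, sigma_true>
sigma_true = [2.0, -1.0, 0.5]
data = []
for _ in range(200):
    a = [random.uniform(-1, 1) for _ in range(3)]
    data.append((a, sum(ai * si for ai, si in zip(a, sigma_true))))

sigma = [0.0, 0.0, 0.0]            # unknown parameters to recover
eta = 0.1                          # constant step size
for _ in range(20000):
    a, b = random.choice(data)     # one randomly selected measurement per step
    r = sum(ai * si for ai, si in zip(a, sigma)) - b      # residual
    # gradient of (a.sigma - b)^2 with respect to sigma is 2*r*a
    sigma = [si - eta * 2 * r * ai for si, ai in zip(sigma, a)]

print(sigma)  # approaches sigma_true = [2.0, -1.0, 0.5]
```

Each step touches a single datum, which is exactly the memory/computation saving that makes SGD attractive for the full RTE-constrained problem.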
Franklin, Nicholas T; Frank, Michael J
2015-12-25
Convergent evidence suggests that the basal ganglia support reinforcement learning by adjusting action values according to reward prediction errors. However, adaptive behavior in stochastic environments requires the consideration of uncertainty to dynamically adjust the learning rate. We consider how cholinergic tonically active interneurons (TANs) may endow the striatum with such a mechanism in computational models spanning Marr's three levels of analysis. In the neural model, TANs modulate the excitability of spiny neurons, their population response to reinforcement, and hence the effective learning rate. Long TAN pauses facilitated robustness to spurious outcomes by increasing divergence in synaptic weights between neurons coding for alternative action values, whereas short TAN pauses facilitated stochastic behavior but increased responsiveness to change-points in outcome contingencies. A feedback control system allowed TAN pauses to be dynamically modulated by uncertainty across the spiny neuron population, allowing the system to self-tune and optimize performance across stochastic environments.
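At the algorithmic level, the idea of an uncertainty-modulated learning rate can be caricatured with a two-armed bandit and a change-point (a toy setup of our own, not the authors' neural or feedback-control model):

```python
import random
random.seed(7)

# two-armed bandit whose contingencies reverse halfway through
p_reward = [0.8, 0.2]
Q = [0.5, 0.5]                     # action-value estimates
recent_err = 0.25                  # running estimate of squared prediction error
rewards = []
for t in range(2000):
    if t == 1000:
        p_reward = [0.2, 0.8]      # change-point in outcome contingencies
    a = 0 if Q[0] >= Q[1] else 1
    if random.random() < 0.1:      # epsilon-greedy exploration
        a = random.randrange(2)
    r = 1.0 if random.random() < p_reward[a] else 0.0
    err = r - Q[a]
    # uncertainty tracking: large recent surprises raise the effective learning rate
    recent_err = 0.95 * recent_err + 0.05 * err * err
    alpha = min(0.5, recent_err / (recent_err + 0.05))
    Q[a] += alpha * err
    rewards.append(r)

late = sum(rewards[1500:]) / 500
print(late)  # well above the 0.5 chance level: the agent re-adapts after the reversal
```

A fixed small learning rate would adapt slowly after the reversal; scaling the rate by recent surprise mirrors, very loosely, the role the paper assigns to TAN pause duration.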
Entropy production of doubly stochastic quantum channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller-Hermes, Alexander, E-mail: muellerh@posteo.net; Department of Mathematical Sciences, University of Copenhagen, 2100 Copenhagen; Stilck França, Daniel, E-mail: dsfranca@mytum.de
2016-02-15
We study the entropy increase of quantum systems evolving under primitive, doubly stochastic Markovian noise and thus converging to the maximally mixed state. This entropy increase can be quantified by a logarithmic-Sobolev constant of the Liouvillian generating the noise. We prove a universal lower bound on this constant that stays invariant under taking tensor powers. Our methods involve a new comparison method to relate logarithmic-Sobolev constants of different Liouvillians and a technique to compute logarithmic-Sobolev inequalities of Liouvillians with eigenvectors forming a projective representation of a finite abelian group. Our bounds improve upon similar results established before, and as an application we prove an upper bound on continuous-time quantum capacities. In the last part of this work we study entropy production estimates of discrete-time doubly stochastic quantum channels by extending the framework of discrete-time logarithmic-Sobolev inequalities to the quantum case.
Algorithms for accelerated convergence of adaptive PCA.
Chatterjee, C; Kang, Z; Roychowdhury, V P
2000-01-01
We derive and discuss new adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja, Sanger, and Xu. It is well known that traditional PCA algorithms that are derived by using gradient descent on an objective function are slow to converge. Furthermore, the convergence of these algorithms depends on appropriate choices of the gain sequences. Since online applications demand faster convergence and an automatic selection of gains, we present new adaptive algorithms to solve these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: 1) gradient descent; 2) steepest descent; 3) conjugate direction; and 4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also provide a discussion on the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show faster convergence of the new algorithms over the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.
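The flavor of these adaptive PCA updates can be seen in the classic Oja rule, shown here for the first principal component only (the gain, data model, and iteration count are illustrative; the article's steepest descent, conjugate direction, and Newton-Raphson variants build on the same objective):

```python
import random, math
random.seed(0)

def sample_2d():
    # correlated 2-D Gaussian whose leading eigenvector is (1, 1)/sqrt(2)
    a = random.gauss(0, 3.0)    # std 3 along (1, 1)
    b = random.gauss(0, 0.5)    # std 0.5 along (1, -1)
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

w = [1.0, 0.0]                  # initial weight vector
eta = 0.01                      # gain (learning rate)
for _ in range(5000):
    x = sample_2d()
    y = w[0] * x[0] + w[1] * x[1]                     # output (projection)
    # Oja's rule: Hebbian term y*x plus implicit normalization -y^2*w
    w = [w[i] + eta * y * (x[i] - y * w[i]) for i in range(2)]

print(w)  # aligns (up to sign) with the unit leading eigenvector (1, 1)/sqrt(2)
```

The slow convergence and gain-sensitivity of exactly this kind of gradient rule is what motivates the faster steepest descent and Newton-type variants in the article.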
NASA Astrophysics Data System (ADS)
Pozharskiy, Dmitry
In recent years a nonlinear, acoustic metamaterial, named granular crystals, has gained prominence due to its high accessibility, both experimentally and computationally. The observation of a wide range of dynamical phenomena in the system, due to its inherent nonlinearities, has suggested its importance in many engineering applications related to wave propagation. In the first part of this dissertation, we explore the nonlinear dynamics of damped-driven granular crystals. In one case, we consider a highly nonlinear setting, also known as a sonic vacuum, and derive a nonlinear analogue of a linear spectrum, corresponding to resonant periodic propagation and antiresonances. Experimental studies confirm the computational findings and the assimilation of experimental data into a numerical model is demonstrated. In the second case, global bifurcations in a precompressed granular crystal are examined, and their involvement in the appearance of chaotic dynamics is demonstrated. Both results highlight the importance of exploring the nonlinear dynamics, to gain insight into how a granular crystal responds to different external excitations. In the second part, we borrow established ideas from coarse-graining of dynamical systems, and extend them to optimization problems. We combine manifold learning algorithms, such as Diffusion Maps, with stochastic optimization methods, such as Simulated Annealing, and show that we can retrieve an ensemble, of few, important parameters that should be explored in detail. This framework can lead to acceleration of convergence when dealing with complex, high-dimensional optimization, and could potentially be applied to design engineered granular crystals.
NASA Astrophysics Data System (ADS)
Zakynthinaki, M. S.; Stirling, J. R.
2007-01-01
Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
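The details of ALOPEX IV are beyond this sketch, but the basic ALOPEX idea, using the correlation between the previous parameter change and the previous cost change to decide the next move, with noise to escape deadlocks, can be shown for a single parameter (the cost function, step size and noise level are our own illustrative choices, not the heart-rate model fit):

```python
import random, math
random.seed(3)

def cost(w):
    return (w - 1.2) ** 2        # toy model-fit error, optimum at w* = 1.2

w, prev_w = 0.0, 0.0
prev_c = cost(prev_w)
step, noise = 0.05, 0.01
for _ in range(2000):
    c = cost(w)
    dw, dc = w - prev_w, c - prev_c
    # ALOPEX correlation rule: keep moving in a direction that lowered the cost,
    # reverse if it raised the cost; the noise term breaks ties and deadlocks
    delta = -step * math.copysign(1.0, dw * dc) if dw * dc != 0 else 0.0
    prev_w, prev_c = w, c
    w = w + delta + random.gauss(0, noise)

print(w)  # oscillates near the optimum w* = 1.2
```

With several parameters the same rule is applied per coordinate against the shared cost change, which is where the stochastic character of the method becomes essential.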
Yi, Qu; Zhan-ming, Li; Er-chao, Li
2012-11-01
A new fault detection and diagnosis (FDD) problem via the output probability density functions (PDFs) for non-Gaussian stochastic distribution systems (SDSs) is investigated. The PDFs can be approximated by radial basis function (RBF) neural networks. Different from conventional FDD problems, the measured information for FDD is the output stochastic distributions, and the stochastic variables involved are not confined to Gaussian ones. An RBF neural network technique is proposed so that the output PDFs can be formulated in terms of the dynamic weightings of the RBF neural network. In this work, a nonlinear adaptive observer-based fault detection and diagnosis algorithm is presented, introducing a tuning parameter so that the residual is as sensitive as possible to the fault. Stability and convergence analysis is performed for the error dynamic system in both fault detection and fault diagnosis. Finally, an illustrative example is given to demonstrate the efficiency of the proposed algorithm, and satisfactory results have been obtained. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gao, Peng
2018-06-01
This work concerns the averaging principle for a higher-order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a multiscale system of stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out; the limit process, which takes the form of the higher-order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher-order nonlinear Schrödinger equation with a modified coefficient.
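Schematically (our notation, not the paper's), the slow-fast system and its averaged limit take the form:

```latex
% slow component u^\epsilon, fast component v^\epsilon, scale separation \epsilon \ll 1
\partial_t u^{\epsilon} = A u^{\epsilon} + f(u^{\epsilon}, v^{\epsilon}), \qquad
\partial_t v^{\epsilon} = \tfrac{1}{\epsilon}\big[ B v^{\epsilon} + g(u^{\epsilon}, v^{\epsilon}) \big]
                        + \tfrac{1}{\sqrt{\epsilon}}\,\dot{W}_t ,
% averaged equation: f is integrated against the stationary measure \mu^{u} of the fast process
\partial_t \bar{u} = A \bar{u} + \bar{f}(\bar{u}), \qquad
\bar{f}(u) = \int f(u, v)\, \mu^{u}(\mathrm{d}v),
% with strong convergence u^\epsilon \to \bar{u} at a rate obtained via the Khasminskii technique
```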
Numerical pricing of options using high-order compact finite difference schemes
NASA Astrophysics Data System (ADS)
Tangman, D. Y.; Gopaul, A.; Bhuruth, M.
2008-09-01
We consider high-order compact (HOC) schemes for quasilinear parabolic partial differential equations to discretise the Black-Scholes PDE for the numerical pricing of European and American options. We show that for the heat equation with smooth initial conditions, the HOC schemes attain clear fourth-order convergence but fail if non-smooth payoff conditions are used. To restore fourth-order convergence, we use a grid stretching that concentrates grid nodes at the strike price for European options. For an American option, an efficient procedure is also described to compute the option price, Greeks, and the optimal exercise curve. Comparisons with a fourth-order non-compact scheme are also carried out; however, fourth-order convergence is not observed with that strategy. To improve the convergence rate for American options, we discuss the use of a front-fixing transformation with the HOC scheme. We also show that the HOC scheme with grid stretching along the asset price dimension gives accurate numerical solutions for European options under stochastic volatility.
Bachelor's Degree Productivity X-Inefficiency: The Role of State Higher Education Policy
ERIC Educational Resources Information Center
Titus, Marvin A.
2010-01-01
Using stochastic frontier analysis and dynamic fixed-effects panel modeling, this study examines how changes in the x-inefficiency of bachelor's degree production are influenced by changes in state higher education policy. The findings from this research show that increases in need-based state financial aid help to mitigate the convergence among…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pettersson, Per, E-mail: per.pettersson@uib.no; Nordström, Jan, E-mail: jan.nordstrom@liu.se; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2016-02-01
We present a well-posed stochastic Galerkin formulation of the incompressible Navier–Stokes equations with uncertainty in model parameters or the initial and boundary conditions. The stochastic Galerkin method involves representation of the solution through generalized polynomial chaos expansion and projection of the governing equations onto stochastic basis functions, resulting in an extended system of equations. A relatively low-order generalized polynomial chaos expansion is sufficient to capture the stochastic solution for the problem considered. We derive boundary conditions for the continuous form of the stochastic Galerkin formulation of the velocity and pressure equations. The resulting problem formulation leads to an energy estimate for the divergence. With suitable boundary data on the pressure and velocity, the energy estimate implies zero divergence of the velocity field. Based on the analysis of the continuous equations, we present a semi-discretized system where the spatial derivatives are approximated using finite difference operators with a summation-by-parts property. With a suitable choice of dissipative boundary conditions imposed weakly through penalty terms, the semi-discrete scheme is shown to be stable. Numerical experiments in the laminar flow regime corroborate the theoretical results, and we obtain high-order accurate results for the solution variables while the velocity divergence converges to zero as the mesh is refined.
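The Galerkin projection underlying such formulations can be illustrated on a scalar model problem. The sketch below is not the Navier–Stokes formulation of the abstract: it applies the same generalized-polynomial-chaos machinery to the decay equation du/dt = -(k0 + k1·ξ)u with a Gaussian uncertain rate ξ, for which the exact mean exp(-k0·t + (k1·t)²/2) is known; all parameter values are illustrative.

```python
import numpy as np

def galerkin_decay(k0=1.0, k1=0.2, t_end=1.0, order=8, dt=1e-4):
    """Stochastic Galerkin solution of du/dt = -(k0 + k1*xi) u, u(0) = 1,
    xi ~ N(0,1), expanded in probabilists' Hermite polynomials He_n.
    Galerkin projection yields the coupled deterministic system
        du_m/dt = -k0 u_m - k1 (u_{m-1} + (m+1) u_{m+1}).
    Returns the chaos coefficients at t_end; u[0] is the mean."""
    u = np.zeros(order + 1)
    u[0] = 1.0  # deterministic initial condition u(0) = 1

    def rhs(u):
        du = -k0 * u
        du[1:] -= k1 * u[:-1]                               # u_{m-1} coupling
        du[:-1] -= k1 * np.arange(1, order + 1) * u[1:]     # (m+1) u_{m+1} coupling
        return du

    for _ in range(int(t_end / dt)):
        u = u + dt * rhs(u)  # explicit Euler time stepping
    return u

u = galerkin_decay()
mean = u[0]  # statistical mean of the solution at t_end
```

The mean can be compared against the exact value exp(-0.98) ≈ 0.375 for the default parameters.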
Engineered Resilient Systems: Knowledge Capture and Transfer
2014-08-29
development, but the work has not progressed significantly. Bibliography fragments: Peter Kall and Stein W. Wallace, Stochastic Programming, John Wiley & Sons, Chichester, 1994; … John Wiley and Sons: Hoboken, 2008; Rhodes, D.H., Lamb
The underdamped Brownian duet and stochastic linear irreversible thermodynamics
NASA Astrophysics Data System (ADS)
Proesmans, Karel; Van den Broeck, Christian
2017-10-01
Building on our earlier work [Proesmans et al., Phys. Rev. X 6, 041010 (2016)], we introduce the underdamped Brownian duet as a prototype model of a dissipative system or of a work-to-work engine. Several recent advances from the theory of stochastic thermodynamics are illustrated with explicit analytic calculations and corresponding Langevin simulations. In particular, we discuss the Onsager-Casimir symmetry, the trade-off relations between power, efficiency and dissipation, and stochastic efficiency.
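The Langevin simulations mentioned above can be sketched for a generic underdamped Brownian particle in a harmonic trap. This is not the authors' duet model; it is a minimal Euler-Maruyama integration whose long-time statistics can be checked against equipartition, with illustrative parameter values.

```python
import numpy as np

def underdamped_langevin(m=1.0, gamma=2.0, k=1.0, kT=1.0,
                         dt=1e-3, n_steps=500_000, seed=0):
    """Euler-Maruyama integration of the underdamped Langevin equations
        dx = v dt
        m dv = (-k x - gamma v) dt + sqrt(2 gamma kT) dW."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n_steps)
    sigma = np.sqrt(2.0 * gamma * kT * dt) / m
    xs = np.empty(n_steps)
    vs = np.empty(n_steps)
    x = v = 0.0
    for i in range(n_steps):
        x += v * dt                                    # position update
        v += (-k * x - gamma * v) * dt / m + sigma * noise[i]
        xs[i] = x
        vs[i] = v
    return xs, vs

xs, vs = underdamped_langevin()
burn = 50_000                         # discard the transient
v2 = float(np.mean(vs[burn:] ** 2))   # equipartition predicts ≈ kT/m
x2 = float(np.mean(xs[burn:] ** 2))   # equipartition predicts ≈ kT/k
```

For the default parameters both stationary second moments should be close to 1, up to sampling noise and the O(dt) discretization bias.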
Human Systems Engineering: A Learning Model Designed To Converge Education, Business, and Industry.
ERIC Educational Resources Information Center
Hanson, Karen L.
The Human Systems Engineering (HSE) Model was created to facilitate collaboration among education, business, and industry. It emphasized the role of leaders who converge with others to accomplish their goals while paying attention to the key elements that create successful partnerships. The partnership of XXsys Technologies, Inc., University of…
Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-11-01
This paper presents a new algorithm, referred to here as Galerkin based generalized analysis of variance decomposition (GG-ANOVA), for modelling input uncertainties and their propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response. Further, the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier-Stokes equation (NSE), and Galerkin projection is employed to decompose it into a set of coupled deterministic 'Navier-Stokes alike' equations. Temporal discretization of the coupled deterministic equations employs the Adams-Bashforth scheme for the convective term and the Crank-Nicolson scheme for the diffusion term; spatial discretization employs a finite difference scheme. Implementation of the proposed approach is illustrated with two examples. The first considers a stochastic ordinary differential equation and illustrates the performance of the proposed approach as the nature of the random variable changes; the convergence characteristics of GG-ANOVA are also demonstrated. The second example investigates flow through a microchannel via two case studies, the stochastic Kelvin-Helmholtz instability and the stochastic vortex dipole. For all the problems, results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.
Learning in stochastic neural networks for constraint satisfaction problems
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Adorf, Hans-Martin
1989-01-01
Researchers describe a newly-developed artificial neural network algorithm for solving constraint satisfaction problems (CSPs) which includes a learning component that can significantly improve the performance of the network from run to run. The network, referred to as the Guarded Discrete Stochastic (GDS) network, is based on the discrete Hopfield network but differs from it primarily in that auxiliary networks (guards) are asymmetrically coupled to the main network to enforce certain types of constraints. Although the presence of asymmetric connections implies that the network may not converge, it was found that, for certain classes of problems, the network often quickly converges to find satisfactory solutions when they exist. The network can run efficiently on serial machines and can find solutions to very large problems (e.g., N-queens for N as large as 1024). One advantage of the network architecture is that network connection strengths need not be instantiated when the network is established: they are needed only when a participating neural element transitions from off to on. They have exploited this feature to devise a learning algorithm, based on consistency techniques for discrete CSPs, that updates the network biases and connection strengths and thus improves the network performance.
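The GDS network itself is not reproduced here, but the flavor of stochastic constraint repair on N-queens can be sketched with the classic min-conflicts heuristic, a different and much simpler algorithm for the same CSP; board size, restart period, and step limits are illustrative.

```python
import random

def n_queens_min_conflicts(n, max_steps=50_000, seed=1):
    """Stochastic repair search for N-queens (one queen per column):
    pick a conflicted column at random and move its queen to a
    minimum-conflict row, with random tie-breaking and occasional
    random restarts to escape plateaus."""
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]

    def conflicts(col, row):
        total = 0
        for c in range(n):
            if c == col:
                continue
            r = rows[c]
            if r == row or abs(r - row) == abs(c - col):
                total += 1  # same row or same diagonal
        return total

    for step in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, rows[c]) > 0]
        if not conflicted:
            return rows  # a full solution: no queen attacks another
        if step and step % 5000 == 0:   # random restart
            rows = [rng.randrange(n) for _ in range(n)]
            continue
        col = rng.choice(conflicted)
        rows[col] = min(range(n), key=lambda r: (conflicts(col, r), rng.random()))
    return None

solution = n_queens_min_conflicts(16)
```

On boards of this size the repair loop typically terminates in far fewer steps than the limit.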
Alien Genetic Algorithm for Exploration of Search Space
NASA Astrophysics Data System (ADS)
Patel, Narendra; Padhiyar, Nitin
2010-10-01
Genetic Algorithm (GA) is a widely accepted population-based stochastic optimization technique used for single- and multi-objective optimization problems. Various modifications of GA have been proposed over the last three decades, mainly addressing two issues: increasing the convergence rate and increasing the probability of finding the global minimum. These two goals conflict: methods that accelerate convergence tend to settle on a local optimum, while methods that raise the probability of reaching the global minimum require large computational effort. To balance these two aspects, we propose a modification of GA that adds an alien member to the population at every generation. Adding an alien member to the current population at every generation increases the probability of obtaining the global minimum while maintaining a high convergence rate. With two test cases, we demonstrate the efficacy of the proposed GA by comparison with the conventional GA.
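A minimal sketch of the alien-member idea follows, under assumed operators (binary tournament selection, blend crossover, Gaussian mutation, elitism) that the abstract does not specify; the Rastrigin test function and all hyperparameters are illustrative.

```python
import numpy as np

def alien_ga(f, dim=2, pop_size=30, n_gen=200, bounds=(-5.12, 5.12), seed=0):
    """Real-coded GA with tournament selection, blend crossover, Gaussian
    mutation, elitism, and one randomly generated 'alien' member injected
    into the population at every generation to sustain exploration."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    for _ in range(n_gen):
        fit = np.array([f(x) for x in pop])
        new_pop = [pop[np.argmin(fit)].copy()]   # elitism: keep current best
        while len(new_pop) < pop_size - 1:
            i, j = rng.integers(pop_size, size=2)   # tournament for parent 1
            p1 = pop[i] if fit[i] < fit[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)   # tournament for parent 2
            p2 = pop[i] if fit[i] < fit[j] else pop[j]
            alpha = rng.uniform(-0.5, 1.5, dim)     # blend crossover
            child = np.clip(p1 + alpha * (p2 - p1), lo, hi)
            if rng.random() < 0.2:                  # Gaussian mutation
                child = np.clip(child + rng.normal(0.0, 0.1, dim), lo, hi)
            new_pop.append(child)
        new_pop.append(rng.uniform(lo, hi, dim))    # the 'alien' member
        pop = np.array(new_pop)
    fit = np.array([f(x) for x in pop])
    return pop[np.argmin(fit)], float(fit.min())

def rastrigin(x):
    """Multimodal benchmark; global minimum 0 at the origin."""
    return 10.0 * len(x) + float(np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x)))

best_x, best_f = alien_ga(rastrigin)
```

With elitism the best fitness is non-increasing across generations, while the alien member keeps injecting fresh genetic material regardless of how concentrated the population becomes.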
Senecal, P. K.; Pomraning, E.; Anders, J. W.; ...
2014-05-28
A state-of-the-art, grid-convergent simulation methodology was applied to three-dimensional calculations of a single-cylinder optical engine. A mesh resolution study on a sector-based version of the engine geometry further verified the RANS-based cell size recommendations previously presented by Senecal et al. (“Grid Convergent Spray Models for Internal Combustion Engine CFD Simulations,” ASME Paper No. ICEF2012-92043). Convergence of cylinder pressure, flame lift-off length, and emissions was achieved for an adaptive mesh refinement cell size of 0.35 mm. Furthermore, full geometry simulations, using mesh settings derived from the grid convergence study, resulted in excellent agreement with measurements of cylinder pressure, heat release rate, and NOx emissions. On the other hand, the full geometry simulations indicated that the flame lift-off length is not converged at 0.35 mm for jets not aligned with the computational mesh. Further simulations suggested that the flame lift-off lengths for both the nonaligned and aligned jets appear to be converged at 0.175 mm. With this increased mesh resolution, both the trends and magnitudes in flame lift-off length were well predicted with the current simulation methodology. Good agreement between the overall predicted flame behavior and the available chemiluminescence measurements was also achieved. Our present study indicates that cell size requirements for accurate prediction of full geometry flame lift-off lengths may be stricter than those for global combustion behavior. This may be important when accurate soot predictions are required.
Parallel stochastic simulation of macroscopic calcium currents.
González-Vélez, Virginia; González-Vélez, Horacio
2007-06-01
This work introduces MACACO, a macroscopic calcium currents simulator. It provides a parameter-sweep framework which computes macroscopic Ca(2+) currents from the individual aggregation of unitary currents, using a stochastic model for L-type Ca(2+) channels. MACACO uses a simplified 3-state Markov model to simulate the response of each Ca(2+) channel to different voltage inputs to the cell. In order to provide an accurate systematic view for the stochastic nature of the calcium channels, MACACO is composed of an experiment generator, a central simulation engine and a post-processing script component. Due to the computational complexity of the problem and the dimensions of the parameter space, the MACACO simulation engine employs a grid-enabled task farm. Having been designed as a computational biology tool, MACACO heavily borrows from the way cell physiologists conduct and report their experimental work.
Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.
Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay
2015-12-01
In this paper, we propose a new blind learning algorithm, namely the Benveniste-Goursat input-output decision (BG-IOD), to enhance the convergence performance of neural-network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameter learning, thereby leading to improved convergence performance. We prove that, with the input decision information, a desirable convergence property can be achieved: the output symbol error rate (SER) is always less than the input SER whenever the input SER is below a threshold. The BG soft-switching technique is then employed to combine the merits of both input and output decision information, where the former guarantees SER convergence and the latter improves SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as the stochastic quadratic distance and dual-mode constant modulus algorithms, in terms of both convergence performance and SER performance for nonlinear equalization.
Applied Nonlinear Dynamics and Stochastic Systems Near The Millenium. Proceedings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kadtke, J.B.; Bulsara, A.
These proceedings represent papers presented at the Applied Nonlinear Dynamics and Stochastic Systems conference held in San Diego, California in July 1997. The conference emphasized the applications of nonlinear dynamical systems theory in fields as diverse as neuroscience and biomedical engineering, fluid dynamics, chaos control, nonlinear signal/image processing, stochastic resonance, devices and nonlinear dynamics in socio-economic systems. There were 56 papers presented at the conference and 5 have been abstracted for the Energy Science and Technology database. (AIP)
Stochasticity, succession, and environmental perturbations in a fluidic ecosystem.
Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A; Hazen, Terry C; Tiedje, James M; Arkin, Adam P
2014-03-04
Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that the groundwater microbial community diverged substantially from its initial state after EVO amendment and eventually converged to a new community state that clustered closely with the initial state, although the two states differed significantly in composition and structure. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contrary to conventional wisdom, the community succession responding to EVO amendment was controlled primarily by stochastic rather than deterministic processes. During the middle phase of the succession, the role of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, few successional studies are available to support the different cases in the conceptual framework, and further well-replicated, explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession.
NASA Astrophysics Data System (ADS)
Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming
2018-04-01
Probability of detection (POD) is widely used for measuring the reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, but it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based models, such as those for ultrasonic NDT, the amount of empirical information needed for POD estimation can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of a stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed to reduce the cost of MAPOD methods while ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output, and the POD curves are then generated from those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions; in particular, the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat-bottom-hole flaw in an aluminum block. The results show that the stochastic surrogates converge on the statistics at least two orders of magnitude faster than direct Monte Carlo sampling (MCS). Moreover, evaluating the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, the UTSim2 model.
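A non-intrusive polynomial chaos surrogate fitted by OLS regression can be sketched on a one-dimensional toy model (y = exp(X), X standard normal) rather than the ultrasonic simulation; the exact mean E[exp(X)] = e^{1/2} provides a check. Sample size and expansion order are illustrative.

```python
import numpy as np

def hermite_design(x, order):
    """Probabilists' Hermite polynomials He_0..He_order evaluated at x,
    via the recurrence He_{n+1}(x) = x He_n(x) - n He_{n-1}(x)."""
    H = np.zeros((len(x), order + 1))
    H[:, 0] = 1.0
    if order >= 1:
        H[:, 1] = x
    for n in range(1, order):
        H[:, n + 1] = x * H[:, n] - n * H[:, n - 1]
    return H

def nipc_ols(model, order=8, n_samples=2000, seed=0):
    """Non-intrusive polynomial chaos by ordinary least squares:
    sample X ~ N(0,1), evaluate the model, regress on Hermite basis."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples)
    A = hermite_design(x, order)
    coeffs, *_ = np.linalg.lstsq(A, model(x), rcond=None)
    return coeffs

coeffs = nipc_ols(np.exp)
# He_n has zero mean for n >= 1, so the surrogate's mean is the He_0 coefficient
pce_mean = float(coeffs[0])
```

Once fitted, surrogate statistics and pointwise predictions cost only polynomial evaluations, which is the source of the speedups reported in the abstract.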
Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...
2017-03-05
Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
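The paper's specific acceleration schemes are not reproduced here, but the classical forward random-walk Monte Carlo linear solver that such hybrid deterministic-stochastic methods build on can be sketched; the absorption probability and the small test system are illustrative.

```python
import numpy as np

def mc_solve_component(H, f, i, n_walks=20_000, absorb=0.3, seed=0):
    """Estimate component i of the solution of x = H x + f with the
    forward random-walk Monte Carlo linear solver: walks sample the
    terms of the Neumann series x = sum_k H^k f, so convergence of
    that series is required."""
    rng = np.random.default_rng(seed)
    n = len(f)
    est = 0.0
    for _ in range(n_walks):
        state, w = i, 1.0
        est += w * f[state]            # k = 0 term of the Neumann series
        while rng.random() > absorb:   # survive with probability 1 - absorb
            row = np.abs(H[state])
            s = row.sum()
            if s == 0.0:
                break
            nxt = rng.choice(n, p=row / s)
            # importance weight: H entry divided by the transition probability
            w *= H[state, nxt] * s / ((1.0 - absorb) * row[nxt])
            state = nxt
            est += w * f[state]
    return est / n_walks

# small symmetric test system; exact solution of x = H x + f is x = [1/0.7, 1/0.7]
H = np.array([[0.2, 0.1], [0.1, 0.2]])
f = np.array([1.0, 1.0])
x0 = mc_solve_component(H, f, 0)
```

The estimator is unbiased, and in hybrid schemes such stochastic estimates correct or accelerate a deterministic stationary iteration rather than solve the system alone.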
Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application
NASA Astrophysics Data System (ADS)
Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin
2007-12-01
We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that end, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.
Asymptotic stability of spectral-based PDF modeling for homogeneous turbulent flows
NASA Astrophysics Data System (ADS)
Campos, Alejandro; Duraisamy, Karthik; Iaccarino, Gianluca
2015-11-01
Engineering models of turbulence, based on one-point statistics, neglect spectral information inherent in a turbulence field. It is well known, however, that the evolution of turbulence is dictated by a complex interplay between the spectral modes of velocity. For example, for homogeneous turbulence, the pressure-rate-of-strain depends on the integrated energy spectrum weighted by components of the wave vectors. The Interacting Particle Representation Model (IPRM) (Kassinos & Reynolds, 1996) and the Velocity/Wave-Vector PDF model (Van Slooten & Pope, 1997) emulate spectral information in an attempt to improve the modeling of turbulence. We investigate the evolution and asymptotic stability of the IPRM using three different approaches. The first approach considers the Lagrangian evolution of individual realizations (idealized as particles) of the stochastic process defined by the IPRM. The second solves Lagrangian evolution equations for clusters of realizations conditional on a given wave vector. The third evolves the solution of the Eulerian conditional PDF corresponding to the aforementioned clusters. This last method avoids issues related to discrete particle noise and slow convergence associated with Lagrangian particle-based simulations.
Stochastic stability assessment of a semi-free piston engine generator concept
NASA Astrophysics Data System (ADS)
Kigezi, T. N.; Gonzalez Anaya, J. A.; Dunne, J. F.
2016-09-01
Small engines, as power generators with low noise and vibration characteristics, are needed in two niche application areas: as electric vehicle range extenders and as domestic micro Combined Heat and Power systems. A recent semi-free piston design known as the AMOCATIC generator fully meets this requirement. The engine potentially allows high energy conversion efficiencies at resonance, a consequence of its mass-and-spring assembly. As with free-piston engines in general, stability and control of piston motion have been cited as the prime challenge limiting the technology's widespread application. Using physical principles, we derive in this paper two important results: an energy balance criterion and a related general stability criterion for a semi-free piston engine. Control is achieved by systematically designing a Proportional Integral (PI) controller using a control-oriented engine model for which a specific stability condition is stated. All results are presented in closed form throughout the paper. Simulation results under stochastic pressure conditions show that the proposed energy balance, stability criterion, and PI controller operate as predicted to yield stable engine operation at fixed compression ratio.
Far Noise Field of Air Jets and Jet Engines
NASA Technical Reports Server (NTRS)
Callaghan, Edmund E; Coles, Willard D
1957-01-01
An experimental investigation was conducted to study and compare the acoustic radiation of air jets and jet engines. A number of different nozzle-exit shapes were studied with air jets to determine the effect of exit shape on noise generation. Circular, square, rectangular, and elliptical convergent nozzles and convergent-divergent and plug nozzles were investigated. The spectral distributions of the sound power for the engine and the air jet were in good agreement for the case where the engine data were not greatly affected by reflection or jet interference effects. Such power spectra for a subsonic or slightly choked engine or air jet show that the peaks of the spectra occur at a Strouhal number of 0.3.
Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang
2011-01-01
An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717
Moix, Jeremy M; Ma, Jian; Cao, Jianshu
2015-03-07
A numerically exact path integral treatment of the absorption and emission spectra of open quantum systems is presented that requires only the straightforward solution of a stochastic differential equation. The approach converges rapidly enabling the calculation of spectra of large excitonic systems across the complete range of system parameters and for arbitrary bath spectral densities. With the numerically exact absorption and emission operators, one can also immediately compute energy transfer rates using the multi-chromophoric Förster resonant energy transfer formalism. Benchmark calculations on the emission spectra of two level systems are presented demonstrating the efficacy of the stochastic approach. This is followed by calculations of the energy transfer rates between two weakly coupled dimer systems as a function of temperature and system-bath coupling strength. It is shown that the recently developed hybrid cumulant expansion (see Paper II) is the only perturbative method capable of generating uniformly reliable energy transfer rates and emission spectra across a broad range of system parameters.
Franklin, Nicholas T; Frank, Michael J
2015-01-01
Convergent evidence suggests that the basal ganglia support reinforcement learning by adjusting action values according to reward prediction errors. However, adaptive behavior in stochastic environments requires the consideration of uncertainty to dynamically adjust the learning rate. We consider how cholinergic tonically active interneurons (TANs) may endow the striatum with such a mechanism in computational models spanning three Marr's levels of analysis. In the neural model, TANs modulate the excitability of spiny neurons, their population response to reinforcement, and hence the effective learning rate. Long TAN pauses facilitated robustness to spurious outcomes by increasing divergence in synaptic weights between neurons coding for alternative action values, whereas short TAN pauses facilitated stochastic behavior but increased responsiveness to change-points in outcome contingencies. A feedback control system allowed TAN pauses to be dynamically modulated by uncertainty across the spiny neuron population, allowing the system to self-tune and optimize performance across stochastic environments. DOI: http://dx.doi.org/10.7554/eLife.12029.001 PMID:26705698
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Zachary; Neuert, Gregor; Department of Pharmacology, School of Medicine, Vanderbilt University, Nashville, Tennessee 37232
2016-08-21
Emerging techniques now allow for precise quantification of distributions of biological molecules in single cells. These rapidly advancing experimental methods have created a need for more rigorous and efficient modeling tools. Here, we derive new bounds on the likelihood that observations of single-cell, single-molecule responses come from a discrete stochastic model, posed in the form of the chemical master equation. These strict upper and lower bounds are based on a finite state projection approach, and they converge monotonically to the exact likelihood value. These bounds allow one to discriminate rigorously between models with a minimum level of computational effort. In practice, these bounds can be incorporated into stochastic model identification and parameter inference routines, which improve the accuracy and efficiency of endeavors to analyze and predict single-cell behavior. We demonstrate the applicability of our approach using simulated data for three example models as well as for experimental measurements of a time-varying stochastic transcriptional response in yeast.
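A finite state projection sketch for a simple birth-death process (not the yeast transcription model of the abstract) shows how the probability mass that leaks out of the truncated state space yields a computable error bound; rates, truncation size, and time step are illustrative.

```python
import numpy as np

def fsp_birth_death(kb=5.0, kd=1.0, t_end=2.0, n_max=30, dt=1e-4):
    """Finite state projection of the chemical master equation for a
    birth-death process (birth rate kb, death rate kd*n), truncated to
    states {0, ..., n_max}.  Returns the truncated distribution at t_end
    and the FSP error bound: the probability mass that escaped the
    projection, which bounds the truncation error of every state."""
    n = n_max + 1
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] -= kb              # birth always fires...
        if i + 1 < n:
            A[i + 1, i] += kb      # ...but only lands inside if i+1 <= n_max
        if i > 0:
            A[i, i] -= kd * i
            A[i - 1, i] += kd * i  # death i -> i-1
    p = np.zeros(n)
    p[0] = 1.0                      # start with zero molecules
    for _ in range(int(t_end / dt)):
        p = p + dt * (A @ p)        # explicit Euler on dp/dt = A p
    leak = 1.0 - p.sum()            # FSP error bound
    return p, leak

p, leak = fsp_birth_death()
# the exact transient law is Poisson with mean (kb/kd) * (1 - exp(-kd*t))
mean = float(np.arange(len(p)) @ p)
```

Enlarging n_max drives the leak, and hence the bound, monotonically toward zero, mirroring the monotone convergence of the likelihood bounds described above.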
Semi-stochastic full configuration interaction quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Holmes, Adam; Petruzielo, Frank; Khadilkar, Mihir; Changlani, Hitesh; Nightingale, M. P.; Umrigar, C. J.
2012-02-01
In the recently proposed full configuration interaction quantum Monte Carlo (FCIQMC) [1,2], the ground state is projected out stochastically, using a population of walkers each of which represents a basis state in the Hilbert space spanned by Slater determinants. The infamous fermion sign problem manifests itself in the fact that walkers of either sign can be spawned on a given determinant. We propose an improvement on this method in the form of a hybrid stochastic/deterministic technique, which we expect will improve the efficiency of the algorithm by ameliorating the sign problem. We test the method on atoms and molecules, e.g., carbon, the carbon dimer, the N2 molecule, and stretched N2. [1] G. Booth, A. Thom, and A. Alavi, "Fermion Monte Carlo without fixed nodes: a game of Life, death and annihilation in Slater determinant space," J. Chem. Phys. 131, 054106 (2009). [2] D. Cleland, G. Booth, and A. Alavi, "Survival of the fittest: accelerating convergence in full configuration-interaction quantum Monte Carlo," J. Chem. Phys. 132, 041103 (2010).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braumann, Andreas; Kraft, Markus, E-mail: mk306@cam.ac.u; Wagner, Wolfgang
2010-10-01
This paper is concerned with computational aspects of a multidimensional population balance model of a wet granulation process. Wet granulation is a manufacturing method to form composite particles, granules, from small particles and binders. A detailed numerical study of a stochastic particle algorithm for the solution of a five-dimensional population balance model for wet granulation is presented. Each particle consists of two types of solids (containing pores) and of external and internal liquid (located in the pores). Several transformations of particles are considered, including coalescence, compaction and breakage. A convergence study is performed with respect to the parameter that determines the number of numerical particles. Averaged properties of the system are computed. In addition, the ensemble is subdivided into practically relevant size classes and analysed with respect to the amount of mass and the particle porosity in each class. These results illustrate the importance of the multidimensional approach. Finally, the kinetic equation corresponding to the stochastic model is discussed.
Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems
NASA Astrophysics Data System (ADS)
Mahdi Alavi, S. M.; Saif, Mehrdad
2013-12-01
This paper focuses on the design of the standard observer for discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed on an experimental test-bed fabricated for evaluating over-the-network estimation techniques under realistic radio channel conditions.
The sequence relay selection strategy based on stochastic dynamic programming
NASA Astrophysics Data System (ADS)
Zhu, Rui; Chen, Xihao; Huang, Yangchao
2017-07-01
Relay-assisted (RA) networks with relay node selection are an effective way to improve channel capacity and convergence performance. However, most existing research on relay selection considers neither the statistical channel state information nor the selection cost, a shortcoming that limits the performance and applicability of RA networks in practical scenarios. To overcome this drawback, a sequence relay selection strategy (SRSS) is proposed, and its performance upper bound is analyzed in this paper. Furthermore, to make SRSS more practical, a novel threshold determination algorithm based on stochastic dynamic programming (SDP) is given to work with SRSS. Numerical results are also presented to exhibit the performance of SRSS with SDP.
Wang, Huanqing; Chen, Bing; Liu, Xiaoping; Liu, Kefu; Lin, Chong
2013-12-01
This paper is concerned with the problem of adaptive fuzzy tracking control for a class of pure-feedback stochastic nonlinear systems with input saturation. To overcome the design difficulty from nondifferential saturation nonlinearity, a smooth nonlinear function of the control input signal is first introduced to approximate the saturation function; then, an adaptive fuzzy tracking controller based on the mean-value theorem is constructed by using backstepping technique. The proposed adaptive fuzzy controller guarantees that all signals in the closed-loop system are bounded in probability and the system output eventually converges to a small neighborhood of the desired reference signal in the sense of mean quartic value. Simulation results further illustrate the effectiveness of the proposed control scheme.
Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.
Ebert, M
1997-12-01
This is the final article in a three part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely, those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
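As a rough illustration of the class of algorithm the article surveys (not any specific radiotherapy optimiser), the sketch below implements simulated annealing on a toy one-dimensional objective: random proposal variates gradually become more constricted as the run cools, which is exactly the behaviour described above.

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, t0=1.0, seed=0):
    """Minimise f by stochastic search with a shrinking proposal spread."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    temp = t0
    for _ in range(steps):
        temp *= 0.999                       # geometric cooling schedule
        step = max(math.sqrt(temp), 0.01)   # sampling becomes more constricted
        y = x + rng.gauss(0.0, step)
        fy = f(y)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fy <= fx or rng.random() < math.exp((fx - fy) / temp):
            x, fx = y, fy
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# Toy non-convex objective with global minimum f(0) = 0.
f = lambda x: x * x + 2.0 * math.sin(5.0 * x) ** 2
xbest, fbest = simulated_annealing(f, x0=3.0)
```

Early in the run the wide proposals let the search tunnel between local basins; late in the run the constricted proposals refine the best basin found.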
NASA Technical Reports Server (NTRS)
Sandell, N. R., Jr.; Athans, M.
1975-01-01
The development of the theory of the finite-state, finite-memory (FSFM) stochastic control problem is discussed. The sufficiency of the FSFM minimum principle (which is in general only a necessary condition) was investigated. By introducing the notion of a signaling strategy as defined in the literature on games, conditions under which the FSFM minimum principle is sufficient were determined. This result explicitly interconnects the information structure of the FSFM problem with its optimality conditions. The min-H algorithm for the FSFM problem was studied. It is demonstrated that a version of the algorithm always converges to a particular type of local minimum termed a person-by-person extremal.
Corson, James A.; Erisir, Alev
2014-01-01
While physiological studies suggested convergence of chorda tympani and glossopharyngeal afferent axons onto single neurons of the rostral nucleus of the solitary tract (rNTS), anatomical evidence has been elusive. The current study uses high-magnification confocal microscopy to identify putative synaptic contacts from afferent fibers of the two nerves onto individual projection neurons. Imaged tissue is re-visualized with electron microscopy, confirming that overlapping fluorescent signals in confocal z-stacks accurately identify appositions between labeled terminal and dendrite pairs. Monte Carlo modeling reveals that the probability of overlapping fluorophores is stochastically unrelated to the density of afferent label suggesting that convergent innervation in the rNTS is selective rather than opportunistic. Putative synaptic contacts from each nerve are often compartmentalized onto dendrite segments of convergently innervated neurons. These results have important implications for orosensory processing in the rNTS, and the techniques presented here have applications in investigations of neural microcircuitry with an emphasis on innervation patterning. PMID:23640852
Statistical methods for convergence detection of multi-objective evolutionary algorithms.
Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J
2009-01-01
In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other hand. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, needing fewer function evaluations while preserving good approximation quality at the same time.
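The online stopping rule described above can be sketched as follows. The window size, variance threshold, and the simulated indicator series are illustrative choices, not the paper's settings: the run is stopped once the variance of the performance indicator over a sliding window of generations falls below a threshold.

```python
import statistics

def detect_convergence(indicator_values, window=10, var_threshold=1e-6):
    """Return the first generation at which the variance of the indicator
    over the trailing window drops below var_threshold, or None."""
    for g in range(window, len(indicator_values) + 1):
        recent = indicator_values[g - window:g]
        if statistics.variance(recent) < var_threshold:
            return g - 1
    return None

# Simulated hypervolume-like indicator: improves quickly, then stagnates.
series = [1.0 - 0.99 ** g for g in range(200)]
gen = detect_convergence(series, window=10, var_threshold=1e-4)
```

In a real MOEA loop the same check would run after each generation, stopping the optimiser as soon as it fires and thereby saving function evaluations.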
2010-11-01
November 2010. Context: the power of today's computers allows us to study problems for which no analytical solution exists... (DRDC CORA TM 2010-249; remaining fragments of this record: 4.8 Proof of Corollary; 4.9 Example, in which Figure 4 shows the probability of achieving the optimal capacities for links.)
Optimal Budget Allocation for Sample Average Approximation
2011-06-01
an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to ... regime for the optimization algorithm. 1 Introduction. Sample average approximation (SAA) is a frequently used approach to solving stochastic programs ... appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample
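A minimal sketch of the SAA idea under discussion (the objective and distribution are invented for illustration): the expectation in min_x E[F(x, xi)] is replaced by an average over n sampled scenarios. With F(x, xi) = (x - xi)^2 and xi ~ Normal(2, 1), the true solution is x* = E[xi] = 2 and the SAA solution is simply the sample mean, the estimator whose convergence in the computing budget the text studies.

```python
import random

def saa_solution(scenarios):
    """Minimizer of (1/n) * sum((x - xi)^2) over x: the sample mean."""
    return sum(scenarios) / len(scenarios)

rng = random.Random(123)
for n in (10, 100, 10000):
    xs = [rng.gauss(2.0, 1.0) for _ in range(n)]
    xhat = saa_solution(xs)   # approaches x* = 2 as the budget n grows
```

The budget-allocation question in the title concerns how to split total computing effort between drawing more scenarios (larger n) and solving the resulting sample average problem more accurately.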
Local Improvement Results for Anderson Acceleration with Inaccurate Function Evaluations
Toth, Alex; Ellis, J. Austin; Evans, Tom; ...
2017-10-26
Here, we analyze the convergence of Anderson acceleration when the fixed point map is corrupted with errors. We also consider uniformly bounded errors and stochastic errors with infinite tails. We prove local improvement results which describe the performance of the iteration up to the point where the accuracy of the function evaluation causes the iteration to stagnate. We illustrate the results with examples from neutronics.
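For context, a generic AA(m) implementation (not the authors' code) applied to an exactly evaluated fixed-point map looks like the sketch below; the paper's point is that when the map g is corrupted by errors, this same iteration improves only up to the evaluation accuracy and then stagnates.

```python
import numpy as np

def anderson(g, x0, m=3, iters=50, tol=1e-10):
    """Anderson acceleration for the fixed-point iteration x = g(x)."""
    xs = [np.asarray(x0, float)]
    gs = [g(xs[0])]
    fs = [gs[0] - xs[0]]            # residuals f_k = g(x_k) - x_k
    x = gs[0]                       # first step: plain fixed-point update
    for k in range(1, iters):
        xs.append(x)
        gs.append(g(x))
        fs.append(gs[-1] - xs[-1])
        if np.linalg.norm(fs[-1]) < tol:
            break
        mk = min(m, k)
        # least-squares combination of the last mk residual differences
        dF = np.column_stack([fs[-i] - fs[-i - 1] for i in range(1, mk + 1)])
        dG = np.column_stack([gs[-i] - gs[-i - 1] for i in range(1, mk + 1)])
        gamma, *_ = np.linalg.lstsq(dF, fs[-1], rcond=None)
        x = gs[-1] - dG @ gamma
    return x

# Example: accelerate x -> cos(x); the fixed point is about 0.739085.
x_star = anderson(np.cos, np.array([0.5]))
```

With inexact evaluations one would stop the loop once the residual norm reaches the error level, which is the stagnation point the local improvement results characterise.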
Optimizer convergence and local minima errors and their clinical importance
NASA Astrophysics Data System (ADS)
Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.
2003-09-01
Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.
3D aquifer characterization using stochastic streamline calibration
NASA Astrophysics Data System (ADS)
Jang, Minchul
2007-03-01
In this study, a new inverse approach, stochastic streamline calibration, is proposed. Using both a streamline concept and a stochastic technique, stochastic streamline calibration optimizes an identified field to fit given observation data in an exceptionally fast and stable fashion. In stochastic streamline calibration, streamlines are adopted as basic elements not only for describing fluid flow but also for identifying the permeability distribution. Following the streamline-based inversion of Agarwal et al. [Agarwal B, Blunt MJ. Streamline-based method with full-physics forward simulation for history matching performance data of a North Sea field. SPE J 2003;8(2):171-80] and Wang and Kovscek [Wang Y, Kovscek AR. Streamline approach for history matching production data. SPE J 2000;5(4):353-62], permeability is modified along streamlines rather than at individual gridblocks. Permeabilities in the gridblocks through which a streamline passes are adjusted by multiplication with a factor chosen to match the flow and transport properties of the streamline. This enables the inverse process to achieve fast convergence. In addition, equipped with a stochastic module, the proposed technique calibrates the identified field in a stochastic manner while incorporating spatial information into the field. This prevents the inverse process from being stuck in local minima and helps the search for a globally optimized solution. Simulation results indicate that stochastic streamline calibration identifies an unknown permeability field exceptionally quickly. More notably, the identified permeability distribution reflects realistic geological features, which had not been achieved in the original work by Agarwal et al. owing to the large modifications made along streamlines to match production data only. The model constructed by stochastic streamline calibration forecast plume transport similar to that of a reference model.
These results suggest that the proposed approach can be applied to constructing aquifer models and forecasting the aquifer performance measures of interest.
Teaching Reinforcement of Stochastic Behavior Using Monte Carlo Simulation.
ERIC Educational Resources Information Center
Fox, William P.; And Others
1996-01-01
Explains a proposed block of instruction that would give students in industrial engineering, operations research, systems engineering, and applied mathematics the basic understanding required to begin more advanced courses in simulation theory or applications. (DDR)
Stochastic Mixing Model with Power Law Decay of Variance
NASA Technical Reports Server (NTRS)
Fedotov, S.; Ihme, M.; Pitsch, H.
2003-01-01
Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason the LLN enters our formulation of the mixing problem is that the random conserved scalar c = c(t, x(t)) appears to behave as a sample mean: it converges to the mean value mu, while the variance sigma_c^2(t) decays approximately as t^(-1). Since the variance of the scalar typically decays faster than that of a sample mean (with a decay exponent greater than unity), we introduce some non-linear modifications into the corresponding pdf equation. The main idea is to develop a robust model that is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate gamma_n, which we model in a first step as a deterministic function. In a second step, we generalize gamma_n to a stochastic variable, taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
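The baseline invoked above, that the variance of a sample mean of n i.i.d. draws decays as 1/n by the LLN, is easy to check numerically (a generic illustration, not the paper's mixing model):

```python
import random

def sample_mean_variance(n, trials=2000, seed=1):
    """Empirical variance of the mean of n i.i.d. Uniform(0,1) draws."""
    rng = random.Random(seed)
    means = [sum(rng.random() for _ in range(n)) / n for _ in range(trials)]
    mu = sum(means) / trials
    return sum((m - mu) ** 2 for m in means) / (trials - 1)

v10 = sample_mean_variance(10)      # about (1/12)/10
v1000 = sample_mean_variance(1000)  # about 100x smaller
```

A scalar whose variance decays faster than this 1/n (equivalently t^(-1)) baseline cannot be modelled as a plain sample mean, which is what motivates the non-linear modification of the pdf equation.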
NASA Astrophysics Data System (ADS)
Zhu, Z. W.; Zhang, W. D.; Xu, J.
2014-03-01
The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential terms were introduced to interpret the hysteretic phenomena of the GMF, and the non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The condition for stochastic Hopf bifurcation and noise-induced chaotic response was determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed using the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases as the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were confirmed by experiments. The results are helpful in engineering applications of GMF.
Stochastic processes, estimation theory and image enhancement
NASA Technical Reports Server (NTRS)
Assefi, T.
1978-01-01
An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and for practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed. The appendices discuss the remaining mathematical background.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooks, Robert T.
A transition duct system (100) for routing a gas flow from a combustor (102) to the first stage (104) of a turbine section (106) in a combustion turbine engine (108), wherein the transition duct system (100) includes one or more converging flow joint inserts (120) forming a trailing edge (122) at an intersection (124) between adjacent transition ducts (126, 128) is disclosed. The transition duct system (100) may include a transition duct (126, 128) having an internal passage (130) extending between an inlet (132, 184) to an outlet (134, 186) and may expel gases into the first stage turbine (104) with a tangential component. The converging flow joint insert (120) may be contained within a converging flow joint insert receiver (136) and disconnected from the transition duct bodies (126, 128) by which the converging flow joint insert (120) is positioned. Being disconnected eliminates stress formation within the converging flow joint insert (120), thereby enhancing the life of the insert. The converging flow joint insert (120) may be removable such that the insert (120) can be replaced once worn beyond design limits.
NASA Technical Reports Server (NTRS)
Zhu, Dongming; Nemeth, Noel N.
2017-01-01
Advanced environmental barrier coatings will play an increasingly important role in future gas turbine engines because of their ability to protect emerging light-weight SiC/SiC ceramic matrix composite (CMC) engine components, further raising engine operating temperatures and performance. Because environmental barrier coating systems are critical to the performance, reliability and durability of these hot-section ceramic engine components, a prime-reliant coating system along with an established life design methodology are required for the insertion of hot-section ceramic components into engine service. In this paper, we first summarize some observations of high temperature, high-heat-flux environmental degradation and failure mechanisms of environmental barrier coating systems in laboratory tests simulating the engine environment. In particular, the coating surface cracking morphologies and associated subsequent delamination mechanisms under engine-level high-heat-flux, combustion steam, and mechanical creep and fatigue loading conditions are discussed. The EBC composition and architecture improvements based on advanced high-heat-flux environmental testing, and the modeling advances based on the integrated Finite Element Analysis Micromechanics Analysis Code/Ceramics Analysis and Reliability Evaluation of Structures (FEAMAC/CARES) program, are also highlighted. The stochastic progressive damage simulation successfully predicts the mud-flat damage pattern in EBCs on coated 3-D specimens and in a 2-D model of a through-the-thickness cross-section. A 2-parameter Weibull distribution was assumed in characterizing the stochastic strength response of the coating layers, and the formation of damage was modeled accordingly. The damage initiation and coalescence into progressively smaller mud-flat crack cells was demonstrated.
A coating life prediction framework may be realized by examining the surface crack initiation and delamination propagation in conjunction with environmental degradation under high-heat-flux and environment load test conditions.
Markov stochasticity coordinates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliazar, Iddo, E-mail: iddo.eliazar@intel.com
Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. The method is also tweaked to quantify the stochasticity of the first-passage times of Markov dynamics, and the socioeconomic equality and mobility in human societies.
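As background for the kind of quantity being gauged, the sketch below estimates a first-passage time of a small Markov chain by Monte Carlo. The three-state chain is hypothetical, and the paper's actual coordinates construction is not reproduced here.

```python
import random

# Transition probabilities of a hypothetical 3-state chain; state 2 is
# an absorbing target state.
P = {
    0: [(0, 0.5), (1, 0.4), (2, 0.1)],
    1: [(0, 0.2), (1, 0.5), (2, 0.3)],
    2: [(2, 1.0)],
}

def first_passage_time(start, target, rng):
    """Number of steps until the chain first hits the target state."""
    state, steps = start, 0
    while state != target:
        u, acc = rng.random(), 0.0
        for nxt, prob in P[state]:
            acc += prob
            if u < acc:
                state = nxt
                break
        steps += 1
    return steps

rng = random.Random(3)
times = [first_passage_time(0, 2, rng) for _ in range(20000)]
mean_fpt = sum(times) / len(times)
```

For this chain the exact mean first-passage time from state 0 solves a small linear system and equals 1.8/0.34, about 5.29, so the Monte Carlo estimate should land close to that.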
Stochasticity, succession, and environmental perturbations in a fluidic ecosystem
Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D.; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A.; Hazen, Terry C.; Tiedje, James M.; Arkin, Adam P.
2014-01-01
Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that groundwater microbial community diverged substantially away from the initial community after EVO amendment and eventually converged to a new community state, which was closely clustered with its initial state. However, their composition and structure were significantly different from each other. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contradictory to conventional wisdom, the community succession responding to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the roles of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, there are limited successional studies available to support different cases in the conceptual framework, but further well-replicated explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession. PMID:24550501
Adaptiveness in monotone pseudo-Boolean optimization and stochastic neural computation.
Grossi, Giuliano
2009-08-01
Hopfield neural network (HNN) is a nonlinear computational model successfully applied in finding near-optimal solutions of several difficult combinatorial problems. In many cases, the network energy function is obtained through a learning procedure so that its minima are states falling into a proper subspace (feasible region) of the search space. However, because of the network nonlinearity, a number of undesirable local energy minima emerge from the learning procedure, significantly affecting the network performance. In the neural model analyzed here, we combine both a penalty and a stochastic process in order to enhance the performance of a binary HNN. The penalty strategy allows us to gradually lead the search towards states representing feasible solutions, thus avoiding oscillatory behavior or asymptotically unstable convergence. The presence of stochastic dynamics potentially prevents the network from falling into shallow local minima of the energy function, i.e., those quite far from the global optimum. Hence, for a given fixed network topology, the desired final distribution on the states can be reached by carefully modulating this process. The model uses pseudo-Boolean functions to express both the problem constraints and the cost function; a combination of these two functions is then interpreted as the energy of the neural network. A wide variety of NP-hard problems fall in the class of problems that can be solved by the model at hand, particularly those having a monotonic quadratic pseudo-Boolean constraint function, that is, a function easily derived from closed algebraic expressions representing the constraint structure and easy (polynomial-time) to maximize.
We show the asymptotic convergence properties of this model, characterizing its state-space distribution at thermal equilibrium in terms of a Markov chain, and give evidence of its ability to find high-quality solutions on benchmark and randomly generated instances of two specific problems taken from computational graph theory.
Genuine non-self-averaging and ultraslow convergence in gelation.
Cho, Y S; Mazza, M G; Kahng, B; Nagler, J
2016-08-01
In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.
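The order parameter discussed here can be illustrated with a generic aggregation sketch (random link addition tracked by a union-find structure, not the paper's growth-rate-controlled model): below the transition the largest cluster stays microscopic, above it a cluster of macroscopic scale forms.

```python
import random

def largest_cluster_fraction(n_nodes, n_links, seed=0):
    """Relative size of the largest cluster after adding random links."""
    rng = random.Random(seed)
    parent = list(range(n_nodes))
    size = [1] * n_nodes

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    biggest = 1
    for _ in range(n_links):
        a, b = find(rng.randrange(n_nodes)), find(rng.randrange(n_nodes))
        if a != b:                          # merge the two clusters
            if size[a] < size[b]:
                a, b = b, a
            parent[b] = a
            size[a] += size[b]
            biggest = max(biggest, size[a])
    return biggest / n_nodes

sub = largest_cluster_fraction(100000, 40000)   # below the transition
sup = largest_cluster_fraction(100000, 80000)   # above the transition
```

Repeating such runs over many seeds and checking whether the sample-to-sample spread of the order parameter vanishes with system size is the self-averaging test the abstract refers to.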
Global Well-posedness of the Spatially Homogeneous Kolmogorov-Vicsek Model as a Gradient Flow
NASA Astrophysics Data System (ADS)
Figalli, Alessio; Kang, Moon-Jin; Morales, Javier
2018-03-01
We consider the so-called spatially homogeneous Kolmogorov-Vicsek model, a non-linear Fokker-Planck equation for self-driven stochastic particles with orientation interaction under the assumption of spatial homogeneity. We prove the global existence and uniqueness of weak solutions to the equation. We also show that weak solutions converge exponentially to a steady state, which has the form of the Fisher-von Mises distribution.
Generalized Riemann hypothesis and stochastic time series
NASA Astrophysics Data System (ADS)
Mussardo, Giuseppe; LeClair, André
2018-06-01
Using the Dirichlet theorem on the equidistribution of residue classes modulo q and the Lemke Oliver-Soundararajan conjecture on the distribution of pairs of residues on consecutive primes, we show that the domain of convergence of the infinite product of Dirichlet L-functions of non-principal characters can be extended from Re(s) > 1 down to Re(s) > 1/2, without encountering any zeros before reaching this critical line. The possibility of doing so can be traced back to a universal diffusive random walk behavior of a series C_N over the primes, which underlies the convergence of the infinite product of the Dirichlet functions. The series C_N has several aspects in common with stochastic time series, and its control requires addressing a problem similar to the single Brownian trajectory problem in statistical mechanics. In the case of the Dirichlet functions of non-principal characters, we show that this problem can be solved in terms of a self-averaging procedure based on an ensemble of block variables computed on extended intervals of primes. These intervals, called inertial intervals, ensure the ergodicity and stationarity of the time series underlying the quantity C_N. The infinity of primes also ensures the absence of rare events that would have been responsible for a scaling behavior different from the universal law of random walks.
Stochastic Simulation Tool for Aerospace Structural Analysis
NASA Technical Reports Server (NTRS)
Knight, Norman F.; Moore, David F.
2006-01-01
Stochastic simulation refers to incorporating the effects of design tolerances and uncertainties into the design analysis model and then determining their influence on the design. A high-level evaluation of one such stochastic simulation tool, the MSC.Robust Design tool by MSC.Software Corporation, has been conducted. This stochastic simulation tool provides structural analysts with a tool to interrogate their structural design based on their mathematical description of the design problem using finite element analysis methods. This tool leverages the analyst's prior investment in finite element model development of a particular design. The original finite element model is treated as the baseline structural analysis model for the stochastic simulations that are to be performed. A Monte Carlo approach is used by MSC.Robust Design to determine the effects of scatter in design input variables on response output parameters. The tool was not designed to provide a probabilistic assessment, but to assist engineers in understanding cause and effect. It is driven by a graphical user interface and retains the engineer-in-the-loop strategy for design evaluation and improvement. The application problem for the evaluation is chosen to be a two-dimensional shell finite element model of a Space Shuttle wing leading-edge panel under re-entry aerodynamic loading. MSC.Robust Design adds value to the analysis effort by rapidly identifying the design input variables whose variability most influences the response output parameters.
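The cause-and-effect screening described above can be illustrated with a minimal Monte Carlo sketch (this is not the MSC.Robust Design tool; the toy response function, input names, and correlation-based influence metric are assumptions for illustration):

```python
import random
import math

def monte_carlo_scatter(response, nominal, spread, n_samples=2000, seed=0):
    """Propagate scatter in design inputs through a response function.

    response: callable mapping a dict of inputs to a scalar output
    nominal:  dict of nominal input values
    spread:   dict of standard deviations (normal scatter assumed)
    Returns the sampled outputs and a per-input influence score
    (absolute sample correlation between each input and the output).
    """
    rng = random.Random(seed)
    samples, outputs = [], []
    for _ in range(n_samples):
        x = {k: rng.gauss(nominal[k], spread[k]) for k in nominal}
        samples.append(x)
        outputs.append(response(x))

    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sx = math.sqrt(sum((v - mx) ** 2 for v in xs))
        sy = math.sqrt(sum((v - my) ** 2 for v in ys))
        return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (sx * sy)

    influence = {k: abs(corr([s[k] for s in samples], outputs))
                 for k in nominal}
    return outputs, influence

# Toy "panel" response: stress grows with load, falls with thickness squared.
out, infl = monte_carlo_scatter(
    lambda x: x["load"] / x["thickness"] ** 2,
    nominal={"thickness": 2.0, "load": 10.0},
    spread={"thickness": 0.2, "load": 0.1},
)
```

Here the thickness carries 10% relative scatter and enters quadratically, so its influence score dominates that of the load.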
On the Derivation of the Schroedinger Equation from Stochastic Mechanics.
NASA Astrophysics Data System (ADS)
Wallstrom, Timothy Clarke
The thesis is divided into four largely independent chapters. The first three chapters treat mathematical problems in the theory of stochastic mechanics. The fourth chapter deals with stochastic mechanics as a physical theory and shows that the Schrödinger equation cannot be derived from existing formulations of stochastic mechanics, as had previously been believed. Since the drift coefficients of stochastic mechanical diffusions are undefined on the nodes, or zeros of the density, an important problem has been to show that the sample paths stay away from the nodes. In Chapter 1, it is shown that for a smooth wavefunction, the closest approach to the nodes can be bounded solely in terms of the time-integrated energy. The ergodic properties of stochastic mechanical diffusions are greatly complicated by the tendency of the particles to avoid the nodes. In Chapter 2, it is shown that a sufficient condition for a stationary process to be ergodic is that there exist positive t and c such that for all x and y, p^t(x,y) > c p(y), and this result is applied to show that the set of spin-1/2 diffusions is uniformly ergodic. In stochastic mechanics, the Bopp-Haag-Dankel diffusions on R^3 × SO(3) are used to represent particles with spin. Nelson has conjectured that in the limit as the particle's moment of inertia I goes to zero, the projections of the Bopp-Haag-Dankel diffusions onto R^3 converge to a Markovian limit process. This conjecture is proved for the spin-1/2 case in Chapter 3, and the limit process is identified as the diffusion naturally associated with the solution to the regular Pauli equation. In Chapter 4 it is shown that the general solution of the stochastic Newton equation does not correspond to a solution of the Schrödinger equation, and that there are solutions to the Schrödinger equation which do not satisfy the Guerra-Morato Lagrangian variational principle.
These observations are shown to apply equally to other existing formulations of stochastic mechanics, and it is argued that these difficulties represent fundamental inadequacies in the physical foundation of stochastic mechanics.
On the structure of the master equation for a two-level system coupled to a thermal bath
NASA Astrophysics Data System (ADS)
de Vega, Inés
2015-04-01
We derive a master equation from the exact stochastic Liouville-von-Neumann (SLN) equation (Stockburger and Grabert 2002 Phys. Rev. Lett. 88 170407). The latter depends on two correlated noises and describes exactly the dynamics of an oscillator (either harmonic or anharmonic) coupled to an environment at thermal equilibrium. The newly derived master equation is obtained by performing analytically the average over different noise trajectories. It is found to have a complex hierarchical structure that might help explain the convergence problems occurring when performing numerically the stochastic average of trajectories given by the SLN equation (Koch et al 2008 Phys. Rev. Lett. 100 230402, Koch 2010 PhD thesis, Fakultät Mathematik und Naturwissenschaften der Technischen Universität Dresden).
NASA Astrophysics Data System (ADS)
He, Shaobo; Banerjee, Santo
2018-07-01
A fractional-order SIR epidemic model is proposed under the influence of both parametric seasonality and external noise. The original integer-order SIR epidemic model is stable; introducing seasonality and a noise force changes the behavior of the system. It is shown that the system has rich dynamical behavior for different system parameters, fractional derivative orders, and degrees of seasonality and noise. The complexity of the stochastic model is investigated using multi-scale fuzzy entropy. Finally, a hard-limiter-controlled system is designed, and simulation results show that the ratio of infected individuals can converge to a sufficiently small target ρ, which means the epidemic outbreak can be brought under control by the implementation of effective medical and health measures.
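A minimal sketch of this kind of model, assuming an integer-order SIR system with a seasonally forced contact rate and additive noise integrated by the Euler-Maruyama method (the paper's fractional derivative, fuzzy-entropy analysis, and hard limiter are omitted; all parameter values are illustrative):

```python
import math
import random

def stochastic_sir(beta0=0.9, gamma=0.3, eps=0.3, period=50.0, sigma=0.02,
                   s0=0.99, i0=0.01, dt=0.01, steps=20000, seed=0):
    """Euler-Maruyama integration of an SIR model with a seasonally forced
    contact rate beta(t) = beta0 * (1 + eps * sin(2*pi*t/period)) and
    multiplicative noise on the infected fraction. Returns the trajectory
    of the infected fraction i(t)."""
    rng = random.Random(seed)
    s, i = s0, i0
    traj = []
    for k in range(steps):
        t = k * dt
        beta = beta0 * (1.0 + eps * math.sin(2.0 * math.pi * t / period))
        ds = -beta * s * i                 # susceptibles lost to infection
        di = beta * s * i - gamma * i      # infections minus recoveries
        noise = sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        s = min(max(s + ds * dt, 0.0), 1.0)
        i = min(max(i + di * dt + noise * i, 0.0), 1.0)
        traj.append(i)
    return traj

traj = stochastic_sir()
```

With these assumed parameters the basic reproduction number is beta0/gamma = 3, so the infected fraction first grows into an outbreak before decaying.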
Error analysis of stochastic gradient descent ranking.
Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan
2013-06-01
Ranking is an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in the error analysis of ranking learning. Experimental results on real-world data show the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis in ranking error.
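A hedged sketch of pairwise kernel SGD ranking with the least squares loss (an illustrative variant, not necessarily the paper's exact algorithm; the Gaussian kernel, decaying step-size schedule, and all parameter values are assumptions):

```python
import numpy as np

def kernel_sgd_rank(X, y, gamma=1.0, reg=0.01, step0=0.5, epochs=100, seed=0):
    """Pairwise least-squares ranking by stochastic gradient descent in an
    RKHS with a Gaussian kernel. The ranking function is
    f(x) = sum_i c_i k(x_i, x); each step samples a pair (i, j) and fits
    f(x_i) - f(x_j) to y_i - y_j via a functional gradient step."""
    rng = np.random.default_rng(seed)
    n = len(X)
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    c = np.zeros(n)
    t = 0
    for _ in range(epochs):
        for _ in range(n):
            i, j = rng.integers(n), rng.integers(n)
            if i == j:
                continue
            t += 1
            step = step0 / np.sqrt(t)              # decaying step size
            resid = (K[i] - K[j]) @ c - (y[i] - y[j])
            grad = np.zeros(n)
            grad[i] += resid                       # functional gradient:
            grad[j] -= resid                       # resid * (k_i - k_j)
            c -= step * (grad + reg * c)           # regularized update
    return lambda x: np.exp(-gamma * ((X - x) ** 2).sum(-1)) @ c

# Toy data on a line: larger x should rank higher.
X = np.linspace(0.0, 1.0, 20)[:, None]
y = X.ravel()
f = kernel_sgd_rank(X, y)
preds = np.array([f(x) for x in X])
```

After training, the learned scores should increase with the true relevance, i.e. correlate positively with y.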
Self-Organized Supercriticality and Oscillations in Networks of Stochastic Spiking Neurons
NASA Astrophysics Data System (ADS)
Costa, Ariadne; Brochini, Ludmila; Kinouchi, Osame
2017-08-01
Networks of stochastic spiking neurons are interesting models in the area of theoretical neuroscience, presenting both continuous and discontinuous phase transitions. Here we study fully connected networks analytically, numerically and by computational simulations. The neurons have dynamic gains that enable the network to converge to a stationary slightly supercritical state (self-organized supercriticality or SOSC) in the presence of the continuous transition. We show that SOSC, which presents power laws for neuronal avalanches plus some large events, is robust as a function of the main parameter of the neuronal gain dynamics. We discuss the possible applications of the idea of SOSC to biological phenomena like epilepsy and dragon-king avalanches. We also find that neuronal gains can produce collective oscillations that coexist with neuronal avalanches, with frequencies compatible with characteristic brain rhythms.
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
A non-linear dimension reduction methodology for generating data-driven stochastic input models
NASA Astrophysics Data System (ADS)
Ganapathysubramanian, Baskar; Zabaras, Nicholas
2008-06-01
Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space Rn. An isometric mapping F from M to a low-dimensional, compact, connected set A⊂Rd(d≪n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F:M→A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. 
We showcase the methodology by constructing low-dimensional input stochastic models to represent thermal diffusivity in two-phase microstructures. This model is used in analyzing the effect of topological variations of two-phase microstructures on the evolution of temperature in heat conduction processes.
Galindo-Murillo, Rodrigo; Roe, Daniel R; Cheatham, Thomas E
2015-05-01
The structure and dynamics of DNA are critically related to its function. Molecular dynamics simulations augment experiment by providing detailed information about the atomic motions. However, to date the simulations have not been long enough for convergence of the dynamics and structural properties of DNA. Molecular dynamics simulations performed with AMBER using the ff99SB force field with the parmbsc0 modifications, including ensembles of independent simulations, were compared to long timescale molecular dynamics performed with the specialized Anton MD engine on the B-DNA structure d(GCACGAACGAACGAACGC). To assess convergence, the decay of the average RMSD values over longer and longer time intervals was evaluated in addition to assessing convergence of the dynamics via the Kullback-Leibler divergence of principal component projection histograms. These molecular dynamics simulations-including one of the longest simulations of DNA published to date at ~44μs-surprisingly suggest that the structure and dynamics of the DNA helix, neglecting the terminal base pairs, are essentially fully converged on the ~1-5μs timescale. We can now reproducibly converge the structure and dynamics of B-DNA helices, omitting the terminal base pairs, on the μs time scale with both the AMBER and CHARMM C36 nucleic acid force fields. Results from independent ensembles of simulations starting from different initial conditions, when aggregated, match the results from long timescale simulations on the specialized Anton MD engine. With access to large-scale GPU resources or the specialized MD engine "Anton" it is possible for a variety of molecular systems to reproducibly and reliably converge the conformational ensemble of sampled structures. This article is part of a Special Issue entitled: Recent developments of molecular dynamics. Copyright © 2014. Published by Elsevier B.V.
Galindo-Murillo, Rodrigo; Roe, Daniel R.; Cheatham, Thomas E.
2014-01-01
Background The structure and dynamics of DNA are critically related to its function. Molecular dynamics (MD) simulations augment experiment by providing detailed information about the atomic motions. However, to date the simulations have not been long enough for convergence of the dynamics and structural properties of DNA. Methods MD simulations performed with AMBER using the ff99SB force field with the parmbsc0 modifications, including ensembles of independent simulations, were compared to long timescale MD performed with the specialized Anton MD engine on the B-DNA structure d(GCACGAACGAACGAACGC). To assess convergence, the decay of the average RMSD values over longer and longer time intervals was evaluated in addition to assessing convergence of the dynamics via the Kullback-Leibler divergence of principal component projection histograms. Results These MD simulations —including one of the longest simulations of DNA published to date at ~44 μs—surprisingly suggest that the structure and dynamics of the DNA helix, neglecting the terminal base pairs, are essentially fully converged on the ~1–5 μs timescale. Conclusions We can now reproducibly converge the structure and dynamics of B-DNA helices, omitting the terminal base pairs, on the μs time scale with both the AMBER and CHARMM C36 nucleic acid force fields. Results from independent ensembles of simulations starting from different initial conditions, when aggregated, match the results from long timescale simulations on the specialized Anton MD engine. General Significance With access to large-scale GPU resources or the specialized MD engine “Anton” it is possible for a variety of molecular systems to reproducibly and reliably converge the conformational ensemble of sampled structures. PMID:25219455
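The Kullback-Leibler comparison of projection histograms used above to assess convergence can be sketched as follows (the binning and the small smoothing constant are assumptions; in practice the samples would be principal-component projections from independent trajectories):

```python
import numpy as np

def kl_divergence_hist(samples_p, samples_q, bins=30, eps=1e-10):
    """Kullback-Leibler divergence D(p || q) between histograms of two
    sample sets binned on a common range. A small eps avoids log(0) in
    empty bins."""
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    p, _ = np.histogram(samples_p, bins=bins, range=(lo, hi))
    q, _ = np.histogram(samples_q, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(1)
# Two samples from the same distribution vs. a shifted one.
same = kl_divergence_hist(rng.normal(0, 1, 5000), rng.normal(0, 1, 5000))
diff = kl_divergence_hist(rng.normal(0, 1, 5000), rng.normal(2, 1, 5000))
```

A converged pair of trajectories yields a divergence near zero, while distinct conformational ensembles give a much larger value.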
Phase transition to a two-peak phase in an information-cascade voting experiment
NASA Astrophysics Data System (ADS)
Mori, Shintaro; Hisakado, Masato; Takahashi, Taiki
2012-08-01
Observational learning is an important information aggregation mechanism. However, it occasionally leads to a state in which an entire population chooses a suboptimal option. When this occurs and whether it is a phase transition remain unanswered. To address these questions we perform a voting experiment in which subjects answer a two-choice quiz sequentially with and without information about the prior subjects’ choices. The subjects who could copy others are called herders. We obtain a microscopic rule regarding how herders copy others. Varying the ratio of herders leads to qualitative changes in the macroscopic behavior of about 50 subjects in the experiment. If the ratio is small, the sequence of choices rapidly converges to the correct one. As the ratio approaches 100%, convergence becomes extremely slow and information aggregation almost terminates. A simulation study of a stochastic model for 10^6 subjects based on the herder’s microscopic rule shows a phase transition to the two-peak phase, where the convergence completely terminates as the ratio exceeds some critical value.
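A toy version of the voting dynamics can be simulated as follows, with an assumed microscopic rule in which herders simply copy the majority of earlier choices (the paper's measured rule, the quiz setting, and all parameters here are illustrative):

```python
import random

def cascade_experiment(n_subjects=2000, herder_ratio=0.6, p_correct=0.6,
                       n_runs=200, seed=0):
    """Sequential two-choice voting: independents answer correctly with
    probability p_correct; herders copy the majority of earlier choices
    (ties broken at random). Returns the mean final fraction of correct
    answers over n_runs independent sequences."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        correct = 0
        for t in range(n_subjects):
            if t > 0 and rng.random() < herder_ratio:
                # herder: follow the majority of the t previous choices
                choice = (correct * 2 > t) or (correct * 2 == t
                                               and rng.random() < 0.5)
            else:
                # independent: correct with probability p_correct
                choice = rng.random() < p_correct
            correct += choice
        total += correct / n_subjects
    return total / n_runs

# With few herders, independents drive the majority toward the truth.
low = cascade_experiment(herder_ratio=0.2)
```

For a small herder ratio the mean fraction correct settles above the independents' accuracy, since herders amplify the (mostly correct) majority.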
Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan
2016-12-28
The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, accurate and efficient sampling of the time evolution of such hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of H(t), obtained as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm that is based on piecewise constant approximation is also presented.
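The core inversion step, solving H(T) = -log(U) on a piecewise-linear fit of the log-survival function, can be sketched as below (the knot placement and the constant-rate test case are assumptions; in the actual algorithm the knots track the numerical solution of the coupled ODEs):

```python
import math
import random

def sample_dwell_time(ts, Hs, u):
    """Invert a piecewise-linear log-survival function H(t) at the random
    threshold -log(u): the dwell time T solves H(T) = -log(u).
    ts/Hs are the interpolation knots, with H nondecreasing and H[0] = 0."""
    target = -math.log(u)
    for k in range(1, len(ts)):
        if Hs[k] >= target:
            # linear interpolation inside the bracketing segment
            frac = (target - Hs[k - 1]) / (Hs[k] - Hs[k - 1])
            return ts[k - 1] + frac * (ts[k] - ts[k - 1])
    return ts[-1]  # threshold beyond the tabulated range (rare)

# Sanity check: constant rate lambda = 2 gives H(t) = 2 t, so the dwell
# times should be Exp(2) with mean 1/2.
ts = [i * 0.01 for i in range(1001)]
Hs = [2.0 * t for t in ts]
rng = random.Random(0)
draws = [sample_dwell_time(ts, Hs, 1.0 - rng.random()) for _ in range(20000)]
mean = sum(draws) / len(draws)
```

The sample mean of the draws should be close to the exact value 0.5 for this constant-rate test case.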
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Gunzburger, Max
2017-06-01
Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.
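The conditional value-at-risk measure underlying the optimization above can be sketched on a sample of losses (the tail-discretization convention is an assumption; the measure is simply the mean of the worst (1 - alpha) fraction of outcomes):

```python
import numpy as np

def cvar(samples, alpha=0.95):
    """Sample conditional value-at-risk at level alpha: the mean of the
    worst (1 - alpha) fraction of losses (higher loss = worse outcome)."""
    losses = np.sort(np.asarray(samples, dtype=float))
    k = int(np.ceil(alpha * len(losses)))
    if k >= len(losses):
        return float(losses[-1])
    return float(losses[k:].mean())

losses = np.arange(1.0, 101.0)   # losses 1, 2, ..., 100
```

For this uniform sample, CVaR at alpha = 0.95 averages the five worst losses 96..100, giving 98, strictly above the plain mean of 50.5; minimizing CVaR thus penalizes tail outcomes that the mean ignores.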
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Z. W., E-mail: zhuzhiwen@tju.edu.cn; Tianjin Key Laboratory of Non-linear Dynamics and Chaos Control, 300072, Tianjin; Zhang, W. D., E-mail: zhangwenditju@126.com
2014-03-15
The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential items were introduced to interpret the hysteretic phenomena of the GMF, and the non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The condition of stochastic Hopf bifurcation and noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed using the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases when the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were confirmed by experiments. The results are helpful in the engineering applications of GMF.
NASA Astrophysics Data System (ADS)
Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro
2016-04-01
We introduce a new class of stochastic processes in
A variational method for analyzing limit cycle oscillations in stochastic hybrid systems
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.; MacLaurin, James
2018-06-01
Many systems in biology can be modeled by ordinary differential equations that are piecewise continuous and switch between different states according to a Markov jump process; such systems are known as stochastic hybrid systems or piecewise deterministic Markov processes (PDMPs). In the fast switching limit, the dynamics converges to a deterministic ODE. In this paper, we develop a phase reduction method for stochastic hybrid systems that support a stable limit cycle in the deterministic limit. A classic example is the Morris-Lecar model of a neuron, where the switching Markov process is the number of open ion channels and the continuous process is the membrane voltage. We outline a variational principle for the phase reduction, yielding an exact analytic expression for the resulting phase dynamics. We demonstrate that this decomposition is accurate over timescales that are exponential in the switching rate ɛ^(-1). That is, we show that for a constant C, the probability that the expected time to leave an O(a) neighborhood of the limit cycle is less than T scales as T exp(-Ca/ɛ).
NASA Astrophysics Data System (ADS)
Liu, Jian; Ruan, Xiaoe
2017-07-01
This paper develops two kinds of derivative-type networked iterative learning control (NILC) schemes for repetitive discrete-time systems with stochastic communication delays occurring in the input and output channels, modelled as 0-1 Bernoulli-type stochastic variables. In both schemes, the delayed signal of the current control input is replaced by the synchronous input utilised at the previous iteration; for the delayed signal of the system output, one scheme substitutes the synchronous predetermined desired trajectory, while the other uses the synchronous output from the previous operation. By means of the mathematical expectation, the tracking performance is analysed, showing that for both linear time-invariant and nonlinear affine systems the two kinds of NILC are convergent under the assumptions that the probabilities of communication delays are adequately constrained and that the product of the input-output coupling matrices has full column rank. Finally, two illustrative examples are presented to demonstrate the effectiveness and validity of the proposed NILC schemes.
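A minimal delay-free sketch of iterative learning control on a scalar system, illustrating the trial-to-trial error contraction that underlies such schemes (the gains and system parameters are assumptions chosen to satisfy the usual contraction condition, and the stochastic communication delays of the paper are omitted):

```python
def ilc_p_type(a=0.2, b=1.0, gain=0.6, horizon=20, iterations=30):
    """P-type iterative learning control on the scalar discrete-time
    system x(t+1) = a*x(t) + b*u(t): after each trial the input is
    corrected by the tracking error, u_{k+1}(t) = u_k(t) + gain*e_k(t+1).
    Converges when |1 - gain*b| < 1. Returns the max tracking error per
    iteration."""
    desired = [1.0] * (horizon + 1)          # step reference
    u = [0.0] * horizon
    max_errors = []
    for _ in range(iterations):
        x = [0.0] * (horizon + 1)            # same initial state each trial
        for t in range(horizon):
            x[t + 1] = a * x[t] + b * u[t]
        e = [desired[t] - x[t] for t in range(horizon + 1)]
        max_errors.append(max(abs(v) for v in e[1:]))
        u = [u[t] + gain * e[t + 1] for t in range(horizon)]
    return max_errors

errs = ilc_p_type()
```

With |1 - gain*b| = 0.4 the error contracts from trial to trial, dropping from 1 on the first trial to a negligible level after 30 iterations.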
Experimental and numerical analysis of convergent nozzles
NASA Astrophysics Data System (ADS)
Srinivas, G.; Rakham, Bhupal
2017-05-01
This paper focuses on the convergent nozzle, for which both experimental and numerical calculations were carried out with the support of the standard literature. In recent years, the performance of both air-breathing and non-air-breathing engines has increased significantly. The nozzle is one of the components that plays a vital role in enhancing the performance of both engine types; in particular, selecting the type of nozzle depends on the vehicle speed requirement and the aerodynamic behavior, which are of primary importance in the field of propulsion. The experimental analysis of convergent nozzle flow was performed using a scaled apparatus, and a similar setup was reproduced in the ANSYS software for flow analysis across the convergent nozzle. Consistency checks based on a survey of the published literature were used to validate the experimental and numerical simulation results. Combining the experimental and numerical simulation approaches yields best-fit results that meet the design requirements. A comparison was also made to establish the reliability of the work with respect to the design criteria of convergent nozzles, which can be applied in the field of propulsion.
Stochasticity in materials structure, properties, and processing—A review
NASA Astrophysics Data System (ADS)
Hull, Robert; Keblinski, Pawel; Lewis, Dan; Maniatty, Antoinette; Meunier, Vincent; Oberai, Assad A.; Picu, Catalin R.; Samuel, Johnson; Shephard, Mark S.; Tomozawa, Minoru; Vashishth, Deepak; Zhang, Shengbai
2018-03-01
We review the concept of stochasticity—i.e., unpredictable or uncontrolled fluctuations in structure, chemistry, or kinetic processes—in materials. We first define six broad classes of stochasticity: equilibrium (thermodynamic) fluctuations; structural/compositional fluctuations; kinetic fluctuations; frustration and degeneracy; imprecision in measurements; and stochasticity in modeling and simulation. In this review, we focus on the first four classes that are inherent to materials phenomena. We next develop a mathematical framework for describing materials stochasticity and then show how it can be broadly applied to these four materials-related stochastic classes. In subsequent sections, we describe structural and compositional fluctuations at small length scales that modify material properties and behavior at larger length scales; systems with engineered fluctuations, concentrating primarily on composite materials; systems in which stochasticity is developed through nucleation and kinetic phenomena; and configurations in which constraints in a given system prevent it from attaining its ground state and cause it to attain several, equally likely (degenerate) states. We next describe how stochasticity in these processes results in variations in physical properties and how these variations are then accentuated by—or amplify—stochasticity in processing and manufacturing procedures. In summary, the origins of materials stochasticity, the degree to which it can be predicted and/or controlled, and the possibility of using stochastic descriptions of materials structure, properties, and processing as a new degree of freedom in materials design are described.
Efficiency and large deviations in time-asymmetric stochastic heat engines
Gingrich, Todd R.; Rotskoff, Grant M.; Vaikuntanathan, Suriyanarayanan; ...
2014-10-24
In a stochastic heat engine driven by a cyclic non-equilibrium protocol, fluctuations in work and heat give rise to a fluctuating efficiency. Using computer simulations and tools from large deviation theory, we have examined these fluctuations in detail for a model two-state engine. We find in general that the form of efficiency probability distributions is similar to those described by Verley et al (2014 Nat. Commun. 5 4721), in particular featuring a local minimum in the long-time limit. In contrast to the time-symmetric engine protocols studied previously, however, this minimum need not occur at the value characteristic of a reversible Carnot engine. Furthermore, while the local minimum may reside at the global minimum of a large deviation rate function, it does not generally correspond to the least likely efficiency measured over finite time. Lastly, we introduce a general approximation for the finite-time efficiency distribution, P(η), based on large deviation statistics of work and heat, that remains very accurate even when P(η) deviates significantly from its large deviation form.
Distributed Time Synchronization Algorithms and Opinion Dynamics
NASA Astrophysics Data System (ADS)
Manita, Anatoly; Manita, Larisa
2018-01-01
We propose new deterministic and stochastic models for synchronization of clocks in nodes of distributed networks. An external accurate time server is used to ensure convergence of the node clocks to the exact time. These systems have much in common with mathematical models of opinion formation in multiagent systems. There is a direct analogy between the time server/node clocks pair in asynchronous networks and the leader/follower pair in the context of social network models.
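A toy version of the server-anchored consensus dynamics described above (the update weights, random-neighbour topology, and step counts are assumptions for illustration):

```python
import random

def synchronize_clocks(offsets, server_time=0.0, alpha=0.3, beta=0.2,
                       steps=200, seed=0):
    """Leader/follower clock synchronization sketch: at each step every
    node averages toward a randomly chosen neighbour (weight alpha) and
    toward the exact time server (weight beta), so all node clocks
    contract toward server_time."""
    rng = random.Random(seed)
    x = list(offsets)
    n = len(x)
    for _ in range(steps):
        for i in range(n):
            j = rng.randrange(n)
            x[i] += alpha * (x[j] - x[i]) + beta * (server_time - x[i])
    return x

final = synchronize_clocks([5.0, -3.0, 1.5, 7.2])
```

Because each update is a convex-style contraction toward the server time, the maximum clock offset shrinks geometrically and all nodes converge to the exact time.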
Active stability augmentation of large space structures: A stochastic control problem
NASA Technical Reports Server (NTRS)
Balakrishnan, A. V.
1987-01-01
A problem in SCOLE is that of slewing an offset antenna on a long flexible beam-like truss attached to the space shuttle, with rather stringent pointing accuracy requirements. The relevant methodology aspects of robust feedback-control design for stability augmentation of the beam using on-board sensors are examined. The problem is framed as a stochastic control problem: boundary control of a distributed parameter system described by partial differential equations. While the framework is mathematical, the emphasis is still on an engineering solution. An abstract mathematical formulation is developed as a nonlinear wave equation in a Hilbert space. The system is shown to be controllable, and a feedback control law is developed that is robust in the sense that it does not require quantitative knowledge of system parameters. The stochastic control problem that arises in instrumenting this law using appropriate sensors is treated. Using an engineering first approximation valid for small damping, formulas for the optimal choice of the control gain are developed.
A stochastic evolutionary model generating a mixture of exponential distributions
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Levene, Mark; Loizou, George
2016-02-01
Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
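Evaluating the exponential-mixture survival function, and comparing its tail with a single exponential of the same mean, can be sketched as follows (the weights and rates are illustrative, not fitted to the search-query data):

```python
import math

def mixture_survival(t, weights, rates):
    """Survival function of a mixture of exponentials:
    S(t) = sum_k w_k * exp(-lambda_k * t), with weights summing to 1."""
    return sum(w * math.exp(-lam * t) for w, lam in zip(weights, rates))

weights, rates = [0.5, 0.5], [0.2, 2.0]
mean = sum(w / lam for w, lam in zip(weights, rates))     # mixture mean
tail_mix = mixture_survival(10.0, weights, rates)
tail_single = math.exp(-10.0 / mean)   # single exponential, same mean
```

The slow component (rate 0.2) dominates at large t, so the mixture has a heavier tail than a single exponential with the same mean; this is the heterogeneity the mixture model captures.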
An experimental analysis on OSPF-TE convergence time
NASA Astrophysics Data System (ADS)
Huang, S.; Kitayama, K.; Cugini, F.; Paolucci, F.; Giorgetti, A.; Valcarenghi, L.; Castoldi, P.
2008-11-01
Open shortest path first (OSPF) protocol is commonly used as an interior gateway protocol (IGP) in MPLS and generalized MPLS (GMPLS) networks to determine the topology over which label-switched paths (LSPs) can be established. Traffic-engineering extensions (network states such as link bandwidth information, available wavelengths, signal quality, etc) have been recently enabled in OSPF (henceforth, called OSPF-TE) to support shortest path first (SPF) tree calculation upon different purposes, thus possibly achieving optimal path computation and helping improve resource utilization efficiency. Adding these features into routing phase can exploit the OSPF robustness, and no additional network component is required to manage the traffic-engineering information. However, this traffic-engineering enhancement also complicates OSPF behavior. Since network states change frequently upon the dynamic trafficengineered LSP setup and release, the network is easily driven from a stable state to unstable operating regimes. In this paper, we focus on studying the OSPF-TE stability in terms of convergence time. Convergence time is referred to the time spent by the network to go back to steady states upon any network state change. An external observation method (based on black-box method) is employed to estimate the convergence time. Several experimental test-beds are developed to emulate dynamic LSP setup/release, re-routing upon single-link failure. The experimental results show that with OSPF-TE the network requires more time to converge compared to the conventional OSPF protocol without TE extension. Especially, in case of wavelength-routed optical network (WRON), introducing per wavelength availability and wavelength continuity constraint to OSPF-TE suffers severe convergence time and a large number of advertised link state advertisements (LSAs). 
Our study implies that long convergence times and the large number of LSAs flooded in the network might cause scalability problems for OSPF-TE and impose limitations on its applications. New solutions to shorten the convergence time and to reduce the amount of state information are desired.
Noise can speed convergence in Markov chains.
Franzke, Brandon; Kosko, Bart
2011-10-01
A new theorem shows that noise can speed convergence to equilibrium in discrete finite-state Markov chains. The noise applies to the state density and helps the Markov chain explore improbable regions of the state space. The theorem ensures that a stochastic-resonance noise benefit exists for states that obey a vector-norm inequality. Such noise leads to faster convergence because the noise reduces the norm components. A corollary shows that a noise benefit still occurs if the system states obey an alternate norm inequality. This leads to a noise-benefit algorithm that requires knowledge of the steady state. An alternative blind algorithm uses only past state information to achieve a weaker noise benefit. Simulations illustrate the predicted noise benefits in three well-known Markov models. The first model is a two-parameter Ehrenfest diffusion model that shows how noise benefits can occur in the class of birth-death processes. The second model is a Wright-Fisher model of genotype drift in population genetics. The third model is a chemical reaction network of zeolite crystallization. A fourth simulation shows a convergence rate increase of 64% for states that satisfy the theorem and an increase of 53% for states that satisfy the corollary. A final simulation shows that even suboptimal noise can speed convergence if the noise applies over successive time cycles. Noise benefits tend to be sharpest in Markov models that do not converge quickly and that do not have strong absorbing states.
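The theorem itself is not reproduced here, but the experimental setup is easy to sketch: iterate the state density under the transition matrix, optionally perturbing it with zero-mean noise, and count the steps to equilibrium. The chain, tolerance, and noise level below are illustrative choices, not taken from the paper.

```python
import numpy as np

def convergence_time(P, p0, p_star, tol=1e-6, noise=0.0, rng=None, max_iter=10_000):
    """Iterate p_{k+1} = p_k P, optionally perturbing the state density
    with zero-mean noise (then renormalizing), and return the number of
    steps until ||p_k - p_star||_1 < tol."""
    rng = rng or np.random.default_rng(0)
    p = p0.copy()
    for k in range(max_iter):
        if np.abs(p - p_star).sum() < tol:
            return k
        p = p @ P
        if noise > 0:  # the knob the noise-benefit experiments turn
            p = np.abs(p + noise * rng.standard_normal(p.size))
            p /= p.sum()
    return max_iter

# Two-state chain with known stationary distribution.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p_star = np.array([2 / 3, 1 / 3])          # solves p P = p
t = convergence_time(P, np.array([1.0, 0.0]), p_star)
print(t)
```

Sweeping the `noise` argument over successive runs is the kind of experiment the paper's simulations perform on larger birth-death, Wright-Fisher, and reaction-network models.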
NASA Astrophysics Data System (ADS)
Buyuk, Ersin; Karaman, Abdullah
2017-04-01
We estimated transmissivity and storage coefficient values from single-well water-level measurements positioned ahead of the mining face by using the particle swarm optimization (PSO) technique. The water-level response to the advancing mining face involves a semi-analytical function that is not suitable for conventional inversion schemes because its partial derivatives are difficult to calculate. Moreover, the logarithmic behaviour of the model makes it difficult to obtain an initial model that leads to stable convergence. Optimization methods are used to find optimum conditions, consisting of either the minimum or maximum of a given objective function with regard to some criteria. Unlike PSO, traditional non-linear optimization methods have been used for many hydrogeologic and geophysical engineering problems; these methods exhibit difficulties such as dependence on the initial model, evaluation of the partial derivatives required to linearize the model, and trapping at local optima. Recently, PSO has become a prominent global optimization method; it is inspired by the social behaviour of bird swarms and appears to be a reliable and powerful algorithm for complex engineering applications. Because PSO does not depend on an initial model and is a derivative-free stochastic process, it is capable of searching all possible solutions in the model space around both local and global optimum points. In our application, PSO obtained a reliable solution that produces a reasonable fit between the water-level data and the model function response.
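The abstract does not specify the PSO variant used; a minimal inertia-weight sketch (illustrative parameters, with a toy quadratic misfit standing in for the water-level objective) shows why no derivatives or starting model are required:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: velocities mix inertia,
    attraction to each particle's best point, and attraction to the
    swarm's best point. Only objective evaluations are needed."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy misfit: recover two hypothetical "aquifer parameters" from a quadratic bowl.
truth = np.array([3.0, -1.5])
misfit = lambda p: np.sum((p - truth) ** 2)
best, best_f = pso(misfit, (np.full(2, -10.0), np.full(2, 10.0)))
print(best, best_f)
```

The real inversion replaces `misfit` with the data-model residual of the semi-analytical water-level function; the algorithm is unchanged.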
On the use of reverse Brownian motion to accelerate hybrid simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakarji, Joseph; Tartakovsky, Daniel M., E-mail: tartakovsky@stanford.edu
Multiscale and multiphysics simulations are two rapidly developing fields of scientific computing. Efficient coupling of continuum (deterministic or stochastic) constitutive solvers with their discrete (stochastic, particle-based) counterparts is a common challenge in both kinds of simulations. We focus on interfacial, tightly coupled simulations of diffusion that combine continuum and particle-based solvers. The latter employs the reverse Brownian motion (rBm), a Monte Carlo approach that allows one to enforce inhomogeneous Dirichlet, Neumann, or Robin boundary conditions and is trivially parallelizable. We discuss numerical approaches for improving the accuracy of rBm in the presence of inhomogeneous Neumann boundary conditions and alternative strategies for coupling the rBm solver with its continuum counterpart. Numerical experiments are used to investigate the convergence, stability, and computational efficiency of the proposed hybrid algorithm.
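The paper's rBm solver is not reproduced here, but the core idea of enforcing Dirichlet data through Brownian path statistics can be illustrated in one dimension: the solution of u'' = 0 on [0,1] with u(0) = 0, u(1) = 1 equals the probability that a Brownian path started at x exits through the right endpoint. The step size and walker count below are arbitrary illustrative choices.

```python
import numpy as np

def hit_probability(x0, dt=1e-3, n_walkers=1000, seed=4):
    """Monte Carlo estimate of u(x0) for u'' = 0 on [0, 1] with
    u(0) = 0 and u(1) = 1: the fraction of Brownian walkers started
    at x0 that exit the interval at the right endpoint."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_walkers):
        x = x0
        while 0.0 < x < 1.0:
            x += np.sqrt(dt) * rng.standard_normal()  # Brownian increment
        hits += x >= 1.0
    return hits / n_walkers

p = hit_probability(0.3)
print(p)   # analytic solution is u(0.3) = 0.3
```

Each walker is independent, which is what makes this style of boundary-condition enforcement trivially parallelizable, as the abstract notes.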
NASA Astrophysics Data System (ADS)
Gu, Zhou; Fei, Shumin; Yue, Dong; Tian, Engang
2014-07-01
This paper deals with the problem of H∞ filtering for discrete-time systems with stochastic missing measurements. A new missing measurement model is developed by decomposing the interval of the missing rate into several segments. The probability of the missing rate in each subsegment is governed by its corresponding random variables. We aim to design a linear full-order filter such that the estimation error converges to zero exponentially in the mean square with a less conservatism while the disturbance rejection attenuation is constrained to a given level by means of an H∞ performance index. Based on Lyapunov theory, the reliable filter parameters are characterised in terms of the feasibility of a set of linear matrix inequalities. Finally, a numerical example is provided to demonstrate the effectiveness and applicability of the proposed design approach.
Gradient-based stochastic estimation of the density matrix
NASA Astrophysics Data System (ADS)
Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton
2018-03-01
Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)ij decay rapidly with distance rij between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales as S^{-(d+2)/(2d)}, where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.
A homotopy analysis method for the nonlinear partial differential equations arising in engineering
NASA Astrophysics Data System (ADS)
Hariharan, G.
2017-05-01
In this article, we apply the homotopy analysis method (HAM) to solve several nonlinear partial differential equations arising in engineering. This technique provides solutions as rapidly convergent series with computable terms for problems with a high degree of nonlinearity in the governing differential equations. The convergence analysis of the proposed method is also discussed. Finally, we give some illustrative examples to demonstrate the validity and applicability of the proposed method.
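For reference, the construction at the heart of HAM is Liao's zeroth-order deformation equation (standard notation from the HAM literature; the article's specific operators are not reproduced here):

```latex
(1-q)\,\mathcal{L}\bigl[\phi(x;q)-u_{0}(x)\bigr]
  \;=\; q\,\hbar\,H(x)\,\mathcal{N}\bigl[\phi(x;q)\bigr],
  \qquad q\in[0,1],
```

where $\mathcal{L}$ is an auxiliary linear operator, $\mathcal{N}$ is the nonlinear operator of the governing equation, $u_0$ is an initial guess, $H(x)$ is an auxiliary function, and $\hbar$ is the convergence-control parameter. Expanding $\phi(x;q)=u_0(x)+\sum_{m\ge 1}u_m(x)\,q^m$ and setting $q=1$ yields the series solution; the freedom in choosing $\hbar$ is what lets HAM control the region and rate of convergence.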
Stochastic detection of enantiomers.
Kang, Xiao-Feng; Cheley, Stephen; Guan, Xiyun; Bayley, Hagan
2006-08-23
The rapid quantification of the enantiomers of small chiral molecules is very important, notably in pharmacology. Here, we show that the enantiomers of drug molecules can be distinguished by stochastic sensing, a single-molecule detection technique. The sensing element is an engineered alpha-hemolysin protein pore, fitted with a beta-cyclodextrin adapter. By using the approach, the enantiomeric composition of samples of ibuprofen and thalidomide can be determined in less than 1 s.
Laamiri, Imen; Khouaja, Anis; Messaoud, Hassani
2015-03-01
In this paper we provide a convergence analysis of the alternating RGLS (Recursive Generalized Least Squares) algorithm used for the identification of the reduced-complexity Volterra model describing stochastic non-linear systems. The reduced Volterra model used is the third-order SVD-PARAFAC-Volterra model, obtained using the Singular Value Decomposition (SVD) and the Parallel Factor (PARAFAC) tensor decomposition of the quadratic and cubic kernels, respectively, of the classical Volterra model. The alternating RGLS (ARGLS) algorithm consists of executing the classical RGLS algorithm in an alternating way. The ARGLS convergence was proved using the Ordinary Differential Equation (ODE) method. It is noted that convergence cannot be ensured when the disturbance acting on the system to be identified has certain specific features. The ARGLS algorithm is tested in simulations on a numerical example satisfying the determined convergence conditions. To assess the merits of the proposed algorithm, we compare it with the classical Alternating Recursive Least Squares (ARLS) algorithm presented in the literature. The comparison is carried out on a non-linear satellite channel and a benchmark CSTR (Continuous Stirred Tank Reactor) system. Moreover, the efficiency of the proposed identification approach is demonstrated on an experimental Communicating Two-Tank System (CTTS). Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Breakdown of the reaction-diffusion master equation with nonelementary rates
NASA Astrophysics Data System (ADS)
Smith, Stephen; Grima, Ramon
2016-05-01
The chemical master equation (CME) is the exact mathematical formulation of chemical reactions occurring in a dilute and well-mixed volume. The reaction-diffusion master equation (RDME) is a stochastic description of reaction-diffusion processes on a spatial lattice, assuming well mixing only on the length scale of the lattice. It is clear that, for the sake of consistency, the solution of the RDME of a chemical system should converge to the solution of the CME of the same system in the limit of fast diffusion: Indeed, this has been tacitly assumed in most literature concerning the RDME. We show that, in the limit of fast diffusion, the RDME indeed converges to a master equation but not necessarily the CME. We introduce a class of propensity functions, such that if the RDME has propensities exclusively of this class, then the RDME converges to the CME of the same system, whereas if the RDME has propensities not in this class, then convergence is not guaranteed. These are revealed to be elementary and nonelementary propensities, respectively. We also show that independent of the type of propensity, the RDME converges to the CME in the simultaneous limit of fast diffusion and large volumes. We illustrate our results with some simple example systems and argue that the RDME cannot generally be an accurate description of systems with nonelementary rates.
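As a concrete example of an elementary-propensity system whose CME description the RDME recovers in the fast-diffusion limit, here is a minimal Gillespie (SSA) simulation of the birth-death process 0 → X, X → 0; the rates and sample counts are illustrative choices:

```python
import numpy as np

def gillespie_birth_death(k_birth, k_death, n0, t_end, rng):
    """Exact SSA for 0 -> X (propensity k_birth) and X -> 0
    (propensity k_death * n): both propensities are elementary."""
    t, n = 0.0, n0
    while True:
        a1, a2 = k_birth, k_death * n
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)        # time to next reaction
        if t > t_end:
            return n
        n += 1 if rng.random() < a1 / a0 else -1  # pick which reaction fired

rng = np.random.default_rng(1)
samples = [gillespie_birth_death(10.0, 1.0, 0, 10.0, rng) for _ in range(1000)]
m = np.mean(samples)
print(m)   # CME steady-state mean is k_birth / k_death = 10
```

A nonelementary rate, such as a Michaelis-Menten propensity, would replace `a1` or `a2` with a nonlinear function of `n`; the paper's point is that the RDME with such propensities need not converge to the corresponding CME under fast diffusion alone.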
Numerical methods for stochastic differential equations
NASA Astrophysics Data System (ADS)
Kloeden, Peter; Platen, Eckhard
1991-06-01
The numerical analysis of stochastic differential equations differs significantly from that of ordinary differential equations due to the peculiarities of stochastic calculus. This book provides an introduction to stochastic calculus and stochastic differential equations, in both theory and applications. The main emphasis is placed on the numerical methods needed to solve such equations. It assumes an undergraduate background in the mathematical methods typical of engineers and physicists, though many chapters begin with a descriptive summary which may be accessible to others who only require numerical recipes. To help the reader develop an intuitive understanding of the underlying mathematics and hands-on numerical skills, exercises and over 100 PC exercises (PC: personal computer) are included. The stochastic Taylor expansion provides the key tool for the systematic derivation and investigation of discrete-time numerical methods for stochastic differential equations. The book presents many new results on higher-order methods for strong sample-path approximations and for weak functional approximations, including implicit, predictor-corrector, extrapolation and variance-reduction methods. Besides serving as a basic text on such methods, the book offers the reader ready access to a large number of potential research problems in a field that is just beginning to expand rapidly and is widely applicable.
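The simplest of the discrete-time schemes derived from the stochastic Taylor expansion is the Euler-Maruyama method; the sketch below checks its weak behaviour on geometric Brownian motion (parameters are illustrative, not from the book):

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, t_end, n_steps, rng):
    """Euler-Maruyama scheme for dX = mu(X) dt + sigma(X) dW:
    one Gaussian increment dW ~ N(0, dt) per time step."""
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x = x + mu(x) * dt + sigma(x) * dw
    return x

# Geometric Brownian motion dX = a X dt + b X dW has E[X_T] = x0 * exp(a T).
a, b, x0, T = 0.05, 0.2, 1.0, 1.0
rng = np.random.default_rng(2)
paths = [euler_maruyama(lambda x: a * x, lambda x: b * x, x0, T, 400, rng)
         for _ in range(2000)]
m = np.mean(paths)
print(m)   # ≈ exp(0.05) ≈ 1.051
```

Strong (pathwise) accuracy of this scheme is only order 0.5; the higher-order, implicit, and predictor-corrector methods the book develops improve on exactly this.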
DOT National Transportation Integrated Search
2016-01-01
Managers and engineers at the Federal Highway Administration (FHWA) and State Departments of Transportation (DOTs) indicate that they need researchers, employees, consultants, and regulators who understand the unique challenges involved in managing a...
Computational Methods for Predictive Simulation of Stochastic Turbulence Systems
2015-11-05
Science and Engineering, Venice, Italy, May 18-20, 2015, pp. 1261-1272. [21] Yong Li and P.D. Williams, Analysis of the RAW Filter in Composite-Tendency...leapfrog scheme, Proceedings of the VI Conference on Computational Methods for Coupled Problems in Science and Engineering, Venice, Italy, May 18-20
On the impact of a refined stochastic model for airborne LiDAR measurements
NASA Astrophysics Data System (ADS)
Bolkas, Dimitrios; Fotopoulos, Georgia; Glennie, Craig
2016-09-01
Accurate topographic information is critical for a number of applications in science and engineering. In recent years, airborne light detection and ranging (LiDAR) has become a standard tool for acquiring high quality topographic information. The assessment of airborne LiDAR derived DEMs is typically based on (i) independent ground control points and (ii) forward error propagation utilizing the LiDAR geo-referencing equation. The latter approach is dependent on the stochastic model information of the LiDAR observation components. In this paper, the well-known statistical tool of variance component estimation (VCE) is implemented for a dataset in Houston, Texas, in order to refine the initial stochastic information. Simulations demonstrate the impact of stochastic-model refinement for two practical applications, namely coastal inundation mapping and surface displacement estimation. Results highlight scenarios where erroneous stochastic information is detrimental. Furthermore, the refined stochastic information provides insights on the effect of each LiDAR measurement in the airborne LiDAR error budget. The latter is important for targeting future advancements in order to improve point cloud accuracy.
Kernel-Based Approximate Dynamic Programming Using Bellman Residual Elimination
2010-02-01
framework is the ability to utilize stochastic system models, thereby allowing the system to make sound decisions even if there is randomness in the system ...approximate policy when a system model is unavailable. We present theoretical analysis of all BRE algorithms proving convergence to the optimal policy in...policies based on MDPs is that there may be parameters of the system model that are poorly known and/or vary with time as the system operates. System
Bustamante, Carlos D.; Valero-Cuevas, Francisco J.
2010-01-01
The field of complex biomechanical modeling has begun to rely on Monte Carlo techniques to investigate the effects of parameter variability and measurement uncertainty on model outputs, search for optimal parameter combinations, and define model limitations. However, advanced stochastic methods to perform data-driven explorations, such as Markov chain Monte Carlo (MCMC), become necessary as the number of model parameters increases. Here, we demonstrate the feasibility of, and to our knowledge the first use of, an MCMC approach to improve the fitness of realistically large biomechanical models. We used a Metropolis–Hastings algorithm to search increasingly complex parameter landscapes (3, 8, 24, and 36 dimensions) to uncover underlying distributions of anatomical parameters of a “truth model” of the human thumb on the basis of simulated kinematic data (thumbnail location, orientation, and linear and angular velocities) polluted by zero-mean, uncorrelated multivariate Gaussian “measurement noise.” Driven by these data, ten Markov chains searched each model parameter space for the subspace that best fit the data (posterior distribution). As expected, the convergence time increased, more local minima were found, and marginal distributions broadened as the parameter space complexity increased. In the 36-D scenario, some chains found local minima but the majority of chains converged to the true posterior distribution (confirmed using a cross-validation dataset), thus demonstrating the feasibility and utility of these methods for realistically large biomechanical problems. PMID:19272906
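The thumb model itself cannot be reconstructed from the abstract, but the sampler is standard: a random-walk Metropolis–Hastings sketch on a toy Gaussian "measurement noise" posterior (dimensions, step size, and noise level are illustrative):

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples, step, rng):
    """Random-walk Metropolis: propose x' = x + step * N(0, I) and
    accept with probability min(1, post(x') / post(x))."""
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_samples, x.size))
    lp = log_post(x)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy posterior: Gaussian likelihood centred on hypothetical "true" parameters,
# standing in for the kinematic-data misfit of the thumb model.
truth = np.array([1.0, -2.0])
log_post = lambda p: -0.5 * np.sum((p - truth) ** 2) / 0.1 ** 2
rng = np.random.default_rng(3)
chain = metropolis_hastings(log_post, np.zeros(2), 20_000, 0.1, rng)
post_mean = chain[5000:].mean(axis=0)   # discard burn-in
print(post_mean)
```

The paper's 36-dimensional searches use the same accept/reject core; only the posterior (driven by simulated kinematic data) and the number of chains differ.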
Stochastic Models for Laser Propagation in Atmospheric Turbulence.
NASA Astrophysics Data System (ADS)
Leland, Robert Patton
In this dissertation, stochastic models for laser propagation in atmospheric turbulence are considered. A review of the existing literature on laser propagation in the atmosphere and white noise theory is presented, with a view toward relating the white noise integral and Ito integral approaches. The laser beam intensity is considered as the solution to a random Schroedinger equation, or forward scattering equation. This model is formulated in a Hilbert space context as an abstract bilinear system with a multiplicative white noise input, as in the literature. The model is also modeled in the Banach space of Fresnel class functions to allow the plane wave case and the application of path integrals. Approximate solutions to the Schroedinger equation of the Trotter-Kato product form are shown to converge for each white noise sample path. The product forms are shown to be physical random variables, allowing an Ito integral representation. The corresponding Ito integrals are shown to converge in mean square, providing a white noise basis for the Stratonovich correction term associated with this equation. Product form solutions for Ornstein-Uhlenbeck process inputs were shown to converge in mean square as the input bandwidth was expanded. A digital simulation of laser propagation in strong turbulence was used to study properties of the beam. Empirical distributions for the irradiance function were estimated from simulated data, and the log-normal and Rice-Nakagami distributions predicted by the classical perturbation methods were seen to be inadequate. A gamma distribution fit the simulated irradiance distribution well in the vicinity of the boresight. Statistics of the beam were seen to converge rapidly as the bandwidth of an Ornstein-Uhlenbeck process was expanded to its white noise limit. Individual trajectories of the beam were presented to illustrate the distortion and bending of the beam due to turbulence. 
Feynman path integrals were used to calculate an approximate expression for the mean of the beam intensity without using the Markov, or white noise, assumption, and to relate local variations in the turbulence field to the behavior of the beam by means of two approximations.
Research Needs and Impacts in Predictive Simulation for Internal Combustion Engines (PreSICE)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eckerle, Wayne; Rutland, Chris; Rohlfing, Eric
This report is based on a SC/EERE Workshop to Identify Research Needs and Impacts in Predictive Simulation for Internal Combustion Engines (PreSICE), held March 3, 2011, to determine strategic focus areas that will accelerate innovation in engine design to meet national goals in transportation efficiency. The U.S. has reached a pivotal moment when pressures of energy security, climate change, and economic competitiveness converge. Oil prices remain volatile and have exceeded $100 per barrel twice in five years. At these prices, the U.S. spends $1 billion per day on imported oil to meet our energy demands. Because the transportation sector accounts for two-thirds of our petroleum use, energy security is deeply entangled with our transportation needs. At the same time, transportation produces one-quarter of the nation’s carbon dioxide output. Increasing the efficiency of internal combustion engines is a technologically proven and cost-effective approach to dramatically improving the fuel economy of the nation’s fleet of vehicles in the near- to mid-term, with the corresponding benefits of reducing our dependence on foreign oil and reducing carbon emissions. Because of their relatively low cost, high performance, and ability to utilize renewable fuels, internal combustion engines—including those in hybrid vehicles—will continue to be critical to our transportation infrastructure for decades. Achievable advances in engine technology can improve the fuel economy of automobiles by over 50% and trucks by over 30%. Achieving these goals will require the transportation sector to compress its product development cycle for cleaner, more efficient engine technologies by 50% while simultaneously exploring innovative design space. Concurrently, fuels will also be evolving, adding another layer of complexity and further highlighting the need for efficient product development cycles. Current design processes, using “build and test” prototype engineering, will not suffice. 
Current market penetration of new engine technologies is simply too slow—it must be dramatically accelerated. These challenges present a unique opportunity to marshal U.S. leadership in science-based simulation to develop predictive computational design tools for use by the transportation industry. The use of predictive simulation tools for enhancing combustion engine performance will shrink engine development timescales, accelerate time to market, and reduce development costs, while ensuring the timely achievement of energy security and emissions targets and enhancing U.S. industrial competitiveness. In 2007 Cummins achieved a milestone in engine design by bringing a diesel engine to market solely with computer modeling and analysis tools. The only testing was after the fact to confirm performance. Cummins achieved a reduction in development time and cost. As important, they realized a more robust design, improved fuel economy, and met all environmental and customer constraints. This important first step demonstrates the potential for computational engine design. But, the daunting complexity of engine combustion and the revolutionary increases in efficiency needed require the development of simulation codes and computation platforms far more advanced than those available today. Based on these needs, a Workshop to Identify Research Needs and Impacts in Predictive Simulation for Internal Combustion Engines (PreSICE) convened over 60 U.S. leaders in the engine combustion field from industry, academia, and national laboratories to focus on two critical areas of advanced simulation, as identified by the U.S. automotive and engine industries. First, modern engines require precise control of the injection of a broad variety of fuels that is far more subtle than achievable to date and that can be obtained only through predictive modeling and simulation. 
Second, the simulation, understanding, and control of these stochastic in-cylinder combustion processes lie on the critical path to realizing more efficient engines with greater power density. Fuel sprays set the initial conditions for combustion in essentially all future transportation engines; yet today designers primarily use empirical methods that limit the efficiency achievable. Three primary spray topics were identified as focus areas in the workshop: The fuel delivery system, which includes fuel manifolds and internal injector flow, The multi-phase fuel–air mixing in the combustion chamber of the engine, and The heat transfer and fluid interactions with cylinder walls. Current understanding and modeling capability of stochastic processes in engines remains limited and prevents designers from achieving significantly higher fuel economy. To improve this situation, the workshop participants identified three focus areas for stochastic processes: Improve fundamental understanding that will help to establish and characterize the physical causes of stochastic events, Develop physics-based simulation models that are accurate and sensitive enough to capture performance-limiting variability, and Quantify and manage uncertainty in model parameters and boundary conditions. Improved models and understanding in these areas will allow designers to develop engines with reduced design margins and that operate reliably in more efficient regimes. All of these areas require improved basic understanding, high-fidelity model development, and rigorous model validation. These advances will greatly reduce the uncertainties in current models and improve understanding of sprays and fuel–air mixture preparation that limit the investigation and development of advanced combustion technologies. The two strategic focus areas have distinctive characteristics but are inherently coupled. 
Coordinated activities in basic experiments, fundamental simulations, and engineering-level model development and validation can be used to successfully address all of the topics identified in the PreSICE workshop. The outcome will be: New and deeper understanding of the relevant fundamental physical and chemical processes in advanced combustion technologies, Implementation of this understanding into models and simulation tools appropriate for both exploration and design, and Sufficient validation with uncertainty quantification to provide confidence in the simulation results. These outcomes will provide the design tools for industry to reduce development time by up to 30% and improve engine efficiencies by 30% to 50%. The improved efficiencies applied to the national mix of transportation applications have the potential to save over 5 million barrels of oil per day, a current cost savings of $500 million per day.
Vadose zone flow convergence test suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butcher, B. T.
Performance Assessment (PA) simulations for engineered disposal systems at the Savannah River Site involve highly contrasting materials and moisture conditions at and near saturation. These conditions cause severe convergence difficulties that typically result in unacceptable convergence, long simulation times, or excessive analyst effort. Adequate convergence is usually achieved in a trial-and-error manner by applying under-relaxation to the Saturation or Pressure variable, with a series of ever-decreasing relaxation values. SRNL would like a more efficient scheme implemented inside PORFLOW to achieve flow convergence in a more reliable and efficient manner. To this end, a suite of test problems that illustrate these convergence problems is provided to facilitate diagnosis and development of an improved convergence strategy. The attached files describe the test problems and the proposed resolution.
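The under-relaxation trick described above amounts to damping a fixed-point iteration. A minimal sketch (toy scalar map and illustrative relaxation factor, not PORFLOW's actual scheme) shows how a small relaxation factor restores convergence when the undamped iteration diverges:

```python
def relaxed_fixed_point(g, x0, omega, tol=1e-10, max_iter=10_000):
    """Damped Picard iteration x_{k+1} = (1 - omega) x_k + omega g(x_k).
    Smaller omega trades speed for robustness, mimicking the manual
    under-relaxation applied to the Saturation/Pressure variable."""
    x = x0
    for k in range(max_iter):
        x_new = (1.0 - omega) * x + omega * g(x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# Toy map with fixed point x = 2 and slope -1.5 there: the plain
# iteration (omega = 1) has error factor -1.5 and diverges, while
# omega = 0.3 gives error factor 1 - 2.5 * 0.3 = 0.25 and converges.
g = lambda x: x - 2.5 * (x - 2.0)
x, iters = relaxed_fixed_point(g, 0.0, 0.3)
print(x, iters)
```

The ever-decreasing relaxation values the analysts apply by hand correspond to lowering `omega` until the error factor falls below one; an automated strategy would adapt `omega` from the observed residual history.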
Emergence of dynamic cooperativity in the stochastic kinetics of fluctuating enzymes
NASA Astrophysics Data System (ADS)
Kumar, Ashutosh; Chatterjee, Sambarta; Nandi, Mintu; Dua, Arti
2016-08-01
Dynamic co-operativity in monomeric enzymes is characterized in terms of a non-Michaelis-Menten kinetic behaviour. The latter is believed to be associated with mechanisms that include multiple reaction pathways due to enzymatic conformational fluctuations. Recent advances in single-molecule fluorescence spectroscopy have provided new fundamental insights on the possible mechanisms underlying reactions catalyzed by fluctuating enzymes. Here, we present a bottom-up approach to understand enzyme turnover kinetics at physiologically relevant mesoscopic concentrations informed by mechanisms extracted from single-molecule stochastic trajectories. The stochastic approach, presented here, shows the emergence of dynamic co-operativity in terms of a slowing down of the Michaelis-Menten (MM) kinetics resulting in negative co-operativity. For fewer enzymes, dynamic co-operativity emerges due to the combined effects of enzymatic conformational fluctuations and molecular discreteness. The increase in the number of enzymes, however, suppresses the effect of enzymatic conformational fluctuations such that dynamic co-operativity emerges solely due to the discrete changes in the number of reacting species. These results confirm that the turnover kinetics of fluctuating enzyme based on the parallel-pathway MM mechanism switches over to the single-pathway MM mechanism with the increase in the number of enzymes. For large enzyme numbers, convergence to the exact MM equation occurs in the limit of very high substrate concentration as the stochastic kinetics approaches the deterministic behaviour.
Thermodynamics: A Stirling effort
NASA Astrophysics Data System (ADS)
Horowitz, Jordan M.; Parrondo, Juan M. R.
2012-02-01
The realization of a single-particle Stirling engine pushes thermodynamics into stochastic territory where fluctuations dominate, and points towards a better understanding of energy transduction at the microscale.
The Human Side of Information's Converging Technology.
ERIC Educational Resources Information Center
Williams, Berney
1982-01-01
Discusses current issues in the design of information systems, noting contributions from three professions--computer science, human factors engineering, and information science. The eclectic nature of human factors engineering and the difficulty of drawing together studies with human engineering or software psychological components from diverse…
A non-linear dimension reduction methodology for generating data-driven stochastic input models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapathysubramanian, Baskar; Zabaras, Nicholas
Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A is contained in R^d (d <
Bio-inspired computational heuristics to study Lane-Emden systems arising in astrophysics model.
Ahmad, Iftikhar; Raja, Muhammad Asif Zahoor; Bilal, Muhammad; Ashraf, Farooq
2016-01-01
This study reports novel hybrid computational methods for the solution of the nonlinear singular Lane-Emden type differential equations arising in astrophysics models, exploiting the strength of unsupervised neural network models and stochastic optimization techniques. In the scheme, the neural network, a sub-field of the larger area called soft computing, is exploited for modelling of the equation in an unsupervised manner. The proposed approximate solutions of the higher order ordinary differential equation are calculated with the weights of neural networks trained with a genetic algorithm, and with pattern search hybridized with sequential quadratic programming for rapid local convergence. The results of the proposed solvers for the nonlinear singular systems are in good agreement with the standard solutions. Accuracy and convergence of the design schemes are demonstrated by the results of statistical performance measures based on a sufficiently large number of independent runs.
Asynchronous Gossip for Averaging and Spectral Ranking
NASA Astrophysics Data System (ADS)
Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh
2014-08-01
We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
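The classical pairwise gossip-averaging step that this work takes as its baseline (not the authors' reinforcement-learning variant) can be sketched in a few lines; node values and the round count below are illustrative:

```python
import random

def gossip_average(values, n_rounds, rng=random.Random(1)):
    """Classical pairwise gossip: at each step two random nodes replace
    their values with the pair average; all values converge to the
    global mean of the initial values."""
    x = list(values)
    n = len(x)
    for _ in range(n_rounds):
        i, j = rng.sample(range(n), 2)   # pick a random pair of nodes
        m = 0.5 * (x[i] + x[j])
        x[i] = x[j] = m                  # both adopt the pair average
    return x

vals = [0.0, 10.0, 4.0, 6.0]             # global mean is 5.0
out = gossip_average(vals, 2000)
```

The per-step average is sum-preserving (up to floating-point rounding), which is exactly the invariant the asynchronous variant can lose, motivating the reinforcement-learning correction described in the abstract.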
Parameter identification using a creeping-random-search algorithm
NASA Technical Reports Server (NTRS)
Parrish, R. V.
1971-01-01
A creeping-random-search algorithm is applied to different types of problems in the field of parameter identification. The studies are intended to demonstrate that a random-search algorithm can be applied successfully to these various problems, which often cannot be handled by conventional deterministic methods, and, also, to introduce methods that speed convergence to an extremal of the problem under investigation. Six two-parameter identification problems with analytic solutions are solved, and two application problems are discussed in some detail. Results of the study show that a modified version of the basic creeping-random-search algorithm chosen does speed convergence in comparison with the unmodified version. The results also show that the algorithm can successfully solve problems that contain limits on state or control variables, inequality constraints (both independent and dependent, and linear and nonlinear), or stochastic models.
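The basic creeping-random-search loop can be sketched as follows; the objective, step size, and iteration budget are illustrative stand-ins for the paper's parameter-identification problems:

```python
import random

def creeping_random_search(f, x0, sigma, n_iter, rng=random.Random(2)):
    """Creeping (local) random search: perturb the current best point
    with small Gaussian steps, keeping a trial only if it improves
    the objective."""
    best_x, best_f = list(x0), f(x0)
    for _ in range(n_iter):
        trial = [xi + rng.gauss(0.0, sigma) for xi in best_x]
        ft = f(trial)
        if ft < best_f:                  # accept only improvements
            best_x, best_f = trial, ft
    return best_x, best_f

# toy two-parameter identification: recover (3, -1) by least squares
target = (3.0, -1.0)
f = lambda p: (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2
x, fx = creeping_random_search(f, [0.0, 0.0], sigma=0.3, n_iter=5000)
```

Because acceptance requires only an objective comparison, the same loop tolerates constraints or stochastic models (reject infeasible trials, average noisy evaluations), which is the flexibility the study highlights.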
ODECS -- A computer code for the optimal design of S.I. engine control strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arsie, I.; Pianese, C.; Rizzo, G.
1996-09-01
The computer code ODECS (Optimal Design of Engine Control Strategies) for the design of spark ignition engine control strategies is presented. This code has been developed starting from the authors' activity in this field, drawing on original contributions concerning engine stochastic optimization and dynamical models. The code has a modular structure and is composed of a user interface for the definition, execution and analysis of different computations performed with 4 independent modules. These modules allow the following calculations: (1) definition of the engine mathematical model from steady-state experimental data; (2) engine cycle test trajectory corresponding to a vehicle transient simulation test such as the ECE15 or FTP drive test schedule; (3) evaluation of the optimal engine control maps with a steady-state approach; (4) engine dynamic cycle simulation and optimization of static control maps and/or dynamic compensation strategies, taking into account dynamical effects due to the unsteady fluxes of air and fuel and the influence of combustion chamber wall thermal inertia on fuel consumption and emissions. Moreover, in the last two modules it is possible to account for errors generated by non-deterministic behavior of sensors and actuators and the related influence on global engine performance, and to compute robust strategies that are less sensitive to stochastic effects. In the paper the four modules are described together with significant results corresponding to the simulation and the calculation of optimal control strategies for dynamic transient tests.
ERIC Educational Resources Information Center
Villano, Matt
2008-01-01
In this article, the author offers six best practices for physical and data security convergence. These are: (1) assess the cable plant; (2) choose wisely; (3) be patient; (4) engineer for high availability; (5) test the converged network to make sure it works; and (6) don't forget the humans.
Large Deviations and Transitions Between Equilibria for Stochastic Landau-Lifshitz-Gilbert Equation
NASA Astrophysics Data System (ADS)
Brzeźniak, Zdzisław; Goldys, Ben; Jegaraj, Terence
2017-11-01
We study a stochastic Landau-Lifshitz equation on a bounded interval and with finite-dimensional noise. We first show that there exists a pathwise unique solution to this equation and that this solution enjoys the maximal regularity property. Next, we prove the large deviations principle for the small-noise asymptotic of solutions using the weak convergence method. An essential ingredient of the proof is the compactness, or weak-to-strong continuity, of the solution map for a deterministic Landau-Lifshitz equation when considered as a transformation of external fields. We then apply this large deviations principle to show that small noise can cause magnetisation reversal. We also show the importance of the shape anisotropy parameter for reducing the disturbance of the solution caused by small noise. The problem is motivated by applications, from ferromagnetic nanowires to the fabrication of magnetic memories.
NASA Technical Reports Server (NTRS)
Mehra, R. K.; Rouhani, R.; Jones, S.; Schick, I.
1980-01-01
A model to assess the value of improved information regarding worldwide crop inventories, production, exports, and imports is discussed. A previously proposed model is interpreted in a stochastic control setting and the underlying assumptions of the model are revealed. In solving the stochastic optimization problem, the Markov programming approach is much more powerful and exact than the dynamic programming-simulation approach of the original model. The convergence of a dual-variable Markov programming algorithm is shown to be fast and efficient. A computer program for the general multicountry, multiperiod model is developed. As an example, the case of one country and two periods is treated and the results are presented in detail. A comparison with the original model results reveals certain interesting aspects of the algorithms and the dependence of the value of information on the incremental cost function.
Stochastic derivative-free optimization using a trust region framework
Larson, Jeffrey; Billups, Stephen C.
2016-02-17
This study presents a trust region algorithm to minimize a function f when one has access only to noise-corrupted function values f¯. The model-based algorithm dynamically adjusts its step length, taking larger steps when the model and function agree and smaller steps when the model is less accurate. The method does not require the user to specify a fixed pattern of points used to build local models and does not repeatedly sample points. If f is sufficiently smooth and the noise is independent and identically distributed with mean zero and finite variance, we prove that our algorithm produces iterates such that the corresponding function gradients converge in probability to zero. Additionally, we present a prototype of our algorithm that, while simplistic in its management of previously evaluated points, solves benchmark problems in fewer function evaluations than do existing stochastic approximation methods.
Nonperturbative renormalization group study of the stochastic Navier-Stokes equation.
Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo
2012-07-01
We study the renormalization group flow of the average action of the stochastic Navier-Stokes equation with power-law forcing. Using Galilean invariance, we introduce a nonperturbative approximation adapted to the zero-frequency sector of the theory in the parametric range of the Hölder exponent 4-2ε of the forcing where real-space local interactions are relevant. In any spatial dimension d, we observe the convergence of the resulting renormalization group flow to a unique fixed point which yields a kinetic energy spectrum scaling in agreement with canonical dimension analysis. Kolmogorov's -5/3 law is, thus, recovered for ε = 2 as also predicted by perturbative renormalization. At variance with the perturbative prediction, the -5/3 law emerges in the presence of a saturation in the ε dependence of the scaling dimension of the eddy diffusivity at ε = 3/2 when, according to perturbative renormalization, the velocity field becomes infrared relevant.
Extreme fluctuations in stochastic network coordination with time delays
NASA Astrophysics Data System (ADS)
Hunt, D.; Molnár, F.; Szymanski, B. K.; Korniss, G.
2015-12-01
We study the effects of uniform time delays on the extreme fluctuations in stochastic synchronization and coordination problems with linear couplings in complex networks. We obtain the average size of the fluctuations at the nodes from the behavior of the underlying modes of the network. We then obtain the scaling behavior of the extreme fluctuations with system size, as well as the distribution of the extremes on complex networks, and compare them to those on regular one-dimensional lattices. For large complex networks, when the delay is not too close to the critical one, fluctuations at the nodes effectively decouple, and the limit distributions converge to the Fisher-Tippett-Gumbel density. In contrast, fluctuations in low-dimensional spatial graphs are strongly correlated, and the limit distribution of the extremes is the Airy density. Finally, we also explore the effects of nonlinear couplings on the stability and on the extremes of the synchronization landscapes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller, Florian, E-mail: florian.mueller@sam.math.ethz.ch; Jenny, Patrick, E-mail: jenny@ifd.mavt.ethz.ch; Meyer, Daniel W., E-mail: meyerda@ethz.ch
2013-10-01
Monte Carlo (MC) is a well-known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
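A generic multilevel Monte Carlo estimator can be sketched as below, applied here to a toy geometric-Brownian-motion problem rather than the streamline-based subsurface solver of the study. A production MLMC couples the fine and coarse paths through shared Brownian increments to obtain its variance reduction; this short sketch uses independent draws, which keeps the telescoping sum unbiased but forfeits the speedup:

```python
import random

def euler_gbm(level, rng):
    """Euler-Maruyama path of dS = 0.05*S dt + 0.2*S dW on [0, 1] with
    2**(level+1) time steps; finer levels cost more but are less biased."""
    n = 2 ** (level + 1)
    dt = 1.0 / n
    s = 1.0
    for _ in range(n):
        s += 0.05 * s * dt + 0.2 * s * rng.gauss(0.0, dt ** 0.5)
    return s

def mlmc_estimate(sampler, n_samples, rng=random.Random(3)):
    """Multilevel Monte Carlo: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    with many cheap coarse samples and few expensive fine ones."""
    est = 0.0
    for level, n in enumerate(n_samples):
        s = 0.0
        for _ in range(n):
            fine = sampler(level, rng)
            coarse = sampler(level - 1, rng) if level > 0 else 0.0
            s += fine - coarse
        est += s / n          # Monte Carlo mean of the level correction
    return est

# decreasing sample counts on the finer (more expensive) levels
est = mlmc_estimate(euler_gbm, n_samples=[400, 200, 100])
# exact mean of this GBM at time 1 is exp(0.05), about 1.051
```

The cost saving comes from shifting most samples to coarse, cheap levels while the fine-level corrections have small variance once paths are properly coupled.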
Wang, Jun-Sheng; Yang, Guang-Hong
2017-07-25
This paper studies the optimal output-feedback control problem for unknown linear discrete-time systems with stochastic measurement and process noise. A dithered Bellman equation with the innovation covariance matrix is constructed via the expectation operator given in the form of a finite summation. On this basis, an output-feedback-based approximate dynamic programming method is developed, where the terms depending on the innovation covariance matrix are available with the aid of the innovation covariance matrix identified beforehand. Therefore, by iterating the Bellman equation, the resulting value function can converge to the optimal one in the presence of the aforementioned noise, and the nearly optimal control laws are delivered. To show the effectiveness and the advantages of the proposed approach, a simulation example and a velocity control experiment on a dc machine are employed.
A finite-state, finite-memory minimum principle, part 2
NASA Technical Reports Server (NTRS)
Sandell, N. R., Jr.; Athans, M.
1975-01-01
In part 1 of this paper, a minimum principle was found for the finite-state, finite-memory (FSFM) stochastic control problem. In part 2, conditions for the sufficiency of the minimum principle are stated in terms of the informational properties of the problem. This is accomplished by introducing the notion of a signaling strategy. Then a min-H algorithm based on the FSFM minimum principle is presented. This algorithm converges, after a finite number of steps, to a person-by-person extremal solution.
2010-05-07
important for deep modular systems is that taking a series of small update steps and stopping before convergence, so-called early stopping, is a form of regularization around the initial parameters of the system; stochastic gradient descent is one example. Aside from the overall speed of the classifier, no quantitative performance analysis was given, and the role played by the features in the larger system
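The early-stopping regularization mentioned in this fragment can be sketched with a toy 1-D least-squares fit; the learning rate, patience, and data below are illustrative, not drawn from the report:

```python
import random

def sgd_early_stopping(w0, train, val, lr, patience, max_steps,
                       rng=random.Random(7)):
    """SGD with early stopping: take small stochastic steps and halt once
    the validation loss stops improving, which regularizes the parameter
    toward its initialization w0."""
    w, best_w, best_val, bad = w0, w0, float("inf"), 0
    for _ in range(max_steps):
        x, y = train[rng.randrange(len(train))]   # one stochastic sample
        w -= lr * 2.0 * (w * x - y) * x           # gradient of (w*x - y)**2
        vl = sum((w * xv - yv) ** 2 for xv, yv in val) / len(val)
        if vl < best_val - 1e-9:
            best_w, best_val, bad = w, vl, 0      # new best on validation
        else:
            bad += 1
            if bad >= patience:
                break                             # stop before convergence
    return best_w

gen = random.Random(8)
train = [(x, 2.0 * x + gen.gauss(0.0, 0.1)) for x in (-2.0, -1.0, 0.5, 1.0, 2.0)]
val = [(x, 2.0 * x) for x in (-1.5, 0.0, 1.5)]   # clean held-out targets
w = sgd_early_stopping(0.0, train, val, lr=0.05, patience=20, max_steps=2000)
```

Halting on validation stagnation keeps the parameter near the region the optimization has actually explored, rather than letting it drift to fit the training noise.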
NASA Astrophysics Data System (ADS)
Ma, Shuo; Kang, Yanmei
2018-04-01
In this paper, the exponential synchronization of stochastic neutral-type neural networks with time-varying delay and Lévy noise under non-Lipschitz condition is investigated for the first time. Using the general Itô's formula and the nonnegative semi-martingale convergence theorem, we derive general sufficient conditions of two kinds of exponential synchronization for the drive system and the response system with adaptive control. Numerical examples are presented to verify the effectiveness of the proposed criteria.
Si, Wenjie; Dong, Xunde; Yang, Feifei
2018-03-01
This paper is concerned with the problem of decentralized adaptive backstepping state-feedback control for uncertain high-order large-scale stochastic nonlinear time-delay systems. For the control design of high-order large-scale nonlinear systems, only one adaptive parameter is constructed to overcome over-parameterization, and neural networks are employed to cope with the difficulties raised by completely unknown system dynamics and stochastic disturbances. Then, an appropriate Lyapunov-Krasovskii functional and the property of hyperbolic tangent functions are used to deal with the unknown unmatched time-delay interactions of high-order large-scale systems for the first time. Finally, on the basis of Lyapunov stability theory, a decentralized adaptive neural controller is developed that decreases the number of learning parameters. The actual controller can be designed so as to ensure that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded (SGUUB) and the tracking error converges to a small neighborhood of zero. A simulation example is used to further show the validity of the design method. Copyright © 2018 Elsevier Ltd. All rights reserved.
Switching neuronal state: optimal stimuli revealed using a stochastically-seeded gradient algorithm.
Chang, Joshua; Paydarfar, David
2014-12-01
Inducing a switch in neuronal state using energy optimal stimuli is relevant to a variety of problems in neuroscience. Analytical techniques from optimal control theory can identify such stimuli; however, solutions to the optimization problem using indirect variational approaches can be elusive in models that describe neuronal behavior. Here we develop and apply a direct gradient-based optimization algorithm to find stimulus waveforms that elicit a change in neuronal state while minimizing energy usage. We analyze standard models of neuronal behavior, the Hodgkin-Huxley and FitzHugh-Nagumo models, to show that the gradient-based algorithm: (1) enables automated exploration of a wide solution space, using stochastically generated initial waveforms that converge to multiple locally optimal solutions; and (2) finds optimal stimulus waveforms that achieve a physiological outcome condition, without a priori knowledge of the optimal terminal condition of all state variables. Analysis of biological systems using stochastically-seeded gradient methods can reveal salient dynamical mechanisms underlying the optimal control of system behavior. The gradient algorithm may also have practical applications in future work, for example, finding energy optimal waveforms for therapeutic neural stimulation that minimizes power usage and diminishes off-target effects and damage to neighboring tissue.
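A stochastically seeded, gradient-based search of the kind described can be sketched as a multi-start gradient descent; the Hodgkin-Huxley and FitzHugh-Nagumo dynamics are replaced here by a toy double-well objective, so the code illustrates only the seeding-and-convergence pattern, not the neuronal models:

```python
import random

def grad_descent(grad, x0, lr, n_iter):
    """Plain gradient descent from one initial point."""
    x = list(x0)
    for _ in range(n_iter):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

def multistart(f, grad, dim, n_starts, rng=random.Random(6)):
    """Stochastically seeded optimization: random initial guesses converge
    to (possibly different) locally optimal solutions; return the best."""
    results = []
    for _ in range(n_starts):
        x0 = [rng.uniform(-2.0, 2.0) for _ in range(dim)]  # random seed
        x = grad_descent(grad, x0, lr=0.05, n_iter=500)
        results.append((f(x), x))
    return min(results)   # best local optimum found across seeds

# toy multimodal objective: f(x) = (x^2 - 1)^2 has minima at x = +/-1
f = lambda x: (x[0] ** 2 - 1.0) ** 2
grad = lambda x: [4.0 * x[0] * (x[0] ** 2 - 1.0)]
best_f, best_x = multistart(f, grad, dim=1, n_starts=8)
```

Collecting all local optima, rather than only the best, is what lets this style of search map out the multiple locally optimal stimulus waveforms the abstract describes.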
Optimization of contrast resolution by genetic algorithm in ultrasound tissue harmonic imaging.
Ménigot, Sébastien; Girault, Jean-Marc
2016-09-01
The development of ultrasound imaging techniques such as pulse inversion has improved tissue harmonic imaging. Nevertheless, no recommendation has been made to date for the design of the waveform transmitted through the medium being explored. Our aim was therefore to find automatically the optimal "imaging" wave which maximized the contrast resolution without a priori information. To avoid assumptions regarding the waveform, a genetic algorithm investigated the medium through the transmission of stochastic "explorer" waves. Moreover, these stochastic signals could be constrained by the type of generator available (bipolar or arbitrary). To implement the method, we modified the current pulse inversion imaging system by including feedback. The method thus optimized the contrast resolution by adaptively selecting the samples of the excitation. In simulation, we benchmarked the contrast effectiveness of the best transmitted stochastic commands found against the usual fixed-frequency command. The optimization method converged quickly, after around 300 iterations, in the same optimal area. These results were confirmed experimentally. In the experimental case, the contrast resolution measured on a radiofrequency line could be improved by 6% with a bipolar generator, and by a further 15% with an arbitrary waveform generator. Copyright © 2016 Elsevier B.V. All rights reserved.
Intimate Partner Violence: A Stochastic Model.
Guidi, Elisa; Meringolo, Patrizia; Guazzini, Andrea; Bagnoli, Franco
2017-01-01
Intimate partner violence (IPV) has been a well-studied problem in the past psychological literature, especially through classical methodologies such as qualitative, quantitative and mixed methods. This article introduces two basic stochastic models as an alternative approach to simulate the short- and long-term dynamics of a couple at risk of IPV. In both models, the members of the couple may assume a finite number of states, updating them in a probabilistic way at discrete time steps. After defining the transition probabilities, we first analyze the evolution of the couple in isolation, and then we consider the case in which the individuals modify their behavior depending on the perceived violence in other couples in their environment or on the perceived informal social support. While high perceived violence in other couples may push a couple toward its own IPV by means of a gender-specific transmission, the gender differences fade out in the case of received informal social support. Despite the simplicity of the two stochastic models, they generate results which compare well with past experimental studies of IPV, and they yield important practical implications for preventive intervention in this field. Copyright: © 2016 by Fabrizio Serra editore, Pisa · Roma.
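A minimal version of such a stochastic couple model is a two-state Markov chain with probabilistic updates at discrete time steps; the transition probabilities below are illustrative placeholders, not estimates from the article:

```python
import random

def simulate_couple(p_onset, p_cessation, n_steps, rng=random.Random(4)):
    """Two-state Markov chain for a couple in isolation:
    state 0 = no violence, state 1 = violence; probabilistic state
    updates at discrete time steps."""
    state, history = 0, []
    for _ in range(n_steps):
        if state == 0:
            state = 1 if rng.random() < p_onset else 0      # onset
        else:
            state = 0 if rng.random() < p_cessation else 1  # cessation
    # record the state after each step
        history.append(state)
    return history

hist = simulate_couple(p_onset=0.1, p_cessation=0.3, n_steps=10000)
# long-run fraction of time in state 1 tends to p_onset/(p_onset+p_cessation)
frac = sum(hist) / len(hist)
```

Environmental influence (perceived violence in neighbouring couples, informal social support) would enter by making `p_onset` and `p_cessation` depend on the states of other simulated couples, as the article's second model does.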
Theoretical study of a molecular turbine.
Perez-Carrasco, R; Sancho, J M
2013-10-01
We present an analytic and stochastic simulation study of a molecular engine working with a flux of particles as a turbine. We focus on the physical observables of velocity, flux, power, and efficiency. The control parameters are the external conservative force and the particle densities. We revise a simpler previous study by using a more realistic model containing multiple equidistant vanes, complemented by stochastic simulations of the particles and the turbine. Here we show that the effects of thermal fluctuations on the flux and the efficiency of these nanometric devices are relevant at the working scale of the system. The stochastic simulations of the Brownian motion of the particles and turbine support the simplified analytical calculations performed.
Developing stochastic model of thrust and flight dynamics for small UAVs
NASA Astrophysics Data System (ADS)
Tjhai, Chandra
This thesis presents a stochastic thrust model and aerodynamic model for small propeller-driven UAVs whose power plant is a small electric motor. First, a model is developed which relates the thrust generated by a small motor-driven propeller to the throttle setting and commanded engine RPM. A perturbation of this model is then used to relate the uncertainty in the commanded throttle and engine RPM to the error in the predicted thrust. Such a stochastic model is indispensable in the design of state estimation and control systems for UAVs, where the performance requirements of the systems are specified in stochastic terms. It is shown that thrust prediction models for small UAVs are not simple, explicit functions relating throttle input and RPM command to thrust generated. Rather, they are non-linear, iterative procedures which depend on a geometric description of the propeller and a mathematical model of the motor. A detailed derivation of the iterative procedure is presented, and the impact of errors which arise from inaccurate propeller and motor descriptions is discussed. Validation results from a series of wind tunnel tests are presented. The results show a favorable statistical agreement between the thrust uncertainty predicted by the model and the errors measured in the wind tunnel. The uncertainty model of aircraft aerodynamic coefficients, developed on the basis of wind tunnel experiments, is discussed at the end of this thesis.
Delgado, James E.; Wolt, Jeffrey D.
2011-01-01
In this study, we investigate long-term exposure (20 weeks) to fumonisin B1 (FB1) in grower-finisher pigs by conducting a quantitative exposure assessment (QEA). Our analytical approach involved both deterministic and semi-stochastic modeling for dietary comparative analyses of FB1 exposures originating from genetically engineered Bacillus thuringiensis (Bt) corn, conventional non-Bt corn, and distiller's dried grains with solubles (DDGS) derived from Bt and/or non-Bt corn. Results from both the deterministic and semi-stochastic models demonstrated a distinct difference in FB1 toxicity in feed between Bt corn and non-Bt corn. Semi-stochastic results predicted the lowest FB1 exposure for Bt grain, with a mean of 1.5 mg FB1/kg diet, and the highest FB1 exposure for a diet consisting of non-Bt grain and non-Bt DDGS, with a mean of 7.87 mg FB1/kg diet; the chronic toxicological incipient level of concern is 1.0 mg of FB1/kg of diet. Deterministic results closely mirrored, but tended to slightly under-predict, the mean results of the semi-stochastic analysis. This novel comparative QEA model reveals that diet scenarios in which the source of grain is derived from Bt corn present less potential to induce FB1 toxicity than diets containing non-Bt corn. PMID:21909298
NASA Astrophysics Data System (ADS)
Gukelberger, Jan; Kozik, Evgeny; Hafermann, Hartmut
2017-07-01
The dual fermion approach provides a formally exact prescription for calculating properties of a correlated electron system in terms of a diagrammatic expansion around dynamical mean-field theory (DMFT). Most practical implementations, however, neglect higher-order interaction vertices beyond two-particle scattering in the dual effective action and further truncate the diagrammatic expansion in the two-particle scattering vertex to a leading-order or ladder-type approximation. In this work, we compute the dual fermion expansion for the two-dimensional Hubbard model including all diagram topologies with two-particle interactions to high orders by means of a stochastic diagrammatic Monte Carlo algorithm. We benchmark the obtained self-energy against numerically exact diagrammatic determinant Monte Carlo simulations to systematically assess convergence of the dual fermion series and the validity of these approximations. We observe that, from high temperatures down to the vicinity of the DMFT Néel transition, the dual fermion series converges very quickly to the exact solution in the whole range of Hubbard interactions considered (4 ≤ U/t ≤ 12), implying that contributions from higher-order vertices are small. As the temperature is lowered further, we observe slower series convergence, convergence to incorrect solutions, and ultimately divergence. This happens in a regime where magnetic correlations become significant. We find, however, that the self-consistent particle-hole ladder approximation yields reasonable and often even highly accurate results in this regime.
Intrinsic optimization using stochastic nanomagnets
Sutton, Brian; Camsari, Kerem Yunus; Behin-Aein, Behtash; Datta, Supriyo
2017-01-01
This paper draws attention to a hardware system which can be engineered so that its intrinsic physics is described by the generalized Ising model and can encode the solution to many important NP-hard problems as its ground state. The basic constituents are stochastic nanomagnets which switch randomly between the ±1 Ising states and can be monitored continuously with standard electronics. Their mutual interactions can be short or long range, and their strengths can be reconfigured as needed to solve specific problems and to anneal the system at room temperature. The natural laws of statistical mechanics guide the network of stochastic nanomagnets at GHz speeds through the collective states with an emphasis on the low energy states that represent optimal solutions. As proof-of-concept, we present simulation results for standard NP-complete examples including a 16-city traveling salesman problem using experimentally benchmarked models for spin-transfer torque driven stochastic nanomagnets. PMID:28295053
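The annealing of an Ising network toward low-energy states can be sketched in software with a Metropolis scheme standing in for the physical nanomagnet dynamics; the coupling matrix and temperature schedule below are illustrative, not the experimentally benchmarked spin-transfer-torque models of the paper:

```python
import math
import random

def anneal_ising(J, n_sweeps, t_start, t_end, rng=random.Random(5)):
    """Metropolis annealing of +/-1 Ising spins with symmetric coupling
    matrix J; the energy E = -sum_{i<j} J[i][j]*s_i*s_j is driven toward
    its minimum as the temperature is lowered geometrically."""
    n = len(J)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    for sweep in range(n_sweeps):
        t = t_start * (t_end / t_start) ** (sweep / max(1, n_sweeps - 1))
        for i in range(n):
            field = sum(J[i][j] * s[j] for j in range(n))
            dE = 2.0 * s[i] * field          # energy cost of flipping s_i
            if dE <= 0 or rng.random() < math.exp(-dE / t):
                s[i] = -s[i]
    return s

# toy problem: ferromagnetic ring; the ground state is all spins aligned
n = 8
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    J[i][(i + 1) % n] = J[(i + 1) % n][i] = 1.0
spins = anneal_ising(J, n_sweeps=200, t_start=2.0, t_end=0.05)
energy = -sum(spins[i] * spins[(i + 1) % n] for i in range(n))
```

In the hardware described by the paper, thermally driven nanomagnet fluctuations perform this stochastic exploration natively at GHz rates instead of in a software loop.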
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Z. W., E-mail: zhuzhiwen@tju.edu.cn; Li, X. M., E-mail: lixinmiaotju@163.com; Xu, J., E-mail: xujia-ld@163.com
A kind of magnetic shape memory alloy (MSMA) microgripper is proposed in this paper, and its nonlinear dynamic characteristics are studied when stochastic perturbation is considered. Nonlinear differential items are introduced to explain the hysteretic phenomena of MSMA, and the constitutive relationships among strain, stress, and magnetic field intensity are obtained by the partial least-squares regression method. The nonlinear dynamic model of an MSMA microgripper subjected to in-plane stochastic excitation is developed. The stationary probability density function of the system's response is obtained, the transition sets of the system are determined, and the conditions of stochastic bifurcation are obtained. The homoclinic and heteroclinic orbits of the system are given, and the boundary of the system's safe basin is obtained by the stochastic Melnikov integral method. The numerical and experimental results show that the system's motion depends on its parameters, and stochastic Hopf bifurcation appears in the variation of the parameters; the area of the safe basin decreases with the increase of the stochastic excitation, and the boundary of the safe basin becomes fractal. The results of this paper are helpful for the application of MSMA microgrippers in engineering fields.
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementational challenges remain in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
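The domain decomposition idea can be illustrated in one dimension. The sketch below is the classical alternating Schwarz method for −u″ = f on (0, 1) with two overlapping subdomains, each solved directly; it is a generic textbook illustration under simplifying assumptions, not the authors' intrusive polynomial-chaos solver.

```python
import numpy as np

def solve_poisson_sub(u, f, h, lo, hi):
    """Dirichlet solve of -u'' = f on interior nodes lo+1..hi-1,
    using the current iterate values u[lo], u[hi] as boundary data."""
    m = hi - lo - 1
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[lo + 1:hi].copy()
    b[0] += u[lo] / h**2
    b[-1] += u[hi] / h**2
    u[lo + 1:hi] = np.linalg.solve(A, b)

def schwarz(n=100, overlap=10, iters=50):
    """Alternating Schwarz for -u'' = 1, u(0) = u(1) = 0."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.ones(n + 1)
    u = np.zeros(n + 1)
    mid = n // 2
    for _ in range(iters):
        solve_poisson_sub(u, f, h, 0, mid + overlap)   # left subdomain
        solve_poisson_sub(u, f, h, mid - overlap, n)   # right subdomain
    return x, u
```

The iteration contracts geometrically, with a rate that improves as the overlap grows; for this quadratic exact solution, u = x(1 − x)/2, the discrete solution is nodally exact.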
Bi, Zedong; Zhou, Changsong
2016-01-01
In neural systems, synaptic plasticity is usually driven by spike trains. Due to the inherent noises of neurons and synapses as well as the randomness of connection details, spike trains typically exhibit variability such as spatial randomness and temporal stochasticity, resulting in variability of synaptic changes under plasticity, which we call efficacy variability. How the variability of spike trains influences the efficacy variability of synapses remains unclear. In this paper, we try to understand this influence under pair-wise additive spike-timing dependent plasticity (STDP) when the mean strength of plastic synapses into a neuron is bounded (synaptic homeostasis). Specifically, we systematically study, analytically and numerically, how four aspects of statistical features, i.e., synchronous firing, burstiness/regularity, heterogeneity of rates and heterogeneity of cross-correlations, as well as their interactions influence the efficacy variability in converging motifs (simple networks in which one neuron receives from many other neurons). Neurons (including the post-synaptic neuron) in a converging motif generate spikes according to statistical models with tunable parameters. In this way, we can explicitly control the statistics of the spike patterns, and investigate their influence onto the efficacy variability, without worrying about the feedback from synaptic changes onto the dynamics of the post-synaptic neuron. We separate efficacy variability into two parts: the drift part (DriftV) induced by the heterogeneity of change rates of different synapses, and the diffusion part (DiffV) induced by weight diffusion caused by stochasticity of spike trains. Our main findings are: (1) synchronous firing and burstiness tend to increase DiffV, (2) heterogeneity of rates induces DriftV when potentiation and depression in STDP are not balanced, and (3) heterogeneity of cross-correlations induces DriftV together with heterogeneity of rates. 
We anticipate that our work will be important for understanding functional processes of neuronal networks (such as memory) and neural development. PMID:26941634
A general moment expansion method for stochastic kinetic models
NASA Astrophysics Data System (ADS)
Ale, Angelique; Kirk, Paul; Stumpf, Michael P. H.
2013-05-01
Moment approximation methods are gaining increasing attention for their use in approximating the stochastic kinetics of chemical reaction systems. In this paper we derive a general moment expansion method that applies to any type of propensity function and allows expansion up to any number of moments. For some chemical reaction systems, more than two moments are necessary to describe the dynamic properties of the system, which the linear noise approximation is unable to provide. Moreover, even for systems in which the mean does not depend strongly on higher order moments, moment approximation methods give information about higher order moments of the underlying probability distribution. We demonstrate the method using a dimerisation reaction, Michaelis-Menten kinetics and a model of an oscillating p53 system. We show that for the dimerisation reaction and the Michaelis-Menten enzyme kinetics system, higher order moments have limited influence on the estimation of the mean, while for the p53 system the solution for the mean can require several moments to converge to the average obtained from many stochastic simulations. We also find that agreement between lower order moments does not guarantee that higher moments will agree. Compared to stochastic simulations, our approach is numerically highly efficient at capturing the behaviour of stochastic systems in terms of the average and higher moments, and we provide expressions for the computational cost for different system sizes and orders of approximation. We show how the moment expansion method can be employed to efficiently quantify parameter sensitivity. Finally, we investigate the effects of using too few moments on parameter estimation, and provide guidance on how to estimate whether the distribution can be accurately approximated using only a few moments.
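For the simplest linear system the moment equations close exactly, which makes a compact illustration. The toy sketch below (not the paper's general expansion) integrates the mean/variance ODEs of the birth-death process 0 → X (rate k), X → 0 (rate g·x), namely dm/dt = k − g·m and dv/dt = k + g·m − 2·g·v, whose stationary point is the Poisson value m = v = k/g.

```python
def moment_odes(k, g, m0=0.0, v0=0.0, t_end=20.0, dt=1e-3):
    """Forward-Euler integration of the (exactly closed) mean and
    variance equations of the linear birth-death process."""
    m, v = m0, v0
    for _ in range(int(t_end / dt)):
        dm = k - g * m              # d<x>/dt
        dv = k + g * m - 2 * g * v  # dVar(x)/dt
        m += dt * dm
        v += dt * dv
    return m, v
```

For nonlinear propensities the hierarchy does not close, which is exactly where the moment-closure machinery of the paper is needed.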
Stochastic Models of Human Errors
NASA Technical Reports Server (NTRS)
Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)
2002-01-01
Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human error. Therefore, in order to have a meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process with mathematical models is key to analyzing contributing factors. The objective of this research effort is therefore to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in the processing of the space shuttle.
A PDF projection method: A pressure algorithm for stand-alone transported PDFs
NASA Astrophysics Data System (ADS)
Ghorbani, Asghar; Steinhilber, Gerd; Markus, Detlev; Maas, Ulrich
2015-03-01
In this paper, a new formulation of the projection approach is introduced for stand-alone probability density function (PDF) methods. The method is suitable for applications in low-Mach number transient turbulent reacting flows. The method is based on a fractional step method in which first the advection-diffusion-reaction equations are modelled and solved within a particle-based PDF method to predict an intermediate velocity field. Then the mean velocity field is projected onto a space where the continuity for the mean velocity is satisfied. In this approach, a Poisson equation is solved on the Eulerian grid to obtain the mean pressure field. Then the mean pressure is interpolated at the location of each stochastic Lagrangian particle. The formulation of the Poisson equation avoids the time derivatives of the density (due to convection) as well as second-order spatial derivatives. This in turn eliminates the major sources of instability in the presence of stochastic noise that are inherent in particle-based PDF methods. The convergence of the algorithm (in the non-turbulent case) is investigated first by the method of manufactured solutions. Then the algorithm is applied to a one-dimensional turbulent premixed flame in order to assess the accuracy and convergence of the method in the case of turbulent combustion. As a part of this work, we also apply the algorithm to a more realistic flow, namely a transient turbulent reacting jet, in order to assess the performance of the method.
A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Liu, Shuang; Hu, Xiangyun; Liu, Tianyou
2014-07-01
Simulating the foraging behavior of natural ants, the ant colony optimization (ACO) algorithm performs excellently on combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO is seldom used to invert gravity and magnetic data. On the basis of a continuous, multi-dimensional objective function for potential field data inversion, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes, and ants directionally tour the nodes by use of transition probabilities. We update the pheromone trails by use of a Gaussian mapping between the objective function value and the quantity of pheromone. This enables real-time analysis of the search results and improves the rate of convergence and the precision of the inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method by use of synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
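The node-partition idea can be caricatured as follows: discretize each continuous variable into nodes, let ants pick one node per dimension with pheromone-weighted probabilities, and reinforce nodes that yield low objective values. This is an illustrative sketch only, not the published NP-ACO (in particular, the Gaussian pheromone mapping is replaced by a simple inverse-cost deposit).

```python
import random

def aco_minimize(f, bounds, n_nodes=41, n_ants=20, iters=100,
                 rho=0.1, seed=1):
    """Minimize f over a box by ants choosing one discrete node per
    dimension with probability proportional to its pheromone level."""
    rng = random.Random(seed)
    dims = len(bounds)
    grids = [[lo + (hi - lo) * j / (n_nodes - 1) for j in range(n_nodes)]
             for lo, hi in bounds]
    tau = [[1.0] * n_nodes for _ in range(dims)]   # pheromone per node
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            idx = [rng.choices(range(n_nodes), weights=tau[d])[0]
                   for d in range(dims)]
            x = [grids[d][idx[d]] for d in range(dims)]
            fx = f(x)
            if fx < best_f:
                best_f, best_x = fx, x
            # deposit pheromone inversely proportional to the cost
            for d in range(dims):
                tau[d][idx[d]] += 1.0 / (1.0 + fx)
        # evaporation keeps old trails from dominating
        for d in range(dims):
            tau[d] = [(1 - rho) * t for t in tau[d]]
    return best_x, best_f
```

On a smooth test function the pheromone distribution concentrates around low-cost nodes over the iterations, which is the mechanism the abstract credits for its convergence speed.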
NASA Technical Reports Server (NTRS)
Koenig, R. W.; Fishbach, L. H.
1972-01-01
A computer program entitled GENENG employs component performance maps to perform analytical, steady-state engine cycle calculations. Through a scaling procedure, each of the component maps can be used to represent a family of maps (different design values of pressure ratio, efficiency, weight flow, etc.). Either convergent or convergent-divergent nozzles may be used. A complete FORTRAN 4 listing of the program is included. Sample results and input explanations are shown for one-spool and two-spool turbojets and two-spool separate- and mixed-flow turbofans operating at design and off-design conditions.
A new stochastic model considering satellite clock interpolation errors in precise point positioning
NASA Astrophysics Data System (ADS)
Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong
2018-03-01
Precise clock products are typically interpolated based on the sampling interval of the observational data when they are used in precise point positioning. However, due to the occurrence of white noise in atomic clocks, a residual component of such noise will inevitably reside within the observations when clock errors are interpolated, and such noise will affect the resolution of the positioning results. In this paper, based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was constructed that considers the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 stations worldwide from the IGS showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were shortened by 4.8% and 4.0%, respectively, when the IGR and IGS 300-s-interval clock products were used, and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, the disturbances during the initial phase of the calculation were also effectively reduced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Fuke, E-mail: wufuke@mail.hust.edu.cn; Tian, Tianhai, E-mail: tianhai.tian@sci.monash.edu.au; Rawlings, James B., E-mail: james.rawlings@wisc.edu
The frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two time scales, which yields the modified stochastic simulation algorithm (SSA). For chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions, so the SSA remains computationally expensive. Because the chemical Langevin equations (CLEs) can effectively handle a large number of molecular species and reactions, this paper develops a reduction method based on the CLE using the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because, in stochastic chemical kinetics, the CLE is seen as an approximation of the SSA, the limit averaging system can be treated as an approximation of the slow reactions. As an application, we examine the reduction of computational complexity for gene regulatory networks with two time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. This demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of weak convergence.
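A chemical Langevin equation can be integrated directly with the Euler-Maruyama scheme. The sketch below is a generic illustration of the CLE for the birth-death system 0 → X (rate k), X → 0 (rate g·x), not the paper's averaging reduction; the sample mean across paths recovers the stationary mean k/g.

```python
import numpy as np

def cle_birth_death(k=50.0, g=1.0, x0=0.0, t_end=10.0,
                    dt=1e-3, n_paths=2000, seed=0):
    """Euler-Maruyama for the CLE
    dx = (k - g*x) dt + sqrt(k) dW1 - sqrt(g*x) dW2."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    for _ in range(int(t_end / dt)):
        a1 = k * np.ones(n_paths)        # birth propensity
        a2 = g * np.maximum(x, 0.0)      # death propensity (kept >= 0)
        dw1 = rng.normal(0.0, np.sqrt(dt), n_paths)
        dw2 = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x + (a1 - a2) * dt + np.sqrt(a1) * dw1 - np.sqrt(a2) * dw2
    return x
```

Clipping the death propensity at zero is a standard practical guard against the small negative excursions the Gaussian noise can produce.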
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.
1991-01-01
The analytical derivations of the non-axial thrust divergence losses for convergent-divergent nozzles are described as well as how these calculations are embodied in the Navy/NASA engine computer program. The convergent-divergent geometries considered are simple classic axisymmetric nozzles, two dimensional rectangular nozzles, and axisymmetric and two dimensional plug nozzles. A simple, traditional, inviscid mathematical approach is used to deduce the influence of the ineffectual non-axial thrust as a function of the nozzle exit divergence angle.
Lee, Jong-Seok; Park, Cheol Hoon
2010-08-01
We propose a novel stochastic optimization algorithm, hybrid simulated annealing (SA), to train hidden Markov models (HMMs) for visual speech recognition. In our algorithm, SA is combined with a local optimization operator that substitutes a better solution for the current one to improve the convergence speed and the quality of solutions. We mathematically prove that the sequence of the objective values converges in probability to the global optimum in the algorithm. The algorithm is applied to train HMMs that are used as visual speech recognizers. While the popular training method of HMMs, the expectation-maximization algorithm, achieves only local optima in the parameter space, the proposed method can perform global optimization of the parameters of HMMs and thereby obtain solutions yielding improved recognition performance. The superiority of the proposed algorithm to the conventional ones is demonstrated via isolated word recognition experiments.
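The combination of SA with a local optimization operator can be outlined as below. This is a hedged toy on a continuous test function, not the paper's HMM training procedure; following the abstract's idea, the local move substitutes a better nearby solution for the current one whenever it improves.

```python
import math
import random

def hybrid_sa(f, x0, step=0.5, iters=5000, t_hi=1.0, t_lo=1e-3, seed=3):
    """Simulated annealing whose proposal scale shrinks with the
    temperature, plus a greedy local-refinement move."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, best_f = list(x), fx
    for k in range(iters):
        t = t_hi * (t_lo / t_hi) ** (k / iters)   # geometric cooling
        sigma = step * math.sqrt(t / t_hi)
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(cand)
        # Metropolis acceptance rule
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            # local operator: keep a smaller perturbation only if better
            local = [xi + rng.gauss(0.0, 0.1 * sigma) for xi in x]
            fl = f(local)
            if fl < fx:
                x, fx = local, fl
            if fx < best_f:
                best, best_f = list(x), fx
    return best, best_f
```

In the paper the same scheme operates on HMM parameters with the likelihood as objective; here a shifted quadratic stands in for that objective.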
Dynamic stability of spinning pretwisted beams subjected to axial random forces
NASA Astrophysics Data System (ADS)
Young, T. H.; Gau, C. Y.
2003-11-01
This paper studies the dynamic stability of a pretwisted cantilever beam spinning about its longitudinal axis and subjected to an axial random force at the free end. The axial force is assumed to be the sum of a constant force and a random process with zero mean. Due to this axial force, the beam may experience parametric random instability. In this work, the finite element method is first applied to yield discretized system equations. The stochastic averaging method is then adopted to obtain Itô's equations for the response amplitudes of the system. Finally, the mean-square stability criterion is utilized to determine the stability condition of the system. Numerical results show that the stability boundary of the system converges when the first three modes are taken into the calculation. Before convergence is reached, the predicted stability condition is not conservative enough.
Rumor Diffusion and Convergence during the 3.11 Earthquake: A Twitter Case Study
Takayasu, Misako; Sato, Kazuya; Sano, Yukie; Yamada, Kenta; Miura, Wataru; Takayasu, Hideki
2015-01-01
We focus on Internet rumors and present an empirical analysis and simulation results of their diffusion and convergence during emergencies. In particular, we study one rumor that appeared in the immediate aftermath of the Great East Japan Earthquake on March 11, 2011, which later turned out to be misinformation. By investigating all Japanese tweets sent in the week after the quake, we show that one correction tweet, which originated from a city hall account, diffused enormously. We also demonstrate that a stochastic agent-based model, inspired by the SIR contagion model of epidemics, can reproduce the observed rumor dynamics. Our model can estimate the rumor infection rate as well as the number of people who still believe the rumor, which cannot be observed directly. For applications, rumor diffusion sizes can be estimated in various scenarios by combining our model with real data. PMID:25831122
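A stochastic SIR model of the kind alluded to can be sketched as a discrete-time binomial chain (hypothetical parameters; not the authors' fitted Twitter model). Susceptibles become spreaders with a probability driven by the current spreader fraction, and spreaders stop believing at a constant hazard.

```python
import numpy as np

def stochastic_sir(n=10000, i0=10, beta=0.3, gamma=0.1,
                   steps=300, seed=7):
    """Binomial-chain stochastic SIR: per step, each susceptible is
    'infected' w.p. 1-exp(-beta*I/N) and each spreader 'recovers'
    (stops believing) w.p. 1-exp(-gamma)."""
    rng = np.random.default_rng(seed)
    s, i, r = n - i0, i0, 0
    history = [(s, i, r)]
    for _ in range(steps):
        new_inf = rng.binomial(s, 1.0 - np.exp(-beta * i / n))
        new_rec = rng.binomial(i, 1.0 - np.exp(-gamma))
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        history.append((s, i, r))
    return history
```

With beta/gamma = 3 the rumor reaches most of the population before dying out; fitting beta and gamma to tweet counts is the kind of inference the abstract describes.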
Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor
2012-01-01
A methodology for the solution of Painlevé equation-I is presented using a computational intelligence technique based on neural networks and particle swarm optimization hybridized with an active set algorithm. The mathematical model of the equation is developed with the help of a linear combination of feed-forward artificial neural networks that defines the unsupervised error of the model. This error is minimized subject to the availability of appropriate weights of the networks. The learning of the weights is carried out using a particle swarm optimization algorithm, used as a viable global search method, hybridized with an active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed on the basis of a large number of independent runs and their comprehensive statistical analysis. The results obtained are compared with MATHEMATICA solutions, as well as with the variational iteration method and the homotopy perturbation method. PMID:22919371
Some variance reduction methods for numerical stochastic homogenization
Blanc, X.; Le Bris, C.; Legoll, F.
2016-01-01
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
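One of the simplest techniques in this family is antithetic variates: pair each sample U with its mirror 1 − U so that their errors partially cancel for monotone integrands. The sketch below is a generic Monte Carlo illustration on a toy integrand, not one of the homogenization corrector problems.

```python
import numpy as np

def mc_plain(f, n, rng):
    """Crude Monte Carlo estimate of E[f(U)], U ~ Uniform(0,1)."""
    u = rng.random(n)
    return f(u).mean()

def mc_antithetic(f, n, rng):
    """Antithetic-variates estimate: average f over mirrored pairs."""
    u = rng.random(n // 2)
    return 0.5 * (f(u) + f(1.0 - u)).mean()

def estimator_variance(est, f, n, reps=2000, seed=0):
    """Empirical variance of an estimator over repeated runs."""
    rng = np.random.default_rng(seed)
    return np.array([est(f, n, rng) for _ in range(reps)]).var()
```

For f(u) = e^u the pairs (f(U), f(1 − U)) are negatively correlated, so the antithetic estimator has markedly smaller variance at the same sample budget.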
Study on Nonlinear Vibration Analysis of Gear System with Random Parameters
NASA Astrophysics Data System (ADS)
Tong, Cao; Liu, Xiaoyuan; Fan, Li
2018-03-01
In order to study the dynamic characteristics of a gear nonlinear vibration system and the influence of random parameters, a three-degree-of-freedom nonlinear stochastic vibration model of a gear system is first established based on Newton's law, and the random response of the gear vibration is simulated by a stepwise integration method. Secondly, the influence of stochastic parameters such as meshing damping, tooth side gap and excitation frequency on the dynamic response of the gear nonlinear system is analyzed using stability analysis methods such as bifurcation diagrams and the Lyapunov exponent method. The analysis shows that the stochastic process cannot be neglected, as it can cause random bifurcation and chaos in the system response. This study will provide an important reference for vibration engineering designers.
StochKit2: software for discrete stochastic simulation of biochemical systems with events.
Sanft, Kevin R; Wu, Sheng; Roh, Min; Fu, Jin; Lim, Rone Kwei; Petzold, Linda R
2011-09-01
StochKit2 is the first major upgrade of the popular StochKit stochastic simulation software package. StochKit2 provides highly efficient implementations of several variants of Gillespie's stochastic simulation algorithm (SSA), and tau-leaping with automatic step size selection. StochKit2 features include automatic selection of the optimal SSA method based on model properties, event handling, and automatic parallelism on multicore architectures. The underlying structure of the code has been completely updated to provide a flexible framework for extending its functionality. StochKit2 runs on Linux/Unix, Mac OS X and Windows. It is freely available under GPL version 3 and can be downloaded from http://sourceforge.net/projects/stochkit/.
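Gillespie's direct method, the core algorithm behind StochKit-style simulators, fits in a few lines. The following is a minimal generic sketch (not StochKit2 code): sample the waiting time from an exponential with the total propensity, then pick the firing reaction proportionally to its propensity.

```python
import random

def ssa_direct(x0, rates, stoich, propensity, t_end, seed=0):
    """Gillespie direct method for a generic reaction network.
    stoich[j] is the state change of reaction j; propensity(x, rates)
    returns the list of reaction propensities at state x."""
    rng = random.Random(seed)
    x, t = list(x0), 0.0
    while t < t_end:
        a = propensity(x, rates)
        a0 = sum(a)
        if a0 == 0.0:          # no reaction can fire
            break
        t += rng.expovariate(a0)
        if t >= t_end:
            break
        r = rng.random() * a0  # pick reaction j with prob a[j]/a0
        j, acc = 0, a[0]
        while acc < r:
            j += 1
            acc += a[j]
        for s, v in enumerate(stoich[j]):
            x[s] += v
    return x
```

For the birth-death system 0 → X (rate k), X → 0 (rate g·x), the stationary distribution is Poisson with mean k/g, which gives a simple correctness check.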
Nguyen, A; Yosinski, J; Clune, J
2016-01-01
The Achilles heel of stochastic optimization algorithms is getting trapped in local optima. Novelty Search mitigates this problem by replacing the performance objective with a reward for novel behaviors, encouraging exploration in all interesting directions. This reward for novel behaviors has traditionally required a human-crafted behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art.
Nonconservative Forces via Quantum Reservoir Engineering
NASA Astrophysics Data System (ADS)
Vuglar, Shanon L.; Zhdanov, Dmitry V.; Cabrera, Renan; Seideman, Tamar; Jarzynski, Christopher; Bondar, Denys I.
2018-06-01
A systematic approach is given for engineering dissipative environments that steer quantum wave packets along desired trajectories. The methodology is demonstrated with several illustrative examples: environment-assisted tunneling, trapping, effective mass assignment, and pseudorelativistic behavior. Nonconservative stochastic forces do not inevitably lead to decoherence—we show that purity can be well preserved. These findings highlight the flexibility offered by nonequilibrium open quantum dynamics.
Maximizing Federal IT Dollars: A Connection Between IT Investments and Organizational Performance
2011-04-01
Nonlinear Analysis of Mechanical Systems Under Combined Harmonic and Stochastic Excitation
1993-05-27
Haddad, Tarek; Himes, Adam; Thompson, Laura; Irony, Telba; Nair, Rajesh
2017-01-01
Evaluation of medical devices via clinical trials is often a necessary step in bringing a new product to market. In recent years, device manufacturers have increasingly used stochastic engineering models during the product development process. These models can simulate virtual patient outcomes. This article presents a novel method based on the power prior for augmenting a clinical trial with virtual patient data. To properly inform clinical evaluation, the virtual patient model must simulate the clinical outcome of interest, incorporating patient variability as well as the uncertainty in the engineering model and in its input parameters. The number of virtual patients is controlled by a discount function that uses the similarity between modeled and observed data. The method is illustrated by a case study of cardiac lead fracture. Different discount functions are used to cover a wide range of scenarios in which the type I error rate and power vary for the same number of enrolled patients. Incorporating engineering models as prior knowledge in a Bayesian clinical trial design can decrease sample size and trial length while still controlling the type I error rate and power.
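The power prior mechanism is easiest to see in the conjugate Beta-binomial setting: virtual-patient data enter the likelihood raised to a discount power α₀ ∈ [0, 1], so the posterior simply adds α₀-weighted virtual counts. This is a hypothetical sketch of the general idea, not the authors' trial design or their discount function.

```python
def power_prior_posterior(n_obs, y_obs, n_virtual, y_virtual,
                          alpha0, a=1.0, b=1.0):
    """Beta-binomial posterior with a power prior on virtual data:
    alpha0 = 0 ignores the virtual patients, alpha0 = 1 pools fully.
    Returns the posterior mean and total pseudo-count."""
    a_post = a + y_obs + alpha0 * y_virtual
    b_post = b + (n_obs - y_obs) + alpha0 * (n_virtual - y_virtual)
    mean = a_post / (a_post + b_post)
    pseudo_n = a_post + b_post
    return mean, pseudo_n
```

In practice α₀ would be set by a similarity-based discount function comparing the modeled and observed outcome rates, which is what controls the effective number of borrowed virtual patients.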
Elsaadany, Mostafa; Yan, Karen Chang; Yildirim-Ayan, Eda
2017-06-01
Successful tissue engineering and regenerative therapy necessitate extensive knowledge of the mechanical milieu in engineered tissues and their resident cells. In this study, we merged two powerful analysis tools, finite element analysis and stochastic analysis, to understand the mechanical strain within the tissue scaffold and residing cells and to predict cell viability upon applying mechanical strains. A continuum-based multi-length-scale finite element model (FEM) was created to simulate physiologically relevant equiaxial strain exposure on a cell-embedded tissue scaffold and to calculate the strain transferred to the tissue scaffold (macro-scale) and residing cells (micro-scale) under various equiaxial strains. The data from the FEM were used to predict cell viability under various equiaxial strain magnitudes using a stochastic damage criterion analysis. The model was validated by mechanically straining cardiomyocyte-encapsulated collagen constructs using a custom-built mechanical loading platform (EQUicycler). The FEM quantified the strain gradients over the radial and longitudinal directions of the scaffolds and the cells residing in different areas of interest. Using the experimental viability data, the stochastic damage criterion, and the average cellular strains obtained from the multi-length-scale models, cellular viability was predicted and successfully validated. This methodology can provide a great tool to characterize the mechanical stimulation of bioreactors used in tissue engineering applications, quantifying mechanical strain and predicting cellular viability variations due to applied mechanical strain.
NASA Astrophysics Data System (ADS)
Shintani, Masaru; Umeno, Ken
2018-04-01
Power laws are present ubiquitously in nature and in our societies. It is therefore important to investigate the characteristics of power laws in the current era of big data. In this paper we prove that the superposition of non-identical stochastic processes with power laws converges in density to a unique stable distribution. This property can be used to explain the universality of stable laws: the sums of the logarithmic returns of non-identical stock price fluctuations follow stable distributions.
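The generalized central limit theorem behind this result is easy to probe numerically: centered sums of Pareto-tailed variables scaled by n^(1/α) retain a heavy right tail instead of becoming Gaussian. A small illustration, under the assumption of classical i.i.d. Pareto variables with tail index α = 1.5 and minimum 1 (not the non-identical processes of the paper):

```python
import numpy as np

def normalized_pareto_sums(alpha=1.5, n=200, reps=20000, seed=11):
    """Centered, n^(1/alpha)-scaled sums of Pareto(alpha) variables;
    by the generalized CLT these approach an alpha-stable law."""
    rng = np.random.default_rng(seed)
    # numpy's pareto draws Lomax; adding 1 gives classical Pareto(x_min=1)
    x = rng.pareto(alpha, size=(reps, n)) + 1.0
    mu = alpha / (alpha - 1.0)          # finite mean since alpha > 1
    s = x.sum(axis=1)
    return (s - n * mu) / n ** (1.0 / alpha)
```

A quick diagnostic is the spread of the upper quantiles: for a Gaussian, (q99 − q50)/(q90 − q50) is about 1.8, while the stable-like samples produced here are far more right-skewed.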
ERIC Educational Resources Information Center
de Vere, Ian; Melles, Gavin; Kapoor, Ajay
2010-01-01
Product design is the convergence point for engineering and design thinking and practices. Until recently, product design has been taught either as a component of mechanical engineering or as a subject within design schools but increasingly there is global recognition of the need for greater synergies between industrial design and engineering…
NASA Astrophysics Data System (ADS)
Astroza, Rodrigo; Ebrahimian, Hamed; Conte, Joel P.
2015-03-01
This paper describes a novel framework that combines advanced mechanics-based nonlinear (hysteretic) finite element (FE) models and stochastic filtering techniques to estimate unknown time-invariant parameters of nonlinear inelastic material models used in the FE model. Using input-output data recorded during earthquake events, the proposed framework updates the nonlinear FE model of the structure. The updated FE model can be directly used for damage identification and further for damage prognosis. To update the unknown time-invariant parameters of the FE model, two alternative stochastic filtering methods are used: the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). A three-dimensional, 5-story, 2-by-1 bay reinforced concrete (RC) frame is used to verify the proposed framework. The RC frame is modeled using fiber-section displacement-based beam-column elements with distributed plasticity and is subjected to the ground motion recorded at the Sylmar station during the 1994 Northridge earthquake. The results indicate that the proposed framework accurately estimates the unknown material parameters of the nonlinear FE model. The UKF outperforms the EKF when the relative root-mean-square errors of the recorded responses are compared. In addition, the results suggest that the convergence of the estimates of the modeling parameters is smoother and faster when the UKF is utilized.
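The idea of estimating time-invariant model parameters by filtering can be sketched with a minimal EKF on an augmented state for a scalar autoregressive system. The model, noise levels, and parameter values below are illustrative assumptions of this sketch, not the paper's fiber-section FE frame.

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, q, r, T = 0.8, 0.2, 0.1, 2000   # unknown parameter, noises, horizon

# simulate x_{k+1} = a x_k + w_k and measurements y_k = x_k + v_k
x = np.zeros(T)
for k in range(1, T):
    x[k] = a_true * x[k - 1] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), T)

# EKF on the augmented state z = [x, a]; the parameter a is time-invariant
z = np.array([0.0, 0.5])                 # initial state and parameter guess
P = np.diag([1.0, 1.0])
Q = np.diag([q, 1e-6])                   # tiny drift keeps a identifiable
H = np.array([[1.0, 0.0]])
for k in range(1, T):
    F = np.array([[z[1], z[0]],          # Jacobian of f(z) = [a*x, a]
                  [0.0, 1.0]])
    z = np.array([z[1] * z[0], z[1]])    # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + r                  # innovation variance
    K = (P @ H.T) / S                    # Kalman gain
    z = z + (K * (y[k] - z[0])).ravel()  # update with measurement y_k
    P = (np.eye(2) - K @ H) @ P

print("estimated a:", z[1])
```

The augmented-state trick (treating the constant parameter as a state with negligible process noise) is the same device that lets the EKF/UKF of the paper update material parameters from input-output data.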
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
Investigation of installation effects of single-engine convergent-divergent nozzles
NASA Technical Reports Server (NTRS)
Burley, J. R., II; Berrier, B. L.
1982-01-01
An investigation was conducted in the Langley 16-Foot Transonic Tunnel to determine installation effects on single-engine convergent-divergent nozzles applicable to reduced-power supersonic cruise aircraft. Tests were conducted at Mach numbers from 0.50 to 1.20, at angles of attack from -3 degrees to 9 degrees, and at nozzle pressure ratios from 1.0 (jet off) to 8.0. The effects of empennage arrangement, nozzle length, a cusp fairing, and afterbody closure on total aft-end drag coefficient and component drag coefficients were investigated. Basic lift- and drag-coefficient data and external static-pressure distributions on the nozzle and afterbody are presented and discussed.
Jet Engine Control Using Ethernet with a BRAIN (Postprint)
2008-07-01
current communications may be mitigated. SUBJECT TERMS: BRAIN (Braided Ring Availability Integrity Network), gas turbine, FADEC, distributed… Current state-of-the-art engine controls have converged on the notion of the Full Authority Digital Engine Control (FADEC), which consists of a centralized… is completely dependent on the proper operation of the controller. In current systems, the FADEC is often located on the relatively cool engine fan
Investigation of the stochastic nature of solar radiation for renewable resources management
NASA Astrophysics Data System (ADS)
Koudouris, Giannis; Dimitriadis, Panayiotis; Iliopoulou, Theano; Mamasis, Nikos; Koutsoyiannis, Demetris
2017-04-01
A detailed investigation of the variability of solar radiation can prove useful towards more efficient and sustainable design of renewable resources systems. This variability is mainly caused by the regular seasonal and diurnal variation, as well as by the stochastic nature of the atmospheric processes, e.g. sunshine duration. In this context, we analyze numerous observations in Greece (Hellenic National Meteorological Service; http://www.hnms.gr/) and around the globe (NASA SSE - Surface meteorology and Solar Energy; http://www.soda-pro.com/web-services/radiation/nasa-sse) and investigate the long-term behaviour and the double periodicity of the solar radiation process. We also apply a parsimonious double-cyclostationary stochastic model to a theoretical scenario of solar energy production for an island in the Aegean Sea. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
SDE decomposition and A-type stochastic interpretation in nonequilibrium processes
NASA Astrophysics Data System (ADS)
Yuan, Ruoshi; Tang, Ying; Ao, Ping
2017-12-01
An innovative theoretical framework for stochastic dynamics based on the decomposition of a stochastic differential equation (SDE) into a dissipative component, a detailed-balance-breaking component, and a dual-role potential landscape has been developed, which has fruitful applications in physics, engineering, chemistry, and biology. It introduces the A-type stochastic interpretation of the SDE beyond the traditional Ito or Stratonovich interpretation or even the α-type interpretation for multidimensional systems. The potential landscape serves as a Hamiltonian-like function in nonequilibrium processes without detailed balance, which extends this important concept from equilibrium statistical physics to the nonequilibrium region. A question on the uniqueness of the SDE decomposition was recently raised. Our review of both the mathematical and physical aspects shows that uniqueness is guaranteed. The demonstration leads to a better understanding of the robustness of the novel framework. In addition, we discuss related issues including the limitations of an approach to obtaining the potential function from a steady-state distribution.
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Dearden, Richard; Benazera, Emmanuel
2004-01-01
Fault detection and isolation are critical tasks to ensure correct operation of systems. When we consider stochastic hybrid systems, diagnosis algorithms need to track both the discrete mode and the continuous state of the system in the presence of noise. Deterministic techniques like Livingstone cannot deal with the stochasticity in the system and models. Conversely, Bayesian belief update techniques such as particle filters may require substantial computational resources to obtain a good approximation of the true belief state. In this paper we propose a fault detection and isolation architecture for stochastic hybrid systems that combines look-ahead Rao-Blackwellized Particle Filters (RBPF) with the Livingstone 3 (L3) diagnosis engine. In this approach, RBPF is used to track the nominal behavior, a novel n-step prediction scheme is used for fault detection, and L3 is used to generate a set of candidates that are consistent with the discrepant observations, which then continue to be tracked by the RBPF scheme.
Adaptive Neural Tracking Control for Switched High-Order Stochastic Nonlinear Systems.
Zhao, Xudong; Wang, Xinyong; Zong, Guangdeng; Zheng, Xiaolong
2017-10-01
This paper deals with adaptive neural tracking control design for a class of switched high-order stochastic nonlinear systems with unknown uncertainties and arbitrary deterministic switching. The considered issues are: 1) completely unknown uncertainties; 2) stochastic disturbances; and 3) high-order nonstrict-feedback system structure. The considered mathematical models can represent many practical systems in actual engineering. By exploiting the approximation ability of neural networks and combining the common stochastic Lyapunov function method with an improved adding-a-power-integrator technique, an adaptive state-feedback controller with multiple adaptive laws is systematically designed for the systems. Subsequently, a controller with only two adaptive laws is proposed to solve the problem of overparameterization. Under the designed controllers, all the signals in the closed-loop system are bounded-input bounded-output stable in probability, and the system output can almost surely track the target trajectory within a specified bounded error. Finally, simulation results are presented to show the effectiveness of the proposed approaches.
Competitive Tradeoff Modeling: Methodology, Computation, and Testing
1997-12-01
variational inequalities produced the dissertation of Ozge [4], which presented and justified a new method for numerical solution of stochastic… Philosophy (Industrial Engineering) in 1996. A. Yonca Ozge, Research Assistant. Ms. Ozge received the degree of Doctor of Philosophy (Industrial… Ph.D. Dissertation, Department of Industrial Engineering, University of Wisconsin-Madison, 1996. [2] G. Gürkan, A. Y. Ozge, and S. M. Robinson
Extracting Work from Quantum Measurement in Maxwell's Demon Engines
NASA Astrophysics Data System (ADS)
Elouard, Cyril; Herrera-Martí, David; Huard, Benjamin; Auffèves, Alexia
2017-06-01
The essence of both classical and quantum engines is to extract useful energy (work) from stochastic energy sources, e.g., thermal baths. In Maxwell's demon engines, work extraction is assisted by a feedback control based on measurements performed by a demon, whose memory is erased at some nonzero energy cost. Here we propose a new type of quantum Maxwell's demon engine where work is directly extracted from the measurement channel, such that no heat bath is required. We show that in the Zeno regime of frequent measurements, memory erasure costs eventually vanish. Our findings provide a new paradigm to analyze quantum heat engines and work extraction in the quantum world.
Stochastic metallic-glass cellular structures exhibiting benchmark strength.
Demetriou, Marios D; Veazey, Chris; Harmon, John S; Schramm, Joseph P; Johnson, William L
2008-10-03
By identifying the key characteristic "structural scales" that dictate the resistance of a porous metallic glass against buckling and fracture, stochastic, highly porous metallic-glass structures are designed that are capable of yielding plastically and thereby inherit the high plastic yield strength of the amorphous metal. The strengths attainable by the present foams appear to equal or exceed those of highly engineered metal foams, such as Ti-6Al-4V or ferrous-metal foams, at comparable levels of porosity, placing the present metallic-glass foams among the strongest foams known to date.
Python-based geometry preparation and simulation visualization toolkits for STEPS
Chen, Weiliang; De Schutter, Erik
2014-01-01
STEPS is a stochastic reaction-diffusion simulation engine that implements a spatial extension of Gillespie's Stochastic Simulation Algorithm (SSA) in complex tetrahedral geometries. An extensive Python-based interface is provided to STEPS so that it can interact with the large number of scientific packages in Python. However, a gap existed between the interfaces of these packages and the STEPS user interface, where supporting toolkits could reduce the amount of scripting required for research projects. This paper introduces two new supporting toolkits that support geometry preparation and visualization for STEPS simulations. PMID:24782754
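Gillespie's direct method, which STEPS extends to spatial tetrahedral meshes, can be sketched in a few lines for a single well-mixed reaction. This toy A → B decay is purely illustrative and does not use the STEPS API.

```python
import numpy as np

rng = np.random.default_rng(2)

def ssa_decay(n_a0, c, t_end):
    """Gillespie direct method for the single reaction A -> B with rate c."""
    t, n_a, n_b = 0.0, n_a0, 0
    times, counts = [0.0], [n_a0]
    while t < t_end and n_a > 0:
        a0 = c * n_a                       # total propensity
        t += rng.exponential(1.0 / a0)     # exponential time to next reaction
        n_a -= 1                           # fire A -> B
        n_b += 1
        times.append(t)
        counts.append(n_a)
    return np.array(times), np.array(counts), n_b

times, counts, n_b = ssa_decay(n_a0=1000, c=0.5, t_end=20.0)
print(counts[-1], n_b)
```

STEPS applies this same SSA logic per tetrahedral subvolume, with diffusion treated as additional pseudo-reactions between neighbouring subvolumes.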
Physical realizability of continuous-time quantum stochastic walks
NASA Astrophysics Data System (ADS)
Taketani, Bruno G.; Govia, Luke C. G.; Wilhelm, Frank K.
2018-05-01
Quantum walks are a promising methodology that can be used to both understand and implement quantum information processing tasks. The quantum stochastic walk is a recently developed framework that combines the concept of a quantum walk with that of a classical random walk, through open system evolution of a quantum system. Quantum stochastic walks have been shown to have applications in fields as far-reaching as artificial intelligence. However, there are significant constraints on the kind of open system evolutions that can be realized in a physical experiment. In this work, we discuss the restrictions on the allowed open system evolution and the physical assumptions underpinning them. We show that general direct implementations would require the complete solution of the underlying unitary dynamics and sophisticated reservoir engineering, thus weakening the benefits of experimental implementation.
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
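Among the simplest of these borrowed techniques is antithetic variates; a generic Monte Carlo sketch (not the corrector problems themselves) shows the variance drop for a monotone integrand.

```python
import numpy as np

rng = np.random.default_rng(3)
f = np.exp                 # monotone integrand; E[exp(G)] = exp(1/2) for G ~ N(0,1)
n = 100_000

g = rng.normal(size=n)
plain = f(g)                        # crude Monte Carlo samples
anti = 0.5 * (f(g) + f(-g))         # antithetic pairs reuse the same draws

est_plain, est_anti = plain.mean(), anti.mean()
var_plain, var_anti = plain.var(), anti.var()
print(est_plain, est_anti, var_plain / var_anti)
```

For a monotone f, the pair f(G), f(-G) is negatively correlated, so averaging the two cuts the per-sample variance (here by roughly a factor of three) at no extra sampling cost.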
Quantifying parameter uncertainty in stochastic models using the Box-Cox transformation
NASA Astrophysics Data System (ADS)
Thyer, Mark; Kuczera, George; Wang, Q. J.
2002-08-01
The Box-Cox transformation is widely used to transform hydrological data to make it approximately Gaussian. Bayesian evaluation of parameter uncertainty in stochastic models using the Box-Cox transformation is hindered by the fact that there is no analytical solution for the posterior distribution. However, the Markov chain Monte Carlo method known as the Metropolis algorithm can be used to simulate the posterior distribution. This method properly accounts for the nonnegativity constraint implicit in the Box-Cox transformation. Nonetheless, a case study using the AR(1) model uncovered a practical problem with the implementation of the Metropolis algorithm. The use of a multivariate Gaussian jump distribution resulted in unacceptable convergence behaviour. This was rectified by developing suitable parameter transformations for the mean and variance of the AR(1) process to remove the strong nonlinear dependencies with the Box-Cox transformation parameter. Applying this methodology to the Sydney annual rainfall data and the Burdekin River annual runoff data illustrates the efficacy of these parameter transformations and demonstrates the value of quantifying parameter uncertainty.
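A minimal sketch of the two ingredients, the Box-Cox transformation and a random-walk Metropolis sampler over its parameter λ: the synthetic lognormal data, flat prior, and profile likelihood below are illustrative assumptions, not the paper's AR(1) model.

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.lognormal(mean=0.0, sigma=1.0, size=500)   # skewed data; lambda = 0 is ideal
n = len(y)
slog = np.log(y).sum()

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

def log_post(lam):
    # profile log-likelihood of the Box-Cox model with a flat prior on lambda:
    # Gaussian fit of the transformed data plus the Jacobian term (lam-1)*sum(log y)
    z = boxcox(y, lam)
    return -0.5 * n * np.log(z.var()) + (lam - 1.0) * slog

# random-walk Metropolis over lambda
lam, lp = 1.0, log_post(1.0)
chain = []
for _ in range(5000):
    prop = lam + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:     # accept/reject
        lam, lp = prop, lp_prop
    chain.append(lam)

post = np.array(chain[1000:])                   # discard burn-in
print(post.mean())
```

For lognormal data the posterior should concentrate near λ = 0, i.e. the log transform; the paper's contribution concerns reparameterizations that make such chains mix well in the full AR(1) setting.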
Explaining opinion polarisation with opinion copulas.
Askitas, Nikolaos
2017-01-01
An empirically founded and widely established driving force in opinion dynamics is homophily, i.e. the tendency of "birds of a feather" to "flock together". The closer our opinions are, the more likely it is that we will interact and converge. Models using these assumptions are called bounded confidence models (BCM), as they assume a tolerance threshold beyond which interaction is unlikely. They are known to produce one or more clusters, depending on the size of the bound, with more than one cluster being possible only in the deterministic case. Introducing noise, as is likely to happen in a stochastic world, causes BCM to produce consensus, which leaves us with the open problem of explaining the emergence and sustenance of opinion clusters and polarisation. We investigate the role of heterogeneous priors in opinion formation, introduce the concept of opinion copulas, argue that it is well supported by findings in social psychology, and use it to show that the stochastic BCM does indeed produce opinion clustering without the need for extra assumptions.
Faster PET reconstruction with a stochastic primal-dual hybrid gradient method
NASA Astrophysics Data System (ADS)
Ehrhardt, Matthias J.; Markiewicz, Pawel; Chambolle, Antonin; Richtárik, Peter; Schott, Jonathan; Schönlieb, Carola-Bibiane
2017-08-01
Image reconstruction in positron emission tomography (PET) is computationally challenging due to Poisson noise, constraints and potentially non-smooth priors, let alone the sheer size of the problem. An algorithm that can cope well with the first three of the aforementioned challenges is the primal-dual hybrid gradient algorithm (PDHG) studied by Chambolle and Pock in 2011. However, PDHG updates all variables in parallel and is therefore computationally demanding on the large problem sizes encountered with modern PET scanners, where the number of dual variables easily exceeds 100 million. In this work, we numerically study the use of SPDHG, a stochastic extension of PDHG that is still guaranteed to converge to a solution of the deterministic optimization problem with similar rates as PDHG. Numerical results on a clinical data set show that by introducing randomization into PDHG, results similar to the deterministic algorithm can be achieved using only around 10% of the operator evaluations. This makes significant progress towards the feasibility of sophisticated mathematical models in a clinical setting.
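The structure of SPDHG can be sketched on a toy least-squares problem with one dual variable per data row; the serial uniform sampling and step-size rule below are standard assumptions of this sketch, not the PET reconstruction setup.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 20, 5
A = rng.normal(size=(m, n))
b = rng.normal(size=m)
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)   # reference solution

# SPDHG for min_x 0.5*||Ax - b||^2, splitting the dual variable per row.
# f_i(t) = 0.5*(t - b_i)^2 gives prox_{sigma f_i*}(v) = (v - sigma*b_i)/(1 + sigma).
L = max(np.sum(A * A, axis=1))        # max squared row norm
sigma = 1.0
tau = 0.9 / (m * L * sigma)           # step-size rule tau*sigma*m*||A_i||^2 < 1

x = np.zeros(n)
y = np.zeros(m)
z = A.T @ y                           # z = A^T y, maintained incrementally
zbar = z.copy()
for _ in range(200_000):
    x = x - tau * zbar                # primal step (g = 0, so prox is identity)
    i = rng.integers(m)               # uniform row sampling, p_i = 1/m
    y_new = (y[i] + sigma * (A[i] @ x) - sigma * b[i]) / (1.0 + sigma)
    delta = A[i] * (y_new - y[i])
    y[i] = y_new
    z = z + delta
    zbar = z + m * delta              # extrapolation weighted by 1/p_i

print(np.linalg.norm(x - x_star))
```

Only one row of A is touched per iteration, which is the source of the savings on huge dual spaces; at the solution the per-row correction `delta` vanishes, so the iteration converges to the exact minimiser despite the randomization.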
NASA Astrophysics Data System (ADS)
Zhang, Kai; Li, Jingzhi; He, Zhubin; Yan, Wanfeng
2018-07-01
In this paper, a stochastic optimization framework is proposed to address the microgrid energy dispatching problem with random renewable generation and vehicle activity patterns, which is closer to practical applications. The patterns of energy generation, consumption and storage availability are all random and unknown at the beginning, and the microgrid controller design (MCD) is formulated as a Markov decision process (MDP). Hence, an online learning-based control algorithm is proposed for the microgrid, which adapts the control policy with increasing knowledge of the system dynamics and converges to the optimal policy. We adopt a linear approximation to decompose the original value function as the summation of per-battery value functions. As a consequence, the computational complexity is significantly reduced from exponential to linear growth with respect to the size of the battery state space. Monte Carlo simulation of different scenarios demonstrates the effectiveness and efficiency of our algorithm.
Optimal Linear Responses for Markov Chains and Stochastically Perturbed Dynamical Systems
NASA Astrophysics Data System (ADS)
Antown, Fadi; Dragičević, Davor; Froyland, Gary
2018-03-01
The linear response of a dynamical system refers to changes to properties of the system when small external perturbations are applied. We consider the little-studied question of selecting an optimal perturbation so as to (i) maximise the linear response of the equilibrium distribution of the system, (ii) maximise the linear response of the expectation of a specified observable, and (iii) maximise the linear response of the rate of convergence of the system to the equilibrium distribution. We also consider the inhomogeneous, sequential, or time-dependent situation where the governing dynamics is not stationary and one wishes to select a sequence of small perturbations so as to maximise the overall linear response at some terminal time. We develop the theory for finite-state Markov chains, provide explicit solutions for some illustrative examples, and numerically apply our theory to stochastically perturbed dynamical systems, where the Markov chain is replaced by a matrix representation of an approximate annealed transfer operator for the random dynamical system.
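For finite-state chains, the linear response of the stationary distribution to a perturbation of the transition matrix has a closed form through the fundamental matrix; the sketch below, with an illustrative 3-state chain of our own choosing (not an example from the paper), checks the formula against a finite difference.

```python
import numpy as np

# Row-stochastic transition matrix P; the stationary pi solves pi P = pi.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])

def stationary(P):
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

pi = stationary(P)

# A zero-row-sum perturbation dP keeps P + eps*dP stochastic for small eps.
dP = np.array([[-0.05, 0.05, 0.0],
               [0.0,   0.0,  0.0],
               [0.05,  0.0, -0.05]])

# Linear response: d(pi) = pi dP Z with the fundamental matrix
# Z = (I - P + 1 pi)^{-1}, where 1 is the column of ones.
Z = np.linalg.inv(np.eye(3) - P + np.outer(np.ones(3), pi))
dpi = pi @ dP @ Z

eps = 1e-6
fd = (stationary(P + eps * dP) - pi) / eps     # finite-difference check
print(dpi, fd)
```

Optimising the response then amounts to maximising the linear functional dP ↦ π dP Z (or its observable-weighted analogue) over an admissible set of perturbations, which is the selection problem the paper solves.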
A stochastic two-scale model for pressure-driven flow between rough surfaces
Larsson, Roland; Lundström, Staffan; Wall, Peter; Almqvist, Andreas
2016-01-01
Seal surface topography typically consists of global-scale geometric features as well as local-scale roughness details, and homogenization-based approaches are, therefore, readily applied. These provide for resolving the global scale (large domain) with a relatively coarse mesh, while resolving the local scale (small domain) in high detail. As the total flow decreases, however, the flow pattern becomes tortuous and this requires a larger local-scale domain to obtain a converged solution. Therefore, a classical homogenization-based approach might not be feasible for simulation of very small flows. In order to study small flows, a model allowing feasibly sized local domains for very small flow rates is developed. Realization was made possible by coupling the two scales with a stochastic element. Results from numerical experiments show that the present model is in better agreement with the direct deterministic one than the conventional homogenization type of model, both quantitatively in terms of flow rate and qualitatively in reflecting the flow pattern. PMID:27436975
NASA Astrophysics Data System (ADS)
Liu, Zhiyuan; Meng, Qiang
2014-05-01
This paper focuses on modelling the network flow equilibrium problem on a multimodal transport network with a bus-based park-and-ride (P&R) system and congestion pricing charges. The multimodal network has three travel modes: auto mode, transit mode and P&R mode. A continuously distributed value-of-time is assumed to convert toll charges and transit fares to time units, and the users' route choice behaviour is assumed to follow the probit-based stochastic user equilibrium principle with elastic demand. These two assumptions introduce randomness into the users' generalised travel times on the multimodal network. A comprehensive network framework is first defined for the flow equilibrium problem with consideration of interactions between auto flows and transit (bus) flows. Then, a fixed-point model with a unique solution is proposed for the equilibrium flows, which can be solved by a convergent cost averaging method. Finally, the proposed methodology is tested on a network example.
Assessing predictability of a hydrological stochastic-dynamical system
NASA Astrophysics Data System (ADS)
Gelfan, Alexander
2014-05-01
The water cycle includes processes with different memory, which creates potential for predictability of a hydrological system based on separating its long- and short-memory components and conditioning long-term prediction on the slower-evolving components (similar to approaches in climate prediction). In the face of the Panta Rhei IAHS Decade questions, it is important to find a conceptual approach to classify hydrological system components with respect to their predictability, define predictable/unpredictable patterns, and extend the lead time and improve the reliability of hydrological predictions based on the predictable patterns. Representation of hydrological systems as dynamical systems subjected to the effect of noise (stochastic-dynamical systems) provides a possible tool for such conceptualization. A method has been proposed for assessing the predictability of a hydrological system caused by its sensitivity to both initial and boundary conditions. The predictability is defined through a procedure of convergence of a pre-assigned probabilistic measure (e.g. variance) of the system state to a stable value. The time interval of this convergence, that is, the time interval during which the system loses memory of its initial state, defines the limit of the system's predictability. The proposed method was applied to assess the predictability of soil moisture dynamics at the Nizhnedevitskaya experimental station (51.516N; 38.383E) located in the agricultural zone of central European Russia. A stochastic-dynamical model combining a deterministic one-dimensional model of the hydrothermal regime of soil with a stochastic model of meteorological inputs was developed. The deterministic model describes processes of coupled heat and moisture transfer through unfrozen/frozen soil and accounts for the influence of phase changes on water flow.
The stochastic model produces time series of daily meteorological variables (precipitation, air temperature and humidity) whose statistical properties are similar to those of the corresponding series of actual data measured at the station. Beginning from the initial conditions and forced by Monte Carlo-generated synthetic meteorological series, the model simulated diverging trajectories of soil moisture characteristics (water content of the soil column, moisture of different soil layers, etc.). The limit of predictability of a specific characteristic was determined through the time of stabilization of the variance of that characteristic between the trajectories, as they move away from the initial state. Numerical experiments were carried out with the stochastic-dynamical model to analyze the sensitivity of the soil moisture predictability assessments to uncertainty in the initial conditions, and to determine the effects of the soil hydraulic properties and of soil freezing on the predictability. It was found, in particular, that soil water content predictability is sensitive to errors in the initial conditions and strongly depends on the hydraulic properties of the soil under both unfrozen and frozen conditions. Even if the initial conditions are "well-established", the assessed predictability of the water content of unfrozen soil does not exceed 30-40 days, while for frozen conditions it may be as long as 3-4 months. The latter creates an opportunity for utilizing the autumn water content of soil as a predictor for spring snowmelt runoff in the region under consideration.
ERIC Educational Resources Information Center
Lammi, Matthew; Becker, Kurt
2013-01-01
Engineering design thinking is "a complex cognitive process" including divergence-convergence, a systems perspective, ambiguity, and collaboration (Dym, Agogino, Eris, Frey, & Leifer, 2005, p. 104). Design is often complex, involving multiple levels of interacting components within a system that may be nested within or connected to other systems.…
NASA Technical Reports Server (NTRS)
Zak, Michail
1994-01-01
This paper presents and discusses physical models for simulating some aspects of neural intelligence, and, in particular, the process of cognition. The main departure from the classical approach here is in utilization of a terminal version of classical dynamics introduced by the author earlier. Based upon violations of the Lipschitz condition at equilibrium points, terminal dynamics attains two new fundamental properties: it is spontaneous and nondeterministic. Special attention is focused on terminal neurodynamics as a particular architecture of terminal dynamics which is suitable for modeling of information flows. Terminal neurodynamics possesses a well-organized probabilistic structure which can be analytically predicted, prescribed, and controlled, and therefore which presents a powerful tool for modeling real-life uncertainties. Two basic phenomena associated with random behavior of neurodynamic solutions are exploited. The first one is a stochastic attractor: a stable stationary stochastic process to which random solutions of a closed system converge. As a model of the cognition process, a stochastic attractor can be viewed as a universal tool for generalization and formation of classes of patterns. The concept of stochastic attractor is applied to model a collective brain paradigm explaining coordination between simple units of intelligence which perform a collective task without direct exchange of information. The second fundamental phenomenon discussed is terminal chaos which occurs in open systems. Applications of terminal chaos to information fusion as well as to explanation and modeling of coordination among neurons in biological systems are discussed. It should be emphasized that all the models of terminal neurodynamics are implementable in analog devices, which means that all the cognition processes discussed in the paper are reducible to the laws of Newtonian mechanics.
to do so, and (5) three distinct versions of the problem of estimating component reliability from system failure-time data are treated, each resulting in consistent estimators with asymptotically normal distributions.
Scenario generation for stochastic optimization problems via the sparse grid method
Chen, Michael; Mehrotra, Sanjay; Papp, David
2015-04-19
We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution, especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.
Site correction of stochastic simulation in southwestern Taiwan
NASA Astrophysics Data System (ADS)
Lun Huang, Cong; Wen, Kuo Liang; Huang, Jyun Yan
2014-05-01
Peak ground acceleration (PGA) during a disastrous earthquake is of concern in both civil engineering and seismology. Ground motion prediction equations are widely used by engineers for PGA estimation. However, the local site effect is another important factor in strong-motion prediction. For example, in 1985 Mexico City, 400 km from the epicenter, suffered massive damage due to seismic wave amplification in the local alluvial layers (Anderson et al., 1986). Past studies have shown that the stochastic method performs well in simulating ground motion at rock sites (Beresnev and Atkinson, 1998a; Roumelioti and Beresnev, 2003). In this study, site correction was conducted with empirical transfer functions applied to the rock-site responses from the stochastic point-source (Boore, 2005) and finite-fault (Boore, 2009) methods. The errors between the simulated and observed Fourier spectra and PGA were calculated. We further compared the estimated PGA to values calculated from a ground motion prediction equation. The earthquake data used in this study were recorded by the Taiwan Strong Motion Instrumentation Program (TSMIP) from 1991 to 2012; the study area is located in southwestern Taiwan. The empirical transfer function was generated by calculating the spectral ratio between an alluvial site and a rock site (Borcherdt, 1970). Due to the lack of a reference rock-site station in this area, the rock-site ground motion was instead generated with a stochastic point-source model. Several target events were chosen for stochastic point-source simulation to the halfspace, and the empirical transfer function for each station was multiplied with the simulated halfspace response. Finally, we focused on two target events: the 1999 Chi-Chi earthquake (Mw=7.6) and the 2010 Jiashian earthquake (Mw=6.4).
Because a large event may involve a complex rupture mechanism, the asperity and delay time of each sub-fault must be considered. Both the stochastic point-source and the finite-fault model were used to check the result of our correction.
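The spectral-ratio construction of an empirical transfer function can be sketched on synthetic records; the sampling rate, amplification factor, and signals below are hypothetical stand-ins for real TSMIP data:

```python
import numpy as np

fs = 100.0                      # sampling rate (Hz), assumed
t = np.arange(0, 20, 1 / fs)
f0 = 2.0                        # target frequency (Hz), assumed

# Synthetic records: pretend the alluvial site amplifies the rock-site
# motion by a factor of 3 (a stand-in for a real soil/rock station pair).
rock = np.sin(2 * np.pi * f0 * t)
soil = 3.0 * rock

spec_rock = np.abs(np.fft.rfft(rock))
spec_soil = np.abs(np.fft.rfft(soil))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

k = np.argmax(spec_rock)        # bin of the dominant frequency
transfer = spec_soil[k] / spec_rock[k]
print(round(float(freqs[k]), 1), round(float(transfer), 2))  # prints: 2.0 3.0
```

In practice the ratio would be smoothed and evaluated across the whole band rather than at a single spectral peak, but the principle (site response as a soil-to-rock spectral ratio) is the same.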
Green's Function and Stress Fields in Stochastic Heterogeneous Continua
NASA Astrophysics Data System (ADS)
Negi, Vineet
Many engineering materials used today are heterogeneous in composition, e.g. composites such as polymer matrix composites and metal matrix composites. Even conventional engineering materials (metals, plastics, alloys, etc.) may develop heterogeneities, like inclusions and residual stresses, during the manufacturing process. Moreover, these materials may also have intrinsic heterogeneities at the nanoscale, in the form of grain boundaries in metals, crystallinity in amorphous polymers, etc. While homogenized constitutive models for these materials may be satisfactory at the macroscale, recent studies of phenomena like fatigue failure, void nucleation, and the size-dependent brittle-ductile transition in polymeric nanofibers reveal a major role of micro/nanoscale physics in these phenomena. At this scale, heterogeneities in a material may no longer be ignored, which demands a study of the effects of various material heterogeneities. In this work, spatial heterogeneities in two material properties, elastic modulus and yield stress, are investigated separately. The heterogeneity in the elastic modulus is studied in the context of the Green's function. The stochastic finite element method is adopted to obtain the mean statistics of the Green's function defined on a stochastic heterogeneous 2D infinite space. The elastic-plastic transition in a domain having stochastic heterogeneous yield stress is studied using Monte Carlo methods, and the statistics of various stress and strain fields during the transition are obtained. Further, the effects of the size of the domain and the strain-hardening rate on the stress fields during the heterogeneous elastic-plastic transition are investigated. Finally, a case is made for the role of the heterogeneous elastic-plastic transition in damage nucleation and growth.
The Grid Density Dependence of the Unsteady Pressures of the J-2X Turbines
NASA Technical Reports Server (NTRS)
Schmauch, Preston B.
2011-01-01
The J-2X engine was originally designed for the upper stage of the cancelled Crew Launch Vehicle. Although the Crew Launch Vehicle was cancelled, the J-2X engine, which is currently undergoing hot-fire testing, may be used on future programs. The J-2X engine is a direct descendant of the J-2 engine, which powered the upper stage during the Apollo program. Many changes, including a thrust increase from 230K to 294K lbf, have been implemented in this engine. As part of the design requirements, the turbine blades must meet minimum high cycle fatigue factors of safety for the various vibrational modes that have resonant frequencies in the engine's operating range. The unsteady blade loading is calculated directly from CFD simulations. A grid density study was performed to understand the sensitivity of the spatial loading and the magnitude of the on-blade loading to changes in grid density. Given that the unsteady blade loading has a first-order effect on the high cycle fatigue factors of safety, it is important to understand the level of convergence when applying the unsteady loads. The convergence of the unsteady pressures at several grid densities is presented for various frequencies in the engine's operating range.
Computational Aerodynamic Analysis of the Flow Field about a Hypervelocity Test Sled
2002-03-01
Efficiency of single-particle engines
NASA Astrophysics Data System (ADS)
Proesmans, Karel; Driesen, Cedric; Cleuren, Bart; Van den Broeck, Christian
2015-09-01
We study the efficiency of a single-particle Szilard and Carnot engine. Within a first order correction to the quasistatic limit, the work distribution is found to be Gaussian and the correction factor to average work and efficiency only depends on the piston speed. The stochastic efficiency is studied for both models and the recent findings on efficiency fluctuations are confirmed numerically. Special features are revealed in the zero-temperature limit.
An Overview of Recent Phased Array Measurements at NASA Glenn
NASA Technical Reports Server (NTRS)
Podboy, Gary G.
2008-01-01
A review of measurements made at the NASA Glenn Research Center using an OptiNAV Array 48 phased array system is provided. Data were acquired on a series of round convergent and convergent-divergent nozzles using the Small Hot Jet Acoustic Rig. Tests were conducted over a range of jet operating conditions, including subsonic and supersonic and cold and hot jets. Phased array measurements were also acquired on a Williams International FJ44 engine. These measurements show how the noise generated by the engine is split between the inlet-radiated and exhaust-radiated components. The data also show inlet noise being reflected off of the inflow control device used during the test.
Color engineering in the age of digital convergence
NASA Astrophysics Data System (ADS)
MacDonald, Lindsay W.
1998-09-01
Digital color imaging has developed over the past twenty years from specialized scientific applications into the mainstream of computing. In addition to the phenomenal growth of computer processing power and storage capacity, great advances have been made in the capabilities and cost-effectiveness of color imaging peripherals. The majority of imaging applications, including the graphic arts, video and film have made the transition from analogue to digital production methods. Digital convergence of computing, communications and television now heralds new possibilities for multimedia publishing and mobile lifestyles. Color engineering, the application of color science to the design of imaging products, is an emerging discipline that poses exciting challenges to the international color imaging community for training, research and standards.
ERIC Educational Resources Information Center
Munoz-Organero, Mario; Ramirez, Gustavo A.; Merino, Pedro Munoz; Kloos, Carlos Delgado
2010-01-01
The use of swarm intelligence techniques in e-learning scenarios provides a way to combine simple interactions of individual students to solve a more complex problem. After getting some data from the interactions of the first students with a central system, the use of these techniques converges to a solution that the rest of the students can…
NASA Astrophysics Data System (ADS)
Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei
2018-03-01
The artificial ground freezing (AGF) method is widely used in civil and mining engineering, and the thermal regime of the frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to the heterogeneity of the soil properties, which leads to randomness in the thermal regime of the frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Modeling the uncertain thermal parameters of frozen soil as random variables, stochastic processes, and random fields, the corresponding stochastic thermal regimes of the frozen soil around a single freezing pipe are obtained and analyzed. By taking the variability of each stochastic parameter into account individually, the influence of each stochastic thermal parameter on the stochastic thermal regime is investigated. The results show that the mean temperatures of the frozen soil around the single freezing pipe are the same for the three analogy methods, while the standard deviations differ. The distributions of the standard deviation differ greatly across radial coordinate locations, and the larger standard deviations occur mainly in the phase change area. The data computed with the random variable and stochastic process methods differ greatly from the measured data, while the data computed with the random field method agree well with the measured data. Each uncertain thermal parameter has a different effect on the standard deviation of the frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.
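As an illustration of the Monte Carlo treatment of random thermal parameters, the sketch below samples a random layer conductivity (a simple random-variable model, not the paper's stochastic-process or random-field models, and with no phase change) and collects temperature statistics for steady 1D conduction; all parameter values are assumed:

```python
import numpy as np

def steady_temperature(k_layers, T_left=-10.0, T_right=0.0):
    """Steady 1D conduction through unit-thickness layers in series: the
    heat flux is uniform, so temperature drops scale with thermal resistance."""
    resistance = 1.0 / k_layers
    q = (T_left - T_right) / resistance.sum()
    drops = q * resistance
    return T_left - np.cumsum(drops)      # temperature at layer interfaces

rng = np.random.default_rng(42)
n_mc, n_layers = 2000, 10
samples = np.empty((n_mc, n_layers))
for i in range(n_mc):
    k = rng.lognormal(mean=0.0, sigma=0.3, size=n_layers)  # random conductivity
    samples[i] = steady_temperature(k)

mean_T = samples.mean(axis=0)             # mean thermal regime
std_T = samples.std(axis=0)               # its standard deviation
```

The mean profile stays between the boundary temperatures, while the standard deviation is zero at the fixed boundary and positive in the interior, mirroring the paper's observation that parameter randomness concentrates uncertainty away from the boundaries.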
NASA Astrophysics Data System (ADS)
Hu, D. L.; Liu, X. B.
Both periodic loading and random forces commonly co-exist in real engineering applications. However, the dynamic behavior, especially the dynamic stability, of systems under combined parametric periodic and random excitations has received little attention in the literature. In this study, the moment Lyapunov exponent and stochastic stability of a binary airfoil under combined harmonic and non-Gaussian colored noise excitations are investigated. The noise is simplified to an Ornstein-Uhlenbeck process by applying the path-integral method. Via the singular perturbation method, second-order expansions of the moment Lyapunov exponent are obtained, which agree well with results obtained by Monte Carlo simulation. Finally, the effects of the noise and of parametric resonance (such as subharmonic resonance and combination additive resonance) on the stochastic stability of the binary airfoil system are discussed.
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, these methods depend on initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of the path integral used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require techniques based on experience. We applied the new optimization method to the design of a hang glider, optimizing not only the hang glider design but also its flight trajectory. The numerical results showed that the method has sufficient performance.
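The core idea, obtaining the solution as a stochastic average of candidates rather than following a single deterministic trajectory, can be sketched as a Boltzmann-weighted sampling iteration. This is an illustration of the principle, not the authors' path-integral algorithm, and the quadratic test function is assumed; note that two very different initial guesses reach the same answer, which is the abstract's initial-condition-independence claim:

```python
import numpy as np

def stochastic_average_minimize(f, x0, sigma=2.0, temperature=1.0,
                                n_samples=500, n_iters=40, seed=0):
    """Minimize f by repeatedly replacing the current estimate with the
    Boltzmann-weighted average (expected value) of random candidates."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    for _ in range(n_iters):
        candidates = x + sigma * rng.standard_normal(n_samples)
        values = f(candidates)
        weights = np.exp(-(values - values.min()) / temperature)
        x = float(np.average(candidates, weights=weights))  # stochastic average
        sigma *= 0.9                                        # anneal the spread
    return x

f = lambda x: (x - 3.0) ** 2      # assumed test function, minimum at x = 3
x_a = stochastic_average_minimize(f, x0=-8.0)
x_b = stochastic_average_minimize(f, x0=10.0)
```

Both runs settle near x = 3 despite starting on opposite sides of the minimum, illustrating why an expectation-based update is less sensitive to the initial guess than a deterministic descent trajectory.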
Investigation of the stochastic nature of temperature and humidity for energy management
NASA Astrophysics Data System (ADS)
Hadjimitsis, Evanthis; Demetriou, Evangelos; Sakellari, Katerina; Tyralis, Hristos; Iliopoulou, Theano; Koutsoyiannis, Demetris
2017-04-01
Atmospheric temperature and dew point, in addition to their role in atmospheric processes, influence the management of energy systems since they strongly affect energy demand and production. Both temperature and humidity depend on climate conditions and geographical location. In this context, we analyze numerous observations from around the globe and investigate the long-term behaviour and periodicities of the temperature and humidity processes. We also present a parsimonious stochastic double-cyclostationary model for these processes, apply it to an island in the Aegean Sea, and investigate its link to energy management. Acknowledgement: This research was conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
Convergence of methods for coupling of microscopic and mesoscopic reaction-diffusion simulations
NASA Astrophysics Data System (ADS)
Flegg, Mark B.; Hellander, Stefan; Erban, Radek
2015-05-01
In this paper, three multiscale methods for coupling mesoscopic (compartment-based) and microscopic (molecular-based) stochastic reaction-diffusion simulations are investigated. Two of the three methods that are discussed in detail have been previously reported in the literature: the two-regime method (TRM) and the compartment-placement method (CPM). The third method, introduced and analysed in this paper, is called the ghost cell method (GCM), since it works by constructing a "ghost cell" in which molecules can disappear and jump into the compartment-based simulation. A comparison of the sources of error is presented. The convergence properties of this error are studied as the time step Δt (for updating the molecular-based part of the model) approaches zero. It is found that the error behaviour depends on another fundamental computational parameter h, the compartment size in the mesoscopic part of the model. Two important limiting cases, which appear in applications, are considered: (i) Δt → 0 with h fixed; and (ii) Δt → 0 and h → 0 such that √{Δt}/h is fixed. The error of the previously developed approaches (the TRM and CPM) converges to zero only in limiting case (ii), but not in case (i). It is shown that the error of the GCM converges in limiting case (i). Thus the GCM is superior to previous coupling techniques if the mesoscopic description is much coarser than the microscopic part of the model.
Convergence analysis of surrogate-based methods for Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Yan, Liang; Zhang, Yuan-Xiang
2017-12-01
The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least twice as fast in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples of surrogate-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate Bayesian inference. Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
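A minimal sketch of the surrogate idea: fit a least-squares polynomial approximation of the forward map over the prior support (offline), then form the posterior with the surrogate instead of the exact model (online). The scalar forward model, noise level, and polynomial degree below are assumptions for illustration; the paper's Christoffel weighting and PDE forward models are omitted:

```python
import numpy as np

# Assumed scalar forward model G(theta) and a noisy observation of it.
G = lambda th: np.sin(np.pi * th) + th ** 2
y_obs, noise_sd = 1.2, 0.3
prior = lambda th: np.where(np.abs(th) <= 1.0, 0.5, 0.0)   # U(-1, 1)

# Offline phase: fit a degree-12 polynomial surrogate from forward solves.
train = np.linspace(-1.0, 1.0, 50)
coeffs = np.polynomial.polynomial.polyfit(train, G(train), deg=12)
G_hat = lambda th: np.polynomial.polynomial.polyval(th, coeffs)

def unnormalized_posterior(fwd, th):
    like = np.exp(-0.5 * ((y_obs - fwd(th)) / noise_sd) ** 2)
    return like * prior(th)

# Online phase: evaluate both posteriors on a grid and compare densities.
grid = np.linspace(-1.0, 1.0, 2001)
dx = grid[1] - grid[0]
p_true = unnormalized_posterior(G, grid)
p_surr = unnormalized_posterior(G_hat, grid)
p_true /= p_true.sum() * dx
p_surr /= p_surr.sum() * dx
max_gap = np.max(np.abs(p_true - p_surr))
```

Because the smooth forward map is approximated to high accuracy, the surrogate posterior is visually indistinguishable from the true one, which is the qualitative content of the paper's "posterior converges at least twice as fast" bound.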
ERIC Educational Resources Information Center
McCrank, Lawrence J.
1992-01-01
Discusses trends in the fields of knowledge engineering and historical sciences to speculate about possibilities of converging interests and applications. Topics addressed include artificial intelligence and expert systems; the history of information science; history as a related field; historians as information scientists; multidisciplinary…
ConvAn: a convergence analyzing tool for optimization of biochemical networks.
Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils
2012-01-01
Dynamic models of biochemical networks are usually described as systems of nonlinear differential equations. When optimizing models for parameter estimation or for the design of new properties, mainly numerical methods are used. This makes the optimization hard to predict, as most numerical optimization methods have stochastic properties and the convergence of the objective function to the global optimum is hardly predictable. Determining a suitable optimization method and the necessary optimization duration becomes critical when evaluating a large number of combinations of adjustable parameters or when working with large dynamic models. This task is complex due to the variety of optimization methods and software tools and the nonlinearity of models in different parameter spaces. The software tool ConvAn analyzes the statistical properties of the convergence dynamics of optimization runs for a particular optimization method, model, software tool, set of optimization method parameters, and number of adjustable model parameters. Convergence curves can be normalized automatically to enable comparison of different methods and models on the same scale. With ConvAn's biochemistry-adapted graphical user interface, different optimization methods can be compared in terms of their ability to find the global optimum, or values close to it, and the computational time necessary to reach it. The optimization performance can be estimated for different numbers of adjustable parameters, and the necessary optimization time can be assessed statistically as a function of the required optimization accuracy. Optimization methods that are unsuitable for a particular optimization task can be rejected if they have poor repeatability or convergence properties. ConvAn is freely available at www.biosystems.lv/convan.
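The kind of convergence-curve analysis described can be sketched as follows: record best-so-far objective curves over repeated stochastic runs, normalize them to a common scale, and compute their mean and spread. The toy random-search optimizer and sphere objective below stand in for a real method and model; this is not ConvAn itself:

```python
import numpy as np

def best_so_far(values):
    """Monotone convergence curve: best objective value found so far."""
    return np.minimum.accumulate(values)

def normalize(curve):
    """Rescale a minimization curve to [0, 1] (1 = start, 0 = best found)."""
    lo, hi = curve.min(), curve.max()
    return (curve - lo) / (hi - lo)

# Thirty repeated runs of a toy stochastic optimizer (pure random search
# on a 3-parameter sphere function), 200 evaluations per run.
rng = np.random.default_rng(1)
f = lambda x: np.sum(x ** 2, axis=-1)
runs = np.array([best_so_far(f(rng.standard_normal((200, 3))))
                 for _ in range(30)])

curves = np.array([normalize(r) for r in runs])
mean_curve = curves.mean(axis=0)      # average convergence behaviour
spread = curves.std(axis=0)           # repeatability across runs
```

A method with a steep mean curve and small spread converges quickly and repeatably; a flat or widely spread ensemble is exactly the "poor repeatability or convergence" signature the abstract says should disqualify a method.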
Fixing convergence of Gaussian belief propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jason K; Bickson, Danny; Dolev, Danny
Gaussian belief propagation (GaBP) is an iterative message-passing algorithm for inference in Gaussian graphical models. It is known that when GaBP converges, it converges to the correct MAP estimate of the Gaussian random vector, and simple sufficient conditions for its convergence have been established. In this paper we develop a double-loop algorithm for forcing convergence of GaBP. Our method computes the correct MAP estimate even in cases where standard GaBP would not have converged. We further extend this construction to compute least-squares solutions of over-constrained linear systems. We believe that our construction has numerous applications, since the GaBP algorithm is linked to the solution of linear systems of equations, which is a fundamental problem in computer science and engineering. As a case study, we discuss the linear detection problem. We show that using our new construction, we are able to force convergence of Montanari's linear detection algorithm in cases where it would originally fail. As a consequence, we are able to significantly increase the number of users that can transmit concurrently.
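The standard synchronous GaBP updates (following the usual formulation; the paper's double-loop convergence fix is not reproduced here) can be sketched on a small tree-structured system, where GaBP is exact and the converged marginal means recover the solution of Ax = b:

```python
import numpy as np

def gabp_solve(A, b, iters=50):
    """Gaussian belief propagation for A x = b (A symmetric).
    Messages carry a precision P[i, j] and mean mu[i, j] from node i to j."""
    n = len(b)
    P = np.zeros((n, n))
    mu = np.zeros((n, n))
    nbrs = [[j for j in range(n) if j != i and A[i, j] != 0] for i in range(n)]
    for _ in range(iters):
        P_new, mu_new = P.copy(), mu.copy()
        for i in range(n):
            for j in nbrs[i]:
                # Aggregate all incoming messages except the one from j.
                p = A[i, i] + sum(P[k, i] for k in nbrs[i] if k != j)
                m = (b[i] + sum(P[k, i] * mu[k, i] for k in nbrs[i] if k != j)) / p
                P_new[i, j] = -A[i, j] ** 2 / p
                mu_new[i, j] = p * m / A[i, j]
        P, mu = P_new, mu_new
    # Marginal means give the solution of A x = b.
    return np.array([(b[i] + sum(P[k, i] * mu[k, i] for k in nbrs[i]))
                     / (A[i, i] + sum(P[k, i] for k in nbrs[i]))
                     for i in range(n)])

A = np.array([[3.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 3.0]])
b = np.array([1.0, 1.0, 1.0])
x = gabp_solve(A, b)      # chain graph: GaBP is exact here
```

On loopy, poorly conditioned graphs these same updates can diverge, which is the failure mode the paper's double-loop construction is designed to repair.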
NASA Astrophysics Data System (ADS)
Bukoski, Alex; Steyn-Ross, D. A.; Pickett, Ashley F.; Steyn-Ross, Moira L.
2018-06-01
The dynamics of a stochastic type-I Hodgkin-Huxley-like point neuron model exposed to inhibitory synaptic noise are investigated as a function of distance from spiking threshold and the inhibitory influence of the general anesthetic agent propofol. The model is biologically motivated and includes the effects of intrinsic ion-channel noise via a stochastic differential equation description as well as inhibitory synaptic noise modeled as multiple Poisson-distributed impulse trains with saturating response functions. The effect of propofol on these synapses is incorporated through this drug's principal influence on fast inhibitory neurotransmission mediated by γ -aminobutyric acid (GABA) type-A receptors via reduction of the synaptic response decay rate. As the neuron model approaches spiking threshold from below, we track membrane voltage fluctuation statistics of numerically simulated stochastic trajectories. We find that for a given distance from spiking threshold, increasing the magnitude of anesthetic-induced inhibition is associated with augmented signatures of critical slowing: fluctuation amplitudes and correlation times grow as spectral power is increasingly focused at 0 Hz. Furthermore, as a function of distance from threshold, anesthesia significantly modifies the power-law exponents for variance and correlation time divergences observable in stochastic trajectories. Compared to the inverse square root power-law scaling of these quantities anticipated for the saddle-node bifurcation of type-I neurons in the absence of anesthesia, increasing anesthetic-induced inhibition results in an observable exponent <-0.5 for variance and >-0.5 for correlation time divergences. However, these behaviors eventually break down as distance from threshold goes to zero with both the variance and correlation time converging to common values independent of anesthesia. 
Compared to the case of no synaptic input, linearization of an approximating multivariate Ornstein-Uhlenbeck model reveals these effects to be the consequence of an additional slow eigenvalue associated with synaptic activity that competes with those of the underlying point neuron in a manner that depends on distance from spiking threshold.
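The critical-slowing signatures described (growing fluctuation variance and correlation time near threshold) already appear in the linearized Ornstein-Uhlenbeck picture underlying the authors' approximating model: for dx = -λx dt + σ dW, the stationary variance σ²/(2λ) and correlation time 1/λ both diverge as the distance-from-threshold parameter λ → 0. A minimal simulation, with assumed parameter values unrelated to the paper's neuron model, confirms the variance growth:

```python
import numpy as np

def ou_stationary_variance(lam, sigma, dt=1e-3, n_steps=400_000, seed=0):
    """Euler-Maruyama simulation of dx = -lam*x dt + sigma dW; returns the
    sample variance of the (approximately) stationary trajectory."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = 0.0
    noise = rng.standard_normal(n_steps - 1) * np.sqrt(dt)
    for i in range(n_steps - 1):
        x[i + 1] = x[i] - lam * x[i] * dt + sigma * noise[i]
    burn = n_steps // 5                     # discard the transient
    return x[burn:].var()

# Theory: Var = sigma^2 / (2*lam), correlation time = 1/lam.
v_far = ou_stationary_variance(lam=4.0, sigma=1.0)    # far from threshold
v_near = ou_stationary_variance(lam=0.5, sigma=1.0)   # near threshold
```

Shrinking λ by a factor of 8 raises the variance by roughly the same factor, the inverse scaling whose exponent the paper shows is modified by anesthetic-induced inhibition.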
NASA Astrophysics Data System (ADS)
Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah
2018-04-01
This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation, using the Dykstra-Parsons coefficient (VDP) and autocorrelation lengths to generate 2D stochastic permeability fields, from which porosity fields are generated through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation is compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). Many studies have reported that the upstream mobility weighting schemes commonly used in conventional numerical reservoir simulators do not accurately capture immiscible displacement shocks and discontinuities in stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory (WENO) scheme, and the monotone upstream-centered scheme for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order scheme results match well with the Buckley-Leverett (BL) analytical solution, without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique, and the iterative implicit pressure, explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative numerical study of flow transport through the proposed, TBM, and USRM permeability fields revealed detailed subsurface instabilities and the corresponding ultimate recovery factors. The impact of autocorrelation lengths on immiscible fluid flow transport was also analyzed and quantified.
The finite number of lines used in the TBM resulted in a visual banding artifact, unlike the proposed method and the USRM. In all, the proposed permeability and porosity field generation, coupled with the numerical simulator developed, will aid in designing efficient mobility control schemes to improve poor volumetric sweep efficiency in porous media.
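The role of a flux limiter like SUPERBEE can be illustrated on linear scalar advection rather than the full Buckley-Leverett problem (grid size, Courant number, and profile below are assumed): the limited MUSCL-type scheme transports a sharp front without generating the new extrema that an unlimited second-order scheme would produce, while conserving mass exactly:

```python
import numpy as np

def superbee(r):
    """SUPERBEE flux limiter."""
    return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                      np.minimum(r, 2.0)))

def advect(u, courant, n_steps):
    """Limited-slope update for u_t + a u_x = 0 (a > 0, periodic grid)."""
    for _ in range(n_steps):
        du = np.roll(u, -1) - u                       # u[i+1] - u[i]
        dl = u - np.roll(u, 1)                        # u[i] - u[i-1]
        r = dl / np.where(np.abs(du) > 1e-14, du, 1e-14)
        face = u + 0.5 * superbee(r) * du             # limited face value
        u = u - courant * (face - np.roll(face, 1))   # conservative update
    return u

u0 = np.where(np.arange(200) < 100, 1.0, 0.0)         # sharp saturation front
u1 = advect(u0.copy(), courant=0.5, n_steps=100)
```

With the limiter active the solution stays within the initial bounds (no overshoot at the shock) and the discrete mass is unchanged; this is the discontinuity-capturing behaviour the abstract attributes to the high-resolution schemes.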
Space-Time Discrete KPZ Equation
NASA Astrophysics Data System (ADS)
Cannizzaro, G.; Matetski, K.
2018-03-01
We study a general family of space-time discretizations of the KPZ equation and show that they converge to its solution. The approach we follow makes use of basic elements of the theory of regularity structures (Hairer in Invent Math 198(2):269-504, 2014) as well as its discrete counterpart (Hairer and Matetski in Discretizations of rough stochastic PDEs, 2015. arXiv:1511.06937). Since the discretization is in both space and time and we allow non-standard discretization for the product, the methods mentioned above have to be suitably modified in order to accommodate the structure of the models under study.
Approximate dynamic programming for optimal stationary control with control-dependent noise.
Jiang, Yu; Jiang, Zhong-Ping
2011-12-01
This brief studies the stochastic optimal control problem via reinforcement learning and approximate/adaptive dynamic programming (ADP). A policy iteration algorithm is derived in the presence of both additive and multiplicative noise using Itô calculus. The expectation of the approximated cost matrix is guaranteed to converge to the solution of some algebraic Riccati equation that gives rise to the optimal cost value. Moreover, the covariance of the approximated cost matrix can be reduced by increasing the length of time interval between two consecutive iterations. Finally, a numerical example is given to illustrate the efficiency of the proposed ADP methodology.
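A deterministic sketch of the policy-iteration core (Kleinman's algorithm for continuous-time LQR; the brief's additive and multiplicative noise terms and its data-driven ADP estimation are omitted): policy evaluation solves a Lyapunov equation, policy improvement updates the gain, and the iterates converge to the algebraic Riccati equation solution. The system matrices below are an assumed double-integrator example:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

def policy_iteration_lqr(A, B, Q, R, K0, n_iters=12):
    """Kleinman's policy iteration for the continuous-time LQR problem."""
    K = K0
    for _ in range(n_iters):
        Ac = A - B @ K
        # Policy evaluation: solve Ac' P + P Ac + Q + K' R K = 0.
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)       # policy improvement
    return K, P

A = np.array([[0.0, 1.0], [0.0, 0.0]])        # double integrator (assumed)
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
K0 = np.array([[1.0, 1.0]])                   # any stabilizing initial gain

K, P = policy_iteration_lqr(A, B, Q, R, K0)
P_are = solve_continuous_are(A, B, Q, R)      # ground truth from the ARE
```

The iterated cost matrix P matches the ARE solution to machine precision after a few iterations; the ADP contribution of the brief is to reach the same fixed point from data, without knowing A, and in the presence of control-dependent noise.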
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehrez, Loujaine; Ghanem, Roger; McAuliffe, Colin
A multiscale framework to construct stochastic macroscopic constitutive material models is proposed. A spectral projection approach, specifically polynomial chaos expansion, is used to construct explicit functional relationships between the homogenized properties and input parameters from finer scales. A homogenization engine embedded in Multiscale Designer, a software tool for composite materials, is used for the upscaling process. The framework is demonstrated on non-crimp fabric composite materials by constructing probabilistic models of the homogenized properties of a non-crimp fabric laminate in terms of the input parameters together with the homogenized properties from finer scales.
Performance study of LMS based adaptive algorithms for unknown system identification
NASA Astrophysics Data System (ADS)
Javed, Shazia; Ahmad, Noor Atinah
2014-07-01
Adaptive filtering techniques have gained much popularity in the modeling of unknown system identification problems. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS), and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved versions of LMS on their robustness and misalignment.
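The NLMS variant studied can be sketched in the ASI setup the abstract describes: a white input drives an unknown FIR system, the measured output is contaminated by noise, and the adaptive weights converge to the true impulse response. The filter taps, step size, and noise level below are assumed for illustration:

```python
import numpy as np

def nlms_identify(x, d, n_taps, mu=0.5, eps=1e-8):
    """Normalized LMS: adapt FIR weights w so that w' u(n) tracks d(n)."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]    # most recent samples first
        e = d[n] - w @ u                     # a-priori error
        w += mu * e * u / (u @ u + eps)      # input-power-normalized update
    return w

rng = np.random.default_rng(7)
h_true = np.array([0.8, -0.4, 0.2, 0.1])     # unknown system (assumed FIR)
x = rng.standard_normal(5000)                # white input signal
d = np.convolve(x, h_true)[:len(x)]          # system output...
d += 0.01 * rng.standard_normal(len(x))      # ...contaminated by output noise

w = nlms_identify(x, d, n_taps=4)            # w converges toward h_true
```

Normalizing the update by the input power is what gives NLMS its faster, input-level-independent convergence relative to plain LMS, which is the trade-off against robustness and misalignment that the study examines.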
NASA Astrophysics Data System (ADS)
Arndt, S.; Merkel, P.; Monticello, D. A.; Reiman, A. H.
1999-04-01
Fixed- and free-boundary equilibria for Wendelstein 7-X (W7-X) [W. Lotz et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (Proc. 13th Int. Conf. Washington, DC, 1990), (International Atomic Energy Agency, Vienna, 1991), Vol. 2, p. 603] configurations are calculated using the Princeton Iterative Equilibrium Solver (PIES) [A. H. Reiman et al., Comput. Phys. Commun., 43, 157 (1986)] to deal with magnetic islands and stochastic regions. Usually, these W7-X configurations require a large number of iterations for PIES convergence. Here, two methods have been successfully tested in an attempt to decrease the number of iterations needed for convergence. First, periodic sequences of different blending parameters are used. Second, the initial guess is vastly improved by using results of the Variational Moments Equilibrium Code (VMEC) [S. P. Hirshman et al., Phys. Fluids 26, 3553 (1983)]. Use of these two methods has allowed verification of the Hamada condition, and a tendency toward "self-healing" of islands has been observed.
Stochastic Processes in Physics: Deterministic Origins and Control
NASA Astrophysics Data System (ADS)
Demers, Jeffery
Stochastic processes are ubiquitous in the physical sciences and engineering. While often used to model imperfections and experimental uncertainties in the macroscopic world, stochastic processes can attain deeper physical significance when used to model the seemingly random and chaotic nature of the underlying microscopic world. Nowhere is this notion more prevalent than in the field of stochastic thermodynamics - a modern systematic framework used to describe mesoscale systems in strongly fluctuating thermal environments that has revolutionized our understanding of, for example, molecular motors, DNA replication, far-from-equilibrium systems, and the laws of macroscopic thermodynamics as they apply to the mesoscopic world. With progress, however, come further challenges and deeper questions, most notably in the thermodynamics of information processing and feedback control. Here it is becoming increasingly apparent that, due to divergences and subtleties of interpretation, the deterministic foundations of the stochastic processes themselves must be explored and understood. This thesis presents a survey of stochastic processes in physical systems, the deterministic origins of their emergence, and the subtleties associated with controlling them. First, we study time-dependent billiards in the quivering limit - a limit where a billiard system is indistinguishable from a stochastic system, and where the simplified stochastic system allows us to view issues associated with deterministic time-dependent billiards in a new light and address some long-standing problems. Then, we embark on an exploration of the deterministic microscopic Hamiltonian foundations of non-equilibrium thermodynamics, and we find that important results from mesoscopic stochastic thermodynamics have simple microscopic origins which would not be apparent without the benefit of both the micro and meso perspectives.
Finally, we study the problem of stabilizing a stochastic Brownian particle with feedback control, and we find that in order to avoid paradoxes involving the first law of thermodynamics, we need a model for the fine details of the thermal driving noise. The underlying theme of this thesis is the argument that the deterministic microscopic perspective and stochastic mesoscopic perspective are both important and useful, and when used together, we can more deeply and satisfyingly understand the physics occurring over either scale.
Algebraic methods in system theory
NASA Technical Reports Server (NTRS)
Brockett, R. W.; Willems, J. C.; Willsky, A. S.
1975-01-01
Investigations on problems of the type which arise in the control of switched electrical networks are reported. The main results concern the algebraic structure and stochastic aspects of these systems. Future reports will contain more detailed applications of these results to engineering studies.
Distributional Monte Carlo Methods for the Boltzmann Equation
2013-03-01
Air Force Institute of Technology dissertation (AFIT-ENC-DS-13-M-06), presented to the Faculty of the Graduate School of Engineering and Management, Air University. Abstract (fragments): Stochastic particle methods (SPMs) for the … applied to the well-studied Bobylev-Krook-Wu solution as a numerical test case. Accuracy and variance of the solutions are examined as functions of various…
Robust Path Planning and Feedback Design Under Stochastic Uncertainty
NASA Technical Reports Server (NTRS)
Blackmore, Lars
2008-01-01
Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.
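The chance-constraint idea used above — requiring constraint violation with probability below a prescribed value — can be sketched in one dimension: for a Gaussian disturbance, the chance constraint is equivalent to a deterministically tightened constraint. All numbers here are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Constraint: position x plus disturbance w must stay below boundary b,
# with P(violation) <= eps.  For w ~ N(0, sigma^2) this is equivalent to
# the deterministic tightened constraint  x <= b - sigma * Phi^{-1}(1 - eps).
b, sigma, eps = 5.0, 0.8, 0.05
margin = sigma * norm.ppf(1.0 - eps)
x = b - margin                       # plan right at the tightened boundary

# Monte Carlo check of the resulting violation probability.
w = sigma * rng.standard_normal(200_000)
p_viol = np.mean(x + w > b)
print(margin, p_viol)                # p_viol should be close to eps
```

This conversion of a probabilistic constraint into a convex deterministic one is what lets robust solvers operate on convex feasible regions.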
Intrusive Method for Uncertainty Quantification in a Multiphase Flow Solver
NASA Astrophysics Data System (ADS)
Turnquist, Brian; Owkes, Mark
2016-11-01
Uncertainty quantification (UQ) is a necessary, interesting, and often neglected aspect of fluid flow simulations. To determine the significance of uncertain initial and boundary conditions, a multiphase flow solver is being created which extends a single phase, intrusive, polynomial chaos scheme into multiphase flows. Reliably estimating the impact of input uncertainty on design criteria can help identify and minimize unwanted variability in critical areas, and has the potential to help advance knowledge in atomizing jets, jet engines, pharmaceuticals, and food processing. Use of an intrusive polynomial chaos method has been shown to significantly reduce computational cost over non-intrusive collocation methods such as Monte-Carlo. This method requires transforming the model equations into a weak form through substitution of stochastic (random) variables. Ultimately, the model deploys a stochastic Navier Stokes equation, a stochastic conservative level set approach including reinitialization, as well as stochastic normals and curvature. By implementing these approaches together in one framework, basic problems may be investigated which shed light on model expansion, uncertainty theory, and fluid flow in general. NSF Grant Number 1511325.
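A minimal sketch of the polynomial chaos machinery mentioned above: the paper's method is intrusive, whereas this example only illustrates non-intrusive Hermite-chaos projection for an assumed test function u(xi) = xi^2, whose exact coefficients are known analytically.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Project u(xi) = xi^2, with xi ~ N(0,1), onto probabilists' Hermite
# polynomials.  Analytically xi^2 = He_0(xi) + He_2(xi), so c = [1, 0, 1, 0].
nodes, weights = He.hermegauss(20)
weights = weights / weights.sum()    # expectation w.r.t. the standard normal

def coeff(f, n):
    # c_n = E[f(xi) He_n(xi)] / E[He_n(xi)^2], with E[He_n^2] = n!
    Hn = He.hermeval(nodes, [0] * n + [1])
    return float(np.sum(weights * f(nodes) * Hn)) / math.factorial(n)

c = [coeff(lambda x: x ** 2, n) for n in range(4)]
print(np.round(c, 6))
```

An intrusive scheme would instead substitute such expansions into the governing equations and Galerkin-project, but the orthogonality relations exercised here are the same.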
NASA Astrophysics Data System (ADS)
Dimitriadis, Panayiotis; Lazaros, Lappas; Daskalou, Olympia; Filippidou, Ariadni; Giannakou, Marianna; Gkova, Eleni; Ioannidis, Romanos; Polydera, Angeliki; Polymerou, Eleni; Psarrou, Eleftheria; Vyrini, Alexandra; Papalexiou, Simon; Koutsoyiannis, Demetris
2015-04-01
Several methods exist for estimating the statistical properties of wind speed, most of them deterministic or probabilistic, disregarding though its long-term behaviour. Here, we focus on the stochastic nature of wind. After analyzing several historical timeseries at the area of interest (AoI) in Thessaly (Greece), we show that a Hurst-Kolmogorov (HK) behaviour is apparent. Disregarding the latter could thus lead to unrealistic predictions and wind load situations, adversely affecting energy production and management. Moreover, we construct a stochastic model capable of preserving the HK behaviour and we produce synthetic timeseries using a Monte-Carlo approach to estimate the future wind loads in the AoI. Finally, we identify the appropriate types of wind turbines for the AoI (based on the IEC 61400 standards) and propose several industrial solutions. Acknowledgement: This research was conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
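A simple way to probe for the HK (long-range persistence) behaviour discussed above is the aggregated-variance estimate of the Hurst coefficient. The sketch below applies it to white noise, where H should come out near 0.5; an HK series would give H > 0.5. This is purely illustrative and uses synthetic data, not the paper's wind records.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(2**16)   # white noise: no long-term persistence

# Aggregated-variance method: the variance of block means over blocks of
# size k scales as k^(2H - 2); the log-log slope estimates H.
scales = [2**j for j in range(1, 9)]
v = [np.var(x[: len(x) // k * k].reshape(-1, k).mean(axis=1)) for k in scales]
slope = np.polyfit(np.log(scales), np.log(v), 1)[0]
H = 1 + slope / 2
print(H)   # close to 0.5 for white noise; HK behaviour would give H > 0.5
```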
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaul, Brian C; Wagner, Robert M; Green Jr, Johney Boyd
2013-01-01
Operation of spark-ignition (SI) engines with high levels of charge dilution through exhaust gas recirculation (EGR) achieves significant engine efficiency gains while maintaining stoichiometric operation for compatibility with three-way catalysts. Dilution levels, however, are limited by cyclic variability-including significant numbers of misfires-that becomes more pronounced with increasing dilution. This variability has been shown to have both stochastic and deterministic components. Stochastic effects include turbulence, mixing variations, and the like, while the deterministic effect is primarily due to the nonlinear dependence of flame propagation rates and ignition characteristics on the charge composition, which is influenced by the composition of residual gases from prior cycles. The presence of determinism implies that an increased understanding of the dynamics of such systems could lead to effective control approaches that allow operation near the edge of stability, effectively extending the dilution limit. This nonlinear dependence has been characterized previously for homogeneous charge, port fuel-injected (PFI) SI engines operating fuel-lean as well as with inert diluents such as bottled N2 gas. In this paper, cyclic dispersion in a modern boosted gasoline direct injection (GDI) engine using a cooled external EGR loop is examined, and the potential for improvement with effective control is evaluated through the use of symbol sequence statistics and other techniques from chaos theory. Observations related to the potential implications of these results for control approaches that could effectively enable engine operation at the edge of combustion stability are noted.
Evaluation of Uncertainty in Runoff Analysis Incorporating Theory of Stochastic Process
NASA Astrophysics Data System (ADS)
Yoshimi, Kazuhiro; Wang, Chao-Wen; Yamada, Tadashi
2015-04-01
The aim of this paper is to provide a theoretical framework for uncertainty estimation in rainfall-runoff analysis based on the theory of stochastic processes. SDEs (stochastic differential equations) based on this theory have been widely used in mathematical finance to predict stock price movements, and some researchers in civil engineering have applied SDEs as well (e.g. Kurino et al., 1999; Higashino and Kanda, 2001). However, there have been no studies that evaluate uncertainty in runoff phenomena through comparisons between SDEs and the Fokker-Planck equation. The Fokker-Planck equation is a partial differential equation that describes the temporal evolution of a PDF (probability density function), and it is mathematically equivalent to the corresponding SDE. In this paper, therefore, the effect of rainfall uncertainty on discharge uncertainty is explained theoretically and mathematically by introducing the theory of stochastic processes. The lumped rainfall-runoff model is represented as an SDE in difference form, because the temporal variation of rainfall is expressed as its average plus a deviation approximated by a Gaussian distribution, based on rainfall observed by rain-gauge stations and a radar rain-gauge system. As a result, this paper shows that it is possible to evaluate the uncertainty of discharge by using the relationship between the SDE and the Fokker-Planck equation. Moreover, the results show that the uncertainty of discharge increases as rainfall intensity rises and as the nonlinearity of the resistance law grows stronger. These results are clarified by the PDFs (probability density functions) of discharge that satisfy the Fokker-Planck equation.
This means that a reasonable discharge estimate can be obtained from the theory of stochastic processes and applied to the probabilistic risk analysis of flood management.
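The SDE/Fokker-Planck correspondence on which the paper relies can be sketched with an Ornstein-Uhlenbeck toy process: the Fokker-Planck equation for dX = -theta*X dt + sigma dW has the Gaussian stationary solution with variance sigma^2/(2*theta), which an Euler-Maruyama simulation of the SDE should reproduce. Parameters are illustrative, not a runoff model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ornstein-Uhlenbeck SDE: dX = -theta*X dt + sigma dW.
# Its Fokker-Planck equation has the stationary density
# N(0, sigma^2 / (2*theta)), which the simulated ensemble should match.
theta, sigma, dt, nstep, npath = 1.0, 0.5, 1e-3, 5000, 2000
x = np.zeros(npath)
for _ in range(nstep):                     # Euler-Maruyama time stepping
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(npath)

var_analytic = sigma**2 / (2 * theta)      # = 0.125
print(x.var(), var_analytic)
```

The same two-way check — simulate the SDE, compare against the PDF that solves the Fokker-Planck equation — is the core of the uncertainty evaluation described in the abstract.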
Salis, Howard; Kaznessis, Yiannis N
2005-12-01
Stochastic chemical kinetics more accurately describes the dynamics of "small" chemical systems, such as biological cells. Many real systems contain dynamical stiffness, which causes the exact stochastic simulation algorithm or other kinetic Monte Carlo methods to spend the majority of their time executing frequently occurring reaction events. Previous methods have successfully applied a type of probabilistic steady-state approximation by deriving an evolution equation, such as the chemical master equation, for the relaxed fast dynamics and using the solution of that equation to determine the slow dynamics. However, because the solution of the chemical master equation is limited to small, carefully selected, or linear reaction networks, an alternate equation-free method would be highly useful. We present a probabilistic steady-state approximation that separates the time scales of an arbitrary reaction network, detects the convergence of a marginal distribution to a quasi-steady-state, directly samples the underlying distribution, and uses those samples to accurately predict the state of the system, including the effects of the slow dynamics, at future times. The numerical method produces an accurate solution of both the fast and slow reaction dynamics while, for stiff systems, reducing the computational time by orders of magnitude. The developed theory makes no approximations on the shape or form of the underlying steady-state distribution and only assumes that it is ergodic. We demonstrate the accuracy and efficiency of the method using multiple interesting examples, including a highly nonlinear protein-protein interaction network. The developed theory may be applied to any type of kinetic Monte Carlo simulation to more efficiently simulate dynamically stiff systems, including existing exact, approximate, or hybrid stochastic simulation techniques.
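For context, this is a minimal Gillespie-style exact SSA — the kind of kinetic Monte Carlo the paper accelerates — for a birth-death network whose stationary distribution is Poisson with mean k1/k2. The rate constants and burn-in time are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Exact SSA for the network:  0 -> X at rate k1;  X -> 0 at rate k2*X.
# Stationary distribution: Poisson with mean k1/k2.
k1, k2 = 50.0, 1.0
x, t, t_end = 0, 0.0, 200.0
samples = []
while t < t_end:
    a1, a2 = k1, k2 * x          # reaction propensities
    a0 = a1 + a2
    t += rng.exponential(1.0 / a0)   # time to the next reaction
    if rng.uniform() < a1 / a0:
        x += 1                       # birth
    else:
        x -= 1                       # death
    if t > 20.0:                     # discard the initial transient
        samples.append(x)

print(np.mean(samples))              # ~ k1/k2 = 50
```

Here every single reaction event is simulated, which is exactly the cost that explodes for stiff systems and that the paper's probabilistic steady-state approximation avoids.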
Expo IGNITES Interest in Manufacturing Careers
ERIC Educational Resources Information Center
Wilhelm, Karen
2009-01-01
On a pleasant September day, 400 high school students and 40 teachers converged on the Careers in Technology, Engineering, and Manufacturing Day at the IGNITE manufacturing industry trade show, held in Grand Rapids, Michigan, and sponsored by the Society of Manufacturing Engineers (SME). These weren't students getting out of school for a day to go…
Sliding Mode Fault Tolerant Control with Adaptive Diagnosis for Aircraft Engines
NASA Astrophysics Data System (ADS)
Xiao, Lingfei; Du, Yanbin; Hu, Jixiang; Jiang, Bin
2018-03-01
In this paper, a novel sliding mode fault tolerant control method is presented for aircraft engine systems with uncertainties and disturbances on the basis of an adaptive diagnostic observer. Taking both sensor faults and actuator faults into account, a general model of aircraft engine control systems subject to uncertainties and disturbances is considered. Then, the corresponding augmented dynamic model is established in order to facilitate the fault diagnosis and fault tolerant controller design. Next, a suitable detection observer is designed to detect the faults effectively. By creating an adaptive diagnostic observer and applying a sliding mode strategy, the sliding mode fault tolerant controller is constructed. Robust stabilization is discussed and the closed-loop system is shown to be robustly stabilizable. It is also proven that the adaptive diagnostic observer output errors and the estimates of the faults converge exponentially to a set, at a convergence rate greater than some value that can be adjusted by choosing the design parameters properly. A simulation on a twin-shaft aircraft engine verifies the applicability of the proposed fault tolerant control method.
Numerical investigation of two- and three-dimensional heat transfer in expander cycle engines
NASA Technical Reports Server (NTRS)
Burch, Robert L.; Cheung, Fan-Bill
1993-01-01
The concept of using tube canting for enhancing the hot-side convective heat transfer in a cross-stream tubular rocket combustion chamber is evaluated using a CFD technique in this study. The heat transfer at the combustor wall is determined from the flow field generated by a modified version of the PARC Navier-Stokes Code, using the actual dimensions, fluid properties, and design parameters of a split-expander demonstrator cycle engine. The effects of artificial dissipation on convergence and solution accuracy are investigated. Heat transfer results predicted by the code are presented. The use of CFD in heat transfer calculations is critically examined to demonstrate the care needed in the use of artificial dissipation for good convergence and accurate solutions.
Gender in Science and Engineering Faculties: Demographic Inertia Revisited.
Thomas, Nicole R; Poole, Daniel J; Herbers, Joan M
2015-01-01
The under-representation of women on faculties of science and engineering is ascribed in part to demographic inertia, which is the lag between retirement of current faculty and future hires. The assumption of demographic inertia implies that, given enough time, gender parity will be achieved. We examine that assumption via a semi-Markov model to predict the future faculty, with simulations that predict the convergent demographic state. Our model shows that existing practices that produce gender gaps in recruitment, retention, and career progression preclude eventual gender parity. Further, we examine the sensitivity of the convergent state to current gender gaps to show that all sources of disparity across the entire faculty career must be erased to produce parity: we cannot blame demographic inertia.
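The core finding — that gaps in retention preclude parity even with parity in hiring — can be caricatured with a much simpler cohort model than the paper's semi-Markov model. All rates below are hypothetical, chosen only to make the mechanism visible.

```python
# Minimal illustrative cohort model (not the paper's semi-Markov model):
# each year 100 new hires join (fraction f_w women); each person stays
# another year with a gender-specific retention probability.
f_w, r_w, r_m = 0.5, 0.90, 0.95
women = men = 0.0
for _ in range(300):                 # iterate to the steady state
    women = women * r_w + 100 * f_w
    men = men * r_m + 100 * (1 - f_w)

share = women / (women + men)
print(share)   # below 0.5: parity in hiring alone does not yield parity
```

With these rates the steady state is women = 500, men = 1000, so the convergent share is 1/3 despite a 50/50 hiring split — a toy version of the paper's conclusion that retention gaps must also be closed.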
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Fotia, Matthew L.; Hoke, John; Schauer, Fred
2015-01-01
A quasi-two-dimensional, computational fluid dynamic (CFD) simulation of a rotating detonation engine (RDE) is described. The simulation operates in the detonation frame of reference and utilizes a relatively coarse grid such that only the essential primary flow field structure is captured. This construction and other simplifications yield rapidly converging, steady solutions. Viscous effects, and heat transfer effects are modeled using source terms. The effects of potential inlet flow reversals are modeled using boundary conditions. Results from the simulation are compared to measured data from an experimental RDE rig with a converging-diverging nozzle added. The comparison is favorable for the two operating points examined. The utility of the code as a performance optimization tool and a diagnostic tool are discussed.
A Stochastic Model of Eye Lens Growth
Šikić, Hrvoje; Shi, Yanrong; Lubura, Snježana; Bassnett, Steven
2015-01-01
The size and shape of the ocular lens must be controlled with precision if light is to be focused sharply on the retina. The lifelong growth of the lens depends on the production of cells in the anterior epithelium. At the lens equator, epithelial cells differentiate into fiber cells, which are added to the surface of the existing fiber cell mass, increasing its volume and area. We developed a stochastic model relating the rates of cell proliferation and death in various regions of the lens epithelium to deposition of fiber cells and lens growth. Epithelial population dynamics were modeled as a branching process with emigration and immigration between various proliferative zones. Numerical simulations were in agreement with empirical measurements and demonstrated that, operating within the strict confines of lens geometry, a stochastic growth engine can produce the smooth and precise growth necessary for lens function. PMID:25816743
Stochastic sensitivity measure for mistuned high-performance turbines
NASA Technical Reports Server (NTRS)
Murthy, Durbha V.; Pierre, Christophe
1992-01-01
A stochastic measure of sensitivity is developed in order to predict the effects of small random blade mistuning on the dynamic aeroelastic response of turbomachinery blade assemblies. This sensitivity measure is based solely on the nominal system design (i.e., on tuned system information), which makes it extremely easy and inexpensive to calculate. The measure has the potential to become a valuable design tool that will enable designers to evaluate mistuning effects at a preliminary design stage and thus assess the need for a full mistuned rotor analysis. The predictive capability of the sensitivity measure is illustrated by examining the effects of mistuning on the aeroelastic modes of the first stage of the oxidizer turbopump in the Space Shuttle Main Engine. Results from a full analysis of mistuned systems confirm that the simple stochastic sensitivity measure consistently predicts the drastic changes due to mistuning and the localization of aeroelastic vibration to a few blades.
Study on Stationarity of Random Load Spectrum Based on the Special Road
NASA Astrophysics Data System (ADS)
Yan, Huawen; Zhang, Weigong; Wang, Dong
2017-09-01
In special road quality assessment, one method uses a wheel force sensor; the essence of this method is to collect the load spectrum of the vehicle as a reflection of road quality. According to the definition of a stochastic process, it is easy to see that the load spectrum is a stochastic process. However, the analysis methods and ranges of application of different random processes are very different, especially in engineering practice, which directly affects the design and development of the experiment. Therefore, determining the type of a random process has important practical significance. Based on an analysis of the digital characteristics of the road load spectrum, this paper determines that the road load spectrum in this experiment belongs to a stationary stochastic process, paving the way for follow-up modeling and feature extraction of the special road.
Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist
Banerjee, Debjani; Bellesia, Giovanni; Daigle, Bernie J.; Douglas, Geoffrey; Gu, Mengyuan; Gupta, Anand; Hellander, Stefan; Horuk, Chris; Nath, Dibyendu; Takkar, Aviral; Lötstedt, Per; Petzold, Linda R.
2016-01-01
We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity. PMID:27930676
Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist
Drawert, Brian; Hellander, Andreas; Bales, Ben; ...
2016-12-08
We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We also demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.
NASA Astrophysics Data System (ADS)
Thimmisetty, C.; Talbot, C.; Tong, C. H.; Chen, X.
2016-12-01
The representativeness of available data poses a significant fundamental challenge to the quantification of uncertainty in geophysical systems. Furthermore, the successful application of machine learning methods to geophysical problems involving data assimilation is inherently constrained by the extent to which obtainable data represent the problem considered. We show how the adjoint method, coupled with optimization based on methods of machine learning, can facilitate the minimization of an objective function defined on a space of significantly reduced dimension. By considering uncertain parameters as constituting a stochastic process, the Karhunen-Loeve expansion and its nonlinear extensions furnish an optimal basis with respect to which optimization using L-BFGS can be carried out. In particular, we demonstrate that kernel PCA can be coupled with adjoint-based optimal control methods to successfully determine the distribution of material parameter values for problems in the context of channelized deformable media governed by the equations of linear elasticity. Since certain subsets of the original data are characterized by different features, the convergence rate of the method in part depends on, and may be limited by, the observations used to furnish the kernel principal component basis. By determining appropriate weights for realizations of the stochastic random field, one may therefore accelerate the convergence of the method. To this end, we present a formulation of Weighted PCA combined with a gradient-based method using automatic differentiation to iteratively re-weight observations concurrent with the determination of an optimal reduced set of control variables in the feature space. We demonstrate how improvements in the accuracy and computational efficiency of the weighted linear method can be achieved over existing unweighted kernel methods, and discuss nonlinear extensions of the algorithm.
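The Karhunen-Loeve/PCA reduction invoked above can be sketched by eigendecomposing a covariance matrix and checking how much variance the leading modes capture; a handful of modes then serve as the reduced optimization space. The squared-exponential kernel and length scale here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Karhunen-Loeve expansion of a 1-D random field: the eigendecomposition of
# the covariance matrix gives the optimal (variance-ordered) reduced basis.
n, ell = 200, 0.2
t = np.linspace(0, 1, n)
C = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * ell**2))
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]     # sort eigenpairs descending

# Fraction of total variance captured by the leading modes.
energy = np.cumsum(lam) / lam.sum()
print(energy[4], energy[9])   # a few modes capture nearly all the variance

# A field realization from the truncated expansion (10 modes).
xi = rng.standard_normal(10)
field = phi[:, :10] @ (np.sqrt(np.clip(lam[:10], 0, None)) * xi)
```

Optimizing over the coefficients `xi` instead of the full field is what reduces the dimension of the objective in the abstract's adjoint-based workflow.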
NASA Astrophysics Data System (ADS)
Lee, H.; Fridlind, A. M.; Ackerman, A. S.; Kollias, P.
2017-12-01
Cloud radar Doppler spectra provide rich information for evaluating the fidelity of particle size distributions from cloud models. The intrinsic simplifications of bulk microphysics schemes generally preclude the generation of plausible Doppler spectra, unlike bin microphysics schemes, which develop particle size distributions more organically at substantial computational expense. However, bin microphysics schemes face the difficulty of numerical diffusion leading to overly rapid large drop formation, particularly while solving the stochastic collection equation (SCE). Because such numerical diffusion can cause an even greater overestimation of radar reflectivity, an accurate method for solving the SCE is essential for bin microphysics schemes to accurately simulate Doppler spectra. While several methods have been proposed to solve the SCE, here we examine those of Berry and Reinhardt (1974, BR74), Jacobson et al. (1994, J94), and Bott (2000, B00). Using a simple box model to simulate drop size distribution evolution during precipitation formation with a realistic kernel, it is shown that each method yields a converged solution as the resolution of the drop size grid increases. However, the BR74 and B00 methods yield nearly identical size distributions in time, whereas the J94 method produces consistently larger drops throughout the simulation. In contrast to an earlier study, the performance of the B00 method is found to be satisfactory; it converges at relatively low resolution and long time steps, and its computational efficiency is the best among the three methods considered here. Finally, a series of idealized stratocumulus large-eddy simulations are performed using the J94 and B00 methods. The reflectivity size distributions and Doppler spectra obtained from the different SCE solution methods are presented and compared with observations.
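As a sanity check on SCE solvers of the kind compared above, a constant collection kernel admits the closed-form Smoluchowski mean-field solution N(t) = N0/(1 + K*n0*t/2). The sketch below verifies it with a stochastic coalescence simulation; the constant kernel is a simplification for illustration, not the realistic kernel used in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Gillespie-style coalescence with a constant collection kernel K:
# each pair of droplets in volume V merges at rate K/V.  The SCE mean-field
# prediction for the droplet count is N(t) = N0 / (1 + K*n0*t/2).
K, V, N0 = 1.0, 1000.0, 1000
N, t, t_end = N0, 0.0, 1.0
while N > 1:
    rate = K * N * (N - 1) / (2 * V)   # total coalescence rate
    t += rng.exponential(1.0 / rate)
    if t > t_end:
        break
    N -= 1                             # one coalescence event

n0 = N0 / V
print(N, N0 / (1 + K * n0 * t_end / 2))   # simulated vs analytic count
```

Bin schemes such as BR74, J94 and B00 solve the same equation on a mass grid; checking them against a kernel with a known solution is a standard convergence test.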
Recent progresses in gene delivery-based bone tissue engineering.
Lu, Chia-Hsin; Chang, Yu-Han; Lin, Shih-Yeh; Li, Kuei-Chang; Hu, Yu-Chen
2013-12-01
Gene therapy has converged with bone engineering over the past decade, by which a variety of therapeutic genes have been delivered to stimulate bone repair. These genes can be administered via in vivo or ex vivo approach using either viral or nonviral vectors. This article reviews the fundamental aspects and recent progresses in the gene therapy-based bone engineering, with emphasis on the new genes, viral vectors and gene delivery approaches.
2011-02-25
…fast method of predicting the number of iterations needed for converged results. A new hybrid technique is proposed to predict the convergence history… interchanging between the modes, whereas a smaller veering (or crossing) region shows fast mode switching. Then, the nonlinear vibration response of the… problems of interest involve dynamic (fast) crack propagation, then the nodes selected by the proposed approach at some time instant might not…
Performance of a supercharged direct-injection stratified-charge rotary combustion engine
NASA Technical Reports Server (NTRS)
Bartrand, Timothy A.; Willis, Edward A.
1990-01-01
A zero-dimensional thermodynamic performance computer model for direct-injection stratified-charge rotary combustion engines was modified and run for a single rotor supercharged engine. Operating conditions for the computer runs were a single boost pressure and a matrix of speeds, loads and engine materials. A representative engine map is presented showing the predicted range of efficient operation. After discussion of the engine map, a number of engine features are analyzed individually. These features are: heat transfer and the influence insulating materials have on engine performance and exhaust energy; intake manifold pressure oscillations and interactions with the combustion chamber; and performance losses and seal friction. Finally, code running times and convergence data are presented.
A short course on measure and probability theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pebay, Philippe Pierre
2004-02-01
This brief Introduction to Measure Theory, and its applications to Probabilities, corresponds to the lecture notes of a seminar series given at Sandia National Laboratories in Livermore, during the spring of 2003. The goal of these seminars was to provide a minimal background to Computational Combustion scientists interested in using more advanced stochastic concepts and methods, e.g., in the context of uncertainty quantification. Indeed, most mechanical engineering curricula do not provide students with formal training in the field of probability, and even less in measure theory. However, stochastic methods have been used more and more extensively in the past decade, and have provided more successful computational tools. Scientists at the Combustion Research Facility of Sandia National Laboratories have been using computational stochastic methods for years. Addressing more and more complex applications, and facing difficult problems that arose in applications, showed the need for a better understanding of theoretical foundations. This is why the seminar series was launched, and these notes summarize most of the concepts which have been discussed. The goal of the seminars was to bring a group of mechanical engineers and computational combustion scientists to a full understanding of N. Wiener's polynomial chaos theory. Therefore, these lecture notes are built along those lines, and are not intended to be exhaustive. In particular, the author welcomes any comments or criticisms.
NASA Astrophysics Data System (ADS)
Gen, Mitsuo; Lin, Lin
Many real-world combinatorial optimization problems from industrial engineering and operations research are very complex in nature and quite hard to solve by conventional techniques. Since the 1960s, there has been an increasing interest in imitating living beings to solve such kinds of hard combinatorial optimization problems. Simulating the natural evolutionary process of human beings results in stochastic optimization techniques called evolutionary algorithms (EAs), which can often outperform conventional optimization methods when applied to difficult real-world problems. In this survey paper, we provide a comprehensive survey of the current state-of-the-art in the use of EAs in manufacturing and logistics systems. In order to demonstrate that EAs are powerful and broadly applicable stochastic search and optimization techniques, we deal with the following engineering design problems: transportation planning models, layout design models, and two-stage logistics models in logistics systems; and job-shop scheduling and resource-constrained project scheduling in manufacturing systems.
Material Targets for Scaling All-Spin Logic
NASA Astrophysics Data System (ADS)
Manipatruni, Sasikanth; Nikonov, Dmitri E.; Young, Ian A.
2016-01-01
All-spin-logic devices are promising candidates to augment and complement beyond-CMOS integrated circuit computing due to nonvolatility, ultralow operating voltages, higher logical efficiency, and high-density integration. However, a path to an energy-delay product lower than that of CMOS transistors is currently not clear. We show that scaling and engineering the nanoscale magnetic materials and interfaces is the key to realizing spin-logic devices that can surpass the energy-delay performance of CMOS transistors. With validated stochastic nanomagnetic and vector spin-transport numerical models, we derive the target material and interface properties for the nanomagnets and channels. We identify promising directions for material engineering and discovery, focusing on the systematic scaling of magnetic anisotropy (Hk) and saturation magnetization (Ms), the use of perpendicular magnetic anisotropy, and the interface spin-mixing conductance of the ferromagnet-spin-channel interface (Gmix). We provide systematic targets for scaling the spin-logic energy-delay product toward 2 aJ ns, comprehending the stochastic noise for nanomagnets.
Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-04-01
The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering in networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves "weak consensus," i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying the associated switched (random) Riccati equation as a random dynamical system, the switching being dictated by a non-stationary Markov chain on the network graph.
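A scalar toy version conveys the weak-detectability idea (this is an illustrative sketch with hypothetical parameters, not the paper's general vector setting): sensor 1 has no observation at all, so on its own its error covariance diverges under the unstable dynamics, yet random gossip swaps with the observing sensor 0 keep both covariances stochastically bounded.

```python
import numpy as np

rng = np.random.default_rng(4)

# Unstable scalar dynamics x_{t+1} = a*x_t + w_t with a > 1.
# Sensor 0 observes y_t = x_t + v_t; sensor 1 never observes.
a, q, r = 1.2, 1.0, 1.0

def run(steps, gossip):
    x = 0.0
    est = np.zeros(2)   # per-sensor state estimates
    P = np.ones(2)      # per-sensor error covariances
    P_hist = []
    for _ in range(steps):
        x = a * x + rng.normal(0.0, np.sqrt(q))
        # gossip step: the two sensors swap filter states over a random link
        if gossip and rng.random() < 0.5:
            est, P = est[::-1].copy(), P[::-1].copy()
        # time update at both sensors
        est = a * est
        P = a * a * P + q
        # measurement update at sensor 0 only
        y = x + rng.normal(0.0, np.sqrt(r))
        K = P[0] / (P[0] + r)
        est[0] += K * (y - est[0])
        P[0] *= 1.0 - K
        P_hist.append(P.copy())
    return np.array(P_hist)

with_gossip = run(5000, gossip=True)
without = run(5000, gossip=False)
# with gossip the covariances stay bounded; without it, sensor 1's diverges
print(with_gossip.mean(), without[-1, 1])
```

In the no-gossip run the non-observing sensor's covariance follows P ← a²P + q and overflows to infinity, which is exactly the failure of local detectability that the gossip exchanges repair.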
NASA Astrophysics Data System (ADS)
Massah, Mozhdeh; Kantz, Holger
2016-04-01
As we have one and only one Earth and no replicas, climate characteristics are usually computed as time averages from a single time series. For understanding climate variability, it is essential to understand how close a single time average will typically be to an ensemble average. To answer this question, we study large deviation probabilities (LDP) of stochastic processes and characterize them by their dependence on the time window. In contrast to iid variables, for which there exists an analytical expression for the rate function, correlated variables such as auto-regressive (short memory) and auto-regressive fractionally integrated moving average (long memory) processes do not admit an analytical LDP. We study the LDP for these processes in order to see how correlation affects this probability in comparison to iid data. Although short-range correlations lead to a simple correction of sample size, long-range correlations lead to a sub-exponential decay of the LDP and hence to a very slow convergence of time averages. This effect is demonstrated for a 120-year-long time series of daily temperature anomalies measured in Potsdam (Germany).
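The short-memory effect can be checked numerically in a few lines (a minimal sketch with hypothetical parameters, not the paper's analysis): for the same window length and threshold, time averages of an AR(1) process exceed the threshold far more often than iid averages, reflecting the effective reduction of sample size by correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_average_exceedance(x, n, a):
    """Empirical probability that an n-sample time average exceeds a in magnitude."""
    windows = x[:len(x) // n * n].reshape(-1, n)
    return float(np.mean(np.abs(windows.mean(axis=1)) > a))

N = 200_000
iid = rng.standard_normal(N)                 # iid reference series

phi = 0.8                                    # AR(1): x_t = phi*x_{t-1} + eps_t
eps = rng.standard_normal(N)
ar = np.empty(N)
ar[0] = eps[0]
for t in range(1, N):
    ar[t] = phi * ar[t - 1] + eps[t]

n, a = 50, 0.3
p_iid = time_average_exceedance(iid, n, a)
p_ar = time_average_exceedance(ar, n, a)
print(p_iid, p_ar)   # large deviations are far more likely for correlated data
```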
Pricing of swing options: A Monte Carlo simulation approach
NASA Astrophysics Data System (ADS)
Leow, Kai-Siong
We study the problem of pricing swing options, a class of multiple early exercise options that are traded in energy markets, particularly the electricity and natural gas markets. These contracts permit the option holder to periodically exercise the right to trade a variable amount of energy with a counterparty, subject to local volumetric constraints. In addition, the total amount of energy traded from settlement to expiration with the counterparty is restricted by a global volumetric constraint. Violation of this global volumetric constraint is allowed but leads to a penalty settled at expiration. The pricing problem is formulated as a stochastic optimal control problem in discrete time and state space. We present a stochastic dynamic programming algorithm which is based on a piecewise linear concave approximation of the value functions. This algorithm yields the value of the swing option under the assumption that the optimal exercise policy is applied by the option holder. We present a proof of almost sure convergence: the algorithm generates the optimal exercise strategy as the number of iterations approaches infinity. Finally, we provide a numerical example for pricing a natural gas swing call option.
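A minimal backward-induction sketch shows the structure of the discrete-time pricing problem (the lattice, strike, and hard global cap below are hypothetical; the paper's algorithm additionally approximates the value functions by piecewise linear concave functions and handles penalized violations):

```python
from functools import lru_cache

# Swing option where at each of T dates the holder may buy 0 or 1 unit at
# strike K; total purchases capped at G (global constraint). The price
# follows a recombining binomial lattice with risk-neutral probability q.
T, G, K = 4, 2, 100.0
u, d, q, disc = 1.1, 0.9, 0.5, 1.0
S0 = 100.0

@lru_cache(None)
def value(t, j, g):
    """Option value at date t, node with j up-moves, g exercise rights used."""
    S = S0 * u**j * d**(t - j)
    if t == T:
        return 0.0
    cont = lambda g2: disc * (q * value(t + 1, j + 1, g2)
                              + (1 - q) * value(t + 1, j, g2))
    best = cont(g)                               # exercise nothing today
    if g < G:
        best = max(best, (S - K) + cont(g + 1))  # exercise one unit
    return best

print(value(0, 0, 0))
```

The optimal exercise policy is recovered by noting, at each state, which branch of the max is attained.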
NASA Astrophysics Data System (ADS)
Kadum, Hawwa; Ali, Naseem; Cal, Raúl
2016-11-01
Hot-wire anemometry measurements have been performed on a 3 x 3 wind turbine array to study the multifractality of the turbulent kinetic energy dissipation. A multifractal spectrum and Hurst exponents are determined at nine locations downstream of the hub height and the bottom and top tips. Higher multifractality is found at 0.5D and 1D downstream of the bottom tip and hub height. The second order of the Hurst exponent and the combination factor show an ability to predict the flow state in terms of its development. Snapshot proper orthogonal decomposition (POD) is used to identify the coherent and incoherent structures and to reconstruct the stochastic velocity using a specific number of the POD eigenfunctions. The accumulation of the turbulent kinetic energy at the top tip location exhibits fast convergence compared to the bottom tip and hub height locations. The dissipations of the large and small scales are determined using the reconstructed stochastic velocities. Higher multifractality is found in the large-scale dissipation than in the small-scale dissipation, consistent with the behavior of the original signals.
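Snapshot POD reduces to a singular value decomposition of the mean-subtracted snapshot matrix; the sketch below uses synthetic data standing in for the hot-wire measurements (the signal and noise model are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic snapshot matrix: columns are snapshots of a coherent mode
# (sin profile, slowly modulated) plus incoherent noise.
n_pts, n_snap = 200, 40
x = np.linspace(0, 2 * np.pi, n_pts)
snaps = np.array([np.sin(x) * np.cos(0.3 * t)
                  + 0.1 * rng.standard_normal(n_pts)
                  for t in range(n_snap)]).T

mean = snaps.mean(axis=1, keepdims=True)
fluct = snaps - mean                      # fluctuating (stochastic) field
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = s**2 / np.sum(s**2)              # energy fraction per POD mode

# reconstruct the stochastic velocity with the first r eigenfunctions
r = 1
recon = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
print(energy[0])                          # dominant (coherent) mode fraction
```

Truncating at r modes separates the coherent contribution (retained) from the incoherent residual `fluct - recon`.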
Gene selection heuristic algorithm for nutrigenomics studies.
Valour, D; Hue, I; Grimard, B; Valour, B
2013-07-15
Large datasets from -omics studies need to be deeply investigated. The aim of this paper is to provide a new method (the LEM method) for the search of transcriptome and metabolome connections. The heuristic algorithm described here extends classical canonical correlation analysis (CCA) to a high number of variables (without regularization) and combines well-conditioning and fast computing in "R." Reduced CCA models are summarized in PageRank matrices, the product of which gives a stochastic matrix that summarizes the self-avoiding walk covered by the algorithm. Then, a homogeneous Markov process applied to this stochastic matrix converges to the probabilities of interconnection between genes, providing a selection of disjoint subsets of genes. This is an alternative to regularized generalized CCA for the determination of blocks within the structure matrix. Each gene subset is thus linked to the whole metabolic or clinical dataset that represents the biological phenotype of interest. Moreover, this selection process reaches the aim of biologists, who often need small sets of genes for further validation or extended phenotyping. The algorithm is shown to work efficiently on three published datasets, resulting in meaningfully broadened gene networks.
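The convergence step can be illustrated with a toy block-structured stochastic matrix (a hypothetical 4-gene example, not taken from the paper): iterating a homogeneous Markov process drives any starting distribution to a stationary distribution within each block, and the blocks are what separate the disjoint gene subsets.

```python
import numpy as np

# Hypothetical row-stochastic matrix with two blocks: genes {0,1} and {2,3}.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.4, 0.6, 0.0, 0.0],
              [0.0, 0.0, 0.7, 0.3],
              [0.0, 0.0, 0.2, 0.8]])

pi = np.full(4, 0.25)        # start from a uniform distribution
for _ in range(500):         # homogeneous Markov process: pi <- pi P
    pi = pi @ P
print(pi)                    # probability mass never leaks between blocks
```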
Distributed Synchronization in Networks of Agent Systems With Nonlinearities and Random Switchings.
Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen
2013-02-01
In this paper, the distributed synchronization problem of networks of agent systems with controllers and nonlinearities subject to Bernoulli switchings is investigated. Controllers and adaptive updating laws injected in each vertex of networks depend on the state information of its neighborhood. Three sets of Bernoulli stochastic variables are introduced to describe the occurrence probabilities of distributed adaptive controllers, updating laws and nonlinearities, respectively. By the Lyapunov functions method, we show that the distributed synchronization of networks composed of agent systems with multiple randomly occurring nonlinearities, multiple randomly occurring controllers, and multiple randomly occurring updating laws can be achieved in mean square under certain criteria. The conditions derived in this paper can be solved by semi-definite programming. Moreover, by mathematical analysis, we find that the coupling strength, the probabilities of the Bernoulli stochastic variables, and the form of nonlinearities have great impacts on the convergence speed and the terminal control strength. The synchronization criteria and the observed phenomena are demonstrated by several numerical simulation examples. In addition, the advantage of distributed adaptive controllers over conventional adaptive controllers is illustrated.
Benedek, C; Descombes, X; Zerubia, J
2012-01-01
In this paper, we introduce a new probabilistic method which integrates building extraction with change detection in remotely sensed image pairs. A global optimization process attempts to find the optimal configuration of buildings, considering the observed data, prior knowledge, and interactions between the neighboring building parts. We present methodological contributions on three key issues: 1) We implement a novel object-change modeling approach based on Multitemporal Marked Point Processes, which simultaneously exploits low-level change information between the time layers and object-level building description to recognize and separate changed and unaltered buildings. 2) To answer the challenges of data heterogeneity in aerial and satellite image repositories, we construct a flexible hierarchical framework which can create various building appearance models from different elementary feature-based modules. 3) To simultaneously meet the convergence, optimality, and computational complexity constraints raised by the increased data quantity, we adopt the quick Multiple Birth and Death optimization technique for change detection purposes, and propose a novel nonuniform stochastic object birth process which generates relevant objects with higher probability based on low-level image features.
Kermajani, Hamidreza; Gomez, Carles
2014-01-01
The IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) has been recently developed by the Internet Engineering Task Force (IETF). Given its crucial role in enabling the Internet of Things, a significant amount of research effort has already been devoted to RPL. However, the RPL network convergence process has not yet been investigated in detail. In this paper we study the influence of the main RPL parameters and mechanisms on the network convergence process of this protocol in IEEE 802.15.4 multihop networks. We also propose and evaluate a mechanism that leverages an option available in RPL for accelerating the network convergence process. We carry out extensive simulations for a wide range of conditions, considering different network scenarios in terms of size and density. Results show that network convergence performance depends dramatically on the use and adequate configuration of key RPL parameters and mechanisms. The findings and contributions of this work provide a RPL configuration guideline for network convergence performance tuning, as well as a characterization of the related performance trade-offs. PMID:25004154
Extended forms of the second law for general time-dependent stochastic processes.
Ge, Hao
2009-08-01
The second law of thermodynamics represents a universal principle applicable to all natural processes, physical systems, and engineering devices. Hatano and Sasa have recently put forward an extended form of the second law for transitions between nonequilibrium stationary states [Phys. Rev. Lett. 86, 3463 (2001)]. In this paper we further extend this form to an instantaneous interpretation, which is satisfied by quite general time-dependent stochastic processes, including master-equation models and Langevin dynamics, without requiring stationarity of the initial and final states. The theory is applied to several thermodynamic processes, and its consistency with classical thermodynamics is shown.
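For reference, the Hatano-Sasa relation being extended can be stated as follows (standard form of the cited Phys. Rev. Lett. result; the notation is not taken from this paper):

```latex
\left\langle e^{-Y} \right\rangle = 1, \qquad
Y = \int_0^{\tau} \dot{\lambda}(t)\,
    \frac{\partial \phi\bigl(x(t);\lambda(t)\bigr)}{\partial \lambda}\, dt, \qquad
\phi(x;\lambda) = -\ln \rho_{\mathrm{ss}}(x;\lambda),
```

where rho_ss is the stationary distribution at fixed control parameter lambda. Jensen's inequality then gives the second-law-like statement ⟨Y⟩ ≥ 0, with equality approached for quasi-static transitions between steady states.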
Convergence in Underwater Swimming Between Nature and Engineering
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Promode R.; Boller, Michael
2004-11-01
We are interested in comparing the hydrodynamic performance of underwater vehicles and swimming animals, which are believed to have been optimized via evolution. Cruising and maneuvering are treated separately. Platforms like submarines are primarily cruising vehicles, while torpedoes are dexterous in both. In swimming animals, generally, red muscle is used for cruising while white muscle is used for maneuvering motions. Data from the literature are examined comparing shaft/muscle power versus displacement. Experiments also have been carried out with captive mackerel and bluefish, which are known to be open water fish and are proficient in both cruising and maneuvering. Their trajectories around obstacles have been recorded and analyzed. Similar 'figure of eight' maneuvering trajectory data of engineering underwater vehicles have also been analyzed. It is shown that there is convergence between nature and engineering in cruising that extends over eight decades of variation in power and displacement. However, swimming animals are still more proficient in maneuvering, although the gap has been closing of late.
Toward a convergence of regenerative medicine, rehabilitation, and neuroprosthetics.
Aravamudhan, Shyam; Bellamkonda, Ravi V
2011-11-01
No effective therapeutic interventions exist for severe neural pathologies, despite significant advances in regenerative medicine, rehabilitation, and neuroprosthetics. Our current hypothesis is that a specific combination of tissue engineering, pharmacology, cell replacement, drug delivery, and electrical stimulation, together with plasticity-promoting and locomotor training (neurorehabilitation) is necessary to interact synergistically in order to activate and enable all damaged circuits. We postulate that various convergent themes exist among the different therapeutic fields. Therefore, the objective of this review is to highlight the convergent themes, which we believe have a common goal of restoring function after neural damage. The convergent themes discussed in this review include modulation of inflammation and secondary damage, encouraging endogenous repair/regeneration (using scaffolds, cell transplantation, and drug delivery), application of electrical fields to modulate healing and/or activity, and finally modulation of plasticity.
2014-10-06
to a subset $\tilde{\Theta}$ of $\ell$-dimensional Euclidean space. The sub-$\sigma$-algebra $\mathcal{F}_n = \mathcal{F}^X_n = \sigma(X^n_1)$ of $\mathcal{F}$ is generated by the stochastic process $X^n_1 = (X_1, \ldots$ ...developed asymptotic hypothesis testing theory is based on the SLLN and rates of convergence in the strong law for the LLR processes, specifically by... $\xi_n$ to $C$. Write $\lambda_n(\theta, \tilde{\theta}) = \log \frac{dP^n_{\theta}}{dP^n_{\tilde{\theta}}} = \sum_{k=1}^{n} \log \frac{p_{\theta}(X_k \mid X^{k-1}_1)}{p_{\tilde{\theta}}(X_k \mid X^{k-1}_1)}$ for the log-likelihood ratio (LLR) process. Assume that there
NASA Astrophysics Data System (ADS)
Cheng, Longjiu; Cai, Wensheng; Shao, Xueguang
2005-03-01
An energy-based perturbation and a new taboo strategy are proposed for structural optimization and applied to a benchmark problem, i.e., the optimization of Lennard-Jones (LJ) clusters. It is shown that the energy-based perturbation is much better than the traditional random perturbation in both convergence speed and searching ability when it is combined with a simple greedy method. By tabooing the most widespread funnel instead of the visited solutions, the hit rate of other funnels can be significantly improved. Global minima of LJ clusters with up to 200 atoms are found with high efficiency.
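The greedy baseline against which the perturbations are compared can be sketched in a few lines (an illustrative sketch only: random perturbation with greedy acceptance on a tiny cluster; the paper's energy-based variant instead biases the perturbation using per-atom energies):

```python
import numpy as np

def lj_energy(x):
    """Total Lennard-Jones energy of configuration x, shape (n, 3), eps = sigma = 1."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    r = d[np.triu_indices(len(x), 1)]
    return float(np.sum(4 * (r**-12 - r**-6)))

rng = np.random.default_rng(1)

def greedy_search(x, steps=200, scale=0.05):
    """Greedy method with random perturbation: keep a move only if it lowers energy."""
    e = lj_energy(x)
    for _ in range(steps):
        trial = x + scale * rng.standard_normal(x.shape)
        et = lj_energy(trial)
        if et < e:
            x, e = trial, et
    return x, e

# a trimer started near an equilateral triangle relaxes toward E = -3
x0 = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [0.6, 1.0, 0.0]])
x, e = greedy_search(x0)
print(e)
```

The LJ dimer minimum at separation 2^(1/6) with energy -1 gives a quick sanity check on the energy function.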
The Calderón problem with corrupted data
NASA Astrophysics Data System (ADS)
Caro, Pedro; Garcia, Andoni
2017-08-01
We consider the inverse Calderón problem consisting of determining the conductivity inside a medium from electrical measurements on its surface. Ideally, these measurements determine the Dirichlet-to-Neumann map and, therefore, one usually assumes the data to be given by such a map. This situation corresponds to having access to infinite-precision measurements, which is totally unrealistic. In this paper, we study the Calderón problem assuming the data to contain measurement errors and provide formulas to reconstruct the conductivity and its normal derivative on the surface. Additionally, we state the rate of convergence of the method. Our approach is theoretical and has a stochastic flavour.
A Stochastic Mixing Model for Predicting Emissions in a Direct Injection Diesel Engine.
1986-09-01
of chemical reactors. The fundamental concept of these models is coalescence/dispersion micromixing. [1] Details of this method are provided in Appen... Togby, A.H., "Monte Carlo Methods of Simulating Micromixing in Chemical Reactors", Chemical Engineering Science, Vol. 27, p. 1497, 1972. 46. Kattan, A... on a molecular level. 2. Micromixing or stream mixing refers to the mixing of particles on a molecular level. Until the coalescence and dispersion
NASA Technical Reports Server (NTRS)
Capone, Francis J.; Mason, Mary L.; Leavitt, Laurence D.
1990-01-01
An investigation was conducted in the Langley 16-Foot Transonic Tunnel to determine thrust vectoring capability of subscale 2-D convergent-divergent exhaust nozzles installed on a twin engine general research fighter model. Pitch thrust vectoring was accomplished by downward rotation of nozzle upper and lower flaps. The effects of nozzle sidewall cutback were studied for both unvectored and pitch vectored nozzles. A single cutback sidewall was employed for yaw thrust vectoring. This investigation was conducted at Mach numbers ranging from 0 to 1.20 and at angles of attack from -2 to 35 deg. High pressure air was used to simulate jet exhaust and provide values of nozzle pressure ratio up to 9.
IR signature study of aircraft engine for variation in nozzle exit area
NASA Astrophysics Data System (ADS)
Baranwal, Nidhi; Mahulikar, Shripad P.
2016-01-01
In general, jet engines operate with a choked nozzle during take-off, climb and cruise, whereas unchoking occurs while landing and taxiing (when the engine is not running at full power). Appropriate thrust in an aircraft in all stages of the flight, i.e., take-off, climb, cruise, descent and landing, is achieved through variation in the nozzle exit area. This paper describes the effect on thrust and IR radiance of a turbojet engine due to variation in the exit area of a just choked converging nozzle (Me = 1). The variations in the nozzle exit area result in either choking or unchoking of a just choked converging nozzle. Results for the change in nozzle exit area are analyzed in terms of thrust, mass flow rate and specific fuel consumption. The solid angle subtended (Ω) by the exhaust system is estimated analytically for the variation in nozzle exit area (Ane), as it affects the visibility of the hot engine parts from the rear aspect. For constant design point thrust, IR radiance is studied from the boresight (ϕ = 0°, directly from the rear side) for various percentage changes in nozzle exit area (%ΔAne), in the 1.9-2.9 μm and 3-5 μm bands.
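The coupling between exit area and mass flow that drives this analysis follows from standard one-dimensional compressible nozzle theory (a textbook relation, not reproduced from the paper): for a just choked converging nozzle (Me = 1),

```latex
\dot{m} \;=\; \frac{A_{ne}\, p_0}{\sqrt{T_0}}\,
\sqrt{\frac{\gamma}{R}}
\left( \frac{2}{\gamma + 1} \right)^{\frac{\gamma + 1}{2(\gamma - 1)}},
```

so that, at fixed stagnation conditions p0 and T0, a percentage change in Ane produces the same percentage change in mass flow rate as long as the nozzle remains choked.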
Novikov Engine with Fluctuating Heat Bath Temperature
NASA Astrophysics Data System (ADS)
Schwalbe, Karsten; Hoffmann, Karl Heinz
2018-04-01
The Novikov engine is a model for heat engines that takes the irreversible character of heat fluxes into account. Using this model, the maximum power output as well as the corresponding efficiency of the heat engine can be deduced, leading to the well-known Curzon-Ahlborn efficiency. The classical model assumes constant heat bath temperatures, which is not a reasonable assumption in the case of fluctuating heat sources. Therefore, in this article the influence of stochastic fluctuations of the hot heat bath's temperature on the optimal performance measures is investigated. For this purpose, a Novikov engine with fluctuating heat bath temperature is considered. In doing so, a generalization of the Curzon-Ahlborn efficiency is found. The results can help to quantify how the distribution of fluctuating quantities affects the performance measures of power plants.
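The baseline being generalized is the efficiency at maximum power for constant bath temperatures Th and Tc (the standard Curzon-Ahlborn form; the paper's contribution is its generalization to a fluctuating Th):

```latex
\eta_{\mathrm{CA}} \;=\; 1 - \sqrt{\frac{T_c}{T_h}}\,,
```

which lies between the Carnot bound 1 - Tc/Th and typical observed power-plant efficiencies.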
An extended stochastic method for seismic hazard estimation
NASA Astrophysics Data System (ADS)
Abd el-aal, A. K.; El-Eraki, M. A.; Mostafa, S. I.
2015-12-01
In this contribution, we develop an extended stochastic technique for seismic hazard assessment purposes. This technique builds on the stochastic method of Boore (2003), "Simulation of ground motion using the stochastic method. Appl. Geophy. 160:635-676". The essential aim of the extended stochastic technique is to simulate ground motion in order to minimize future earthquake consequences. The first step of this technique is defining the seismic sources which most affect the study area. Then, the maximum expected magnitude is defined for each of these seismic sources. This is followed by estimating the ground motion using an empirical attenuation relationship. Finally, the site amplification is implemented in calculating the peak ground acceleration (PGA) at each site of interest. We tested and applied this technique at the cities of Cairo, Suez, Port Said, Ismailia, Zagazig and Damietta to predict the ground motion. It is also applied at Cairo, Zagazig and Damietta to estimate the maximum peak ground acceleration under actual soil conditions. In addition, 0.5, 1, 5, 10 and 20 % damping median response spectra are estimated using the extended stochastic simulation technique. The highest calculated acceleration value at bedrock conditions is found at Suez city, with a value of 44 cm s-2. The acceleration values decrease towards the north of the study area, reaching 14.1 cm s-2 at Damietta city. This is in agreement with, and comparable to, the results of previous seismic hazard studies of northern Egypt. This work can be used for seismic risk mitigation and earthquake engineering purposes.
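The four-step pipeline (sources, maximum magnitude, attenuation, site amplification) can be sketched as follows; the sources, attenuation coefficients, and amplification factor below are placeholders for illustration, not the values used in the paper:

```python
import math

# Hypothetical seismic sources affecting one site of interest.
sources = [
    {"name": "source A", "m_max": 6.5, "dist_km": 40.0},
    {"name": "source B", "m_max": 5.8, "dist_km": 15.0},
]

def attenuation_pga(m, r_km, c0=-2.0, c1=0.4, c2=-1.5):
    """Generic empirical attenuation form: log10(PGA[g]) = c0 + c1*M + c2*log10(R)."""
    return 10 ** (c0 + c1 * m + c2 * math.log10(r_km))

def site_pga(sources, amplification=1.8):
    """Bedrock PGA from the controlling source, scaled by site amplification."""
    bedrock = max(attenuation_pga(s["m_max"], s["dist_km"]) for s in sources)
    return amplification * bedrock

print(site_pga(sources))   # PGA in g at the site, including soil amplification
```

The controlling source is simply the one producing the largest bedrock motion; a full hazard study would combine all sources probabilistically.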
Leander, Jacob; Lundh, Torbjörn; Jirstrand, Mats
2014-05-01
In this paper we consider the problem of estimating parameters in ordinary differential equations given discrete-time experimental data. The impact of going from an ordinary to a stochastic differential equation setting is investigated as a tool to overcome the problem of local minima in the objective function. Using two different models, it is demonstrated that by allowing noise in the underlying model itself, the objective functions to be minimized in the parameter estimation procedures are regularized in the sense that the number of local minima is reduced and better convergence is achieved. The advantage of using stochastic differential equations is that the actual states in the model are predicted from data, which allows the prediction to stay close to data even when the parameters in the model are incorrect. The extended Kalman filter is used as a state estimator, and sensitivity equations are provided to give an accurate calculation of the gradient of the objective function. The method is illustrated using in silico data from the FitzHugh-Nagumo model for excitable media and the Lotka-Volterra predator-prey system. The proposed method performs well on the models considered, and is able to regularize the objective function in both models. This leads to parameter estimation problems with fewer local minima which can be solved by efficient gradient-based methods. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Origin scenarios for the Kepler 36 planetary system
NASA Astrophysics Data System (ADS)
Quillen, Alice C.; Bodman, Eva; Moore, Alexander
2013-11-01
We explore scenarios for the origin of two different density planets in the Kepler 36 system in adjacent orbits near the 7:6 mean motion resonance. We find that fine tuning is required in the stochastic forcing amplitude, the migration rate and planet eccentricities to allow two convergently migrating planets to bypass mean motion resonances such as the 4:3, 5:4 and 6:5, and yet allow capture into the 7:6 resonance. Stochastic forcing can eject the system from resonance causing a collision between the planets, unless the disc causing migration and stochastic forcing is depleted soon after resonance capture. We explore a scenario with approximately Mars mass embryos originating exterior to the two planets and migrating inwards towards two planets. We find that gravitational interactions with embryos can nudge the system out of resonances. Numerical integrations with about a half dozen embryos can leave the two planets in the 7:6 resonance. Collisions between planets and embryos have a wide distribution of impact angles and velocities ranging from accretionary to disruptive. We find that impacts can occur at sufficiently high impact angle and velocity that the envelope of a planet could have been stripped, leaving behind a dense core. Some of our integrations show the two planets exchanging locations, allowing the outer planet that had experienced multiple collisions with embryos to become the innermost planet. A scenario involving gravitational interactions and collisions with embryos may account for both the proximity of the Kepler 36 planets and their large density contrast.
Tilles, Paulo F C; Petrovskii, Sergei V
2016-07-01
Patterns of individual animal movement have been a focus of considerable attention recently. Of particular interest is the question of how different macroscopic properties of animal dispersal result from the stochastic processes occurring on the microscale of the individual behavior. In this paper, we perform a comprehensive analytical study of a model where the animal changes its movement velocity as a result of its behavioral response to environmental stochasticity. The stochasticity is assumed to manifest itself through certain signals, and the animal modifies its velocity as a response to the signals. We consider two different cases, i.e. where the change in the velocity is or is not correlated to its current value. We show that in both cases the early, transient stage of the animal movement is super-diffusive, i.e. ballistic. The large-time asymptotic behavior appears to be diffusive in the uncorrelated case but super-ballistic in the correlated case. We also calculate analytically the dispersal kernel of the movement and show that, whilst it converges to a normal distribution in the large-time limit, it possesses a fatter tail during the transient stage, i.e. at early and intermediate times. Since transients are known to be highly relevant in ecology, our findings may indicate that the fat tails and superdiffusive spread that are sometimes observed in movement data may be a feature of the transitional dynamics rather than an inherent property of the animal movement.
NASA Astrophysics Data System (ADS)
Bastani, Ali Foroush; Dastgerdi, Maryam Vahid; Mighani, Abolfazl
2018-06-01
The main aim of this paper is the analytical and numerical study of a time-dependent second-order nonlinear partial differential equation (PDE) arising from the endogenous stochastic volatility model, introduced in [Bensoussan, A., Crouhy, M. and Galai, D., Stochastic equity volatility related to the leverage effect (I): equity volatility behavior. Applied Mathematical Finance, 1, 63-85, 1994]. As the first step, we derive a consistent set of initial and boundary conditions to complement the PDE, when the firm is financed by equity and debt. In the sequel, we propose a Newton-based iteration scheme for nonlinear parabolic PDEs which is an extension of a method for solving elliptic partial differential equations introduced in [Fasshauer, G. E., Newton iteration with multiquadrics for the solution of nonlinear PDEs. Computers and Mathematics with Applications, 43, 423-438, 2002]. The scheme is based on multilevel collocation using radial basis functions (RBFs) to solve the resulting locally linearized elliptic PDEs obtained at each level of the Newton iteration. We show the effectiveness of the resulting framework by solving a prototypical example from the field and compare the results with those obtained from three different techniques: (1) a finite difference discretization; (2) a naive RBF collocation and (3) a benchmark approximation, introduced for the first time in this paper. The numerical results confirm the robustness, higher convergence rate and good stability properties of the proposed scheme compared to other alternatives. We also comment on some possible research directions in this field.
Design of Beneficial Wave Dynamics for Engine Life and Operability Enhancement
2010-07-30
$\delta_x(A)$, where $\delta$ is the Dirac delta measure. The stochastic transition function can be used to define two linear transfer operators, called the Perron-Frobenius and Koopman operators. Here we consider the finite-dimensional approximation of the P-F operator. To do this we consider the finite
ERIC Educational Resources Information Center
Noble, Dorottya B.; Mochrie, Simon G. J.; O'Hern, Corey S.; Pollard, Thomas D.; Regan, Lynne
2016-01-01
In 2008, we established the Integrated Graduate Program in Physical and Engineering Biology (IGPPEB) at Yale University. Our goal was to create a comprehensive graduate program to train a new generation of scientists who possess a sophisticated understanding of biology and who are capable of applying physical and quantitative methodologies to…
Molecular Bases of cyclodextrin Adapter Interactions with Engineered Protein Nanopores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, A.; Mikhailova, E; Cheley, S
2010-01-01
Engineered protein pores have several potential applications in biotechnology: as sensor elements in stochastic detection and ultrarapid DNA sequencing, as nanoreactors to observe single-molecule chemistry, and in the construction of nano- and micro-devices. One important class of pores contains molecular adapters, which provide internal binding sites for small molecules. Mutants of the α-hemolysin (αHL) pore that bind the adapter β-cyclodextrin (βCD) ~10^4 times more tightly than the wild type have been obtained. We now use single-channel electrical recording, protein engineering including unnatural amino acid mutagenesis, and high-resolution x-ray crystallography to provide definitive structural information on these engineered protein nanopores in unparalleled detail.
Escalated convergent artificial bee colony
NASA Astrophysics Data System (ADS)
Jadon, Shimpi Singh; Bansal, Jagdish Chand; Tiwari, Ritu
2016-03-01
The artificial bee colony (ABC) optimisation algorithm is a recent, fast and easy-to-implement population-based metaheuristic for optimisation. ABC has proved competitive with popular swarm intelligence-based algorithms such as particle swarm optimisation, the firefly algorithm and ant colony optimisation. The solution search equation of ABC is influenced by a random quantity which aids exploration at the cost of exploitation. To obtain faster convergence while maintaining exploitation capability, this paper modifies basic ABC in two ways. First, to improve exploitation, two local search strategies, namely classical unidimensional local search and Levy-flight random-walk-based local search, are incorporated into ABC. Furthermore, a new solution search strategy, namely stochastic diffusion scout search, is proposed and incorporated into the scout bee phase to give an abandoned solution more chances to improve itself. The efficiency of the proposed algorithm is tested on 20 benchmark test functions of different complexities and characteristics. Results are very promising and show it to be a competitive algorithm in the field of swarm intelligence-based algorithms.
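For readers unfamiliar with the baseline the paper modifies, the employed/onlooker/scout structure of basic ABC can be sketched in a few dozen lines. This is a generic ABC on a toy sphere function, not the escalated variant proposed in the paper; all names and parameter values are illustrative.

```python
import random

def abc_minimize(f, dim=2, lb=-5.0, ub=5.0, n_food=20, limit=10,
                 iters=100, seed=3):
    """Minimal artificial bee colony sketch: employed bees perturb food
    sources, onlookers reinforce good ones, scouts reseed stale ones."""
    rng = random.Random(seed)
    foods = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_food)]
    fit = [f(x) for x in foods]
    trials = [0] * n_food

    def try_neighbour(i):
        k = rng.randrange(n_food - 1)       # a different food source
        if k >= i:
            k += 1
        j = rng.randrange(dim)
        cand = list(foods[i])
        # v_ij = x_ij + phi * (x_ij - x_kj), phi ~ U(-1, 1): the random
        # quantity that drives exploration
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        cand[j] = min(ub, max(lb, cand[j]))
        fc = f(cand)
        if fc < fit[i]:
            foods[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):             # employed bee phase
            try_neighbour(i)
        worst = max(fit)
        weights = [worst - v + 1e-12 for v in fit]
        for _ in range(n_food):             # onlooker phase (fitness-proportional)
            i = rng.choices(range(n_food), weights=weights)[0]
            try_neighbour(i)
        for i in range(n_food):             # scout phase: abandon stale sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lb, ub) for _ in range(dim)]
                fit[i] = f(foods[i])
                trials[i] = 0
    best = min(range(n_food), key=lambda i: fit[i])
    return foods[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = abc_minimize(sphere)
```

The paper's modifications would slot into `try_neighbour` (local search after the employed phase) and the scout loop (stochastic diffusion search instead of uniform reseeding).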
Cluster-Based Maximum Consensus Time Synchronization for Industrial Wireless Sensor Networks.
Wang, Zhaowei; Zeng, Peng; Zhou, Mingtuo; Li, Dong; Wang, Jintao
2017-01-13
Time synchronization is one of the key technologies in Industrial Wireless Sensor Networks (IWSNs), and clustering is widely used in WSNs for data fusion and information collection to reduce redundant data and communication overhead. Considering IWSNs' demand for low energy consumption, fast convergence, and robustness, this paper presents a novel Cluster-based Maximum consensus Time Synchronization (CMTS) method. It consists of two parts: intra-cluster time synchronization and inter-cluster time synchronization. Based on the theory of distributed consensus, the proposed method utilizes the maximum consensus approach to realize the intra-cluster time synchronization, and adjacent clusters exchange the time messages via overlapping nodes to synchronize with each other. A Revised-CMTS is further proposed to counteract the impact of bounded communication delays between two connected nodes, because the traditional stochastic models of the communication delays would distort in a dynamic environment. The simulation results show that our method reduces the communication overhead and improves the convergence rate in comparison to existing works, as well as adapting to the uncertain bounded communication delays.
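The intra-cluster step described above, nodes repeatedly adopting the maximum clock value among themselves and their neighbours, can be sketched deterministically (delays and clock drift omitted; the network and offsets are made up for illustration):

```python
def max_consensus(offsets, edges, rounds):
    """Max-consensus sketch: each round, every node adopts the maximum
    logical clock value seen among itself and its neighbours, so all
    nodes converge to the fastest clock."""
    clocks = list(offsets)
    for _ in range(rounds):
        new = list(clocks)
        for i, j in edges:
            m = max(clocks[i], clocks[j])   # values from the previous round
            new[i] = max(new[i], m)
            new[j] = max(new[j], m)
        clocks = new
    return clocks

# ring of 6 nodes with arbitrary initial clock offsets
edges = [(i, (i + 1) % 6) for i in range(6)]
offsets = [0.3, 1.7, 0.9, 2.4, 0.1, 1.1]
synced = max_consensus(offsets, edges, rounds=6)
```

The maximum spreads one hop per round, so the number of rounds needed equals the network diameter; the Revised-CMTS of the paper additionally compensates the bounded communication delays that this sketch ignores.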
[Network of plastic neurons capable of forming conditioned reflexes ("membrane" model of learning)].
Litvinov, E G; Frolov, A A
1978-01-01
A simple network neuronal model is suggested which is able to form conditioned reflexes through changes in neuron excitability. The model is based on the following main concepts: (a) conditioning should result in a reduction of the firing threshold in the neurons on which the conditioned and reinforcement stimuli converge; (b) a neuron threshold may have only two possible states, initial and final, identical for all cells, and the threshold may change only once, from the initial value to the final one; (c) an isomorphic relation may be introduced between any pair of arbitrary stimuli and some subset of the network's neurons; any two pairs differing in at least one stimulus have distinct subsets of convergent neurons. A stochastically organized neuronal network was used to analyse the model. The considerable information capacity of the network suggests that conditioning can be formed on the basis of nerve-cell excitability. The efficiency of the model turns out to be comparable with well-known models in which conditioning is formed by modification of the synapses.
Statistical steady states in turbulent droplet condensation
NASA Astrophysics Data System (ADS)
Bec, Jeremie; Krstulovic, Giorgio; Siewert, Christoph
2017-11-01
We investigate the general problem of turbulent condensation. Using direct numerical simulations we show that the fluctuations of the supersaturation field offer different conditions for the growth of droplets which evolve in time due to turbulent transport and mixing. This leads us to propose a Lagrangian stochastic model consisting of a set of integro-differential equations for the joint evolution of the squared radius and the supersaturation along droplet trajectories. The model has two parameters fixed by the total amount of water and the thermodynamic properties, as well as the Lagrangian integral timescale of the turbulent supersaturation. The model reproduces very well the droplet size distributions obtained from direct numerical simulations and their time evolution. A noticeable result is that, after a stage where the squared radius simply diffuses, the system converges exponentially fast to a statistical steady state independent of the initial conditions. The main mechanism involved in this convergence is a loss of memory induced by a significant number of droplets undergoing a complete evaporation before growing again. The statistical steady state is characterised by an exponential tail in the droplet mass distribution.
Convergence studies in meshfree peridynamic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seleson, Pablo; Littlewood, David J.
2016-04-15
Meshfree methods are commonly applied to discretize peridynamic models, particularly in numerical simulations of engineering problems. Such methods discretize peridynamic bodies using a set of nodes with characteristic volume, leading to particle-based descriptions of systems. In this article, we perform convergence studies of static peridynamic problems. We show that commonly used meshfree methods in peridynamics suffer from accuracy and convergence issues, due to a rough approximation of the contribution to the internal force density of nodes near the boundary of the neighborhood of a given node. We propose two methods to improve meshfree peridynamic simulations. The first method uses accurate computations of volumes of intersections between neighbor cells and the neighborhood of a given node, referred to as partial volumes. The second method employs smooth influence functions with a finite support within peridynamic kernels. Numerical results demonstrate great improvements in accuracy and convergence of peridynamic numerical solutions when using the proposed methods.
A Framework for the Optimization of Discrete-Event Simulation Models
NASA Technical Reports Server (NTRS)
Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.
1996-01-01
With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle, through a simulation model.
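The chance-constraint idea, choosing the cheapest design whose simulated probability of meeting a requirement clears a threshold, can be sketched on a made-up terminating simulation. The toy model (five exponential task times whose rate grows with the resource level) and all parameter values are invented for illustration, not taken from the launch-vehicle study.

```python
import random

def min_resource_level(p_target=0.9, trials=2000, seed=13):
    """Chance-constraint sketch: find the smallest resource level whose
    simulated probability of finishing a 5-task job within the deadline
    reaches p_target, estimating that probability by replication."""
    rng = random.Random(seed)

    def meets_deadline(level):
        # toy terminating simulation: each task time is Exp(rate=level)
        return sum(rng.expovariate(level) for _ in range(5)) <= 1.0

    for level in range(1, 50):
        p_hat = sum(meets_deadline(level) for _ in range(trials)) / trials
        if p_hat >= p_target:
            return level, p_hat
    return None, 0.0

level, p_hat = min_resource_level()
```

Because `p_hat` is itself a random estimate, a production version would add a confidence margin to the constraint, which is exactly the statistical-analysis layer the abstract refers to.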
Hourly temporal distribution of wind
NASA Astrophysics Data System (ADS)
Deligiannis, Ilias; Dimitriadis, Panayiotis; Koutsoyiannis, Demetris
2016-04-01
The wind process is essential in hydrometeorology and is also one of the basic renewable energy resources. Most stochastic forecast models are limited to daily scales, disregarding the hourly scale, which is significant for renewable energy management. Here, we analyze hourly wind timeseries with emphasis on the temporal distribution of wind within the day. We finally present a periodic model based on statistical as well as hydrometeorological reasoning that shows good agreement with data. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
Lai, Zhi-Hui; Leng, Yong-Gang
2015-08-28
A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator to address the necessity of parameter adjustments. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model, and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering applications.
NASA Astrophysics Data System (ADS)
Zimoń, Małgorzata; Sawko, Robert; Emerson, David; Thompson, Christopher
2017-11-01
Uncertainty quantification (UQ) is increasingly becoming an indispensable tool for assessing the reliability of computational modelling. Efficient handling of stochastic inputs, such as boundary conditions, physical properties or geometry, increases the utility of model results significantly. We discuss the application of non-intrusive generalised polynomial chaos techniques in the context of fluid engineering simulations. Deterministic and Monte Carlo integration rules are applied to a set of problems, including ordinary differential equations and the computation of aerodynamic parameters subject to random perturbations. In particular, we analyse acoustic wave propagation in a heterogeneous medium to study the effects of mesh resolution, transients, number and variability of stochastic inputs. We consider variants of multi-level Monte Carlo and perform a novel comparison of the methods with respect to numerical and parametric errors, as well as computational cost. The results provide a comprehensive view of the necessary steps in UQ analysis and demonstrate some key features of stochastic fluid flow systems.
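The simplest non-intrusive UQ workflow described above, sampling a stochastic input, running the deterministic solver unchanged, and averaging the outputs, can be sketched on a scalar decay ODE. This is plain Monte Carlo, not the generalised polynomial chaos or multi-level variants the abstract compares; the problem and parameters are illustrative.

```python
import random

def mc_mean_decay(n_samples=2000, dt=0.01, seed=2):
    """Non-intrusive Monte Carlo UQ sketch: propagate an uncertain decay
    rate k ~ U(0.5, 1.5) through y' = -k*y, y(0) = 1 (forward Euler) and
    estimate the mean of y(1). Exact answer: E[e^-k] = e^-0.5 - e^-1.5."""
    rng = random.Random(seed)
    steps = round(1.0 / dt)
    total = 0.0
    for _ in range(n_samples):
        k = rng.uniform(0.5, 1.5)           # one draw of the random input
        y = 1.0
        for _ in range(steps):
            y -= k * y * dt                 # one forward-Euler step
        total += y
    return total / n_samples

mean_y1 = mc_mean_decay()
```

The numerical error here splits exactly as in the abstract's comparison: a discretization error (controlled by `dt`) and a parametric sampling error (controlled by `n_samples`); multi-level Monte Carlo balances the two against cost.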
NASA Astrophysics Data System (ADS)
Lauterbach, S.; Fina, M.; Wagner, W.
2018-04-01
Since structural engineering requires highly developed and optimized structures, the thickness dependency is one of the most controversially debated topics. This paper deals with stability analysis of lightweight thin structures combined with arbitrary geometrical imperfections. Generally known design guidelines only consider imperfections for simple shapes and loading, whereas for complex structures the lower-bound design philosophy still holds. Herein, uncertainties are considered with an empirical knockdown factor representing a lower bound of existing measurements. To fully understand and predict expected bearable loads, numerical investigations are essential, including geometrical imperfections. These are implemented into a stand-alone program code with a stochastic approach to compute random fields as geometric imperfections that are applied to nodes of the finite element mesh of selected structural examples. The stochastic approach uses the Karhunen-Loève expansion for the random field discretization. For this approach, the so-called correlation length l_c controls the random field in a powerful way. This parameter has a major influence on the buckling shape, and also on the stability load. First, the impact of the correlation length is studied for simple structures. Second, since most structures for engineering devices are more complex and combined structures, these are intensively discussed with the focus on constrained random fields, e.g. for flange-web intersections. Specific constraints for those random fields are pointed out with regard to the finite element model. Further, geometrical imperfections vanish where the structure is supported.
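A truncated Karhunen-Loève expansion of the kind used above can be sketched on a 1D grid: eigendecompose the covariance matrix, keep the leading modes, and weight them with independent standard normal variables. This discrete grid-based eigendecomposition is a common approximation of the continuous KL problem; the exponential covariance and all parameter values are assumptions for illustration.

```python
import numpy as np

def kl_sample(n=64, l_c=0.2, n_terms=10, rng=None):
    """Draw one realization of a zero-mean Gaussian random field on [0,1]
    with exponential covariance exp(-|x-y|/l_c), truncated to n_terms
    Karhunen-Loeve modes."""
    rng = rng or np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, n)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / l_c)
    vals, vecs = np.linalg.eigh(C)          # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]  # largest (smoothest) modes first
    xi = rng.standard_normal(n_terms)       # independent N(0,1) weights
    field = vecs[:, :n_terms] @ (np.sqrt(np.maximum(vals[:n_terms], 0.0)) * xi)
    return x, field

x, field = kl_sample()
```

A larger `l_c` concentrates the variance in fewer modes, which is exactly why the correlation length controls the imperfection (and hence buckling) shapes so strongly; constraining the field, e.g. to vanish at supports, amounts to conditioning this Gaussian vector.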
On the convergence of the coupled-wave approach for lamellar diffraction gratings
NASA Technical Reports Server (NTRS)
Li, Lifeng; Haggans, Charles W.
1992-01-01
Among the many existing rigorous methods for analyzing diffraction of electromagnetic waves by diffraction gratings, the coupled-wave approach stands out because of its versatility and simplicity. It can be applied to volume gratings and surface relief gratings, and its numerical implementation is much simpler than that of others. In addition, its predictions were experimentally validated in several cases. These facts explain the popularity of the coupled-wave approach among many optical engineers in the field of diffractive optics. However, a comprehensive analysis of the convergence of the model predictions has never been presented, although several authors have recently reported convergence difficulties with the model when it is used for metallic gratings in TM polarization. Herein, three points are made: (1) in the TM case, the coupled-wave approach converges much more slowly than the modal approach of Botten et al.; (2) the slow convergence is caused by the use of Fourier expansions for the permittivity and the fields in the grating region; and (3) it manifests itself in the slow convergence of the eigenvalues and the associated modal fields. The reader is assumed to be familiar with the mathematical formulations of the coupled-wave approach and the modal approach.
Higher-order time integration of Coulomb collisions in a plasma using Langevin equations
Dimits, A. M.; Cohen, B. I.; Caflisch, R. E.; ...
2013-02-08
The extension of Langevin-equation Monte-Carlo algorithms for Coulomb collisions from the conventional Euler-Maruyama time integration to the next higher order of accuracy, the Milstein scheme, has been developed, implemented, and tested. This extension proceeds via a formulation of the angular scattering directly as stochastic differential equations in the two fixed-frame spherical-coordinate velocity variables. Results from the numerical implementation show the expected improvement [O(Δt) vs. O(Δt^{1/2})] in the strong convergence rate both for the speed |v| and angular components of the scattering. An important result is that this improved convergence is achieved for the angular component of the scattering if and only if the "area-integral" terms in the Milstein scheme are included. The resulting Milstein scheme is of value as a step towards algorithms with both improved accuracy and efficiency. These include both algorithms with improved convergence in the averages (weak convergence) and multi-time-level schemes. The latter have been shown to give a greatly reduced cost for a given overall error level when compared with conventional Monte-Carlo schemes, and their performance is improved considerably when the Milstein algorithm is used for the underlying time advance versus the Euler-Maruyama algorithm. A new method for sampling the area integrals is given which is a simplification of an earlier direct method and which retains high accuracy. Lastly, this method, while being useful in its own right because of its relative simplicity, is also expected to considerably reduce the computational requirements for the direct conditional sampling of the area integrals that is needed for adaptive strong integration.
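The Euler-Maruyama vs. Milstein strong-convergence gap is easy to see on a scalar test SDE with multiplicative noise, rather than the collisional system of the paper. The sketch below integrates geometric Brownian motion dX = μX dt + σX dW with both schemes on the same Brownian paths and compares against the exact solution (for a scalar SDE the Milstein correction needs no area integrals; those arise only in the multi-dimensional case the paper treats).

```python
import math
import random

def gbm_strong_errors(mu=1.5, sigma=0.5, x0=1.0, T=1.0, n_steps=100,
                      n_paths=400, seed=42):
    """Mean strong error at time T of Euler-Maruyama and Milstein for
    dX = mu*X dt + sigma*X dW, against the exact GBM solution, using the
    same Brownian increments for both schemes."""
    rng = random.Random(seed)
    dt = T / n_steps
    err_em = err_mil = 0.0
    for _ in range(n_paths):
        x_em = x_mil = x0
        w = 0.0
        for _ in range(n_steps):
            dw = rng.gauss(0.0, math.sqrt(dt))
            w += dw
            x_em += mu * x_em * dt + sigma * x_em * dw
            # Milstein adds the 0.5*sigma^2*X*(dW^2 - dt) correction
            x_mil += (mu * x_mil * dt + sigma * x_mil * dw
                      + 0.5 * sigma**2 * x_mil * (dw * dw - dt))
        x_exact = x0 * math.exp((mu - 0.5 * sigma**2) * T + sigma * w)
        err_em += abs(x_em - x_exact)
        err_mil += abs(x_mil - x_exact)
    return err_em / n_paths, err_mil / n_paths

e_em, e_mil = gbm_strong_errors()
```

Halving `dt` should shrink `e_mil` by about 2x but `e_em` only by about 1.4x, matching the O(Δt) vs. O(Δt^{1/2}) rates quoted above.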
Milestones in Software Engineering and Knowledge Engineering History: A Comparative Review
del Águila, Isabel M.; Palma, José; Túnez, Samuel
2014-01-01
We present a review of the historical evolution of software engineering, intertwining it with the history of knowledge engineering because “those who cannot remember the past are condemned to repeat it.” This retrospective represents a further step forward to understanding the current state of both types of engineerings; history has also positive experiences; some of them we would like to remember and to repeat. Two types of engineerings had parallel and divergent evolutions but following a similar pattern. We also define a set of milestones that represent a convergence or divergence of the software development methodologies. These milestones do not appear at the same time in software engineering and knowledge engineering, so lessons learned in one discipline can help in the evolution of the other one. PMID:24624046
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dongsheng; Lavender, Curt
2015-05-08
Improving yield strength and asymmetry is critical to expand applications of magnesium alloys in industry for higher fuel efficiency and lower CO2 production. Grain refinement is an efficient method for strengthening low symmetry magnesium alloys, achievable by precipitate refinement. This study provides guidance on how precipitate engineering will improve mechanical properties through grain refinement. Precipitate refinement for improving yield strengths and asymmetry is simulated quantitatively by coupling a stochastic second phase grain refinement model and a modified polycrystalline crystal viscoplasticity φ-model. Using the stochastic second phase grain refinement model, grain size is quantitatively determined from the precipitate size and volume fraction. Yield strengths, yield asymmetry, and deformation behavior are calculated from the modified φ-model. If the precipitate shape and size remain constant, grain size decreases with increasing precipitate volume fraction. If the precipitate volume fraction is kept constant, grain size decreases with decreasing precipitate size during precipitate refinement. Yield strengths increase and asymmetry approaches one with decreasing grain size, contributed by increasing precipitate volume fraction or decreasing precipitate size.
Stochastic Geometric Network Models for Groups of Functional and Structural Connectomes
Friedman, Eric J.; Landsberg, Adam S.; Owen, Julia P.; Li, Yi-Ou; Mukherjee, Pratik
2014-01-01
Structural and functional connectomes are emerging as important instruments in the study of normal brain function and in the development of new biomarkers for a variety of brain disorders. In contrast to single-network studies that presently dominate the (non-connectome) network literature, connectome analyses typically examine groups of empirical networks and then compare these against standard (stochastic) network models. Current practice in connectome studies is to employ stochastic network models derived from social science and engineering contexts as the basis for the comparison. However, these are not necessarily best suited for the analysis of connectomes, which often contain groups of very closely related networks, such as occurs with a set of controls or a set of patients with a specific disorder. This paper studies important extensions of standard stochastic models that make them better adapted for analysis of connectomes, and develops new statistical fitting methodologies that account for inter-subject variations. The extensions explicitly incorporate geometric information about a network based on distances and inter/intra hemispherical asymmetries (to supplement ordinary degree-distribution information), and utilize a stochastic choice of networks' density levels (for fixed threshold networks) to better capture the variance in average connectivity among subjects. The new statistical tools introduced here allow one to compare groups of networks by matching both their average characteristics and the variations among them. A notable finding is that connectomes have high “smallworldness” beyond that arising from geometric and degree considerations alone. PMID:25067815
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
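The core subset-simulation idea, writing a rare-event probability as a product of more frequent conditional probabilities, each estimated by MCMC from the previous level's best samples, can be sketched on the simplest possible target: the tail of a standard normal. This is the structural-engineering prototype of the method, not the biochemical application; the level probability p0 and proposal scale are conventional choices, not from the paper.

```python
import math
import random

def subset_simulation(threshold, n=1000, p0=0.1, seed=1):
    """Estimate P(X > threshold) for X ~ N(0,1) by subset simulation:
    adaptive intermediate levels at the (1-p0) quantile, with chains
    grown by a Metropolis kernel conditioned on exceeding the level."""
    rng = random.Random(seed)
    samples = [rng.gauss(0.0, 1.0) for _ in range(n)]
    prob = 1.0
    for _ in range(50):                      # safety cap on levels
        samples.sort(reverse=True)
        n_seed = int(p0 * n)
        level = samples[n_seed - 1]          # adaptive intermediate threshold
        if level >= threshold:               # final level reached
            prob *= sum(s > threshold for s in samples) / n
            return prob
        prob *= p0                           # one more conditional factor
        seeds = samples[:n_seed]
        chains = []
        per_chain = n // n_seed
        for s in seeds:                      # modified Metropolis, 1D target
            x = s
            for _ in range(per_chain):
                cand = x + rng.gauss(0.0, 1.0)
                # standard-normal Metropolis ratio; reject below the level
                if (math.exp(-0.5 * (cand * cand - x * x)) > rng.random()
                        and cand > level):
                    x = cand
                chains.append(x)
        samples = chains
    return prob

p_est = subset_simulation(3.0)   # exact tail: 1 - Phi(3) ~ 1.35e-3
```

With p0 = 0.1 and n = 1000, reaching a 10^-6 event costs only ~6 levels of ordinary-difficulty sampling, which is the source of the speed-up over brute-force Gillespie replication.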
NASA Astrophysics Data System (ADS)
Doi, Akihiro; Hada, Kazuhiro; Kino, Motoki; Wajima, Kiyoaki; Nakahara, Satomi
2018-04-01
We report the discovery of a local convergence of a jet cross section in the quasi-stationary jet feature in the γ-ray-emitting narrow-line Seyfert 1 galaxy (NLS1) 1H 0323+342. The convergence site is located at ∼7 mas (corresponding to the order of 100 pc in deprojection) from the central engine. We also found limb-brightened jet structures at both the upstream and downstream of the convergence site. We propose that the quasi-stationary feature showing the jet convergence and limb-brightening occurs as a consequence of recollimation shock in the relativistic jets. The quasi-stationary feature is one of the possible γ-ray-emitting sites in this NLS1, in analogy with the HST-1 complex in the M87 jet. Monitoring observations have revealed that superluminal components passed through the convergence site and the peak intensity of the quasi-stationary feature, which showed apparent coincidences with the timing of observed γ-ray activities.
Convergence Science in a Nano World
Cady, Nathaniel
2013-01-01
Convergence is a new paradigm that brings together critical advances in the life sciences, physical sciences and engineering. Going beyond traditional “interdisciplinary” studies, “convergence” describes the culmination of truly integrated research and development, yielding revolutionary advances in both scientific research and new technologies. At its core, nanotechnology embodies these elements of convergence science by bringing together multiple disciplines with the goal of creating innovative and groundbreaking technologies. In the biological and biomedical sciences, nanotechnology research has resulted in dramatic improvements in sensors, diagnostics, imaging, and even therapeutics. In particular, there is a current push to examine the interface between the biological world and micro/nano-scale systems. For example, my laboratory is developing novel strategies for spatial patterning of biomolecules, electrical and optical biosensing, nanomaterial delivery systems, cellular patterning techniques, and the study of cellular interactions with nano-structured surfaces. In this seminar, I will give examples of how convergent research is being applied to three major areas of biological research: cancer diagnostics, microbiology, and DNA-based biosensing. These topics will be presented as case studies, showing the benefits (and challenges) of multi-disciplinary, convergent research and development.
NASA Astrophysics Data System (ADS)
Jiao-Ling, Lin; Xiaoli, Yin; Huan, Chang; Xiaozhou, Cui; Yi-Lin, Guo; Huan-Yu, Liao; Chun-Yu, Gao; Guohua, Wu; Guang-Yao, Liu; Jin-Kun, Jiang; Qing-Hua, Tian
2018-02-01
Atmospheric turbulence limits the performance of orbital angular momentum-based free-space optical communication (FSO-OAM) systems. In order to compensate the phase distortion induced by atmospheric turbulence, wavefront sensorless adaptive optics (WSAO) has been proposed and studied in recent years. In this paper a new version of SPGD called MZ-SPGD is proposed, which combines Z-SPGD, based on Zernike polynomials, with M-SPGD, based on the deformable mirror influence function. Numerical simulations show that the hybrid method markedly reduces the number of iterations to convergence while achieving the same compensation effect as Z-SPGD and M-SPGD.
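The SPGD loop underlying all of these variants is simple: perturb every control channel simultaneously with random ±δ, measure the metric on both sides, and step along the perturbation scaled by the measured difference. The sketch below maximizes a made-up Strehl-like metric over four abstract phase channels; the metric, target vector, and gains are illustrative, not the paper's simulation setup.

```python
import math
import random

def spgd_maximize(J, u0, gain=1.0, delta=0.1, iters=3000, seed=5):
    """Stochastic parallel gradient descent sketch: two-sided random
    perturbations of all channels at once, stepping along the
    perturbation scaled by the measured change in the metric J."""
    rng = random.Random(seed)
    u = list(u0)
    for _ in range(iters):
        d = [delta * rng.choice((-1.0, 1.0)) for _ in u]
        j_plus = J([ui + di for ui, di in zip(u, d)])
        j_minus = J([ui - di for ui, di in zip(u, d)])
        # (j_plus - j_minus) ~ 2 * grad(J) . d, so this is noisy ascent
        u = [ui + gain * (j_plus - j_minus) * di for ui, di in zip(u, d)]
    return u

# toy "Strehl ratio" metric peaked at a hypothetical target phase vector
target = [0.3, -1.2, 0.7, 0.1]
J = lambda u: math.exp(-sum((a - b) ** 2 for a, b in zip(u, target)))
u_final = spgd_maximize(J, [0.0] * 4)
```

Z-SPGD and M-SPGD differ only in what the channels u represent (Zernike coefficients vs. mirror-actuator commands); the hybrid switches basis during the run to cut the iteration count.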
Distributed parameter estimation in unreliable sensor networks via broadcast gossip algorithms.
Wang, Huiwei; Liao, Xiaofeng; Wang, Zidong; Huang, Tingwen; Chen, Guo
2016-01-01
In this paper, we present an asynchronous algorithm to estimate the unknown parameter under an unreliable network which allows new sensors to join and old sensors to leave, and can tolerate link failures. Each sensor has access to partially informative measurements when it is awakened. In addition, the proposed algorithm can avoid the interference among messages and effectively reduce the accumulated measurement and quantization errors. Based on the theory of stochastic approximation, we prove that our proposed algorithm almost surely converges to the unknown parameter. Finally, we present a numerical example to assess the performance and the communication cost of the algorithm.
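The broadcast-gossip primitive behind such schemes can be sketched in a few lines: a randomly awakened node broadcasts its current value and its neighbours mix toward it. This toy version omits measurements, quantization, and churn, and (as is characteristic of broadcast gossip) reaches consensus near, but not exactly at, the initial average; the graph and mixing weight are made up.

```python
import random

def broadcast_gossip(values, edges, rounds=2000, mix=0.5, seed=7):
    """Broadcast-gossip sketch: each round a random node wakes up and
    broadcasts; its neighbours move a fraction `mix` toward its value."""
    rng = random.Random(seed)
    vals = list(values)
    neigh = {i: [] for i in range(len(vals))}
    for i, j in edges:
        neigh[i].append(j)
        neigh[j].append(i)
    for _ in range(rounds):
        i = rng.randrange(len(vals))        # awakened broadcaster
        for j in neigh[i]:
            vals[j] += mix * (vals[i] - vals[j])
    return vals

# ring of 8 sensors with distinct initial estimates
ring = [(i, (i + 1) % 8) for i in range(8)]
vals = broadcast_gossip(list(range(8)), ring)
```

Replacing the fixed `mix` with a decaying stochastic-approximation step size, and mixing in local measurements, is what turns this consensus primitive into a distributed estimator of the kind analyzed in the paper.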
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordström, Jan, E-mail: jan.nordstrom@liu.se; Wahlsten, Markus, E-mail: markus.wahlsten@liu.se
We consider a hyperbolic system with uncertainty in the boundary and initial data. Our aim is to show that different boundary conditions give different convergence rates of the variance of the solution. This means that we can with the same knowledge of data get a more or less accurate description of the uncertainty in the solution. A variety of boundary conditions are compared and both analytical and numerical estimates of the variance of the solution are presented. As an application, we study the effect of this technique on Maxwell's equations as well as on a subsonic outflow boundary for the Euler equations.
Global dynamics of oscillator populations under common noise
NASA Astrophysics Data System (ADS)
Braun, W.; Pikovsky, A.; Matias, M. A.; Colet, P.
2012-07-01
Common noise acting on a population of identical oscillators can synchronize them. We develop a description of this process which is not limited to the states close to synchrony, but provides a global picture of the evolution of the ensembles. The theory is based on the Watanabe-Strogatz transformation, allowing us to obtain closed stochastic equations for the global variables. We show that at the initial stage, the order parameter grows linearly in time, while at the later stages the convergence to synchrony is exponentially fast. Furthermore, we extend the theory to nonidentical ensembles with the Lorentzian distribution of natural frequencies and determine the stationary values of the order parameter in dependence on driving noise and mismatch.
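Synchronization of identical oscillators by common noise is easy to reproduce numerically. The sketch below is an illustrative model, not the Watanabe-Strogatz construction of the paper: each phase is driven by the same two noise increments through sin and cos couplings (an isotropic choice whose phase differences contract at rate σ²/2), and the Kuramoto order parameter r is tracked before and after.

```python
import cmath
import math
import random

def common_noise_sync(n=20, sigma=1.0, dt=0.01, steps=5000, seed=11):
    """Identical phase oscillators driven by one COMMON noise realization,
    d(theta_i) = sigma * (sin(theta_i) dW1 + cos(theta_i) dW2).
    Returns the Kuramoto order parameter before and after."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    order = lambda th: abs(sum(cmath.exp(1j * t) for t in th)) / len(th)
    r0 = order(theta)
    sq = math.sqrt(dt)
    for _ in range(steps):
        dw1, dw2 = rng.gauss(0.0, sq), rng.gauss(0.0, sq)  # shared by all i
        theta = [t + sigma * (math.sin(t) * dw1 + math.cos(t) * dw2)
                 for t in theta]
    return r0, order(theta)

r_start, r_end = common_noise_sync()
```

Consistent with the theory, the early growth of r is slow (linear-in-time regime) while the final approach to r = 1 is exponentially fast, governed by the negative Lyapunov exponent of the phase-difference dynamics.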
Assessing the Total Factor Productivity of Cotton Production in Egypt
Rodríguez, Xosé A.; Elasraag, Yahia H.
2015-01-01
The main objective of this paper is to decompose the productivity growth of Egyptian cotton production. We employ the stochastic frontier approach and decompose the changes in total factor productivity (CTFP) growth into four components: technical progress (TP), changes in scale component (CSC), changes in allocative efficiency (CAE), and changes in technical efficiency (CTE). Considering a situation of scarce statistical information, we propose four alternative empirical models, with the purpose of looking for convergence in the results. The results provide evidence that in this production system total productivity does not increase, which is mainly due to the negative average contributions of CAE and TP. Policy implications are offered in light of the results. PMID:25625318
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yongzheng, E-mail: yzsung@gmail.com; Li, Wang; Zhao, Donghua
In this paper, we propose a new consensus model in which the interactions among agents stochastically switch between attraction and repulsion. Such a positive-and-negative mechanism is described by white-noise-based coupling. Analytic criteria for consensus and non-consensus in terms of the eigenvalues of the noise intensity matrix are derived, which provide a better understanding of the constructive roles of random interactions. Specifically, we discover a positive role of the noise coupling: noise can accelerate the emergence of consensus. We find that the convergence speed of the multi-agent network depends on the square of the second smallest eigenvalue of its graph Laplacian. The influence of network topologies on the consensus time is also investigated.
Random Walks in a One-Dimensional Lévy Random Environment
NASA Astrophysics Data System (ADS)
Bianchi, Alessandra; Cristadoro, Giampaolo; Lenci, Marco; Ligabò, Marilena
2016-04-01
We consider a generalization of a one-dimensional stochastic process known in the physical literature as Lévy-Lorentz gas. The process describes the motion of a particle on the real line in the presence of a random array of marked points, whose nearest-neighbor distances are i.i.d. and long-tailed (with finite mean but possibly infinite variance). The motion is a continuous-time, constant-speed interpolation of a symmetric random walk on the marked points. We first study the quenched random walk on the point process, proving the CLT and the convergence of all the accordingly rescaled moments. Then we derive the quenched and annealed CLTs for the continuous-time process.
Stochastic simulations of a synthetic bacteria-yeast ecosystem
2012-01-01
Background The field of synthetic biology has greatly evolved and numerous functions can now be implemented by artificially engineered cells carrying the appropriate genetic information. However, in order for the cells to robustly perform complex or multiple tasks, co-operation between them may be necessary. Therefore, various synthetic biological systems whose functionality requires cell-cell communication are being designed. These systems, microbial consortia, are composed of engineered cells and exhibit a wide range of behaviors. These include yeast cells whose growth depends on one another, or bacteria that kill or rescue each other, synchronize, behave as predator-prey ecosystems or invade cancer cells. Results In this paper, we study a synthetic ecosystem comprising bacteria and yeast that communicate with and benefit from each other using small diffusible molecules. We explore the behavior of this heterogeneous microbial consortium, composed of Saccharomyces cerevisiae and Escherichia coli cells, using stochastic modeling. The stochastic model captures the relevant intra-cellular and inter-cellular interactions taking place in and between the eukaryotic and prokaryotic cells. Integration of well-characterized molecular regulatory elements into these two microbes allows for communication through quorum sensing. A gene controlling growth in yeast is induced by bacteria via chemical signals and vice versa. Interesting dynamics that are common in natural ecosystems, such as obligatory and facultative mutualism, extinction, commensalism and predator-prey like dynamics are observed. We investigate and report on the conditions under which the two species can successfully communicate and rescue each other. Conclusions This study explores the various behaviors exhibited by the cohabitation of engineered yeast and bacterial cells. 
The way that the model is built allows for studying the dynamics of any system consisting of two species communicating with one another via chemical signals. Therefore, key information acquired by our model may potentially drive the experimental design of various synthetic heterogeneous ecosystems. PMID:22672814
NASA Astrophysics Data System (ADS)
He, Xiaojun; Ma, Haotong; Luo, Chuanxin
2016-10-01
The optical multi-aperture imaging system is an effective way to magnify the aperture and increase the resolution of a telescope optical system, the difficulty of which lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent algorithm (SPGD) to correct the co-phase error. Compared with the current method, the SPGD method can avoid detecting the co-phase error. This paper analyzed the influence of piston error and tilt error on image quality based on a double-aperture imaging system, introduced the basic principle of the SPGD algorithm, and discussed the influence of the SPGD algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced. An adaptive gain coefficient can solve this problem appropriately. These results can provide a theoretical reference for the co-phase error correction of multi-aperture imaging systems.
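The basic SPGD update is simple to state. The sketch below applies the standard two-sided SPGD iteration to a toy quadratic "image quality" metric; the metric, the target vector, and the gain/amplitude values are assumptions for illustration, not the paper's double-aperture setup:

```python
import numpy as np

rng = np.random.default_rng(1)

TARGET = np.array([0.3, -0.7, 0.5])   # hypothetical optimal corrector commands

def metric(u):
    # Toy "image quality" metric: maximal at u = TARGET (higher is better)
    return -np.sum((u - TARGET) ** 2)

u = np.zeros(3)          # e.g. piston/tilt actuator commands, initially uncorrected
gain, amp = 0.5, 0.05    # gain coefficient and disturbance amplitude

for _ in range(2000):
    du = amp * rng.choice([-1.0, 1.0], size=u.size)   # random Bernoulli perturbation
    dJ = metric(u + du) - metric(u - du)              # two-sided metric difference
    u += gain * dJ * du                               # stochastic gradient-ascent step

print(u)   # approaches TARGET without ever measuring the co-phase error directly
```

Because only the scalar metric difference dJ is measured, no wavefront sensing is needed; larger gain and amp speed up convergence at the cost of stability, as the abstract notes.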
Lagrangian analysis by clustering. An example in the Nordic Seas.
NASA Astrophysics Data System (ADS)
Koszalka, Inga; Lacasce, Joseph H.
2010-05-01
We propose a new method for obtaining average velocities and eddy diffusivities from Lagrangian data. Rather than grouping the drifter-derived velocities in uniform geographical bins, as is commonly done, we group a specified number of nearest-neighbor velocities. This is done via a clustering algorithm operating on the instantaneous positions of the drifters. Thus it is the data distribution itself which determines the positions of the averages and the areal extent of the clusters. A major advantage is that because the number of members is essentially the same for all clusters, the statistical accuracy is more uniform than with geographical bins. We illustrate the technique using synthetic data from a stochastic model, employing a realistic mean flow. The latter is an accurate representation of the surface currents in the Nordic Seas and is strongly inhomogeneous in space. We use the clustering algorithm to extract the mean velocities and diffusivities (both of which are known from the stochastic model). We also compare the results to those obtained with fixed geographical bins. Clustering is more successful at capturing spatial variability of the mean flow and also improves convergence in the eddy diffusivity estimates. We discuss both the future prospects and shortcomings of the new method.
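A rough sketch of the idea: cluster on instantaneous positions, then average member velocities per cluster. Plain Lloyd k-means is used here as an assumed stand-in (the paper's algorithm fixes the number of members per cluster instead), and the sinusoidal mean flow is a made-up test field:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "drifters": positions (x, y) and zonal velocities u = mean flow + eddy noise
n, k = 2000, 40
x = rng.uniform(0, 2 * np.pi, n)
y = rng.uniform(0, 2 * np.pi, n)
u = np.sin(y) + 0.1 * rng.normal(size=n)       # assumed mean flow: u(y) = sin(y)

def kmeans(points, k, iters=50):
    # Plain Lloyd k-means as an assumed stand-in for the paper's clustering step
    centers = points[rng.choice(len(points), k, replace=False)].copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

centers, labels = kmeans(np.column_stack([x, y]), k)

# Cluster-mean velocities estimate the mean flow at the data-determined cluster centers
occupied = np.flatnonzero(np.bincount(labels, minlength=k) > 0)
u_hat = np.array([u[labels == j].mean() for j in occupied])
err = np.abs(u_hat - np.sin(centers[occupied, 1]))
print(err.max())
```

Because the clusters follow the data distribution, densely sampled regions get more, smaller clusters, which is the uniformity-of-accuracy advantage the abstract describes over fixed geographical bins.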
Past observable dynamics of a continuously monitored qubit
NASA Astrophysics Data System (ADS)
García-Pintos, Luis Pedro; Dressel, Justin
2017-12-01
Monitoring a quantum observable continuously in time produces a stochastic measurement record that noisily tracks the observable. For a classical process, such noise may be reduced to recover an average signal by minimizing the mean squared error between the noisy record and a smooth dynamical estimate. We show that for a monitored qubit, this usual procedure returns unusual results. While the record seems centered on the expectation value of the observable during causal generation, examining the collected past record reveals that it better approximates a moving-mean Gaussian stochastic process centered at a distinct (smoothed) observable estimate. We show that this shifted mean converges to the real part of a generalized weak value in the time-continuous limit without additional postselection. We verify that this smoothed estimate minimizes the mean squared error even for individual measurement realizations. We go on to show that if a second observable is weakly monitored concurrently, then that second record is consistent with the smoothed estimate of the second observable based solely on the information contained in the first observable record. Moreover, we show that such a smoothed estimate made from incomplete information can still outperform estimates made using full knowledge of the causal quantum state.
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui
2018-06-01
This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, a satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured by just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach enables dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structure have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
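For orientation, the original spectral representation (OSR) formula for a scalar stationary process can be sketched as below; the Lorentzian-type PSD and the frequency grid are illustrative assumptions, and the paper's random-function dimension reduction is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)

def S(w):
    # Illustrative one-sided power spectral density (not the paper's wind PSD)
    return 1.0 / (1.0 + w ** 2)

N, dw = 512, 0.05
w = (np.arange(N) + 0.5) * dw            # midpoint frequency grid
amp = np.sqrt(2.0 * S(w) * dw)           # amplitudes chosen so var(X) = sum S(w) dw

def sample(t):
    # One OSR realization: X(t) = sum_k amp_k * cos(w_k t + phi_k), phi_k ~ U(0, 2*pi)
    phi = rng.uniform(0.0, 2.0 * np.pi, N)
    return amp @ np.cos(np.outer(w, t) + phi[:, None])

t = np.linspace(0.0, 10.0, 101)
X = np.stack([sample(t) for _ in range(1000)])
target_var = np.sum(S(w) * dw)           # discrete stand-in for the integral of S
print(X.var(axis=0).mean(), target_var)
```

The ensemble variance of the generated samples matches the integral of the PSD to within sampling error; the paper's contribution is replacing the N independent phases with functions of just two elementary random variables.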
Robust synthetic biology design: stochastic game theory approach.
Chen, Bor-Sen; Chang, Chia-Hung; Lee, Hsiao-Ching
2009-07-15
Synthetic biology aims to engineer artificial biological systems in order to investigate natural biological phenomena and to enable a variety of applications. However, the development of synthetic gene networks is still difficult and most newly created gene networks are non-functioning due to uncertain initial conditions and disturbances of extra-cellular environments on the host cell. At present, how to design a robust synthetic gene network that works properly under these uncertain factors is among the most important topics of synthetic biology. A robust regulation design is proposed for a stochastic synthetic gene network to achieve the prescribed steady states under these uncertain factors from the minimax regulation perspective. This minimax regulation design problem can be transformed into an equivalent stochastic game problem. Since it is not easy to solve the robust regulation design problem of synthetic gene networks by the non-linear stochastic game method directly, the Takagi-Sugeno (T-S) fuzzy model is proposed to approximate the non-linear synthetic gene network via the linear matrix inequality (LMI) technique through the Robust Control Toolbox in Matlab. Finally, an in silico example is given to illustrate the design procedure and to confirm the efficiency and efficacy of the proposed robust gene design method. http://www.ee.nthu.edu.tw/bschen/SyntheticBioDesign_supplement.pdf.
Stochastic Threshold Microdose Model for Cell Killing by Insoluble Metallic Nanomaterial Particles
Scott, Bobby R.
2010-01-01
This paper introduces a novel microdosimetric model for metallic nanomaterial-particles (MENAP)-induced cytotoxicity. The focus is on the engineered insoluble MENAP which represent a significant breakthrough in the design and development of new products for consumers, industry, and medicine. Increased production is rapidly occurring and may cause currently unrecognized health effects (e.g., nervous system dysfunction, heart disease, cancer); thus, dose-response models for MENAP-induced biological effects are needed to facilitate health risk assessment. The stochastic threshold microdose (STM) model presented introduces novel stochastic microdose metrics for use in constructing dose-response relationships for the frequency of specific cellular (e.g., cell killing, mutations, neoplastic transformation) or subcellular (e.g., mitochondria dysfunction) effects. A key metric is the exposure-time-dependent, specific burden (MENAP count) for a given critical target (e.g., mitochondria, nucleus). Exceeding a stochastic threshold specific burden triggers cell death. For critical targets in the cytoplasm, the autophagic mode of death is triggered. For the nuclear target, the apoptotic mode of death is triggered. Overall cell survival is evaluated for the indicated competing modes of death when both apply. The STM model can be applied to cytotoxicity data using Bayesian methods implemented via Markov chain Monte Carlo. PMID:21191483
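A minimal numeric sketch of the threshold idea: if the critical-target burden is Poisson with an exposure-dependent mean and death is triggered when the burden reaches a threshold, survival is a Poisson tail probability. The fixed threshold of 10 particles is an assumption for illustration; in the STM model the threshold itself is stochastic and competing death modes are combined:

```python
from math import exp

def poisson_cdf(k, mu):
    # P(N <= k) for N ~ Poisson(mu), summed term by term
    term, total = exp(-mu), exp(-mu)
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def survival(mean_burden, threshold):
    # Cell survives while its specific burden (particle count) stays below threshold
    return poisson_cdf(threshold - 1, mean_burden)

for mu in [1.0, 5.0, 10.0, 20.0]:
    print(mu, survival(mu, threshold=10))
```

The resulting dose-response curve is sigmoidal in the mean burden, which is the qualitative shape a threshold-on-count mechanism produces.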
Probabilistic analysis of structures involving random stress-strain behavior
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Thacker, B. H.; Harren, S. V.
1991-01-01
The present methodology for analysis of structures with random stress-strain behavior characterizes the uniaxial stress-strain curve in terms of (1) elastic modulus, (2) engineering stress at initial yield, (3) initial plastic-hardening slope, (4) engineering stress at point of ultimate load, and (5) engineering strain at point of ultimate load. The methodology is incorporated into the Numerical Evaluation of Stochastic Structures Under Stress code for probabilistic structural analysis. The illustrative problem of a thick cylinder under internal pressure, where both the internal pressure and the stress-strain curve are random, is addressed by means of the code. The response value is the cumulative distribution function of the equivalent plastic strain at the inner radius.
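The probabilistic-analysis workflow can be sketched with plain Monte Carlo: sample the random inputs, push them through a response function, and read off the empirical CDF of the response. The toy stress-concentration response below is an invented placeholder, not the thick-cylinder mechanics or the NESSUS code:

```python
import numpy as np

rng = np.random.default_rng(4)

n = 20000
pressure = rng.normal(100.0, 10.0, n)    # random internal pressure (illustrative units)
sigma_y = rng.normal(250.0, 20.0, n)     # random engineering stress at initial yield

# Toy response: a plastic-strain proxy that grows once stress exceeds yield
stress = 2.0 * pressure                  # assumed stress-concentration factor of 2
eps_p = np.maximum(stress - sigma_y, 0.0) / 1000.0

def ecdf(x, q):
    # Empirical CDF of the response evaluated at threshold q
    return np.mean(x <= q)

print(ecdf(eps_p, 0.0))   # probability that no plastic strain occurs at all
```

The empirical CDF converges to the true response distribution as the sample count grows; the cited code obtains the same CDF with more efficient probabilistic methods than brute-force sampling.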
Stochasticity of convection in Giga-LES data
NASA Astrophysics Data System (ADS)
De La Chevrotière, Michèle; Khouider, Boualem; Majda, Andrew J.
2016-09-01
The poor representation of tropical convection in general circulation models (GCMs) is believed to be responsible for much of the uncertainty in the predictions of weather and climate in the tropics. The stochastic multicloud model (SMCM) was recently developed by Khouider et al. (Commun Math Sci 8(1):187-216, 2010) to represent the missing variability in GCMs due to unresolved features of organized tropical convection. The SMCM is based on three cloud types (congestus, deep and stratiform), and transitions between these cloud types are formalized in terms of probability rules that are functions of the large-scale environment convective state and a set of seven arbitrary cloud timescale parameters. Here, a statistical inference method based on the Bayesian paradigm is applied to estimate these key cloud timescales from the Giga-LES dataset, a 24-h large-eddy simulation (LES) of deep tropical convection (Khairoutdinov et al. in J Adv Model Earth Syst 1(12), 2009) over a domain comparable to a GCM gridbox. A sequential learning strategy is used where the Giga-LES domain is partitioned into a few subdomains, and atmospheric time series obtained on each subdomain are used to train the Bayesian procedure incrementally. Convergence of the marginal posterior densities for all seven parameters is demonstrated for two different grid partitions, and sensitivity tests to other model parameters are also presented. A single column model simulation using the SMCM parameterization with the Giga-LES inferred parameters reproduces many important statistical features of the Giga-LES run, without any further tuning. In particular it exhibits intermittent dynamical behavior in both the stochastic cloud fractions and the large scale dynamics, with periods of dry phases followed by a coherent sequence of congestus, deep, and stratiform convection, varying on timescales of a few hours consistent with the Giga-LES time series. 
The chaotic variations of the cloud area fractions were captured fairly well, both qualitatively and quantitatively, demonstrating the stochastic nature of convection in the Giga-LES simulation.
Cho, Yongrae; Kim, Minsung
2014-01-01
The volatility and uncertainty in the process of technological developments are growing faster than ever due to rapid technological innovations. Such phenomena result in integration among disparate technology fields. At this point, it is a critical research issue to understand the different roles and the propensity of each element technology for technological convergence. In particular, the network-based approach provides a holistic view in terms of technological linkage structures. Furthermore, the development of new indicators based on network visualization can reveal the dynamic patterns among disparate technologies in the process of technological convergence and provide insights for future technological developments. This research attempts to analyze and discover the patterns of the international patent classification codes of the United States Patent and Trademark Office's patent data in printed electronics, which is a representative technology in the technological convergence process. To this end, we apply ideas from physics as a new methodological approach to interpreting technological convergence. More specifically, the concepts of entropy and gravity are applied to measure the activities among patent citations and the binding forces among heterogeneous technologies during technological convergence. By applying the entropy and gravity indexes, we could distinguish the characteristic role of each technology in printed electronics. At the technological convergence stage, each technology exhibits idiosyncratic dynamics which tend to decrease technological differences and heterogeneity. Furthermore, through nonlinear regression analysis, we have found decreasing patterns of disparity over the total period in the evolution of technological convergence. This research has discovered the specific role of each element technology field and has consequently identified the co-evolutionary patterns of technological convergence. 
These new findings on the evolutionary patterns of technological convergence provide some implications for engineering and technology foresight research, as well as for corporate strategy and technology policy.
Poly (lactic acid)-based biomaterials for orthopaedic regenerative engineering.
Narayanan, Ganesh; Vernekar, Varadraj N; Kuyinu, Emmanuel L; Laurencin, Cato T
2016-12-15
Regenerative engineering converges tissue engineering, advanced materials science, stem cell science, and developmental biology to regenerate complex tissues such as whole limbs. Regenerative engineering scaffolds provide mechanical support and nanoscale control over architecture, topography, and biochemical cues to influence cellular outcome. In this regard, poly (lactic acid) (PLA)-based biomaterials may be considered as a gold standard for many orthopaedic regenerative engineering applications because of their versatility in fabrication, biodegradability, and compatibility with biomolecules and cells. Here we discuss recent developments in PLA-based biomaterials with respect to processability and current applications in the clinical and research settings for bone, ligament, meniscus, and cartilage regeneration. Copyright © 2016 Elsevier B.V. All rights reserved.
Precision engineering: an evolutionary perspective.
Evans, Chris J
2012-08-28
Precision engineering is a relatively new name for a technology with roots going back over a thousand years; those roots span astronomy, metrology, fundamental standards, manufacturing and money-making (literally). Throughout that history, precision engineers have created links across disparate disciplines to generate innovative responses to society's needs and wants. This review combines historical and technological perspectives to illuminate precision engineering's current character and directions. It first provides us a working definition of precision engineering and then reviews the subject's roots. Examples will be given showing the contributions of the technology to society, while simultaneously showing the creative tension between the technological convergence that spurs new directions and the vertical disintegration that optimizes manufacturing economics.
1961-01-01
As presented by Gerhard Heller of Marshall Space Flight Center's Research Projects Division in 1961, this chart illustrates three basic types of electric propulsion systems then under consideration by NASA. The ion engine (top) utilized cesium atoms ionized by hot tungsten and accelerated by an electrostatic field to produce thrust. The arc engine (middle) achieved propulsion by heating a propellant with an electric arc and then producing an expansion of the hot gas or plasma in a convergent-divergent duct. The electromagnetic, or MFD engine (bottom) manipulated strong magnetic fields to interact with a plasma and produce acceleration.
Gaussian random bridges and a geometric model for information equilibrium
NASA Astrophysics Data System (ADS)
Mengütürk, Levent Ali
2018-03-01
The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T,0)-bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.
Stochastic sensing through covalent interactions
Bayley, Hagan; Shin, Seong-Ho; Luchian, Tudor; Cheley, Stephen
2013-03-26
A system and method for stochastic sensing in which the analyte covalently bonds to the sensor element or an adaptor element. If such bonding is irreversible, the bond may be broken by a chemical reagent. The sensor element may be a protein, such as the engineered P_SH-type or αHL protein pore. The analyte may be any reactive analyte, including chemical weapons, environmental toxins and pharmaceuticals. The analyte covalently bonds to the sensor element to produce a detectable signal. Possible signals include change in electrical current, change in force, and change in fluorescence. Detection of the signal allows identification of the analyte and determination of its concentration in a sample solution. Multiple analytes present in the same solution may be detected.
Stochastic investigation of wind process for climatic variability identification
NASA Astrophysics Data System (ADS)
Deligiannis, Ilias; Tyrogiannis, Vassilis; Daskalou, Olympia; Dimitriadis, Panayiotis; Markonis, Yannis; Iliopoulou, Theano; Koutsoyiannis, Demetris
2016-04-01
The wind process is considered one of the hydrometeorological processes that generates and drives the climate dynamics. We use a dataset comprising hourly wind records to identify statistical variability with emphasis on the last period. Specifically, we investigate the occurrence of mean, maximum and minimum values and we estimate statistical properties such as marginal probability distribution function and the type of decay of the climacogram (i.e., mean process variance vs. scale) for various time periods. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
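The climacogram mentioned above is simply the variance of the scale-k averaged process plotted against the averaging scale k. A minimal sketch, using white noise (for which the climacogram decays as 1/k) in place of actual hourly wind records:

```python
import numpy as np

rng = np.random.default_rng(5)

def climacogram(x, scales):
    # Variance of block means of x at each averaging scale k
    out = []
    for k in scales:
        nblocks = len(x) // k
        means = x[: nblocks * k].reshape(nblocks, k).mean(axis=1)
        out.append(means.var())
    return np.array(out)

x = rng.normal(size=200_000)          # white-noise stand-in for an hourly wind record
scales = [1, 2, 4, 8, 16, 32]
g = climacogram(x, scales)
print(g)   # for white noise, gamma(k) decays like sigma**2 / k
```

A slower-than-1/k decay of the empirical climacogram is the signature of long-term persistence that such studies look for in real hydrometeorological records.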
Stochastic investigation of precipitation process for climatic variability identification
NASA Astrophysics Data System (ADS)
Sotiriadou, Alexia; Petsiou, Amalia; Feloni, Elisavet; Kastis, Paris; Iliopoulou, Theano; Markonis, Yannis; Tyralis, Hristos; Dimitriadis, Panayiotis; Koutsoyiannis, Demetris
2016-04-01
The precipitation process is important not only to hydrometeorology but also to renewable energy resources management. We use a dataset consisting of daily and hourly records around the globe to identify statistical variability with emphasis on the last period. Specifically, we investigate the occurrence of mean, maximum and minimum values and we estimate statistical properties such as marginal probability distribution function and the type of decay of the climacogram (i.e., mean process variance vs. scale). Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
Control mechanisms for stochastic biochemical systems via computation of reachable sets.
Lakatos, Eszter; Stumpf, Michael P H
2017-08-01
Controlling the behaviour of cells by rationally guiding molecular processes is an overarching aim of much of synthetic biology. Molecular processes, however, are notoriously noisy and frequently nonlinear. We present an approach to studying the impact of control measures on motifs of molecular interactions that addresses the problems faced in many biological systems: stochasticity, parameter uncertainty and nonlinearity. We show that our reachability analysis formalism can describe the potential behaviour of biological (naturally evolved as well as engineered) systems, and provides a set of bounds on their dynamics at the level of population statistics: for example, we can obtain the possible ranges of means and variances of mRNA and protein expression levels, even in the presence of uncertainty about model parameters.
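As a toy illustration of bounding population statistics under parameter uncertainty (a far simpler setting than the paper's reachability formalism): for a constitutive birth-death mRNA model the stationary copy-number law is Poisson(k/γ), so the possible range of the mean (and variance) over interval-valued parameters follows from monotonicity alone:

```python
# Stationary mRNA statistics for a birth-death process: production rate k,
# degradation rate g. The stationary copy-number law is Poisson(k/g), so
# mean = variance = k/g, and both are monotone in k and g.

def stationary_mean(k, g):
    return k / g

def mean_bounds(k_range, g_range):
    # Range of stationary means over interval-valued (uncertain) parameters
    lo = stationary_mean(k_range[0], g_range[1])   # smallest k, fastest decay
    hi = stationary_mean(k_range[1], g_range[0])   # largest k, slowest decay
    return lo, hi

lo, hi = mean_bounds(k_range=(8.0, 12.0), g_range=(0.5, 1.0))
print(lo, hi)   # possible range of the mean (and variance) of mRNA copy number
```

The reachability analysis of the paper generalizes this picture to time-varying bounds for nonlinear motifs where no closed-form stationary law exists.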
Lai, Zhi-Hui; Leng, Yong-Gang
2015-01-01
A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator for the necessity of parameter adjustments. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model; and to analyze and summarize the parameter-adjusted rules under unmatched signal amplitude, frequency, and/or noise-intensity. Furthermore, we propose the weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering application. PMID:26343671
Hruszkewycz, Stephan O; Holt, Martin V; Tripathi, Ash; Maser, Jörg; Fuoss, Paul H
2011-06-15
We present the framework for convergent beam Bragg ptychography, and, using simulations, we demonstrate that nanocrystals can be ptychographically reconstructed from highly convergent x-ray Bragg diffraction. The ptychographic iterative engine is extended to three dimensions and shown to successfully reconstruct a simulated nanocrystal using overlapping raster scans with a defocused curved beam, the diameter of which matches the crystal size. This object reconstruction strategy can serve as the basis for coherent diffraction imaging experiments at coherent scanning nanoprobe x-ray sources.
1983-12-01
Results show that for equal indices of refraction inside and outside the tunnel, the laser beams of a converging pair do not totally converge.
Biomaterials for Bone Regenerative Engineering.
Yu, Xiaohua; Tang, Xiaoyan; Gohil, Shalini V; Laurencin, Cato T
2015-06-24
Strategies for bone tissue regeneration have been continuously evolving for the last 25 years since the introduction of the "tissue engineering" concept. The convergence of the life, physical, and engineering sciences has brought in several advanced technologies available to tissue engineers and scientists. This resulted in the creation of a new multidisciplinary field termed as "regenerative engineering". In this article, the role of biomaterials in bone regenerative engineering is systematically reviewed to elucidate the new design criteria for the next generation of biomaterials for bone regenerative engineering. The exemplary design of biomaterials harnessing various materials characteristics towards successful bone defect repair and regeneration is highlighted. Particular attention is given to the attempts of incorporating advanced materials science, stem cell technologies, and developmental biology into biomaterials design to engineer and develop the next generation bone grafts. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Finney, Charles E.; Kaul, Brian C.; Daw, C. Stuart; ...
2015-02-18
Here we review developments in the understanding of cycle-to-cycle variability in internal combustion engines, with a focus on spark-ignited and premixed combustion conditions. Much of the research on cyclic variability has focused on stochastic aspects, that is, features that can be modeled as inherently random with no short-term predictability. In some cases, models of this type appear to work very well at describing experimental observations, but the lack of predictability limits control options. Also, even when the statistical properties of the stochastic variations are known, it can be very difficult to discern their underlying physical causes and thus mitigate them. Some recent studies have demonstrated that under some conditions, cyclic combustion variations can have a relatively high degree of low-dimensional deterministic structure, which implies some degree of predictability and potential for real-time control. These deterministic effects are typically more pronounced near critical stability limits (e.g., near tipping points associated with ignition or flame propagation), such as during highly dilute fueling or near the onset of homogeneous charge compression ignition. We review recent progress in experimental and analytical characterization of cyclic variability where low-dimensional, deterministic effects have been observed. We describe some theories about the sources of these dynamical features and discuss prospects for interactive control and improved engine designs. Taken as a whole, the research summarized here implies that the deterministic component of cyclic variability will become a pivotal issue (and potential opportunity) as engine manufacturers strive to meet aggressive emissions and fuel economy regulations in the coming decades.
Tseng, Zhijie Jack
2013-01-01
Morphological convergence is a well documented phenomenon in mammals, and adaptive explanations are commonly employed to infer similar functions for convergent characteristics. I present a study that adopts aspects of theoretical morphology and engineering optimization to test hypotheses about adaptive convergent evolution. Bone-cracking ecomorphologies in Carnivora were used as a case study. Previous research has shown that skull deepening and widening are major evolutionary patterns in convergent bone-cracking canids and hyaenids. A simple two-dimensional design space, with skull width-to-length and depth-to-length ratios as variables, was used to examine optimized shapes for two functional properties: mechanical advantage (MA) and strain energy (SE). Functionality of theoretical skull shapes was studied using finite element analysis (FEA) and visualized as functional landscapes. The distribution of actual skull shapes in the landscape showed a convergent trend of plesiomorphically low-MA and moderate-SE skulls evolving towards higher-MA and moderate-SE skulls; this is corroborated by FEA of 13 actual specimens. Nevertheless, regions exist in the landscape where high-MA and lower-SE shapes are not represented by existing species; their vacancy is observed even at higher taxonomic levels. Results highlight the interaction of biomechanical and non-biomechanical factors in constraining general skull dimensions to localized functional optima through evolution. PMID:23734244
Practical application of noise diffusion in U-70 synchrotron
NASA Astrophysics Data System (ADS)
Ivanov, S. V.; Lebedev, O. P.
2016-12-01
This paper briefly outlines the physical substantiation and the engineering implementation of technological systems in the U-70 synchrotron based on controllable noise diffusion of the beam. They include two systems of stochastic slow beam extraction (for high and intermediate energy) and a system of longitudinal noise RF gymnastics designed to flatten the bunch distribution over the azimuth.
NASA Astrophysics Data System (ADS)
Wang, Liwei; Liu, Xinggao; Zhang, Zeyin
2017-02-01
An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.
Compartmental and Spatial Rule-Based Modeling with Virtual Cell.
Blinov, Michael L; Schaff, James C; Vasilescu, Dan; Moraru, Ion I; Bloom, Judy E; Loew, Leslie M
2017-10-03
In rule-based modeling, molecular interactions are systematically specified in the form of reaction rules that serve as generators of reactions. This provides a way to account for all the potential molecular complexes and interactions among multivalent or multistate molecules. Recently, we introduced rule-based modeling into the Virtual Cell (VCell) modeling framework, permitting graphical specification of rules and merger of networks generated automatically (using the BioNetGen modeling engine) with hand-specified reaction networks. VCell provides a number of ordinary differential equation and stochastic numerical solvers for single-compartment simulations of the kinetic systems derived from these networks, and agent-based network-free simulation of the rules. In this work, compartmental and spatial modeling of rule-based models has been implemented within VCell. To enable rule-based deterministic and stochastic spatial simulations and network-free agent-based compartmental simulations, the BioNetGen and NFSim engines were each modified to support compartments. In the new rule-based formalism, every reactant and product pattern and every reaction rule are assigned locations. We also introduce the rule-based concept of molecular anchors. This assures that any species that has a molecule anchored to a predefined compartment will remain in this compartment. Importantly, in addition to formulation of compartmental models, this now permits VCell users to seamlessly connect reaction networks derived from rules to explicit geometries to automatically generate a system of reaction-diffusion equations. These may then be simulated using either the VCell partial differential equations deterministic solvers or the Smoldyn stochastic simulator. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Russian engineering education in the era of change
NASA Astrophysics Data System (ADS)
Vladimirovich Pukharenko, Yurii; Vladimirovna Norina, Natalia; Aleksandrovich Norin, Veniamin
2017-03-01
The article investigates modern issues of engineering education in Russia related to the introduction of the Bologna system. The authors show that the situation in education in general gives reason for concern: the problem of recruiting well-qualified students into engineering specialties is worsening, and graduates with master's and bachelor's degrees are not in demand in industry or agriculture because their training poorly prepares them for real-world work. The main cause of the problems in engineering personnel training in Russia (the lack of an effective relationship between employers and universities) is discussed, and ways to overcome these issues in high-quality engineering training are investigated. The authors consider new requirements for engineering education and briefly compare the Russian model of engineering education with the European and American models. The prospects of Russian engineering education (a transition to the sixth technological mode) and issues of NBIC-convergent engineering education are examined.
Masters, Rich; Capio, Catherine; Poolton, Jamie; Uiga, Liis
2018-06-01
Re-engineering the built environment to influence behaviors associated with physical activity potentially provides an opportunity to promote healthier lifestyles at a population level. Here we present evidence from two quasi-experimental field studies in which we tested a novel, yet deceptively simple, intervention designed to alter perception of, and walking behavior associated with, stairs in an urban area. Our objectives were to examine whether adjusting a stair banister has an influence on perceptions of stair steepness or on walking behavior when approaching the stairs. In study 1, we asked participants (n = 143) to visually estimate the steepness of a set of stairs viewed from the top, when the stair banister was adjusted so that it converged with or diverged from the stairs (± 1.91°) or remained neutral (± 0°). In study 2, the walking behavior of participants (n = 36) was filmed as they approached the stairs to descend, unaware of whether the banister converged, diverged, or was neutral. In study 1, participants estimated the stairs to be steeper if the banister diverged from, rather than converged with, the stairs. The effect was greater when participants were unaware of the adjustment. In study 2, walking speed was significantly slower when the banister diverged from, rather than converged with, the stairs. These findings encourage us to speculate about the potential to economically re-engineer features of the built environment to provide opportunities for action (affordances) that invite physical activity behavior or even promote safer navigation of the environment.
A convergent model for distributed processing of Big Sensor Data in urban engineering networks
NASA Astrophysics Data System (ADS)
Parygin, D. S.; Finogeev, A. G.; Kamaev, V. A.; Finogeev, A. A.; Gnedkova, E. P.; Tyukov, A. P.
2017-01-01
The problems of developing and researching a convergent model of grid, cloud, fog, and mobile computing for analytical Big Sensor Data processing are reviewed. The model is intended for building monitoring systems of spatially distributed objects of urban engineering networks and processes. The proposed approach is a convergence model for organizing distributed data processing. The fog computing model is used for the processing and aggregation of sensor data at the network nodes and/or industrial controllers. Program agents are loaded to perform computing tasks for primary processing and data aggregation. The grid and cloud computing models are used for mining and accumulating integral indicators. The computing cluster has a three-tier architecture, which includes the main server at the first level, a cluster of SCADA system servers at the second level, and a set of GPU cards supporting the Compute Unified Device Architecture (CUDA) at the third level. The mobile computing model is applied to visualize the results of the intellectual analysis with elements of augmented reality and geo-information technologies. The integral indicators are transferred to the data center for accumulation in a multidimensional storage for the purposes of data mining and knowledge discovery.
multiUQ: An intrusive uncertainty quantification tool for gas-liquid multiphase flows
NASA Astrophysics Data System (ADS)
Turnquist, Brian; Owkes, Mark
2017-11-01
Uncertainty quantification (UQ) can improve our understanding of the sensitivity of gas-liquid multiphase flows to variability about inflow conditions and fluid properties, creating a valuable tool for engineers. While non-intrusive UQ methods (e.g., Monte Carlo) are simple and robust, the cost associated with these techniques can render them impractical. In contrast, intrusive UQ techniques modify the governing equations by replacing deterministic variables with stochastic variables, adding complexity, but making UQ cost effective. Our numerical framework, called multiUQ, introduces an intrusive UQ approach for gas-liquid flows, leveraging a polynomial chaos expansion of the stochastic variables: density, momentum, pressure, viscosity, and surface tension. The gas-liquid interface is captured using a conservative level set approach, including a modified reinitialization equation which is robust and quadrature free. A least-squares method is leveraged to compute the stochastic interface normal and curvature needed in the continuum surface force method for surface tension. The solver is tested by applying uncertainty to one or two variables and verifying results against the Monte Carlo approach. NSF Grant #1511325.
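The polynomial chaos idea at the core of such intrusive methods can be illustrated in a few lines: a smooth response to a Gaussian input is projected onto probabilists' Hermite polynomials, and the mean and variance are read off the coefficients. This is a generic, minimal sketch (the response function, expansion order, and parameter values are our own illustrative choices, not part of multiUQ):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coeffs(f, order=8):
    """Coefficients c_k of f(xi) = sum_k c_k He_k(xi) for xi ~ N(0,1),
    computed by Gauss-Hermite quadrature: c_k = E[f(xi) He_k(xi)] / k!."""
    x, w = hermegauss(order + 1)
    w = w / np.sqrt(2.0 * np.pi)           # quadrature weights for the N(0,1) measure
    fx = f(x)
    return np.array([np.sum(w * fx * hermeval(x, np.eye(order + 1)[k])) / factorial(k)
                     for k in range(order + 1)])

f = lambda xi: np.exp(0.3 * xi)            # a smooth response to an uncertain input
c = pce_coeffs(f)
mean_pce = c[0]                            # E[f] is the zeroth coefficient
var_pce = sum(c[k] ** 2 * factorial(k) for k in range(1, len(c)))
print(round(mean_pce, 3), round(var_pce, 3))  # ~= exp(0.045), exp(0.09)*(exp(0.09)-1)
```

An intrusive solver carries expansions like this through the discretized governing equations, so a single simulation propagates the full stochastic description instead of requiring many Monte Carlo realizations.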
NASA Astrophysics Data System (ADS)
Hanachi, Houman; Liu, Jie; Banerjee, Avisekh; Chen, Ying
2016-05-01
Health state estimation of inaccessible components in complex systems necessitates effective state estimation techniques using the observable variables of the system. The task becomes much more complicated when the system is nonlinear/non-Gaussian and receives stochastic input. In this work, a novel sequential state estimation framework is developed based on a particle filtering (PF) scheme for state estimation of a general class of nonlinear dynamical systems with stochastic input. The performance of the developed framework is first validated by simulation on a Bivariate Non-stationary Growth Model (BNGM) as a benchmark. Next, three years of operating data from an industrial gas turbine engine (GTE) are utilized to verify the effectiveness of the developed framework. A comprehensive thermodynamic model of the GTE is therefore developed to formulate the relation between the observable parameters and the dominant degradation symptoms of the turbine, namely, loss of isentropic efficiency and increase of the mass flow. The results confirm the effectiveness of the developed framework for simultaneous estimation of multiple degradation symptoms in complex systems with noisy measured inputs.
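As a concrete illustration of the particle filtering scheme underlying such a framework, here is a minimal bootstrap particle filter on a toy scalar model; the model, noise levels, and particle count are hypothetical stand-ins, far simpler than the GTE application:

```python
import numpy as np

def bootstrap_pf(ys, n_particles=2000, a=0.9, q=0.5, r=0.5, seed=0):
    """Minimal bootstrap particle filter (sequential importance resampling)
    for the toy scalar model x_k = a*x_{k-1} + w_k, y_k = x_k + v_k,
    with w ~ N(0, q^2) and v ~ N(0, r^2). Returns posterior-mean estimates."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)                # initial particle cloud
    means = []
    for y in ys:
        x = a * x + rng.normal(0.0, q, n_particles)      # propagate through the dynamics
        w = np.exp(-0.5 * ((y - x) / r) ** 2)            # measurement likelihood weights
        w /= w.sum()
        means.append(np.sum(w * x))                      # weighted posterior mean
        x = x[rng.choice(n_particles, n_particles, p=w)] # resample
    return np.array(means)

# simulate a trajectory, then filter the noisy measurements
rng = np.random.default_rng(42)
xs, x = [], 0.0
for _ in range(100):
    x = 0.9 * x + rng.normal(0.0, 0.5)
    xs.append(x)
ys = np.array(xs) + rng.normal(0.0, 0.5, 100)
est = bootstrap_pf(ys)
rmse = np.sqrt(np.mean((est - np.array(xs)) ** 2))
print(rmse)   # below the raw measurement noise (0.5), i.e. the filter helps
```

The abstract's framework additionally handles stochastic *inputs*; the sketch above only shows the core predict-weight-resample cycle.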
Mangado, Nerea; Piella, Gemma; Noailly, Jérôme; Pons-Prats, Jordi; Ballester, Miguel Ángel González
2016-01-01
Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering. PMID:27872840
NASA Astrophysics Data System (ADS)
Li, Yong-Fu; Xiao-Pei, Kou; Zheng, Tai-Xiong; Li, Yin-Guo
2015-05-01
In transportation cyber-physical systems (T-CPS), vehicle-to-vehicle (V2V) communications play an important role in the coordination between individual vehicles, as well as between vehicles and roadside infrastructure, and engine cylinder pressure is significant for online engine diagnosis and torque control within the information exchange process under V2V communications. However, parametric uncertainties caused by measurement noise in T-CPS lead to deterioration of the dynamic performance of the engine cylinder pressure estimation. Considering the high accuracy requirement under V2V communications, a high-gain observer based on the engine dynamic model is designed to improve the accuracy of pressure estimation. Then, analyses of the convergence, convergence speed, and stability of the corresponding error model are conducted using the Laplace and Lyapunov methods. Finally, results from numerical experiments combining Simulink with GT-Power, together with comparisons, demonstrate the effectiveness of the proposed approach with respect to robustness and accuracy. Project supported by the National Natural Science Foundation of China (Grant No. 61304197), the Scientific and Technological Talents of Chongqing, China (Grant No. cstc2014kjrc-qnrc30002), the Key Project of Application and Development of Chongqing, China (Grant No. cstc2014yykfB40001), the Natural Science Funds of Chongqing, China (Grant No. cstc2014jcyjA60003), and the Doctoral Start-up Funds of Chongqing University of Posts and Telecommunications, China (Grant No. A2012-26).
Moore, Amanda M; Dameron, Arrelaine A; Mantooth, Brent A; Smith, Rachel K; Fuchs, Daniel J; Ciszek, Jacob W; Maya, Francisco; Yao, Yuxing; Tour, James M; Weiss, Paul S
2006-02-15
Six customized phenylene-ethynylene-based oligomers have been studied for their electronic properties using scanning tunneling microscopy to test hypothesized mechanisms of stochastic conductance switching. Previously suggested mechanisms include functional group reduction, functional group rotation, backbone ring rotation, neighboring molecule interactions, bond fluctuations, and hybridization changes. Here, we test these hypotheses experimentally by varying the molecular designs of the switches; the ability of the molecules to switch via each hypothetical mechanism is selectively engineered into or out of each molecule. We conclude that hybridization changes at the molecule-surface interface are responsible for the switching we observe.
Multiple Scattering in Random Mechanical Systems and Diffusion Approximation
NASA Astrophysics Data System (ADS)
Feres, Renato; Ng, Jasmine; Zhang, Hong-Kun
2013-10-01
This paper is concerned with stochastic processes that model multiple (or iterated) scattering in classical mechanical systems of billiard type, defined below. From a given (deterministic) system of billiard type, a random process with transition probability operator P is introduced by assuming that some of the dynamical variables are random with prescribed probability distributions. Of particular interest are systems with weak scattering, which are associated to parametric families of operators P_h, depending on a geometric or mechanical parameter h, that approach the identity as h goes to 0. It is shown that (P_h - I)/h converges for small h to a second-order elliptic differential operator L on compactly supported functions, and that the Markov chain process associated to P_h converges to a diffusion with infinitesimal generator L. Both P_h and L are self-adjoint and (densely) defined on the space of square-integrable functions with respect to a stationary measure η over the (lower) half-space. This measure's density is either the (post-collision) Maxwell-Boltzmann distribution or the Knudsen cosine law, and the random processes with infinitesimal generator L respectively correspond to what we call MB diffusion and (generalized) Legendre diffusion. Concrete examples of simple mechanical systems are given and illustrated by numerically simulating the random processes.
Hao, Nan; Palmer, Adam C.; Ahlgren-Berg, Alexandra; Shearwin, Keith E.; Dodd, Ian B.
2016-01-01
Transcriptional interference (TI), where transcription from a promoter is inhibited by the activity of other promoters in its vicinity on the same DNA, enables transcription factors to regulate a target promoter indirectly, inducing or relieving TI by controlling the interfering promoter. For convergent promoters, stochastic simulations indicate that relief of TI can be inhibited if the repressor at the interfering promoter has slow binding kinetics, making it either sensitive to frequent dislodgement by elongating RNA polymerases (RNAPs) from the target promoter, or able to be a strong roadblock to these RNAPs. In vivo measurements of relief of TI by CI or Cro repressors in the bacteriophage λ PR–PRE system show strong relief of TI and a lack of dislodgement and roadblocking effects, indicative of rapid CI and Cro binding kinetics. However, repression of the same λ promoter by a catalytically dead CRISPR Cas9 protein gave either compromised or no relief of TI depending on the orientation at which it binds DNA, consistent with dCas9 being a slow kinetics repressor. This analysis shows how the intrinsic properties of a repressor can be evolutionarily tuned to set the magnitude of relief of TI. PMID:27378773
Numerical Solution of Dyson Brownian Motion and a Sampling Scheme for Invariant Matrix Ensembles
NASA Astrophysics Data System (ADS)
Li, Xingjie Helen; Menon, Govind
2013-12-01
The Dyson Brownian Motion (DBM) describes the stochastic evolution of N points on the line driven by an applied potential, a Coulombic repulsion and identical, independent Brownian forcing at each point. We use an explicit tamed Euler scheme to numerically solve the Dyson Brownian motion and sample the equilibrium measure for non-quadratic potentials. The Coulomb repulsion is too singular for the SDE to satisfy the hypotheses of rigorous convergence proofs for tamed Euler schemes (Hutzenthaler et al. in Ann. Appl. Probab. 22(4):1611-1641, 2012). Nevertheless, in practice the scheme is observed to be stable for time steps of O(1/N^2) and to relax exponentially fast to the equilibrium measure with a rate constant of O(1) independent of N. Further, this convergence rate appears to improve with N in accordance with O(1/N) relaxation of local statistics of the Dyson Brownian motion. This allows us to use the Dyson Brownian motion to sample N×N Hermitian matrices from the invariant ensembles. The computational cost of generating M independent samples is O(MN^4) with a naive scheme, and O(MN^3 log N) when a fast multipole method is used to evaluate the Coulomb interaction.
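The taming idea can be sketched in a few lines; the normalization of the potential, the time step, and the particle count below are our own illustrative choices (a quadratic potential, unlike the non-quadratic ones targeted by the paper), chosen so the equilibrium is the familiar GUE semicircle law:

```python
import numpy as np

def dbm_tamed_euler(n=30, h=1e-3, steps=5000, beta=2.0, seed=0):
    """Tamed Euler scheme for Dyson Brownian motion with V(x) = x^2/4:
    dl_i = ( (1/n) sum_{j!=i} 1/(l_i - l_j) - l_i/2 ) dt + sqrt(2/(beta*n)) dB_i.
    The drift b is tamed to b/(1 + h*|b|), so the singular Coulomb term
    cannot produce an explosive step. All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    lam = np.sort(rng.normal(0.0, 1.0, n))
    for _ in range(steps):
        diff = lam[:, None] - lam[None, :]
        np.fill_diagonal(diff, np.inf)                  # drop self-interaction
        b = np.sum(1.0 / diff, axis=1) / n - lam / 2.0  # Coulomb repulsion + confinement
        lam = lam + h * b / (1.0 + h * np.abs(b)) \
                  + np.sqrt(2.0 * h / (beta * n)) * rng.normal(size=n)
    return np.sort(lam)

lam = dbm_tamed_euler()
print(lam.min(), lam.max())   # spectrum stays near the semicircle support [-2, 2]
```

For beta = 2 this invariant law is the GUE eigenvalue distribution, which is what makes the scheme usable as a sampler for invariant matrix ensembles.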
NASA Astrophysics Data System (ADS)
Cao, Jingtai; Zhao, Xiaohui; Li, Zhaokun; Liu, Wei; Gu, Haijun
2017-11-01
The performance of free space optical (FSO) communication systems is severely limited by atmospheric turbulence. Adaptive optics (AO) is an important method for overcoming atmospheric disturbance. In particular, under strong scintillation, sensor-less AO systems play a major role in compensation. In this paper, a modified artificial fish school (MAFS) algorithm is proposed to compensate the aberrations in a sensor-less AO system. Both static and dynamic aberration compensation are analyzed, and the performance of FSO communication before and after aberration compensation is compared. In addition, the MAFS algorithm is compared with the artificial fish school (AFS) algorithm, the stochastic parallel gradient descent (SPGD) algorithm and the simulated annealing (SA) algorithm. It is shown that the MAFS algorithm converges faster than the SPGD and SA algorithms, and reaches a better convergence value than the AFS, SPGD and SA algorithms. The sensor-less AO system with the MAFS algorithm effectively increases the coupling efficiency at the receiving terminal within fewer iterations. In conclusion, the MAFS algorithm is of great significance for sensor-less AO systems compensating atmospheric turbulence in FSO communication systems.
On decoupling of volatility smile and term structure in inverse option pricing
NASA Astrophysics Data System (ADS)
Egger, Herbert; Hein, Torsten; Hofmann, Bernd
2006-08-01
Correct pricing of options and other financial derivatives is of great importance to financial markets and one of the key subjects of mathematical finance. Usually, parameters specifying the underlying stochastic model are not directly observable, but have to be determined indirectly from observable quantities. The identification of local volatility surfaces from market data of European vanilla options is one very important example of this type. As with many other parameter identification problems, the reconstruction of local volatility surfaces is ill-posed, and reasonable results can only be achieved via regularization methods. Moreover, due to the sparsity of data, the local volatility is not uniquely determined, but depends strongly on the kind of regularization norm used and a good a priori guess for the parameter. By assuming a multiplicative structure for the local volatility, which is motivated by the specific data situation, the inverse problem can be decomposed into two separate sub-problems. This removes part of the non-uniqueness and allows us to establish convergence and convergence rates under weak assumptions. Additionally, a numerical solution of the two sub-problems is much cheaper than that of the overall identification problem. The theoretical results are illustrated by numerical tests.
Doll, J.; Dupuis, P.; Nyquist, P.
2017-02-08
Parallel tempering, or replica exchange, is a popular method for simulating complex systems. The idea is to run parallel simulations at different temperatures, and at a given swap rate exchange configurations between the parallel simulations. From the perspective of large deviations it is optimal to let the swap rate tend to infinity, and it is possible to construct a corresponding simulation scheme, known as infinite swapping. In this paper we propose a novel use of large deviations for empirical measures for a more detailed analysis of the infinite swapping limit in the setting of continuous time jump Markov processes. Using the large deviations rate function and associated stochastic control problems, we consider a diagnostic based on temperature assignments, which can be easily computed during a simulation. We show that the convergence of this diagnostic to its a priori known limit is a necessary condition for the convergence of infinite swapping. The rate function is also used to investigate the impact of asymmetries in the underlying potential landscape, and where in the state space poor sampling is most likely to occur.
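For readers unfamiliar with the baseline method, a two-replica parallel tempering run on a double-well target can be sketched as follows (all parameter choices are illustrative; infinite swapping corresponds to the limit of attempting such swaps infinitely often):

```python
import numpy as np

def parallel_tempering(log_pi, temps=(1.0, 5.0), steps=20_000, seed=0):
    """Two-replica parallel tempering: Metropolis random walks at each
    temperature, with a replica-swap attempt after every sweep. The cold
    chain (temps[0]) is returned; the hot replica supplies the barrier
    crossings that the cold chain alone would essentially never make."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(temps))
    out = np.empty(steps)
    for k in range(steps):
        for i, T in enumerate(temps):
            prop = x[i] + rng.normal(0.0, 1.0)
            if np.log(rng.random()) < (log_pi(prop) - log_pi(x[i])) / T:
                x[i] = prop
        # standard swap acceptance between temperatures T0 and T1
        d = (1.0 / temps[0] - 1.0 / temps[1]) * (log_pi(x[1]) - log_pi(x[0]))
        if np.log(rng.random()) < d:
            x[0], x[1] = x[1], x[0]
        out[k] = x[0]
    return out

log_pi = lambda y: -0.25 * (y ** 2 - 9.0) ** 2   # double well, modes near +/- 3
chain = parallel_tempering(log_pi)
print((chain < 0).mean())   # both wells are sampled; fraction is near 1/2
```

The paper's diagnostic monitors temperature assignments in the swapped process; the sketch above only shows the plain finite-swap-rate scheme that infinite swapping accelerates.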
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arndt, S.; Merkel, P.; Monticello, D.A.
Fixed- and free-boundary equilibria for Wendelstein 7-X (W7-X) [W. Lotz et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (Proc. 13th Int. Conf. Washington, DC, 1990) (International Atomic Energy Agency, Vienna, 1991), Vol. 2, p. 603] configurations are calculated using the Princeton Iterative Equilibrium Solver (PIES) [A. H. Reiman et al., Comput. Phys. Commun. 43, 157 (1986)] to deal with magnetic islands and stochastic regions. Usually, these W7-X configurations require a large number of iterations for PIES convergence. Here, two methods have been successfully tested in an attempt to decrease the number of iterations needed for convergence. First, periodic sequences of different blending parameters are used. Second, the initial guess is vastly improved by using results of the Variational Moments Equilibrium Code (VMEC) [S. P. Hirshman et al., Phys. Fluids 26, 3553 (1983)]. Use of these two methods has allowed verification of the Hamada condition, and a tendency toward "self-healing" of islands has been observed. © 1999 American Institute of Physics.
NASA Astrophysics Data System (ADS)
Kuroda, Koji; Maskawa, Jun-ichi; Murai, Joshin
2013-08-01
Empirical studies of high frequency data in stock markets show that the time series of trade signs or signed volumes has a long memory property. In this paper, we present a discrete time stochastic process for a polymer model which describes a trader's trading strategy, and show that a scaling limit of the process converges to a superposition of fractional Brownian motions with Hurst exponents and Brownian motion, provided that the index γ of the time scale of the trader's investment strategy coincides with the index δ of the interaction range in the discrete time process. The main tool for the investigation is the method of cluster expansion developed in the mathematical study of statistical mechanics.
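The long-memory ingredient of this limit, fractional Brownian motion, can be simulated exactly on a grid from the Cholesky factor of its covariance; the sketch below (grid size, Hurst exponent, and path count are our own choices) checks the positive correlation of successive increments that holds for H > 1/2:

```python
import numpy as np

def fbm_paths(n=200, m=2000, H=0.75, seed=0):
    """Exact fractional Brownian motion samples on a grid via the Cholesky
    factor of the covariance 0.5*(s^2H + t^2H - |t - s|^2H). H = 0.75 > 1/2
    gives persistent, positively correlated increments (long memory)."""
    t = np.arange(1, n + 1) / n
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # tiny jitter for safety
    rng = np.random.default_rng(seed)
    return t, L @ rng.normal(size=(n, m))             # columns are independent paths

t, paths = fbm_paths()
d = np.diff(paths, axis=0)                            # stationary increments
rho = np.corrcoef(d[:-1].ravel(), d[1:].ravel())[0, 1]
print(rho)   # positive: successive increments are correlated (long memory)
```

For H = 0.75 the lag-one increment correlation is 2^(2H-1) - 1, about 0.41, which is the kind of persistence the trade-sign series in the abstract exhibits.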
Incoherent beam combining based on the momentum SPGD algorithm
NASA Astrophysics Data System (ADS)
Yang, Guoqing; Liu, Lisheng; Jiang, Zhenhua; Guo, Jin; Wang, Tingfeng
2018-05-01
Incoherent beam combining (ICBC) technology is one of the most promising ways to achieve high-energy, near-diffraction-limited laser output. In this paper, the momentum method is proposed as a modification of the stochastic parallel gradient descent (SPGD) algorithm. The momentum method can efficiently improve the convergence speed of the combining system. An analytical method is employed to interpret the principle of the momentum method. Furthermore, the proposed algorithm is verified through simulations as well as experiments. The results of the simulations and the experiments show that the proposed algorithm not only accelerates the iteration, but also keeps the combining process stable. The feasibility of the proposed algorithm in the beam combining system is thereby demonstrated.
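A minimal sketch of SPGD with a momentum term, run on a toy quadratic metric (the gains, perturbation size, and the metric itself are illustrative stand-ins for the experimental combining system):

```python
import numpy as np

def spgd_momentum(J, u0, gamma=0.5, beta=0.8, delta=0.05, iters=300, seed=1):
    """SPGD ascent on a measurable metric J with a momentum term: two-sided
    random perturbations estimate the gradient direction, and the momentum
    accumulator v (weight beta) smooths and accelerates the update."""
    rng = np.random.default_rng(seed)
    u = np.array(u0, dtype=float)
    v = np.zeros_like(u)
    for _ in range(iters):
        d = delta * rng.choice([-1.0, 1.0], size=u.shape)  # Bernoulli perturbation
        dJ = J(u + d) - J(u - d)                           # two-sided metric difference
        v = beta * v + gamma * dJ * d                      # momentum accumulation
        u = u + v
    return u

# toy stand-in for the combining metric: peak at u = (0.3, -0.7)
target = np.array([0.3, -0.7])
J = lambda u: -np.sum((u - target) ** 2)
u_star = spgd_momentum(J, np.zeros(2))
print(u_star)   # converges near the optimum (0.3, -0.7)
```

Setting beta = 0 recovers plain SPGD; the momentum term is what the abstract credits with the faster, still-stable iteration.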
Inglis, Stephen; Melko, Roger G
2013-01-01
We implement a Wang-Landau sampling technique in quantum Monte Carlo (QMC) simulations for the purpose of calculating the Rényi entanglement entropies and associated mutual information. The algorithm converges an estimate for an analog to the density of states for stochastic series expansion QMC, allowing a direct calculation of Rényi entropies without explicit thermodynamic integration. We benchmark results for the mutual information on two-dimensional (2D) isotropic and anisotropic Heisenberg models, a 2D transverse field Ising model, and a three-dimensional Heisenberg model, confirming a critical scaling of the mutual information in cases with a finite-temperature transition. We discuss the benefits and limitations of broad sampling techniques compared to standard importance sampling methods.
Probing human and monkey anterior cingulate cortex in variable environments
Walton, Mark E.; Mars, Rogier B.
2008-01-01
Previous research has identified the anterior cingulate cortex (ACC) as an important node in the neural network underlying decision making in primates. Decision making can, however, be studied under a large variety of circumstances, ranging from the standard well-controlled lab situation to more natural, stochastic settings in which multiple agents interact. Here, we illustrate how these different varieties of decision making can influence theories of ACC function in monkeys. Converging evidence from unit recordings and lesion studies now suggests that the ACC is important for interpreting outcome information according to the current task context to guide future action selection. We then apply this framework to the study of human ACC function and discuss its potential implications. PMID:18189014
NASA Astrophysics Data System (ADS)
Arnst, M.; Abello Álvarez, B.; Ponthot, J.-P.; Boman, R.
2017-11-01
This paper is concerned with the characterization and the propagation of errors associated with data limitations in polynomial-chaos-based stochastic methods for uncertainty quantification. Such an issue can arise in uncertainty quantification when only a limited amount of data is available. When the available information does not suffice to accurately determine the probability distributions that must be assigned to the uncertain variables, the Bayesian method for assigning these probability distributions becomes attractive because it allows the stochastic model to account explicitly for insufficiency of the available information. In previous work, such applications of the Bayesian method had already been implemented by using the Metropolis-Hastings and Gibbs Markov Chain Monte Carlo (MCMC) methods. In this paper, we present an alternative implementation, which uses an alternative MCMC method built around an Itô stochastic differential equation (SDE) that is ergodic for the Bayesian posterior. We draw together from the mathematics literature a number of formal properties of this Itô SDE that lend support to its use in the implementation of the Bayesian method, and we describe its discretization, including the choice of the free parameters, by using the implicit Euler method. We demonstrate the proposed methodology on a problem of uncertainty quantification in a complex nonlinear engineering application relevant to metal forming.
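The simplest instance of an SDE-based MCMC of this kind is the overdamped Langevin equation, whose invariant law is the posterior; below is an explicit-Euler sketch on a toy Gaussian posterior (the paper argues for an *implicit* Euler discretization, which is more stable; this sketch only illustrates the idea, and the target and step size are our own choices):

```python
import numpy as np

def langevin_chain(grad_log_post, theta0=0.0, h=0.01, steps=50_000, seed=0):
    """Explicit Euler discretization of the overdamped Langevin SDE
    d(theta) = grad log pi(theta) dt + sqrt(2) dW, whose invariant law is
    the posterior pi. Small h keeps the discretization bias small."""
    rng = np.random.default_rng(seed)
    theta = theta0
    out = np.empty(steps)
    for k in range(steps):
        theta += h * grad_log_post(theta) + np.sqrt(2.0 * h) * rng.normal()
        out[k] = theta
    return out

# toy posterior N(1.0, 0.5^2): grad log pi(theta) = -(theta - 1.0)/0.25
chain = langevin_chain(lambda t: -(t - 1.0) / 0.25)
burn = chain[5_000:]
print(burn.mean(), burn.std())   # close to the posterior mean 1.0 and s.d. 0.5
```

An implicit Euler step would instead solve for the new theta inside the drift term, which is what lends the paper's scheme its stability properties for stiff posteriors.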
Using genetic algorithm to solve a new multi-period stochastic optimization model
NASA Astrophysics Data System (ADS)
Zhang, Xin-Li; Zhang, Ke-Cun
2009-09-01
This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi (Ed.), The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], which was called a hybrid model. However, transaction costs were not considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth loss risk while maximizing the expected utility; (2) typical market imperfections such as short sale constraints and proportional transaction costs are considered simultaneously; (3) applying a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
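The CVaR risk measure used in aspect (1) has a simple scenario-based estimator: take the simulated losses and average the worst (1 - alpha) tail. A minimal sketch (the scenario data and parameter names are illustrative, not the paper's model):

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional Value-at-Risk: the mean loss in the worst (1 - alpha)
    tail of the scenario distribution (scenario-based estimator in the
    spirit of Rockafellar-Uryasev)."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)        # Value-at-Risk threshold
    return losses[losses >= var].mean()     # average of the tail beyond VaR

rng = np.random.default_rng(0)
scenario_losses = rng.normal(0.0, 1.0, 100_000)
print(cvar(scenario_losses, 0.95))   # close to the standard-normal CVaR ~= 2.06
```

In the full model this quantity enters the objective as a constraint or penalty over the simulated wealth paths, which is what the genetic algorithm of aspect (3) then optimizes.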
Preliminary noise tests of the engine-over-the-wing concept. I: 30 deg - 60 deg flap position
NASA Technical Reports Server (NTRS)
Reshotko, M.; Olsen, W. A.; Dorsch, R. G.
1972-01-01
The results of preliminary acoustic tests of the engine-over-the-wing concept are summarized. The tests were conducted with a small wing-section model (32 cm chord) having two flaps set at the landing position (30 and 60 deg, respectively). The engine exhaust was simulated by an air jet from a convergent nozzle with a nominal diameter of 5.1 centimeters. Factors investigated for their effect on noise include nozzle location, wing shielding, flap leakage, nozzle shape, exhaust deflectors, and internally generated exhaust noise.
The Quest toward limb regeneration: a regenerative engineering approach
Laurencin, Cato T.; Nair, Lakshmi S.
2016-01-01
The Holy Grail to address the clinical grand challenge of human limb loss is to develop innovative strategies to regrow the amputated limb. The remarkable advances in the scientific understanding of regeneration, stem cell science, material science and engineering, physics and novel surgical approaches in the past few decades have provided a regenerative tool box to face this grand challenge and address the limitations of human wound healing. Here we discuss the convergence approach put forward by the field of Regenerative Engineering to use the regenerative tool box to design and develop novel translational strategies to limb regeneration. PMID:27047679
Allocating resources to large wildland fires: a model with stochastic production rates
Romain Mees; David Strauss
1992-01-01
Wildland fires that grow out of the initial attack phase are responsible for most of the damage and burned area. We model the allocation of fire suppression resources (ground crews, engines, bulldozers, and airdrops) to these large fires. The fireline at a given future time is partitioned into homogeneous segments on the basis of fuel type, available resources, risk,...
Expectation Maximization and its Application in Modeling, Segmentation and Anomaly Detection
2008-05-01
...incomplete data problems. The incompleteness of the data may be due to missing data, censored distributions, etc. One such case is a...Estimation Techniques in Computer Huiyan, Z., Yongfeng, C., Wen, Y., SAR Image Segmentation Using MPM Constrained Stochastic Relaxation. Civil Engineering
CAEBAT Model Featured on American Chemical Society Journal
An NREL collaboration with Purdue University's School of Mechanical Engineering has yielded new insights for lithium-ion (Li-ion) battery electrodes at the microstructural level. The team's corresponding article, "Secondary-Phase Stochastics in Lithium-Ion Battery Electrodes," details how microstructural modifications can greatly improve overall Li-ion battery performance.
SSC San Diego Biennial Review 2003. Vol 2: Communication and Information Systems
2003-01-01
University, Department of Electrical and Computer Engineering) Michael Jablecki (Science and Technology Corporation) Stochastic Unified Multiple...wearable computers and cellular phones. The technology-transfer process involved a coalition of government and industrial partners, each providing...the design and fabrication of the coupler. SSC San Diego developed a computer-controlled fused fiber fabrication station to achieve the required
Chen, Yuhang; Zhou, Shiwei; Li, Qing
2011-03-01
The degradation of polymeric biomaterials, which are widely exploited in tissue engineering and drug delivery systems, has drawn significant attention in recent years. This paper aims to develop a mathematical model that combines stochastic hydrolysis and mass transport to simulate the polymeric degradation and erosion process. The hydrolysis reaction is modeled in a discrete fashion by a fundamental stochastic process and an additional autocatalytic effect induced by the local carboxylic acid concentration in terms of the continuous diffusion equation. Illustrative examples of microparticles and tissue scaffolds demonstrate the applicability of the model. It is found that diffusive transport plays a critical role in determining the degradation pathway, whilst autocatalysis makes the degradation size dependent. The modeling results show good agreement with experimental data in the literature, in which the hydrolysis rate, polymer architecture and matrix size actually work together to determine the characteristics of the degradation and erosion processes of bulk-erosive polymer devices. The proposed degradation model exhibits great potential for the design optimization of drug carriers and tissue scaffolds. Copyright © 2010 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
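The interplay the model describes, baseline hydrolysis plus acid-driven autocatalysis, can be caricatured with a one-dimensional stochastic lattice. This is a deliberately crude sketch (neighbor counts stand in for the continuous acid-diffusion equation, and all rates are invented, not fitted to any experiment), intended only to show the discrete stochastic-hydrolysis bookkeeping:

```python
import random

def simulate_degradation(n=400, p0=0.002, alpha=0.05, steps=200, seed=0):
    # 1-D chain of n polymer units. Each intact unit hydrolyzes per step
    # with probability p0 plus an autocatalytic term proportional to the
    # number of already-hydrolyzed (acid-bearing) neighbors. The local
    # neighbor count is a crude stand-in for the acid concentration that
    # the paper obtains from a diffusion equation.
    rng = random.Random(seed)
    intact = [True] * n
    history = []
    for _ in range(steps):
        nxt = intact[:]
        for i, alive in enumerate(intact):
            if not alive:
                continue
            acid = ((not intact[i - 1]) if i > 0 else 0) + \
                   ((not intact[i + 1]) if i < n - 1 else 0)
            if rng.random() < min(1.0, p0 + alpha * acid):
                nxt[i] = False
        intact = nxt
        history.append(sum(intact))  # intact units remaining after each step
    return history

hist = simulate_degradation()
```

Because hydrolyzed sites raise the rate of their neighbors, degradation clusters and accelerates over time, the qualitative size-dependent, autocatalytic behavior the paper models quantitatively.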
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Yu, E-mail: yu.pan@anu.edu.au; Miao, Zibo, E-mail: zibo.miao@anu.edu.au; Amini, Hadis, E-mail: nhamini@stanford.edu
Quantum Markovian systems, modeled as unitary dilations in the quantum stochastic calculus of Hudson and Parthasarathy, have become standard in current quantum technological applications. This paper investigates the stability theory of such systems. Lyapunov-type conditions in the Heisenberg picture are derived in order to stabilize the evolution of system operators as well as the underlying dynamics of the quantum states. In particular, using the quantum Markov semigroup associated with this quantum stochastic differential equation, we derive sufficient conditions for the existence and stability of a unique and faithful invariant quantum state. Furthermore, this paper proves the quantum invariance principle, which extends the LaSalle invariance principle to quantum systems in the Heisenberg picture. These results are formulated in terms of algebraic constraints suitable for engineering quantum systems that are used in coherent feedback networks.
Stochastic investigation of temperature process for climatic variability identification
NASA Astrophysics Data System (ADS)
Lerias, Eleutherios; Kalamioti, Anna; Dimitriadis, Panayiotis; Markonis, Yannis; Iliopoulou, Theano; Koutsoyiannis, Demetris
2016-04-01
The temperature process is considered as the most characteristic hydrometeorological process and has been thoroughly examined in the climate-change framework. We use a dataset comprising hourly temperature and dew point records to identify statistical variability with emphasis on the last period. Specifically, we investigate the occurrence of mean, maximum and minimum values and we estimate statistical properties such as marginal probability distribution function and the type of decay of the climacogram (i.e., mean process variance vs. scale) for various time periods. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
Supervised self-organization of homogeneous swarms using ergodic projections of Markov chains.
Chattopadhyay, Ishanu; Ray, Asok
2009-12-01
This paper formulates a self-organization algorithm to address the problem of global behavior supervision in engineered swarms of arbitrarily large population sizes. The swarms considered in this paper are assumed to be homogeneous collections of independent identical finite-state agents, each of which is modeled by an irreducible finite Markov chain. The proposed algorithm computes the necessary perturbations in the local agents' behavior, which guarantees convergence to the desired observed state of the swarm. The ergodicity property of the swarm, which is induced as a result of the irreducibility of the agent models, implies that while the local behavior of the agents converges to the desired behavior only in the time average, the overall swarm behavior converges to the specification and stays there at all times. A simulation example illustrates the underlying concept.
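The ergodicity argument hinges on the stationary distribution of each agent's irreducible Markov chain: the time-averaged behavior of the swarm is governed by the pi solving pi = pi P. A minimal sketch of computing pi by power iteration, with a hypothetical two-state agent model (not taken from the paper):

```python
def stationary_distribution(P, tol=1e-12, max_iter=100000):
    # Power iteration on the row-stochastic matrix P. For an irreducible,
    # aperiodic finite chain the iterates converge to the unique stationary
    # distribution pi = pi P -- the time-average behavior that the swarm
    # supervision scheme steers toward by perturbing P.
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        nxt = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(pi, nxt)) < tol:
            return nxt
        pi = nxt
    return pi

# Hypothetical two-state agent: perturbing these transition probabilities
# changes the observed (stationary) swarm distribution.
P = [[0.9, 0.1],
     [0.4, 0.6]]
pi = stationary_distribution(P)
```

For this P, balance gives pi = (0.8, 0.2): in a large homogeneous swarm, roughly 80% of agents would be observed in state 0 at any time, even though each individual agent only converges to that behavior in the time average.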
Internal-Film Cooling of Rocket Nozzles
NASA Technical Reports Server (NTRS)
Sloop, J L; Kinney, George R
1948-01-01
Experiments were conducted with 1000-pound-thrust rocket engine to determine feasibility of cooling convergent-divergent nozzle by internal film of water introduced at nozzle entrance. Water flow of 3 percent of propellant flow reduced heat flow into nozzle to 55 percent of uncooled heat flow. Introduction of water by porous ring before nozzle resulted in more uniform coverage of nozzle than water introduced by single arrangement of 36 jets directed along nozzle wall. Water flow through porous ring of 3.5 percent of propellant flow stabilized wall temperature in convergent section but did not adequately cool throat or divergent sections.
Spatially Controlled Relay Beamforming
NASA Astrophysics Data System (ADS)
Kalogerias, Dionysios
This thesis concerns the fusion of optimal stochastic motion control and physical-layer communications. Distributed, networked communication systems, such as relay beamforming networks (e.g., Amplify & Forward (AF)), are typically designed without explicitly considering how the positions of the respective nodes might affect the quality of the communication; optimum placement of the network nodes, which could potentially improve that quality, is not usually considered. However, in most practical physical-layer settings, such as relay beamforming, the Channel State Information (CSI) observed by each node, per channel use, although possibly modeled as random, is both spatially and temporally correlated. It is, therefore, reasonable to ask if and how the performance of the system could be improved by (predictively) controlling the positions of the network nodes (e.g., the relays), based on causal side (CSI) information, and exploiting the spatiotemporal dependencies of the wireless medium. In this work, we address this problem in the context of AF relay beamforming networks. This novel, cyber-physical-system approach to relay beamforming is termed "Spatially Controlled Relay Beamforming". First, we discuss wireless channel modeling in a rigorous Bayesian framework. Experimentally accurate and technically precise channel modeling is essential for designing and analyzing spatially controlled communication systems. In this work, we are interested in two distinct spatiotemporal statistical models for describing the behavior of the log-scale magnitude of the wireless channel: 1. Stationary Gaussian Fields: In this case, the channel is assumed to evolve as a stationary Gaussian stochastic field in continuous space and discrete time (for instance, time slots).
Under such assumptions, spatial and temporal statistical interactions are determined by a set of time- and space-invariant parameters, which completely determine the mean and covariance of the underlying Gaussian measure. This model is relatively simple to describe and can be sufficiently characterized, at least for our purposes, both statistically and topologically. Additionally, the model is rather versatile, and there is existing experimental evidence supporting its practical applicability. Our contributions are summarized in properly formulating the whole spatiotemporal model in a completely rigorous mathematical setting, under a convenient measure-theoretic framework. Such a framework greatly facilitates the formulation of meaningful stochastic control problems, where the wireless channel field (or a function of it) can be regarded as a stochastic optimization surface. 2. Conditionally Gaussian Fields, when conditioned on a Markovian channel state: This is a completely novel approach to wireless channel modeling. In this approach, the communication medium is assumed to behave as a partially observable (or hidden) system, where a hidden, global, temporally varying underlying stochastic process, called the channel state, affects the spatial interactions of the actual channel magnitude, evaluated at any set of locations in the plane. More specifically, we assume that, conditioned on the channel state, the wireless channel constitutes an observable, conditionally Gaussian stochastic process. The channel state evolves in time according to a known, possibly non-stationary, non-Gaussian, low-dimensional Markov kernel. Recognizing the intractability of general nonlinear state estimation, we advocate the use of grid-based approximate nonlinear filters as an effective and robust means for recursive tracking of the channel state.
We also propose a sequential spatiotemporal predictor for tracking the channel gains at any point in time and space, providing real-time sequential estimates of the respective channel gain map. In this context, our contributions are multifold. In addition to the introduction of the layered channel model previously described, this line of research has resulted in a number of general asymptotic convergence results, advancing the theory of grid-based approximate nonlinear stochastic filtering. In particular, sufficient conditions ensuring asymptotic optimality are relaxed and, at the same time, the mode of convergence is strengthened. Although the need for such results arose as an attempt to theoretically characterize the performance of the proposed approximate methods for statistical inference, in regard to the proposed channel modeling approach, they turn out to be of fundamental importance in the areas of nonlinear estimation and stochastic control. The experimental validation of the proposed channel model, as well as the related parameter estimation problem, termed "Markovian Channel Profiling (MCP)" and fundamentally important for any practical deployment, are the subject of current, ongoing research. Second, adopting the first of the two aforementioned channel modeling approaches, we consider the spatially controlled relay beamforming problem for an AF network with a single source, a single destination, and multiple, controlled-at-will relay nodes. (Abstract shortened by ProQuest.)
Vinton Cerf: Poet-Philosopher of the Net.
ERIC Educational Resources Information Center
Educom Review, 1996
1996-01-01
Presents the first part of an interview with Vinton Cerf, senior vice president of data architecture for MCI Engineering, on the growth and future of the Internet. Topics include: pornography; commercialization; security; government role; content found on the Internet; and convergence of technologies. (DGM)
Cho, Yongrae; Kim, Minsung
2014-01-01
The volatility and uncertainty in the process of technological developments are growing faster than ever due to rapid technological innovations. Such phenomena result in integration among disparate technology fields. At this point, it is a critical research issue to understand the different roles and the propensity of each element technology for technological convergence. In particular, the network-based approach provides a holistic view in terms of technological linkage structures. Furthermore, the development of new indicators based on network visualization can reveal the dynamic patterns among disparate technologies in the process of technological convergence and provide insights for future technological developments. This research attempts to analyze and discover the patterns of the international patent classification codes of the United States Patent and Trademark Office's patent data in printed electronics, which is a representative technology in the technological convergence process. To this end, we apply the physical idea as a new methodological approach to interpret technological convergence. More specifically, the concepts of entropy and gravity are applied to measure the activities among patent citations and the binding forces among heterogeneous technologies during technological convergence. By applying the entropy and gravity indexes, we could distinguish the characteristic role of each technology in printed electronics. At the technological convergence stage, each technology exhibits idiosyncratic dynamics which tend to decrease technological differences and heterogeneity. Furthermore, through nonlinear regression analysis, we have found the decreasing patterns of disparity over a given total period in the evolution of technological convergence. This research has discovered the specific role of each element technology field and has consequently identified the co-evolutionary patterns of technological convergence. 
These new findings on the evolutionary patterns of technological convergence provide some implications for engineering and technology foresight research, as well as for corporate strategy and technology policy. PMID:24914959
NASA Astrophysics Data System (ADS)
Papoulakos, Konstantinos; Pollakis, Giorgos; Moustakis, Yiannis; Markopoulos, Apostolis; Iliopoulou, Theano; Dimitriadis, Panayiotis; Koutsoyiannis, Demetris; Efstratiadis, Andreas
2017-04-01
Small islands are regarded as promising areas for developing hybrid water-energy systems that combine multiple sources of renewable energy with pumped-storage facilities. An essential element of such systems is the water storage component (reservoir), which implements both flow and energy regulation. Clearly, the representation of the overall water-energy management problem requires simulating the operation of the reservoir system, which in turn requires a faithful estimation of water inflows and of water and energy demands. Yet, in small-scale reservoir systems, this task is far from straightforward, since both the availability and the accuracy of the associated information are generally very poor. In contrast to large-scale reservoir systems, for which it is quite easy to find systematic and reliable hydrological data, in small systems such data may be scarce or even totally missing. The stochastic approach is the only means to account for input-data uncertainties within the combined water-energy management problem. Using as an example the Livadi reservoir, the pumped-storage component of the small Aegean island of Astypalaia, Greece, we provide a simulation framework comprising: (a) a stochastic model for generating synthetic rainfall and temperature time series; (b) a stochastic rainfall-runoff model, whose parameters cannot be inferred through calibration and are thus represented as correlated random variables; (c) a stochastic model for estimating water supply and irrigation demands, based on simulated temperature and soil moisture; and (d) a daily operation model of the reservoir system, providing stochastic forecasts of water and energy outflows. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA).
The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
Multiscale Hy3S: hybrid stochastic simulation for supercomputers.
Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N
2006-02-24
Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems and analyze data. 
We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.
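The "original stochastic simulation algorithm" whose cost motivates Hy3S's hybrid methods is Gillespie's exact SSA. A minimal sketch, applied to a hypothetical birth-death network (this is the generic textbook algorithm, not Hy3S code):

```python
import random

def gillespie_ssa(x0, stoich, propensity, t_end, seed=0):
    # Exact (Gillespie) stochastic simulation of a well-mixed reaction
    # network: the jump-Markov description that hybrid methods such as
    # Hy3S partition and approximate for speed.
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    traj = [(t, tuple(x))]
    while t < t_end:
        a = [p(x) for p in propensity]   # current reaction propensities
        a0 = sum(a)
        if a0 == 0:
            break                        # no reaction can fire
        t += rng.expovariate(a0)         # exponential time to next reaction
        r = rng.random() * a0            # choose which reaction fires
        k, acc = 0, a[0]
        while acc < r:
            k += 1
            acc += a[k]
        for i, s in enumerate(stoich[k]):
            x[i] += s                    # apply stoichiometric update
        traj.append((t, tuple(x)))
    return traj

# Hypothetical birth-death network: 0 -> S at rate kb, S -> 0 at rate kd*[S].
kb, kd = 10.0, 0.1
traj = gillespie_ssa([0], stoich=[[+1], [-1]],
                     propensity=[lambda x: kb, lambda x: kd * x[0]],
                     t_end=100.0)
```

Every reaction event costs one loop iteration here, which is why systems with thousands of fast reactions become expensive and motivate replacing fast subsets with Poisson, continuous-Markov, or deterministic descriptions, as the hybrid approach does.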
Aesthetics and ethics in engineering: insights from Polanyi.
Dias, Priyan
2011-06-01
Polanyi insisted that scientific knowledge was intensely personal in nature, though held with universal intent. His insights regarding the personal values of beauty and morality in science are first enunciated. These are then explored for their relevance to engineering. It is shown that the practice of engineering is also governed by aesthetics and ethics. For example, Polanyi's three spheres of morality in science--those of the individual scientist, the scientific community and the wider society--have parallel entities in engineering. The existence of shared values in engineering is also demonstrated: in aesthetics, through an example that shows convergence of practitioner opinion to solutions that represent accepted models of aesthetics; and in ethics, through the recognition that many professional engineering institutions hold that the safety of the public supersedes the interests of the client. Such professional consensus can be seen as justification for studying engineering aesthetics and ethics as inter-subjective disciplines.
A Novel Weighted Kernel PCA-Based Method for Optimization and Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Thimmisetty, C.; Talbot, C.; Chen, X.; Tong, C. H.
2016-12-01
It has been demonstrated that machine learning methods can be successfully applied to uncertainty quantification for geophysical systems through the use of the adjoint method coupled with kernel PCA-based optimization. In addition, it has been shown through weighted linear PCA how optimization with respect to both observation weights and feature space control variables can accelerate convergence of such methods. Linear machine learning methods, however, are inherently limited in their ability to represent features of non-Gaussian stochastic random fields, as they are based on only the first two statistical moments of the original data. Nonlinear spatial relationships and multipoint statistics leading to the tortuosity characteristic of channelized media, for example, are captured only to a limited extent by linear PCA. With the aim of coupling the kernel-based and weighted methods discussed, we present a novel mathematical formulation of kernel PCA, Weighted Kernel Principal Component Analysis (WKPCA), that both captures nonlinear relationships and incorporates the attribution of significance levels to different realizations of the stochastic random field of interest. We also demonstrate how new instantiations retaining defining characteristics of the random field can be generated using Bayesian methods. In particular, we present a novel WKPCA-based optimization method that minimizes a given objective function with respect to both feature space random variables and observation weights through which optimal snapshot significance levels and optimal features are learned. We showcase how WKPCA can be applied to nonlinear optimal control problems involving channelized media, and in particular demonstrate an application of the method to learning the spatial distribution of material parameter values in the context of linear elasticity, and discuss further extensions of the method to stochastic inversion.
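For orientation, a sketch of plain (unweighted, RBF-kernel) kernel PCA, the starting point that WKPCA extends with snapshot weights; the weighting, the optimization coupling, and the Bayesian generation of new instantiations are not reproduced here, and the data are synthetic:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    # Unweighted RBF kernel PCA: build the kernel matrix, center it in
    # feature space, and project snapshots onto the leading eigenvectors.
    # WKPCA additionally attaches significance weights to the snapshots.
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)       # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Scale eigenvectors so rows are feature-space principal projections.
    return vecs * np.sqrt(np.maximum(vals, 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))   # synthetic stand-in for field realizations
Z = kernel_pca(X)
```

The nonlinear kernel is what lets the representation capture multipoint statistics (e.g., channelized tortuosity) that linear PCA, limited to the first two moments, misses.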
Khammash, Mustafa
2014-01-01
Reaction networks are systems in which the populations of a finite number of species evolve through predefined interactions. Such networks are found as modeling tools in many biological disciplines such as biochemistry, ecology, epidemiology, immunology, systems biology and synthetic biology. It is now well-established that, for small population sizes, stochastic models for biochemical reaction networks are necessary to capture randomness in the interactions. The tools for analyzing such models, however, still lag far behind their deterministic counterparts. In this paper, we bridge this gap by developing a constructive framework for examining the long-term behavior and stability properties of the reaction dynamics in a stochastic setting. In particular, we address the problems of determining ergodicity of the reaction dynamics, which is analogous to having a globally attracting fixed point for deterministic dynamics. We also examine when the statistical moments of the underlying process remain bounded with time and when they converge to their steady state values. The framework we develop relies on a blend of ideas from probability theory, linear algebra and optimization theory. We demonstrate that the stability properties of a wide class of biological networks can be assessed from our sufficient theoretical conditions that can be recast as efficient and scalable linear programs, well-known for their tractability. It is notably shown that the computational complexity is often linear in the number of species. We illustrate the validity, the efficiency and the wide applicability of our results on several reaction networks arising in biochemistry, systems biology, epidemiology and ecology. The biological implications of the results as well as an example of a non-ergodic biological network are also discussed. PMID:24968191
A Statistical Approach Reveals Designs for the Most Robust Stochastic Gene Oscillators
2016-01-01
The engineering of transcriptional networks presents many challenges due to the inherent uncertainty in the system structure, changing cellular context, and stochasticity in the governing dynamics. One approach to address these problems is to design and build systems that can function across a range of conditions; that is, they are robust to uncertainty in their constituent components. Here we examine the parametric robustness landscape of transcriptional oscillators, which underlie many important processes such as circadian rhythms and the cell cycle, and also serve as a model for the engineering of complex and emergent phenomena. The central questions that we address are: Can we build genetic oscillators that are more robust than those already constructed? Can we make genetic oscillators arbitrarily robust? These questions are technically challenging due to the large model and parameter spaces that must be efficiently explored. Here we use a measure of robustness that coincides with the Bayesian model evidence, combined with an efficient Monte Carlo method to traverse model space and concentrate on regions of high robustness, which enables the accurate evaluation of the relative robustness of gene network models governed by stochastic dynamics. We report the most robust two- and three-gene oscillator systems, and examine how the number of interactions, the presence of autoregulation, and degradation of mRNA and protein affect the frequency, amplitude, and robustness of transcriptional oscillators. We also find that there is a limit to parametric robustness, beyond which there is nothing to be gained by adding additional feedback. Importantly, we provide predictions on new oscillator systems that can be constructed to verify the theory and advance design and modeling approaches to systems and synthetic biology. PMID:26835539
Fox, Laurel R
2007-12-01
Species with known demographies may be used as proxies, or approximate models, to predict vital rates and ecological properties of target species that either have not been studied or are species for which data may be difficult to obtain. These extrapolations assume that model and target species with similar properties respond in the same ways to the same ecological factors, that they have similar population dynamics, and that the similarity of vital rates reflects analogous responses to the same factors. I used two rare, sympatric annual plants (sand gilia [Gilia tenuiflora arenaria] and Monterey spineflower [Chorizanthe pungens pungens]) to test these assumptions experimentally. The vital rates of these species are similar and strongly correlated with rainfall, and I added water and/or prevented herbivore access to experimental plots. Their survival and reproduction were driven by different, largely stochastic factors and processes: sand gilia by herbivory and Monterey spineflower by rainfall. Because the causal agents and processes generating similar demographic patterns were species specific, these results demonstrate, both theoretically and empirically, that it is critical to identify the ecological processes generating observed effects and that experimental manipulations are usually needed to determine causal mechanisms. Without such evidence to identify mechanisms, extrapolations among species may lead to counterproductive management and conservation practices.
NASA Astrophysics Data System (ADS)
Moix, Jeremy M.; Cao, Jianshu
2013-10-01
The hierarchical equations of motion technique has found widespread success as a tool to generate the numerically exact dynamics of non-Markovian open quantum systems. However, its application to low temperature environments remains a serious challenge due to the need for a deep hierarchy that arises from the Matsubara expansion of the bath correlation function. Here we present a hybrid stochastic hierarchical equation of motion (sHEOM) approach that alleviates this bottleneck and leads to a numerical cost that is nearly independent of temperature. Additionally, the sHEOM method generally converges with fewer hierarchy tiers allowing for the treatment of larger systems. Benchmark calculations are presented on the dynamics of two level systems at both high and low temperatures to demonstrate the efficacy of the approach. Then the hybrid method is used to generate the exact dynamics of systems that are nearly impossible to treat by the standard hierarchy. First, exact energy transfer rates are calculated across a broad range of temperatures revealing the deviations from the Förster rates. This is followed by computations of the entanglement dynamics in a system of two qubits at low temperature spanning the weak to strong system-bath coupling regimes.