Stochastic optimization algorithms for barrier dividend strategies
NASA Astrophysics Data System (ADS)
Yin, G.; Song, Q. S.; Yang, H.
2009-01-01
This work focuses on finding an optimal barrier policy for an insurance risk model when dividends are paid to shareholders according to a barrier strategy. A new approach based on stochastic optimization methods is developed. Compared with the existing results in the literature, more general surplus processes are considered. Precise models of the surplus need not be known; only noise-corrupted observations of the dividends are used. Using barrier-type strategies, a class of stochastic optimization algorithms is developed. Convergence of the algorithm is analyzed, and the rate of convergence is provided. Numerical results are reported to demonstrate the performance of the algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Faming; Cheng, Yichen; Lin, Guang
2014-06-13
Simulated annealing has been widely used in the solution of optimization problems. As is well known, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used; that schedule, however, is so slow that the required CPU time is prohibitive. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while guaranteeing that the global optima are reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
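The contrast between the two schedules is easy to demonstrate. Below is a minimal sketch of plain Metropolis simulated annealing on a one-dimensional multimodal function, run once with a logarithmic schedule and once with a square-root schedule; the test function, constants, and schedules are illustrative assumptions, and the sketch is ordinary simulated annealing, not the authors' stochastic approximation annealing algorithm.

```python
import math, random

def simulated_annealing(f, x0, cooling, steps=20000, step_size=0.5, seed=0):
    """Plain Metropolis simulated annealing on a 1-D function.

    `cooling(k)` returns the temperature at iteration k >= 1.
    """
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(1, steps + 1):
        t = cooling(k)
        y = x + rng.gauss(0.0, step_size)          # random proposal
        fy = f(y)
        # Metropolis acceptance: always accept downhill, sometimes uphill
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# Multimodal test function with global minimum at x = 0.
f = lambda x: 0.1 * x * x - math.cos(3.0 * x) + 1.0

log_cool  = lambda k: 2.0 / math.log(k + 1.0)   # classical guarantee, very slow
sqrt_cool = lambda k: 2.0 / math.sqrt(k)        # faster schedule discussed above

print("log  cooling:", simulated_annealing(f, 4.0, log_cool))
print("sqrt cooling:", simulated_annealing(f, 4.0, sqrt_cool))
```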
NASA Technical Reports Server (NTRS)
Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan
2010-01-01
For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Gunzburger, Max
2017-06-01
Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.
RES: Regularized Stochastic BFGS Algorithm
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
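A minimal sketch of the idea follows, applied to a stochastic least-squares objective: stochastic gradients evaluated at consecutive iterates on the same sample drive a BFGS inverse-Hessian update, and a regularization term keeps the descent direction well conditioned. The constants (delta, gamma, step sizes) and the simplifications are assumptions for illustration; this is not the exact RES update from the paper.

```python
import numpy as np

def res_sketch(A, b, n_iter=2000, delta=0.1, gamma=0.01, eps0=0.1, seed=0):
    """Simplified regularized stochastic BFGS for least squares
    f(w) = mean_i (a_i . w - b_i)^2, one random sample per iteration."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    H = np.eye(d)                                  # inverse-Hessian approximation
    grad = lambda v, i: 2.0 * (A[i] @ v - b[i]) * A[i]
    for t in range(1, n_iter + 1):
        i = rng.integers(n)
        g = grad(w, i)
        w_new = w - (eps0 / t) * (H @ g + gamma * g)   # regularized direction
        # Curvature pair from the SAME sample, with a -delta*s correction
        # that regularizes the stochastic secant condition.
        s = w_new - w
        y = grad(w_new, i) - g - delta * s
        sy = s @ y
        if sy > 1e-12:                             # BFGS positivity safeguard
            rho, I = 1.0 / sy, np.eye(d)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        w = w_new
    return w

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
b = A @ w_true + 0.01 * rng.normal(size=200)
print("parameter error:", np.linalg.norm(res_sketch(A, b) - w_true))
```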
Stochastic Evolutionary Algorithms for Planning Robot Paths
NASA Technical Reports Server (NTRS)
Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard
2006-01-01
A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
Sparse Learning with Stochastic Composite Optimization.
Zhang, Weizhong; Zhang, Lijun; Jin, Zhongming; Jin, Rong; Cai, Deng; Li, Xuelong; Liang, Ronghua; He, Xiaofei
2017-06-01
In this paper, we study Stochastic Composite Optimization (SCO) for sparse learning that aims to learn a sparse solution from a composite function. Most of the recent SCO algorithms have already reached the optimal expected convergence rate O(1/λT), but they often fail to deliver sparse solutions at the end, either due to the limited sparsity regularization during stochastic optimization (SO) or due to the limitation in online-to-batch conversion. Even when the objective function is strongly convex, their high probability bounds can only attain O(√{log(1/δ)/T}), where δ is the failure probability, which is much worse than the expected convergence rate. To address these limitations, we propose a simple yet effective two-phase Stochastic Composite Optimization scheme by adding a novel powerful sparse online-to-batch conversion to the general Stochastic Optimization algorithms. We further develop three concrete algorithms, OptimalSL, LastSL and AverageSL, directly under our scheme to prove the effectiveness of the proposed scheme. Both the theoretical analysis and the experimental results show that our methods can outperform existing methods in sparse learning while improving the high probability bound to approximately O(log(log(T)/δ)/λT).
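To make the sparsity issue concrete, here is a hedged sketch of a standard proximal stochastic gradient (soft-thresholding) baseline for the l1-regularized least-squares case; the paper's two-phase conversion is not reproduced, and the problem sizes and step-size rule are illustrative assumptions. The thresholding step is what produces exact zeros, and averaging iterates (a common online-to-batch conversion) would destroy them, which is the limitation the paper targets.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 -- the source of exact zeros."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_sgd_lasso(A, b, lam=0.1, eta0=0.5, n_epochs=20, seed=0):
    """Proximal SGD for min_w mean_i 0.5*(a_i.w - b_i)^2 + lam*||w||_1."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    t = 0
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            t += 1
            eta = eta0 / np.sqrt(t)                 # diminishing step size
            g = (A[i] @ w - b[i]) * A[i]            # stochastic gradient
            w = soft_threshold(w - eta * g, eta * lam)
    return w

rng = np.random.default_rng(2)
A = rng.normal(size=(300, 20))
w_true = np.zeros(20); w_true[:3] = [2.0, -1.5, 1.0]   # sparse ground truth
b = A @ w_true + 0.05 * rng.normal(size=300)
w = prox_sgd_lasso(A, b)
print("recovered nonzero indices:", np.flatnonzero(np.round(w, 3)))
```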
Switching neuronal state: optimal stimuli revealed using a stochastically-seeded gradient algorithm.
Chang, Joshua; Paydarfar, David
2014-12-01
Inducing a switch in neuronal state using energy optimal stimuli is relevant to a variety of problems in neuroscience. Analytical techniques from optimal control theory can identify such stimuli; however, solutions to the optimization problem using indirect variational approaches can be elusive in models that describe neuronal behavior. Here we develop and apply a direct gradient-based optimization algorithm to find stimulus waveforms that elicit a change in neuronal state while minimizing energy usage. We analyze standard models of neuronal behavior, the Hodgkin-Huxley and FitzHugh-Nagumo models, to show that the gradient-based algorithm: (1) enables automated exploration of a wide solution space, using stochastically generated initial waveforms that converge to multiple locally optimal solutions; and (2) finds optimal stimulus waveforms that achieve a physiological outcome condition, without a priori knowledge of the optimal terminal condition of all state variables. Analysis of biological systems using stochastically-seeded gradient methods can reveal salient dynamical mechanisms underlying the optimal control of system behavior. The gradient algorithm may also have practical applications in future work, for example, finding energy optimal waveforms for therapeutic neural stimulation that minimizes power usage and diminishes off-target effects and damage to neighboring tissue.
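The first point, stochastically seeded multi-start descent, can be illustrated without any neuronal model. The sketch below runs gradient descent from random seeds on a stand-in multimodal "energy" landscape and collects the distinct local optima it converges to; the landscape and tolerances are assumptions, not the Hodgkin-Huxley or FitzHugh-Nagumo objectives analyzed in the paper.

```python
import numpy as np

def grad_descent(f, grad, x0, lr=0.02, n_steps=2000):
    x = x0.copy()
    for _ in range(n_steps):
        x -= lr * grad(x)
    return x

# Multimodal "energy" landscape standing in for the stimulus-design objective.
f = lambda x: np.sin(3 * x[0]) * np.cos(3 * x[1]) + 0.1 * (x[0] ** 2 + x[1] ** 2)

def grad(x):
    return np.array([3 * np.cos(3 * x[0]) * np.cos(3 * x[1]) + 0.2 * x[0],
                     -3 * np.sin(3 * x[0]) * np.sin(3 * x[1]) + 0.2 * x[1]])

rng = np.random.default_rng(3)
optima = []
for _ in range(30):                        # stochastically generated seeds
    x0 = rng.uniform(-2.0, 2.0, size=2)
    x_star = grad_descent(f, grad, x0)
    # keep only distinct local optima
    if not any(np.linalg.norm(x_star - o) < 1e-2 for o in optima):
        optima.append(x_star)

for o in sorted(optima, key=f):
    print(np.round(o, 3), "energy:", round(float(f(o)), 4))
```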
Bounded-Degree Approximations of Stochastic Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar
2017-06-01
We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
Pokharel, Shyam; Rana, Suresh; Blikenstaff, Joseph; Sadeghi, Amir; Prestidge, Bradley
2013-07-08
The purpose of this study is to investigate the effectiveness of the HIPO planning and optimization algorithm for real-time prostate HDR brachytherapy. This study consists of 20 patients who underwent ultrasound-based real-time HDR brachytherapy of the prostate using the treatment planning system called Oncentra Prostate (SWIFT version 3.0). The treatment plans for all patients were optimized using inverse dose-volume histogram-based optimization followed by graphical optimization (GRO) in real time. GRO is the manual manipulation of isodose lines slice by slice, so plan quality depends heavily on planner expertise and experience. The data for all patients were retrieved later, and treatment plans were created and optimized using the HIPO algorithm with the same set of dose constraints, number of catheters, and set of contours as in the real-time optimization. The HIPO algorithm is a hybrid because it combines both stochastic and deterministic algorithms. The stochastic algorithm, called simulated annealing, searches the optimal catheter distributions for a given set of dose objectives. The deterministic algorithm, called dose-volume histogram-based optimization (DVHO), optimizes the three-dimensional dose distribution quickly by moving straight downhill once it is in the advantageous region of the search space given by the stochastic algorithm. The PTV receiving 100% of the prescription dose (V100) was 97.56% and 95.38% with GRO and HIPO, respectively. The mean dose (D(mean)) and minimum dose to 10% volume (D10) for the urethra, rectum, and bladder were all statistically lower with HIPO compared to GRO using Student's paired t-test at the 5% significance level. HIPO can provide treatment plans with comparable target coverage to that of GRO with a reduction in dose to the critical structures.
Optimal Control of Hybrid Systems in Air Traffic Applications
NASA Astrophysics Data System (ADS)
Kamgarpour, Maryam
Growing concerns over the scalability of air traffic operations, air transportation fuel emissions and prices, as well as the advent of communication and sensing technologies motivate improvements to the air traffic management system. To address such improvements, in this thesis a hybrid dynamical model as an abstraction of the air traffic system is considered. Wind and hazardous weather impacts are included using a stochastic model. This thesis focuses on the design of algorithms for verification and control of hybrid and stochastic dynamical systems and the application of these algorithms to air traffic management problems. In the deterministic setting, a numerically efficient algorithm for optimal control of hybrid systems is proposed based on extensions of classical optimal control techniques. This algorithm is applied to optimize the trajectory of an Airbus 320 aircraft in the presence of wind and storms. In the stochastic setting, the verification problem of reaching a target set while avoiding obstacles (reach-avoid) is formulated as a two-player game to account for external agents' influence on system dynamics. The solution approach is applied to air traffic conflict prediction in the presence of stochastic wind. Due to the uncertainty in forecasts of the hazardous weather, and hence the unsafe regions of airspace for aircraft flight, the reach-avoid framework is extended to account for stochastic target and safe sets. This methodology is used to maximize the probability of the safety of aircraft paths through hazardous weather. Finally, the problem of modeling and optimization of arrival air traffic and runway configuration in dense airspace subject to stochastic weather data is addressed. This problem is formulated as a hybrid optimal control problem and is solved with a hierarchical approach that decouples safety and performance. As illustrated with this problem, the large scale of air traffic operations motivates future work on the efficient implementation of the proposed algorithms.
NASA Astrophysics Data System (ADS)
Zakynthinaki, M. S.; Stirling, J. R.
2007-01-01
Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
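For readers unfamiliar with ALOPEX, a minimal correlation-update sketch is given below on a toy heart-rate response model: each parameter keeps moving in a direction correlated with past error decrease, plus annealed exploration noise. The model, constants, and clipping are illustrative assumptions, and the sketch is a generic ALOPEX-style update rather than the ALOPEX IV variant used in the study; convergence on this toy problem is slow, and the point is the update rule itself.

```python
import numpy as np

def alopex_fit(model, params0, t, data, gamma=0.1, sigma0=0.2,
               n_iter=20000, seed=0):
    """Generic ALOPEX-style correlation update (minimization).

    Each parameter keeps moving in the direction correlated with past
    error decrease: p_new = p - gamma * clip(dp * dE) + decaying noise.
    """
    rng = np.random.default_rng(seed)
    err = lambda p: float(np.mean((model(p, t) - data) ** 2))
    p_old = params0.astype(float).copy()
    p = p_old + sigma0 * rng.normal(size=p_old.size)
    e_old, e = err(p_old), err(p)
    best_p, best_e = (p.copy(), e) if e < e_old else (p_old.copy(), e_old)
    for k in range(n_iter):
        sigma = sigma0 / (1.0 + k / 1000.0)                   # annealed noise
        corr = np.clip((p - p_old) * (e - e_old), -1.0, 1.0)  # bounded correlation
        p_new = p - gamma * corr + sigma * rng.normal(size=p.size)
        p_old, e_old = p, e
        p, e = p_new, err(p_new)
        if e < best_e:
            best_p, best_e = p.copy(), e
    return best_p, best_e

# Toy heart-rate response model: HR(t) = a + b * (1 - exp(-t / tau)).
model = lambda p, t: p[0] + p[1] * (1.0 - np.exp(-t / (np.abs(p[2]) + 1e-6)))
t = np.linspace(0.0, 10.0, 100)
true = np.array([60.0, 40.0, 2.0])
data = model(true, t) + np.random.default_rng(4).normal(scale=1.0, size=t.size)

p, e = alopex_fit(model, np.array([50.0, 30.0, 1.0]), t, data)
print("fitted:", np.round(p, 2), "mse:", round(e, 3))
```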
2012-01-01
Background Multi-target therapeutics has been shown to be effective for treating complex diseases, and currently, it is a common practice to combine multiple drugs to treat such diseases to optimize the therapeutic outcomes. However, considering the huge number of possible ways to mix multiple drugs at different concentrations, it is practically difficult to identify the optimal drug combination through exhaustive testing. Results In this paper, we propose a novel stochastic search algorithm, called the adaptive reference update (ARU) algorithm, that can provide an efficient and systematic way for optimizing multi-drug cocktails. The ARU algorithm iteratively updates the drug combination to improve its response, where the update is made by comparing the response of the current combination with that of a reference combination, based on which the beneficial update direction is predicted. The reference combination is continuously updated based on the drug response values observed in the past, thereby adapting to the underlying drug response function. To demonstrate the effectiveness of the proposed algorithm, we evaluated its performance based on various multi-dimensional drug functions and compared it with existing algorithms. Conclusions Simulation results show that the ARU algorithm significantly outperforms existing stochastic search algorithms, including the Gur Game algorithm. In fact, the ARU algorithm can more effectively identify potent drug combinations and it typically spends fewer iterations for finding effective combinations. Furthermore, the ARU algorithm is robust to random fluctuations and noise in the measured drug response, which makes the algorithm well-suited for practical drug optimization applications. PMID:23134742
Multiscale stochastic simulations of chemical reactions with regulated scale separation
NASA Astrophysics Data System (ADS)
Koumoutsakos, Petros; Feigelman, Justin
2013-07-01
We present a coupling of multiscale frameworks with accelerated stochastic simulation algorithms for systems of chemical reactions with disparate propensities. The algorithms regulate the propensities of the fast and slow reactions of the system, using alternating micro and macro sub-steps simulated with accelerated algorithms such as τ-leaping and R-leaping. The proposed algorithms are shown to provide significant speedups in simulations of stiff systems of chemical reactions, with a trade-off in accuracy as controlled by a regulating parameter. More importantly, the error of the methods exhibits a cutoff phenomenon that allows for optimal parameter choices. Numerical experiments demonstrate that hybrid algorithms involving accelerated stochastic simulations can be, in certain cases, more accurate while faster than their corresponding stochastic simulation algorithm counterparts.
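As a point of reference for the accelerated algorithms mentioned above, here is a minimal explicit τ-leaping sketch for a birth-death system; the rate constants and the crude non-negativity guard are assumptions, and the propensity-regulation machinery of the paper is not included.

```python
import numpy as np

def tau_leap(x0, k_prod, k_deg, tau, t_end, seed=0):
    """Explicit tau-leaping for the birth-death system
    0 --k_prod--> A,  A --k_deg--> 0.

    Each leap fires every reaction channel a Poisson-distributed number
    of times, instead of simulating one event at a time as in the SSA.
    """
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a_prod = k_prod                  # propensity of production
        a_deg = k_deg * x                # propensity of degradation
        n_prod = rng.poisson(a_prod * tau)
        n_deg = rng.poisson(a_deg * tau)
        x = max(x + n_prod - n_deg, 0)   # crude guard against negative counts
        t += tau
        times.append(t); states.append(x)
    return np.array(times), np.array(states)

times, states = tau_leap(x0=0, k_prod=50.0, k_deg=0.5, tau=0.05, t_end=20.0)
print("final state:", states[-1], "(stationary mean k_prod/k_deg =", 50.0 / 0.5, ")")
```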
Stochastic Robust Mathematical Programming Model for Power System Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Cong; Changhyeok, Lee; Haoyong, Chen
2016-01-01
This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
Xiang, Suyun; Wang, Wei; Xia, Jia; Xiang, Bingren; Ouyang, Pingkai
2009-09-01
The stochastic resonance algorithm is applied to the trace analysis of alkyl halides and alkyl benzenes in water samples. Compared with the single-signal case, the optimization of system parameters for a multicomponent sample is more complex. In this article, the resolution of adjacent chromatographic peaks is incorporated into the parameter optimization for the first time. With the optimized parameters, the algorithm gave an ideal output with good resolution as well as an enhanced signal-to-noise ratio. Applying the enhanced signals, the method extended the limit of detection and exhibited good linearity, which ensures accurate determination of the multicomponent sample.
Scenario Decomposition for 0-1 Stochastic Programs: Improvements and Asynchronous Implementation
Ryan, Kevin; Rajan, Deepak; Ahmed, Shabbir
2016-05-01
The recently proposed scenario decomposition algorithm for stochastic 0-1 programs finds an optimal solution by evaluating and removing individual solutions that are discovered by solving scenario subproblems. In this work, we develop an asynchronous, distributed implementation of the algorithm which has computational advantages over existing synchronous implementations. Improvements to both the synchronous and asynchronous algorithms are proposed. We also test the algorithm on well-known stochastic 0-1 programs from the SIPLIB test library and are able to solve one previously unsolved instance from the test set.
Control of Finite-State, Finite Memory Stochastic Systems
NASA Technical Reports Server (NTRS)
Sandell, Nils R.
1974-01-01
A generalized problem of stochastic control is discussed in which multiple controllers with different data bases are present. The vehicle for the investigation is the finite state, finite memory (FSFM) stochastic control problem. Optimality conditions are obtained by deriving an equivalent deterministic optimal control problem. A FSFM minimum principle is obtained via the equivalent deterministic problem. The minimum principle suggests the development of a numerical optimization algorithm, the min-H algorithm. The relationship between the sufficiency of the minimum principle and the informational properties of the problem are investigated. A problem of hypothesis testing with 1-bit memory is investigated to illustrate the application of control theoretic techniques to information processing problems.
Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation
NASA Astrophysics Data System (ADS)
Bedi, Amrit Singh; Rajawat, Ketan
2018-05-01
Stochastic network optimization problems entail finding resource allocation policies that are optimum on average but must be designed in an online fashion. Such problems are ubiquitous in communication networks, where resources such as energy and bandwidth are divided among nodes to satisfy certain long-term objectives. This paper proposes an asynchronous incremental dual descent resource allocation algorithm that utilizes delayed stochastic gradients for carrying out its updates. The proposed algorithm is well-suited to heterogeneous networks as it allows the computationally-challenged or energy-starved nodes to, at times, postpone the updates. The asymptotic analysis of the proposed algorithm is carried out, establishing dual convergence under both constant and diminishing step sizes. It is also shown that with constant step size, the proposed resource allocation policy is asymptotically near-optimal. An application involving multi-cell coordinated beamforming is detailed, demonstrating the usefulness of the proposed algorithm.
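A hedged sketch of the delayed-dual-gradient mechanism is given below on a toy rate-allocation problem with a logarithmic utility; the queue of pending gradients stands in for stragglers that postpone their updates. The utility, budget, delay, and step size are illustrative assumptions, not the paper's network model.

```python
import numpy as np
from collections import deque

def dual_descent_delayed(n_users=10, budget=5.0, step=0.02,
                         delay=5, n_iter=3000, seed=0):
    """Dual (sub)gradient resource allocation with delayed gradients.

    Problem: max sum_i log(1 + x_i)  s.t.  sum_i x_i <= budget, x_i >= 0.
    For a price lam, each user's best response is x_i = max(1/lam - 1, 0);
    the dual gradient is the constraint slack. Updates consume a gradient
    that is `delay` iterations old, mimicking stragglers in a network.
    """
    rng = np.random.default_rng(seed)
    lam = 1.0
    pending = deque()
    for _ in range(n_iter):
        x = np.maximum(1.0 / lam - 1.0, 0.0) * np.ones(n_users)
        noisy_slack = (budget - x.sum()) + 0.1 * rng.normal()  # stochastic slack
        pending.append(noisy_slack)
        if len(pending) > delay:                 # apply the stale gradient
            lam = max(lam - step * pending.popleft(), 1e-3)
    x = np.maximum(1.0 / lam - 1.0, 0.0) * np.ones(n_users)
    return lam, x

lam, x = dual_descent_delayed()
print("price:", round(lam, 3), "total allocation:", round(float(x.sum()), 3))
```

At the optimum of this toy problem the budget constraint is tight, so the price settles near 2/3 and each of the ten users receives about 0.5 units.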
Stochastic modelling of turbulent combustion for design optimization of gas turbine combustors
NASA Astrophysics Data System (ADS)
Mehanna Ismail, Mohammed Ali
The present work covers the development and the implementation of an efficient algorithm for the design optimization of gas turbine combustors. The purpose is to explore the possibilities and indicate constructive suggestions for optimization techniques as alternative methods for designing gas turbine combustors. The algorithm is general to the extent that no constraints are imposed on the combustion phenomena or on the combustor configuration. The optimization problem is broken down into two elementary problems: the first is the optimum search algorithm, and the second is the turbulent combustion model used to determine the combustor performance parameters. These performance parameters constitute the objective and physical constraints in the optimization problem formulation. The examination of both turbulent combustion phenomena and the gas turbine design process suggests that the turbulent combustion model represents a crucial part of the optimization algorithm. The basic requirements needed for a turbulent combustion model to be successfully used in a practical optimization algorithm are discussed. In principle, the combustion model should comply with the conflicting requirements of high fidelity, robustness and computational efficiency. To that end, the problem of turbulent combustion is discussed and the current state of the art of turbulent combustion modelling is reviewed. According to this review, turbulent combustion models based on the composition PDF transport equation are found to be good candidates for application in the present context. However, these models are computationally expensive. To overcome this difficulty, two different models based on the composition PDF transport equation were developed: an improved Lagrangian Monte Carlo composition PDF algorithm and the generalized stochastic reactor model. Improvements in the Lagrangian Monte Carlo composition PDF model performance and its computational efficiency were achieved through the implementation of time splitting, variable stochastic fluid particle mass control, and a second order time accurate (predictor-corrector) scheme used for solving the stochastic differential equations governing the particles evolution. The model compared well against experimental data found in the literature for two different configurations: bluff body and swirl stabilized combustors. The generalized stochastic reactor is a newly developed model. This model relies on the generalization of the concept of the classical stochastic reactor theory in the sense that it accounts for both finite micro- and macro-mixing processes. (Abstract shortened by UMI.)
Optimal Budget Allocation for Sample Average Approximation
2011-06-01
an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to ... regime for the optimization algorithm. ... Sample average approximation (SAA) is a frequently used approach to solving stochastic programs ... appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample
Control Improvement for Jump-Diffusion Processes with Applications to Finance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baeuerle, Nicole, E-mail: nicole.baeuerle@kit.edu; Rieder, Ulrich, E-mail: ulrich.rieder@uni-ulm.de
2012-02-15
We consider stochastic control problems with jump-diffusion processes and formulate an algorithm which produces, starting from a given admissible control π, a new control with a better value. If no improvement is possible, then π is optimal. Such an algorithm is well-known for discrete-time Markov Decision Problems under the name Howard's policy improvement algorithm. The idea can be traced back to Bellman. Here we show with the help of martingale techniques that such an algorithm can also be formulated for stochastic control problems with jump-diffusion processes. As an application we derive some interesting results in financial portfolio optimization.
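The discrete-time analogue the authors mention, Howard's policy improvement for Markov decision problems, is easy to state in full; a minimal sketch follows for a finite MDP with known transition matrices. The two-state example is an assumption for illustration; the continuous-time jump-diffusion construction of the paper is substantially more involved.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """Howard's policy improvement for a finite MDP.

    P[a][s, s'] are transition probabilities, R[a][s] expected rewards.
    Repeatedly (1) evaluate the current policy exactly, (2) improve it
    greedily; stops when no improvement exists, i.e. the policy is optimal.
    """
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        r_pi = np.array([R[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: greedy one-step lookahead.
        q = np.array([R[a] + gamma * P[a] @ v for a in range(n_actions)])
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

# Two-state, two-action toy problem.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),    # action 0
     np.array([[0.1, 0.9], [0.9, 0.1]])]    # action 1
R = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
policy, v = policy_iteration(P, R)
print("optimal policy:", policy, "values:", np.round(v, 2))
```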
Stochastic search in structural optimization - Genetic algorithms and simulated annealing
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1993-01-01
An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic-search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.
NASA Astrophysics Data System (ADS)
Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.
2016-12-01
This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the εk-global minimization of a bound constrained optimization subproblem, where εk → ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.
Online POMDP Algorithms for Very Large Observation Spaces
2017-06-06
• Luo, Yuanfu, Haoyu Bai...and Wee Sun Lee. "Adaptive stochastic optimization: From sets to paths." In Advances in Neural Information Processing Systems, pp. 1585-1593. 2015.
NASA Astrophysics Data System (ADS)
Xiang, Suyun; Wang, Wei; Xiang, Bingren; Deng, Haishan; Xie, Shaofei
2007-05-01
The periodic modulation-based stochastic resonance algorithm (PSRA) was used to amplify and detect the weak liquid chromatography-mass spectrometry (LC-MS) signal of granisetron in plasma. In the algorithm, the stochastic resonance (SR) was achieved by introducing an external periodic force to the nonlinear system. The optimization of parameters was carried out in two steps to give attention to both the signal-to-noise ratio (S/N) and the peak shape of output signal. By applying PSRA with the optimized parameters, the signal-to-noise ratio of LC-MS peak was enhanced significantly and distorted peak shape that often appeared in the traditional stochastic resonance algorithm was corrected by the added periodic force. Using the signals enhanced by PSRA, this method extended the limit of detection (LOD) and limit of quantification (LOQ) of granisetron in plasma from 0.05 and 0.2 ng/mL, respectively, to 0.01 and 0.02 ng/mL, and exhibited good linearity, accuracy and precision, which ensure accurate determination of the target analyte.
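The bistable mechanism behind stochastic resonance can be sketched compactly. Below, an overdamped double-well system is driven by a weak sub-threshold periodic input plus tunable noise, and the output power at the drive frequency is compared across noise levels; the quartic potential, drive amplitude, and crude SNR estimate are illustrative assumptions, not the PSRA system or its periodic-force parameters.

```python
import numpy as np

def bistable_sr(signal, dt, a=1.0, b=1.0, noise=0.3, seed=0):
    """Euler-Maruyama simulation of the bistable SR system
    dx/dt = a*x - b*x**3 + signal(t) + sqrt(2*D)*xi(t)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(signal.size)
    for k in range(1, signal.size):
        drift = a * x[k - 1] - b * x[k - 1] ** 3 + signal[k - 1]
        x[k] = x[k - 1] + drift * dt + np.sqrt(2 * noise * dt) * rng.normal()
    return x

dt, f0 = 0.01, 0.1
t = np.arange(0.0, 200.0, dt)
weak = 0.15 * np.sin(2 * np.pi * f0 * t)          # sub-threshold periodic input
for D in (0.01, 0.1, 0.5):                        # sweep the noise intensity
    x = bistable_sr(weak, dt, noise=D)
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(t.size, dt)
    k0 = np.argmin(np.abs(freqs - f0))
    snr = spec[k0] / np.median(spec)              # crude SNR at drive frequency
    print(f"D = {D:4.2f}  output SNR ~ {snr:9.1f}")
```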
Perspective: Stochastic magnetic devices for cognitive computing
NASA Astrophysics Data System (ADS)
Roy, Kaushik; Sengupta, Abhronil; Shim, Yong
2018-06-01
Stochastic switching of nanomagnets can potentially enable probabilistic cognitive hardware consisting of noisy neural and synaptic components. Furthermore, computational paradigms inspired from the Ising computing model require stochasticity for achieving near-optimality in solutions to various types of combinatorial optimization problems such as the Graph Coloring Problem or the Travelling Salesman Problem. Achieving optimal solutions in such problems is computationally exhaustive and requires natural annealing to arrive at the near-optimal solutions. Stochastic switching of devices also finds use in applications involving Deep Belief Networks and Bayesian Inference. In this article, we provide a multi-disciplinary perspective across the stack of devices, circuits, and algorithms to illustrate how the stochastic switching dynamics of spintronic devices in the presence of thermal noise can provide a direct mapping to the computational units of such probabilistic intelligent systems.
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.
1971-01-01
An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results and actual results obtained with a 600 MeV cyclotron are given.
Optimization of Operations Resources via Discrete Event Simulation Modeling
NASA Technical Reports Server (NTRS)
Joshi, B.; Morris, D.; White, N.; Unal, R.
1996-01-01
The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
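A hedged sketch of the approach follows, with a cheap noisy cost function standing in for the discrete event simulation; the demand vector, penalty weights, and GA operators (truncation selection, one-point crossover, integer mutation) are illustrative assumptions, not the study's launch-vehicle model.

```python
import random

def simulate_cost(levels, rng):
    """Stand-in for a discrete event simulation: resource cost plus a
    noisy delay penalty that grows when any resource is under-provided."""
    demand = [4, 7, 3, 5]
    shortfall = sum(max(d - x, 0) for x, d in zip(levels, demand))
    return sum(levels) + 10 * shortfall + rng.gauss(0, 0.5)

def genetic_search(n_gen=60, pop_size=40, n_res=4, max_level=10, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, max_level) for _ in range(n_res)]
           for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = sorted(pop, key=lambda ind: simulate_cost(ind, rng))
        parents = scored[: pop_size // 2]          # truncation selection
        pop = parents[:]
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_res)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # integer mutation
                child[rng.randrange(n_res)] = rng.randint(0, max_level)
            pop.append(child)
    return min(pop, key=lambda ind: simulate_cost(ind, rng))

print("resource levels:", genetic_search())   # hypothetical demand is [4, 7, 3, 5]
```

Note that the GA only ever queries the noisy simulator, so no continuity or differentiability of the cost is needed, which is the robustness property the abstract emphasizes.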
Genetic evolutionary taboo search for optimal marker placement in infrared patient setup
NASA Astrophysics Data System (ADS)
Riboldi, M.; Baroni, G.; Spadea, M. F.; Tagaste, B.; Garibaldi, C.; Cambria, R.; Orecchia, R.; Pedotti, A.
2007-09-01
In infrared patient setup adequate selection of the external fiducial configuration is required for compensating inner target displacements (target registration error, TRE). Genetic algorithms (GA) and taboo search (TS) were applied in a newly designed approach to optimal marker placement: the genetic evolutionary taboo search (GETS) algorithm. In the GETS paradigm, multiple solutions are simultaneously tested in a stochastic evolutionary scheme, where taboo-based decision making and adaptive memory guide the optimization process. The GETS algorithm was tested on a group of ten prostate patients, to be compared to standard optimization and to randomly selected configurations. The changes in the optimal marker configuration, when TRE is minimized for OARs, were specifically examined. Optimal GETS configurations ensured a 26.5% mean decrease in the TRE value, versus 19.4% for conventional quasi-Newton optimization. Common features in GETS marker configurations were highlighted in the dataset of ten patients, even when multiple runs of the stochastic algorithm were performed. Including OARs in TRE minimization did not considerably affect the spatial distribution of GETS marker configurations. In conclusion, the GETS algorithm proved to be highly effective in solving the optimal marker placement problem. Further work is needed to embed site-specific deformation models in the optimization process.
NASA Astrophysics Data System (ADS)
Panda, Satyasen
2018-05-01
This paper proposes a modified artificial bee colony optimization (ABC) algorithm based on levy flight swarm intelligence, referred to as artificial bee colony levy flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve the asset assignment problem based on signal to noise ratio (SNR) optimization in OCDMA networks with quality of service constraints. The proposed optimization using the ABC-LFSW algorithm provides methods for minimizing various noises and interferences, regulating the transmitted power and optimizing the network design for improving the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving the network performance with optimized input parameters. The detailed discussion and simulation results based on transmitted power allocation and power efficiency of OCPs are included. The experimental results prove the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.
Simulation-based planning for theater air warfare
NASA Astrophysics Data System (ADS)
Popken, Douglas A.; Cox, Louis A., Jr.
2004-08-01
Planning for Theatre Air Warfare can be represented as a hierarchy of decisions. At the top level, surviving airframes must be assigned to roles (e.g., Air Defense, Counter Air, Close Air Support, and AAF Suppression) in each time period in response to changing enemy air defense capabilities, remaining targets, and roles of opposing aircraft. At the middle level, aircraft are allocated to specific targets to support their assigned roles. At the lowest level, routing and engagement decisions are made for individual missions. The decisions at each level form a set of time-sequenced Courses of Action taken by opposing forces. This paper introduces a set of simulation-based optimization heuristics operating within this planning hierarchy to optimize allocations of aircraft. The algorithms estimate distributions for stochastic outcomes of the pairs of Red/Blue decisions. Rather than using traditional stochastic dynamic programming to determine optimal strategies, we use an innovative combination of heuristics, simulation-optimization, and mathematical programming. Blue decisions are guided by a stochastic hill-climbing search algorithm while Red decisions are found by optimizing over a continuous representation of the decision space. Stochastic outcomes are then provided by fast, Lanchester-type attrition simulations. This paper summarizes preliminary results from top and middle level models.
Ant Lion Optimization algorithm for kidney exchanges.
Hamouda, Eslam; El-Metwally, Sara; Tarek, Mayada
2018-01-01
The kidney exchange programs bring new insights to the field of organ transplantation. They make the previously infeasible surgery for incompatible patient-donor pairs practical on a large scale. Mathematically, the kidney exchange is an optimization problem for the number of possible exchanges among the incompatible pairs in a given pool. Also, the optimization modeling should consider the expected quality-adjusted life of transplant candidates and the shortage of computational and operational hospital resources. In this article, we introduce a bio-inspired stochastic-based Ant Lion Optimization, ALO, algorithm to the kidney exchange space to maximize the number of feasible cycles and chains among the pool pairs. The Ant Lion Optimizer-based program achieves kidney exchange results comparable to deterministic approaches such as integer programming. Also, ALO outperforms other stochastic-based methods such as the Genetic Algorithm in terms of the efficient usage of computational resources and the quantity of resulting exchanges. The Ant Lion Optimization algorithm can be adopted easily for on-line exchanges and the integration of weights for hard-to-match patients, which will improve the future decisions of kidney exchange programs. A reference implementation of the ALO algorithm for kidney exchanges is written in MATLAB and is GPL licensed. It is available as free open-source software from: https://github.com/SaraEl-Metwally/ALO_algorithm_for_Kidney_Exchanges.
NASA Astrophysics Data System (ADS)
Yeh, Cheng-Ta; Lin, Yi-Kuei; Yang, Jo-Yun
2018-07-01
Network reliability is an important performance index for many real-life systems, such as electric power systems, computer systems and transportation systems. These systems can be modelled as stochastic-flow networks (SFNs) composed of arcs and nodes. Most system supervisors pursue network reliability maximization by finding the optimal multi-state resource assignment, in which one resource is assigned to each arc. However, a disaster may cause correlated failures of the assigned resources, affecting the network reliability. This article focuses on determining the optimal resource assignment with maximal network reliability for SFNs. To solve the problem, this study proposes a hybrid algorithm integrating the genetic algorithm and tabu search to determine the optimal assignment, called the hybrid GA-TS algorithm (HGTA), and integrates minimal paths, the recursive sum of disjoint products and the correlated binomial distribution to calculate network reliability. Several practical numerical experiments demonstrate that HGTA has better computational quality than several popular soft computing algorithms.
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The stochastic, infinite time, discrete output feedback problem for time invariant linear systems is examined. Two sets of sufficient conditions for the existence of a stable, globally optimal solution are presented. An expression for the total change in the cost function due to a change in the feedback gain is obtained. This expression is used to show that a sequence of gains can be obtained by an algorithm, so that the corresponding cost sequence is monotonically decreasing and the corresponding sequence of the cost gradient converges to zero. The algorithm is guaranteed to obtain a critical point of the cost function. The computational steps necessary to implement the algorithm on a computer are presented. The results are applied to a digital outer loop flight control problem. The numerical results for this 13th order problem indicate a rate of convergence considerably faster than two other algorithms used for comparison.
Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise
NASA Astrophysics Data System (ADS)
Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej
2010-11-01
Simulation models are used for decision support and learning in enterprises and in schools. Three cases of successful applications demonstrate the usefulness of weak anticipative information. Job shop scheduling with a makespan criterion presents a real case of customized flexible furniture production optimization; the genetic algorithm for job shop scheduling optimization is presented. Simulation-based inventory control describes inventory optimization for products with stochastic lead time and demand; dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of decision-making information based on simulation is discussed as well. All cases are discussed from the optimization, modelling and learning points of view.
Wehmeyer, Christoph; Falk von Rudorff, Guido; Wolf, Sebastian; Kabbe, Gabriel; Schärf, Daniel; Kühne, Thomas D; Sebastiani, Daniel
2012-11-21
We present a stochastic, swarm intelligence-based optimization algorithm for the prediction of global minima on potential energy surfaces of molecular cluster structures. Our optimization approach is a modification of the artificial bee colony (ABC) algorithm which is inspired by the foraging behavior of honey bees. We apply our modified ABC algorithm to the problem of global geometry optimization of molecular cluster structures and show its performance for clusters with 2-57 particles and different interatomic interaction potentials.
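A simplified sketch of the approach is shown below for Lennard-Jones clusters in reduced units; it keeps the greedy source replacement and scout-abandonment logic of ABC but merges the employed and onlooker phases and omits fitness-proportional selection, so it illustrates the idea rather than reproducing the authors' modified algorithm. The bounds, colony size, and abandonment limit are assumptions.

```python
import numpy as np

def lj_energy(x):
    """Lennard-Jones energy of a cluster; x has shape (n_atoms, 3)."""
    e = 0.0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            r2 = max(np.sum((x[i] - x[j]) ** 2), 1e-12)  # avoid division by zero
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 ** 2 - inv6)
    return e

def abc_cluster(n_atoms=4, n_food=20, n_iter=3000, limit=50, seed=0):
    """Simplified artificial bee colony for cluster geometry optimization."""
    rng = np.random.default_rng(seed)
    food = rng.uniform(-1.5, 1.5, size=(n_food, n_atoms, 3))  # food sources
    energy = np.array([lj_energy(s) for s in food])
    trials = np.zeros(n_food, dtype=int)
    for _ in range(n_iter):
        for i in range(n_food):                  # employed/onlooker phase, merged
            k = rng.integers(n_food)
            cand = food[i].copy()
            a, c = rng.integers(n_atoms), rng.integers(3)   # one coordinate
            cand[a, c] += rng.uniform(-1, 1) * (food[i][a, c] - food[k][a, c])
            e = lj_energy(cand)
            if e < energy[i]:                    # greedy replacement
                food[i], energy[i], trials[i] = cand, e, 0
            else:
                trials[i] += 1
        for i in np.flatnonzero(trials > limit):  # scouts abandon stale sources
            food[i] = rng.uniform(-1.5, 1.5, size=(n_atoms, 3))
            energy[i], trials[i] = lj_energy(food[i]), 0
    best = energy.argmin()
    return food[best], energy[best]

coords, e = abc_cluster()
print("LJ4 energy found:", round(e, 3), "(known global minimum: -6.0)")
```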
A Pumping Algorithm for Ergodic Stochastic Mean Payoff Games with Perfect Information
NASA Astrophysics Data System (ADS)
Boros, Endre; Elbassioni, Khaled; Gurvich, Vladimir; Makino, Kazuhisa
In this paper, we consider two-person zero-sum stochastic mean payoff games with perfect information, or BWR-games, given by a digraph G = (V, E) with V = V_B ∪ V_W ∪ V_R, local rewards r: E → ℝ, and three types of vertices: black (V_B), white (V_W), and random (V_R). The game is played by two players, White and Black: when the play is at a white (black) vertex v, White (Black) selects an outgoing arc (v,u). When the play is at a random vertex v, a vertex u is picked with the given probability p(v,u). In all cases, Black pays White the value r(v,u). The play continues forever, and White aims to maximize (Black aims to minimize) the limiting mean (that is, average) payoff. It was recently shown in [7] that BWR-games are polynomially equivalent with the classical Gillette games, which include many well-known subclasses, such as cyclic games, simple stochastic games (SSGs), stochastic parity games, and Markov decision processes. In this paper, we give a new algorithm for solving BWR-games in the ergodic case, that is, when the optimal values do not depend on the initial position. Our algorithm solves a BWR-game by reducing it, using a potential transformation, to a canonical form in which the optimal strategies of both players and the value for every initial position are obvious, since a locally optimal move in it is optimal in the whole game. We show that this algorithm is pseudo-polynomial when the number of random nodes is constant. We also provide an almost matching lower bound on its running time, and show that this bound holds for a wider class of algorithms. Let us add that the general (non-ergodic) case is at least as hard as SSGs, for which no pseudo-polynomial algorithm is known.
Finding optimal vaccination strategies for pandemic influenza using genetic algorithms.
Patel, Rajan; Longini, Ira M; Halloran, M Elizabeth
2005-05-21
In the event of pandemic influenza, only limited supplies of vaccine may be available. We use stochastic epidemic simulations, genetic algorithms (GA), and random mutation hill climbing (RMHC) to find optimal vaccine distributions to minimize the number of illnesses or deaths in the population, given limited quantities of vaccine. Due to the non-linearity, complexity and stochasticity of the epidemic process, it is not possible to solve for optimal vaccine distributions mathematically. However, we use GA and RMHC to find near optimal vaccine distributions. We model an influenza pandemic that has age-specific illness attack rates similar to the Asian pandemic in 1957-1958 caused by influenza A(H2N2), as well as a distribution similar to the Hong Kong pandemic in 1968-1969 caused by influenza A(H3N2). We find the optimal vaccine distributions given that the number of doses is limited over the range of 10-90% of the population. While GA and RMHC work well in finding optimal vaccine distributions, GA is significantly more efficient than RMHC. We show that the optimal vaccine distribution found by GA and RMHC is up to 84% more effective than random mass vaccination in the mid range of vaccine availability. GA is generalizable to the optimization of stochastic model parameters for other infectious diseases and population structures.
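Of the two search methods, RMHC is the simpler to sketch. Below, a toy illness score stands in for the stochastic epidemic simulation, and the hill climber mutates a dose allocation by shifting doses between age groups, accepting only improvements; the attack rates, transmission weights, and coverage model are entirely hypothetical.

```python
import random

AGE_GROUPS = 5
ATTACK = [0.4, 0.3, 0.2, 0.2, 0.3]      # hypothetical age-specific attack rates
SPREAD = [2.5, 2.0, 1.2, 0.8, 0.6]      # hypothetical transmission importance

def simulated_illnesses(alloc, doses, rng):
    """Toy stand-in for a stochastic epidemic simulation: illnesses fall
    with coverage, weighted by how much each group drives transmission."""
    total = 0.0
    for g in range(AGE_GROUPS):
        coverage = min(alloc[g] * doses, 1.0)
        total += ATTACK[g] * SPREAD[g] * (1.0 - 0.8 * coverage)
    return total + rng.gauss(0.0, 0.01)          # simulation noise

def rmhc(doses=0.4, n_iter=5000, seed=0):
    """Random mutation hill climbing over dose allocations (summing to 1)."""
    rng = random.Random(seed)
    alloc = [1.0 / AGE_GROUPS] * AGE_GROUPS
    best = simulated_illnesses(alloc, doses, rng)
    for _ in range(n_iter):
        cand = alloc[:]
        i, j = rng.sample(range(AGE_GROUPS), 2)
        move = rng.uniform(0.0, min(0.05, cand[i]))
        cand[i] -= move; cand[j] += move          # mutate: shift doses i -> j
        score = simulated_illnesses(cand, doses, rng)
        if score < best:                          # accept only improvements
            alloc, best = cand, score
    return alloc, best

alloc, best = rmhc()
print("allocation:", [round(a, 2) for a in alloc], "illness score:", round(best, 3))
```

Because acceptance decisions here compare noisy estimates, a single lucky sample can mislead the climber; averaging several simulation replicates per candidate is the usual remedy.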
Optimization for Service Routes of Pallet Service Center Based on the Pallet Pool Mode
He, Shiwei; Song, Rui
2016-01-01
Service routes optimization (SRO) of a pallet service center must first meet customers' demand and then, through reasonable route organization, minimize vehicle travel distance. The routes optimization of a pallet service center is similar to the distribution problems of the vehicle routing problem (VRP) and the Chinese postman problem (CPP), but it has its own characteristics. Based on the relevant research results, the conditions determining the number of vehicles, the one-way nature of routes, loading constraints, and time windows are fully considered, and a chance constrained programming model with stochastic constraints is constructed, taking the shortest total path of all vehicles for a delivery (recycling) operation as the objective. For the characteristics of the model, a hybrid intelligent algorithm including stochastic simulation, neural network, and immune clonal algorithm is designed to solve the model. Finally, the validity and rationality of the optimization model and algorithm are verified by the case study. PMID:27528865
Stochastic HKMDHE: A multi-objective contrast enhancement algorithm
NASA Astrophysics Data System (ADS)
Pratiher, Sawon; Mukhopadhyay, Sabyasachi; Maity, Srideep; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.
2018-02-01
This contribution proposes a novel extension of the existing "Hyper Kurtosis based Modified Duo-Histogram Equalization" (HKMDHE) algorithm for multi-objective contrast enhancement of biomedical images. A novel modified objective function has been formulated by joint optimization of the individual histogram equalization objectives. The adequacy of the proposed methodology with respect to image quality metrics such as brightness preservation, peak signal-to-noise ratio (PSNR), Structural Similarity Index (SSIM) and the universal image quality metric has been experimentally validated. A performance analysis of the proposed Stochastic HKMDHE against existing histogram equalization methodologies, such as Global Histogram Equalization (GHE) and Contrast Limited Adaptive Histogram Equalization (CLAHE), is given for comparative evaluation.
A theoretical comparison of evolutionary algorithms and simulated annealing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.
1995-08-28
This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that, under mild conditions, a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.
Using genetic algorithm to solve a new multi-period stochastic optimization model
NASA Astrophysics Data System (ADS)
Zhang, Xin-Li; Zhang, Ke-Cun
2009-09-01
This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi (Ed.), The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], which was called a hybrid model. However, transaction costs were not considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth loss risk while maximizing the expected utility; (2) typical market imperfections such as short sale constraints and proportional transaction costs are considered simultaneously; (3) applying a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
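The CVaR control in (1) is typically implemented with the Rockafellar-Uryasev representation. The sketch below estimates a mean-CVaR objective on simulated return scenarios and, for brevity, searches a coarse simplex grid instead of running the paper's genetic algorithm; the scenario model, risk weight, and grid are illustrative assumptions.

```python
import numpy as np

def cvar_objective(weights, returns, alpha=0.95, lam=2.0):
    """Mean-CVaR objective on simulated return scenarios.

    CVaR_alpha of the loss L = -w.r is estimated with the
    Rockafellar-Uryasev formula:
        CVaR = min_t  t + E[max(L - t, 0)] / (1 - alpha),
    where the minimizing t is the VaR (here: the empirical quantile).
    """
    losses = -returns @ weights
    var = np.quantile(losses, alpha)
    cvar = var + np.mean(np.maximum(losses - var, 0.0)) / (1.0 - alpha)
    return -(returns @ weights).mean() + lam * cvar  # return vs. risk trade-off

rng = np.random.default_rng(5)
mu = np.array([0.02, 0.05, 0.08, 0.11])              # hypothetical asset means
sd = np.array([0.02, 0.06, 0.12, 0.20])              # hypothetical volatilities
scenarios = mu + sd * rng.normal(size=(10000, 4))    # simulated return paths

# Brute-force a coarse simplex grid in lieu of the paper's genetic algorithm.
best, best_w = np.inf, None
grid = np.linspace(0.0, 1.0, 11)
for w1 in grid:
    for w2 in grid:
        for w3 in grid:
            w4 = 1.0 - w1 - w2 - w3
            if w4 < -1e-9:
                continue                             # outside the simplex
            w = np.array([w1, w2, w3, max(w4, 0.0)])
            obj = cvar_objective(w, scenarios)
            if obj < best:
                best, best_w = obj, w
print("weights:", best_w, "objective:", round(best, 4))
```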
DOT National Transportation Integrated Search
2003-01-01
This study evaluated existing traffic signal optimization programs including Synchro,TRANSYT-7F, and genetic algorithm optimization using real-world data collected in Virginia. As a first step, a microscopic simulation model, VISSIM, was extensively ...
Optimization of contrast resolution by genetic algorithm in ultrasound tissue harmonic imaging.
Ménigot, Sébastien; Girault, Jean-Marc
2016-09-01
The development of ultrasound imaging techniques such as pulse inversion has improved tissue harmonic imaging. Nevertheless, no recommendation has been made to date for the design of the waveform transmitted through the medium being explored. Our aim was therefore to find automatically the optimal "imaging" wave which maximized the contrast resolution without a priori information. To overcome assumption regarding the waveform, a genetic algorithm investigated the medium thanks to the transmission of stochastic "explorer" waves. Moreover, these stochastic signals could be constrained by the type of generator available (bipolar or arbitrary). To implement it, we changed the current pulse inversion imaging system by including feedback. Thus the method optimized the contrast resolution by adaptively selecting the samples of the excitation. In simulation, we benchmarked the contrast effectiveness of the best found transmitted stochastic commands and the usual fixed-frequency command. The optimization method converged quickly after around 300 iterations in the same optimal area. These results were confirmed experimentally. In the experimental case, the contrast resolution measured on a radiofrequency line could be improved by 6% with a bipolar generator and it could still increase by 15% with an arbitrary waveform generator.
Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.
Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung
2017-04-01
Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originated in the interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concept of personal best and global best to simulate the pattern of searching for food by flocking and successfully translate the natural phenomena to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to equally allocate computational effort among all particles and obtain the same number of samples of fitness values. This is not an efficient use of computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
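A hedged sketch of the OCBA allocation rule follows, applied to choosing the best of five noisy "particles": non-best designs receive budget proportional to (noise/gap)^2 and the incumbent best gets a matching share, with sampling repeated in rounds. The means, noise levels, and round sizes are illustrative assumptions, and the full PSO integration from the paper is not reproduced.

```python
import numpy as np

def ocba_ratios(means, stds):
    """Asymptotic OCBA allocation ratios for selecting the best (max) design.

    Non-best designs i get budget proportional to (std_i / gap_i)^2, where
    gap_i = mean_best - mean_i; the best design b gets
    N_b = std_b * sqrt(sum_{i != b} (N_i / std_i)^2).
    """
    b = int(np.argmax(means))
    gaps = means[b] - means
    ratios = np.zeros_like(means, dtype=float)
    for i in range(len(means)):
        if i != b:
            ratios[i] = (stds[i] / max(gaps[i], 1e-9)) ** 2
    safe_stds = np.where(stds > 0, stds, 1.0)
    ratios[b] = stds[b] * np.sqrt(np.sum((ratios / safe_stds) ** 2))
    return ratios / ratios.sum()

# Five hypothetical "particles" with noisy fitness observations.
true_means = np.array([1.0, 1.8, 2.0, 1.2, 0.5])
noise = np.array([0.8, 0.8, 0.8, 0.8, 0.8])
rng = np.random.default_rng(6)

# Warm-up samples, then OCBA-guided sampling rounds.
samples = [list(rng.normal(m, s, 10)) for m, s in zip(true_means, noise)]
for _ in range(20):
    means = np.array([np.mean(s) for s in samples])
    stds = np.array([np.std(s, ddof=1) for s in samples])
    alloc = ocba_ratios(means, stds)
    extra = rng.multinomial(50, alloc)            # spend 50 samples per round
    for i, n in enumerate(extra):
        samples[i].extend(rng.normal(true_means[i], noise[i], n))
print("sample counts:", [len(s) for s in samples])
print("selected best:", int(np.argmax([np.mean(s) for s in samples])))
```

The sample counts show the budget concentrating on the two closely matched leading designs, which is exactly the behavior that makes the noisy best-selection inside PSO cheaper.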
Automated Flight Routing Using Stochastic Dynamic Programming
NASA Technical Reports Server (NTRS)
Ng, Hok K.; Morando, Alex; Grabbe, Shon
2010-01-01
Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm, based on stochastic dynamic programming, that reroutes flights in the presence of winds, en route convective weather, and congested airspace. A stochastic disturbance model incorporates capacity uncertainty into the reroute design process. A trajectory-based airspace demand model is employed to calculate current and future airspace demand. The optimal routes minimize the total expected travel time, weather incursion, and induced congestion costs. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have a smaller deviation probability than their deterministic counterparts when both reroutes have similar total flight distance. The stochastic rerouting algorithm takes into account all convective weather fields at all severity levels, while the deterministic algorithm only accounts for convective weather systems exceeding a specified level of severity. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight times, and both spend about 1% of travel time crossing congested en route sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.
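The flavor of the recursion can be shown with a deliberately tiny sketch: a one-dimensional corridor of sectors (not the paper's airspace model), where each sector is capacity-limited with some probability and the policy chooses between flying through and deviating around.

```python
import numpy as np

def stochastic_dp(n, p_block, step_cost=1.0, penalty=10.0):
    """Backward induction over sectors 0..n-1; p_block[i] is the chance
    sector i is blocked, and entering a blocked sector costs `penalty`.
    Costs and the deviation factor (2.5x) are illustrative assumptions."""
    V = np.zeros(n + 1)                 # V[n] = 0 at the destination
    policy = np.zeros(n, dtype=int)
    for i in range(n - 1, -1, -1):
        through = step_cost + p_block[i] * penalty + V[i + 1]
        around = 2.5 * step_cost + V[i + 1]
        V[i] = min(through, around)
        policy[i] = int(around < through)   # 1 = deviate around sector i
    return V, policy

V, policy = stochastic_dp(5, p_block=np.array([0.1, 0.8, 0.05, 0.6, 0.0]))
```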
Stochastic Models of Plant Diversity: Application to White Sands Missile Range
2000-02-01
decades and its models have been well developed. These models fall into the categories of dynamic models and stochastic models. In their book, Modeling... Gelb 1974), and dendroclimatology (Visser and Molenaar 1988). Optimal Estimation: An optimal estimator is a computational algorithm that... Evaluation, M.B. Usher, ed., Chapman and Hall, London. Visser, H., and J. Molenaar. 1990. "Estimating Trends in Tree-ring Data." For. Sci. 36(1): 87
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, George; Wang, Le Yi; Zhang, Hongwei
2014-12-10
Stochastic approximation methods have found extensive and diversified applications. Recent emergence of networked systems and cyber-physical systems has generated renewed interest in advancing stochastic approximation into a general framework to support algorithm development for information processing and decisions in such systems. This paper presents a survey on some recent developments in stochastic approximation methods and their applications. Using connected vehicles in platoon formation and coordination as a platform, we highlight some traditional and new methodologies of stochastic approximation algorithms and explain how they can be used to capture essential features in networked systems. Distinct features of networked systems with randomly switching topologies, dynamically evolving parameters, and unknown delays are presented, and control strategies are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Qichun; Zhou, Jinglin; Wang, Hong
In this paper, stochastic coupling attenuation is investigated for a class of multi-variable bilinear stochastic systems, and a novel output-feedback m-block backstepping controller with a linear estimator is designed, where gradient-descent optimization is used to tune the design parameters of the controller. It is shown that the trajectories of the closed-loop stochastic systems are bounded in the probability sense and that the stochastic coupling of the system outputs can be effectively attenuated by the proposed control algorithm. Moreover, the stability of the stochastic systems is analyzed, and the effectiveness of the proposed method is demonstrated using a simulated example.
Nguyen, A; Yosinski, J; Clune, J
2016-01-01
The Achilles' heel of stochastic optimization algorithms is getting trapped in local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions, replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and herons instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, at which novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to produce new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art.
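The novelty-reward machinery the paper builds on can be sketched generically. The snippet below is a minimal skeleton of ours, not the Innovation Engine itself: novelty is the mean distance to the k nearest archived behaviors, and `featurize` stands in for the DNN feature extractor (the toy usage substitutes an identity map).

```python
import numpy as np

def novelty(behavior, archive, k=5):
    """Mean distance to the k nearest archived behaviors."""
    if not archive:
        return float("inf")
    d = np.sort([np.linalg.norm(behavior - a) for a in archive])
    return float(d[:k].mean())

def novelty_search(mutate, featurize, x0, iters=200, threshold=1.0):
    """Archive any child whose behavior is sufficiently far from the
    archive; selection rewards novelty instead of objective value."""
    archive, x = [], x0
    for _ in range(iters):
        child = mutate(x)
        if novelty(featurize(child), archive, k=5) > threshold:
            archive.append(featurize(child))
            x = child
    return archive

# Toy usage: random mutations in a 2-D behavior space.
rng = np.random.default_rng(0)
archive = novelty_search(mutate=lambda x: x + 0.1 * rng.normal(size=2),
                         featurize=lambda x: x,  # identity stands in for DNN features
                         x0=np.zeros(2), iters=500, threshold=0.2)
```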
SASS: A symmetry adapted stochastic search algorithm exploiting site symmetry
NASA Astrophysics Data System (ADS)
Wheeler, Steven E.; Schleyer, Paul v. R.; Schaefer, Henry F.
2007-03-01
A simple symmetry adapted search algorithm (SASS) exploiting point group symmetry increases the efficiency of systematic explorations of complex quantum mechanical potential energy surfaces. In contrast to previously described stochastic approaches, which do not employ symmetry, candidate structures are generated within simple point groups, such as C2, Cs, and C2v. This facilitates efficient sampling of the (3N-6)-dimensional configuration space and increases the speed and effectiveness of quantum chemical geometry optimizations. Pople's concept of framework groups [J. Am. Chem. Soc. 102, 4615 (1980)] is used to partition the configuration space into structures spanning all possible distributions of sets of symmetry-equivalent atoms. This provides an efficient means of computing all structures of a given symmetry with minimum redundancy. The approach is also advantageous for generating initial structures for global optimizations via genetic algorithms and other stochastic global search techniques. Application of the SASS method is illustrated by locating 14 low-lying stationary points on the cc-pwCVDZ ROCCSD(T) potential energy surface of Li5H2. The global minimum structure is identified, along with many unique, nonintuitive, energetically favorable isomers.
Optimization of High-Dimensional Functions through Hypercube Evaluation
Abiyev, Rahib H.; Tunay, Mustafa
2015-01-01
A novel learning algorithm for solving global numerical optimization problems is proposed. The proposed learning algorithm is an intensive stochastic search method based on the evaluation and optimization of a hypercube, called the hypercube optimization (HO) algorithm. The HO algorithm comprises an initialization and evaluation process, a displacement-shrink process, and a searching-space process. The initialization and evaluation process initializes the solution and evaluates solutions in a given hypercube. The displacement-shrink process determines displacements and evaluates objective functions using new points, and the searching-space process determines the next hypercube using certain rules and evaluates the new solutions. The algorithms for these processes are designed and presented in the paper. The designed HO algorithm is tested on specific benchmark functions. Simulations of the HO algorithm have been performed for the optimization of functions of 1000, 5000, or even 10,000 dimensions. Comparative simulation results with other approaches demonstrate that the proposed algorithm is a potential candidate for the optimization of both low- and high-dimensional functions. PMID:26339237
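A simplified reading of the three HO processes can be rendered in a few lines; the uniform in-cube sampling, displacement of the cube to the best point, and geometric shrinking below are our own interpretation, not the authors' exact rules.

```python
import numpy as np

def hypercube_opt(f, center, radius, n_pts=64, iters=100, shrink=0.9):
    """Evaluate random points in the current hypercube, displace the
    cube toward the best point found, and shrink it as search converges."""
    best_x, best_f = center, f(center)
    for _ in range(iters):
        pts = center + radius * np.random.uniform(-1, 1, (n_pts, len(center)))
        vals = np.apply_along_axis(f, 1, pts)
        i = int(np.argmin(vals))
        if vals[i] < best_f:
            best_x, best_f = pts[i], vals[i]
        center = best_x          # displacement step
        radius *= shrink         # shrink step
    return best_x, best_f

x, fx = hypercube_opt(lambda v: np.sum(v ** 2), np.full(10, 3.0), radius=4.0)
```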
NASA Astrophysics Data System (ADS)
Zhang, Kai; Li, Jingzhi; He, Zhubin; Yan, Wanfeng
2018-07-01
In this paper, a stochastic optimization framework is proposed to address the microgrid energy dispatching problem with random renewable generation and vehicle activity patterns, which is closer to practical applications. The patterns of energy generation, consumption, and storage availability are all random and unknown at the beginning, and the microgrid controller design (MCD) is formulated as a Markov decision process (MDP). Hence, an online learning-based control algorithm is proposed for the microgrid, which adapts the control policy with increasing knowledge of the system dynamics and converges to the optimal policy. We adopt a linear approximation to decompose the original value function into the sum of per-battery value functions. As a consequence, the computational complexity is significantly reduced from exponential to linear growth with respect to the size of the battery state space. Monte Carlo simulation of different scenarios demonstrates the effectiveness and efficiency of our algorithm.
NASA Astrophysics Data System (ADS)
Haberko, Jakub; Wasylczyk, Piotr
2018-03-01
We demonstrate that a stochastic optimization algorithm with a properly chosen, weighted fitness function, following a global variation of parameters at each step, can be used to effectively design reflective polarizing optical elements. Two sub-wavelength metallic metasurfaces, corresponding to broadband half- and quarter-waveplates, are demonstrated with a simple structure topology, a uniform metallic coating, and a design suited to currently available microfabrication techniques, such as ion milling or 3D printing.
Adaptive control of stochastic linear systems with unknown parameters. M.S. Thesis
NASA Technical Reports Server (NTRS)
Ku, R. T.
1972-01-01
The problem of optimal control of a linear discrete-time stochastic dynamical system with unknown and, possibly, stochastically varying parameters is considered on the basis of noisy measurements. It is desired to minimize the expected value of a quadratic cost functional. Since the simultaneous estimation of the state and plant parameters is a nonlinear filtering problem, the extended Kalman filter algorithm is used. Several qualitative and asymptotic properties of the open-loop feedback optimal control and the enforced separation scheme are discussed. Simulation results via the Monte Carlo method show that, in terms of the performance measure, for stable systems the open-loop feedback optimal control is slightly better than the enforced separation scheme, while for unstable systems the latter scheme is far better.
Algebraic, geometric, and stochastic aspects of genetic operators
NASA Technical Reports Server (NTRS)
Foo, N. Y.; Bosworth, J. L.
1972-01-01
Genetic algorithms for function optimization employ genetic operators patterned after those observed in search strategies employed in natural adaptation. Two of these operators, crossover and inversion, are interpreted in terms of their algebraic and geometric properties. Stochastic models of the operators are developed which are employed in Monte Carlo simulations of their behavior.
Glick, Meir; Rayan, Anwar; Goldblum, Amiram
2002-01-01
The problem of global optimization is pivotal in a variety of scientific fields. Here, we present a robust stochastic search method that is able to find the global minimum for a given cost function, as well as, in most cases, any number of best solutions for very large combinatorial "explosive" systems. The algorithm iteratively eliminates variable values that contribute consistently to the highest end of the cost function's spectrum of values for the full system. Values that have not been eliminated are retained for a full, exhaustive search, allowing the creation of an ordered population of best solutions, which includes the global minimum. We demonstrate the ability of the algorithm to explore the conformational space of side chains in eight proteins, with 54 to 263 residues, and to reproduce a population of their low-energy conformations. The 1,000 lowest-energy solutions are identical in the stochastic (with two different seed numbers) and full, exhaustive searches for six of the eight proteins. The other two retain the lowest 141 and 213 (of 1,000) conformations, depending on the seed number, and the maximal difference between the stochastic and exhaustive searches is only about 0.15 kcal/mol. The energy gap between the lowest and highest of the 1,000 low-energy conformers in the eight proteins is between 0.55 and 3.64 kcal/mol. This algorithm offers real opportunities for solving problems of high complexity in structural biology and in other fields of science and technology. PMID:11792838
Operation of Power Grids with High Penetration of Wind Power
NASA Astrophysics Data System (ADS)
Al-Awami, Ali Taleb
The integration of wind power into the power grid poses many challenges due to its highly uncertain nature. This dissertation involves two main components related to the operation of power grids with high penetration of wind energy: wind-thermal stochastic dispatch and wind-thermal coordinated bidding in short-term electricity markets. In the first part, a stochastic dispatch (SD) algorithm is proposed that takes into account the stochastic nature of the wind power output. The uncertainty associated with wind power output given the forecast is characterized using conditional probability density functions (CPDF). Several functions are examined to characterize wind uncertainty including Beta, Weibull, Extreme Value, Generalized Extreme Value, and Mixed Gaussian distributions. The unique characteristics of the Mixed Gaussian distribution are then utilized to facilitate the speed of convergence of the SD algorithm. A case study is carried out to evaluate the effectiveness of the proposed algorithm. Then, the SD algorithm is extended to simultaneously optimize the system operating costs and emissions. A modified multi-objective particle swarm optimization algorithm is suggested to identify the Pareto-optimal solutions defined by the two conflicting objectives. A sensitivity analysis is carried out to study the effect of changing load level and imbalance cost factors on the Pareto front. In the second part of this dissertation, coordinated trading of wind and thermal energy is proposed to mitigate risks due to those uncertainties. The problem of wind-thermal coordinated trading is formulated as a mixed-integer stochastic linear program. The objective is to obtain the optimal tradeoff bidding strategy that maximizes the total expected profits while controlling trading risks. For risk control, a weighted term of the conditional value at risk (CVaR) is included in the objective function. The CVaR aims to maximize the expected profits of the least profitable scenarios, thus improving trading risk control. A case study comparing coordinated with uncoordinated bidding strategies depending on the trader's risk attitude is included. Simulation results show that coordinated bidding can improve the expected profits while significantly improving the CVaR.
NASA Technical Reports Server (NTRS)
Englander, Arnold C.; Englander, Jacob A.
2017-01-01
Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables and equality and inequality constraints as well as many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the global optimal solution.
NASA Astrophysics Data System (ADS)
Wei, Lin-Yang; Qi, Hong; Ren, Ya-Tao; Ruan, Li-Ming
2016-11-01
Inverse estimation of the refractive index distribution in one-dimensional participating media with a graded refractive index (GRI) is investigated. The forward radiative transfer problem is solved by the Chebyshev collocation spectral method. The stochastic particle swarm optimization (SPSO) algorithm is employed to retrieve three kinds of GRI distribution, i.e. linear, sinusoidal, and quadratic. The retrieval accuracy of the GRI distribution for different wall emissivities, optical thicknesses, absorption coefficients, and scattering coefficients is discussed thoroughly. To improve the retrieval accuracy for the quadratic GRI distribution, a double-layer model is proposed to supply more measurement information. The influence of measurement errors on the precision of the estimated results is also investigated. Since the GRI distribution is unknown beforehand in practice, a quadratic function is employed to retrieve the linear GRI by the SPSO algorithm. All the results show that the SPSO algorithm can accurately retrieve different GRI distributions in participating media, even with noisy data.
Strategic planning for disaster recovery with stochastic last mile distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Russell Whitford; Van Hentenryck, Pascal; Coffrin, Carleton
2010-01-01
This paper considers the single commodity allocation problem (SCAP) for disaster recovery, a fundamental problem faced by all populated areas. SCAPs are complex stochastic optimization problems that combine resource allocation, warehouse routing, and parallel fleet routing. Moreover, these problems must be solved under tight runtime constraints to be practical in real-world disaster situations. This paper formalizes the specification of SCAPs and introduces a novel multi-stage hybrid-optimization algorithm that utilizes the strengths of mixed integer programming, constraint programming, and large neighborhood search. The algorithm was validated on hurricane disaster scenarios generated by Los Alamos National Laboratory using state-of-the-art disaster simulation tools and is deployed to aid federal organizations in the US.
A Stochastic-Variational Model for Soft Mumford-Shah Segmentation
2006-01-01
In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059
Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.
Ebert, M
1997-12-01
This is the final article in a three-part examination of optimisation in radiotherapy. Previous articles established the bases and form of the radiotherapy optimisation problem and examined certain types of optimisation algorithm, namely those that perform some form of ordered search of the solution space (mathematical programming) and those that attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms that search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges on the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
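Simulated annealing typifies the class described here, with random proposals whose spread contracts as the search converges. A generic sketch follows (not specific to radiotherapy planning; the cooling rate and the tie between proposal width and temperature are illustrative choices of ours).

```python
import numpy as np

def anneal(cost, x0, T0=1.0, alpha=0.995, iters=5000, seed=0):
    """Generic simulated annealing: accept worse moves with Boltzmann
    probability; the proposal width shrinks with the temperature."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    c, T = cost(x), T0
    best_x, best_c = x, c
    for _ in range(iters):
        cand = x + T * rng.normal(size=x.shape)   # variates constrict as T falls
        cc = cost(cand)
        if cc < c or rng.random() < np.exp(-(cc - c) / T):
            x, c = cand, cc
            if c < best_c:
                best_x, best_c = x, c
        T *= alpha                                # geometric cooling
    return best_x, best_c
```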
Wang, Lin; Qu, Hui; Liu, Shan; Dun, Cai-xia
2013-01-01
As a practical inventory and transportation problem, it is important to synthesize several objectives in the joint replenishment and delivery (JRD) decision. In this paper, a new multiobjective stochastic JRD (MSJRD) model of the one-warehouse, n-retailer system, considering the balance of service level and total cost simultaneously, is proposed. The goal is to decide the appropriate replenishment interval, safety stock factor, and travel routing. Two approaches are designed to handle this complex multi-objective optimization problem: a linear programming (LP) approach converts the multiple objectives into a single objective, while a multi-objective evolutionary algorithm (MOEA) solves the multi-objective problem directly. Three intelligent optimization algorithms, the differential evolution algorithm (DE), a hybrid DE (HDE), and a genetic algorithm (GA), are utilized in the LP-based and MOEA-based approaches. Results of the MSJRD with the LP-based and MOEA-based approaches are compared in a contrastive numerical example. To analyse the nondominated solutions of the MOEA, a metric is also used to measure the distribution of the last-generation solutions. Results show that HDE outperforms DE and GA whether LP or MOEA is adopted.
Evolving cell models for systems and synthetic biology.
Cao, Hongqing; Romero-Campero, Francisco J; Heeb, Stephan; Cámara, Miguel; Krasnogor, Natalio
2010-03-01
This paper proposes a new methodology for the automated design of cell models for systems and synthetic biology. Our modelling framework is based on P systems, a discrete, stochastic and modular formal modelling language. The automated design of biological models comprising the optimization of the model structure and its stochastic kinetic constants is performed using an evolutionary algorithm. The evolutionary algorithm evolves model structures by combining different modules taken from a predefined module library and then it fine-tunes the associated stochastic kinetic constants. We investigate four alternative objective functions for the fitness calculation within the evolutionary algorithm: (1) equally weighted sum method, (2) normalization method, (3) randomly weighted sum method, and (4) equally weighted product method. The effectiveness of the methodology is tested on four case studies of increasing complexity including negative and positive autoregulation as well as two gene networks implementing a pulse generator and a bandwidth detector. We provide a systematic analysis of the evolutionary algorithm's results as well as of the resulting evolved cell models.
Lee, Jong-Seok; Park, Cheol Hoon
2010-08-01
We propose a novel stochastic optimization algorithm, hybrid simulated annealing (SA), to train hidden Markov models (HMMs) for visual speech recognition. In our algorithm, SA is combined with a local optimization operator that substitutes a better solution for the current one to improve the convergence speed and the quality of solutions. We mathematically prove that the sequence of the objective values converges in probability to the global optimum in the algorithm. The algorithm is applied to train HMMs that are used as visual speech recognizers. While the popular training method of HMMs, the expectation-maximization algorithm, achieves only local optima in the parameter space, the proposed method can perform global optimization of the parameters of HMMs and thereby obtain solutions yielding improved recognition performance. The superiority of the proposed algorithm to the conventional ones is demonstrated via isolated word recognition experiments.
NASA Astrophysics Data System (ADS)
Yelkenci Köse, Simge; Demir, Leyla; Tunalı, Semra; Türsel Eliiyi, Deniz
2015-02-01
In manufacturing systems, optimal buffer allocation has a considerable impact on capacity improvement. This study presents a simulation optimization procedure to solve the buffer allocation problem in a heat exchanger production plant so as to improve the capacity of the system. For optimization, three metaheuristic-based search algorithms, i.e. a binary genetic algorithm (B-GA), a binary simulated annealing algorithm (B-SA), and a binary tabu search algorithm (B-TS), are proposed. These algorithms are integrated with the simulation model of the production line. The simulation model, which captures the stochastic and dynamic nature of the production line, is used as the evaluation function for the proposed metaheuristics. The experimental study with benchmark problem instances from the literature and the real-life problem shows that the proposed B-TS algorithm outperforms B-GA and B-SA in terms of solution quality.
Optimizing event selection with the random grid search
NASA Astrophysics Data System (ADS)
Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; Stewart, Chip
2018-07-01
The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
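The defining trick of RGS is that candidate cut points are taken from the data themselves rather than from a fixed grid. Below is a compact sketch under our own assumptions: rectangular "greater-than" cuts and an illustrative s/sqrt(s+b) figure of merit, which is not necessarily the measure used in the searches cited above.

```python
import numpy as np

def random_grid_search(sig, bkg, n_trials=1000, seed=1):
    """Each randomly chosen signal event defines a cut (keep events with
    every feature above that event's values); score each cut and keep
    the best. `sig` and `bkg` are (n_events, n_features) arrays."""
    rng = np.random.default_rng(seed)
    best_cut, best_score = None, -np.inf
    for i in rng.integers(0, len(sig), n_trials):
        cut = sig[i]
        s = np.sum(np.all(sig > cut, axis=1))
        b = np.sum(np.all(bkg > cut, axis=1))
        score = s / np.sqrt(s + b) if s + b > 0 else 0.0
        if score > best_score:
            best_cut, best_score = cut, score
    return best_cut, best_score
```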
Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia
2016-08-01
The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. This is a discrete optimization problem that involves system-wide conditions as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three notions to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized by local exploration. In the simulation and experiments, two kinds of discrete PSO, S-PSO and binary PSO (BPSO), and a genetic algorithm (GA) are compared and examined using benchmarks that simulate a real-world metropolis. We observed that the S-PSO consistently outperformed the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results that meet the optimization objectives of the CSP.
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of a piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, in part to speed up the algorithm by making use of the information obtained from the solution of the expected value problem. We have devised a new decomposition scheme to improve the convergence of this algorithm.
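The piecewise-linear surrogate idea can be phrased as a control-variate estimator. In the sketch below (our formulation, not the dissertation's code), `g` is a cheap surrogate of the expensive recourse function `f`: the mean of `g` is estimated from many cheap samples, and only the low-variance residual `f - g` requires expensive evaluations.

```python
import numpy as np

def cv_estimate(f, g, sampler, n_expensive=100, n_cheap=100_000, seed=0):
    """Estimate E[f(X)] as E[g(X)] + E[f(X) - g(X)], exploiting the
    cheapness of g. `sampler(rng, n)` must return n draws of X."""
    rng = np.random.default_rng(seed)
    g_mean = np.mean([g(x) for x in sampler(rng, n_cheap)])        # cheap part
    resid = np.mean([f(x) - g(x) for x in sampler(rng, n_expensive)])
    return g_mean + resid

# e.g. sampler = lambda rng, n: rng.normal(size=n), with g a
# piecewise-linear fit to f from a handful of pilot evaluations.
```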
Particle Swarm Optimization algorithms for geophysical inversion, practical hints
NASA Astrophysics Data System (ADS)
Garcia Gonzalo, E.; Fernandez Martinez, J.; Fernandez Alvarez, J.; Kuzma, H.; Menendez Perez, C.
2008-12-01
PSO is a stochastic optimization technique that has been successfully used in many different engineering fields. The PSO algorithm can be physically interpreted as a stochastic damped mass-spring system (Fernandez Martinez and Garcia Gonzalo 2008). Based on this analogy, we present a whole family of PSO algorithms and their respective first-order and second-order stability regions. Their performance is also checked using synthetic functions (Rosenbrock and Griewank) showing a degree of ill-posedness similar to that found in many geophysical inverse problems. Finally, we present the application of these algorithms to the analysis of a vertical electrical sounding inverse problem associated with a seawater intrusion in a coastal aquifer in southern Spain. We analyze the role of the PSO parameters (inertia, local and global accelerations, and discretization step), both in convergence curves and in the a posteriori sampling of the depth of the intrusion. Comparison is made with binary genetic algorithms and simulated annealing. As a result of this analysis, practical hints are given for selecting the correct algorithm and tuning the corresponding PSO parameters. Fernandez Martinez, J.L., Garcia Gonzalo, E., 2008. The generalized PSO: a new door to PSO evolution. Journal of Artificial Evolution and Applications. DOI:10.1155/2008/861275.
Pricing of swing options: A Monte Carlo simulation approach
NASA Astrophysics Data System (ADS)
Leow, Kai-Siong
We study the problem of pricing swing options, a class of multiple early exercise options traded in energy markets, particularly the electricity and natural gas markets. These contracts permit the option holder to periodically exercise the right to trade a variable amount of energy with a counterparty, subject to local volumetric constraints. In addition, the total amount of energy traded from settlement to expiration is restricted by a global volumetric constraint. Violation of this global volumetric constraint is allowed but leads to a penalty settled at expiration. The pricing problem is formulated as a stochastic optimal control problem in discrete time and state space. We present a stochastic dynamic programming algorithm based on piecewise linear concave approximation of the value functions. This algorithm yields the value of the swing option under the assumption that the optimal exercise policy is applied by the option holder. We prove almost sure convergence of the algorithm to the optimal exercise strategy as the number of iterations approaches infinity. Finally, we provide a numerical example of pricing a natural gas swing call option.
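A toy backward induction conveys the structure of such an algorithm, though it is far simpler than the piecewise linear concave scheme of the thesis: here prices follow a two-state Markov chain, the holder may exercise at most a fixed number of units in total, and the local constraints and penalty term are omitted (all assumptions of this sketch).

```python
import numpy as np

def swing_value(payoff, P, T, max_exercises):
    """V[t, s, q] = option value at date t, in price state s, with q
    exercise rights already used; exercising adds payoff[s] and burns
    one right. Returns the date-0 value table."""
    n_s = P.shape[0]
    V = np.zeros((T + 1, n_s, max_exercises + 1))   # zero value at expiry
    for t in range(T - 1, -1, -1):
        cont = P @ V[t + 1]                          # expected continuation
        for q in range(max_exercises + 1):
            V[t, :, q] = cont[:, q]
            if q < max_exercises:
                V[t, :, q] = np.maximum(cont[:, q], payoff + cont[:, q + 1])
    return V[0]

P = np.array([[0.9, 0.1], [0.3, 0.7]])   # price-state transition matrix
V0 = swing_value(payoff=np.array([1.0, -0.5]), P=P, T=10, max_exercises=3)
```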
Schiffmann, Christoph; Sebastiani, Daniel
2011-05-10
We present an algorithmic extension of a numerical optimization scheme for analytic capping potentials for use in mixed quantum-classical (quantum mechanical/molecular mechanical, QM/MM) ab initio calculations. Our goal is to minimize bond-cleavage-induced perturbations in the electronic structure, measured by means of a suitable penalty functional. The optimization algorithm, a variant of the artificial bee colony (ABC) algorithm that relies on swarm intelligence, couples deterministic (downhill gradient) and stochastic elements to avoid trapping in local minima. The ABC algorithm outperforms the conventional downhill gradient approach when the penalty hypersurface exhibits wiggles that prevent a straight minimization pathway. We characterize the optimized capping potentials by computing NMR chemical shifts. This approach will increase the accuracy of QM/MM calculations of complex biomolecules.
Algorithms for output feedback, multiple-model, and decentralized control problems
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The optimal stochastic output feedback, multiple-model, and decentralized control problems with dynamic compensation are formulated and discussed. Algorithms for each problem are presented, and their relationship to a basic output feedback algorithm is discussed. An aircraft control design problem is posed as a combined decentralized, multiple-model, output feedback problem. A control design is obtained using the combined algorithm. An analysis of the design is presented.
Robust Path Planning and Feedback Design Under Stochastic Uncertainty
NASA Technical Reports Server (NTRS)
Blackmore, Lars
2008-01-01
Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints, which requires that mission constraint violation can occur only with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance-constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance-constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take uncertainty into account with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.
NASA Astrophysics Data System (ADS)
Bhosale, Parag; Staring, Marius; Al-Ars, Zaid; Berendsen, Floris F.
2018-03-01
Currently, non-rigid image registration algorithms are too computationally intensive to use in time-critical applications. Existing implementations that focus on speed typically address this either by parallelization on GPU hardware or by introducing methodically novel techniques into CPU-oriented algorithms. Stochastic gradient descent (SGD) optimization and variations thereof have proven to drastically reduce the computational burden for CPU-based image registration, but have not been successfully applied in GPU hardware due to their stochastic nature. This paper proposes 1) NiftyRegSGD, an SGD optimization for the GPU-based image registration tool NiftyReg, and 2) the random chunk sampler, a new random sampling strategy that better utilizes the memory bandwidth of GPU hardware. Experiments on 3D lung CT data of 19 patients compared NiftyRegSGD (with and without the random chunk sampler) with the CPU-based elastix Fast Adaptive SGD (FASGD) and NiftyReg. The registration runtime was 21.5 s, 4.4 s, and 2.8 s for elastix-FASGD, NiftyRegSGD without, and NiftyRegSGD with random chunk sampling, respectively, while similar accuracy was obtained. Our method is publicly available at https://github.com/SuperElastix/NiftyRegSGD.
Leveraging human decision making through the optimal management of centralized resources
NASA Astrophysics Data System (ADS)
Hyden, Paul; McGrath, Richard G.
2016-05-01
Combining results from mixed-integer optimization, stochastic modeling, and queuing theory, we advance the interdisciplinary problem of efficiently and effectively allocating centrally managed resources. Academia currently fails to address this problem, as the esoteric demands of each of these large research areas limit work across traditional boundaries. The commercial space does not currently address these challenges due to the absence of a profit metric. By constructing algorithms that explicitly use inputs across boundaries, we are able to incorporate the advantages of using human decision makers. Key improvements in the underlying algorithms are made possible by aligning decision-maker goals with the feedback loops introduced between the core optimization step and the modeling of the overall stochastic process of supply and demand. A key observation is that human decision makers must be explicitly included in the analysis for these approaches to be ultimately successful. Transformative access gives warfighters and mission owners greater understanding of global needs and allows relationships to guide optimal resource allocation decisions. Mastery of demand processes and optimization bottlenecks reveals long-term maximum-marginal-utility gaps in capabilities.
NASA Astrophysics Data System (ADS)
He, Yaoyao; Yang, Shanlin; Xu, Qifa
2013-07-01
To solve the short-term cascaded hydroelectric system scheduling model, a novel chaotic particle swarm optimization (CPSO) algorithm using an improved logistic map is introduced, which uses the water discharge as the decision variable combined with a death penalty function. Following the principle of maximum power generation, the proposed approach makes use of the ergodicity, symmetry, and stochastic properties of the improved logistic chaotic map to enhance the performance of the particle swarm optimization (PSO) algorithm. The new hybrid method has been examined and tested on two test functions and a practical cascaded hydroelectric system. The experimental results show the effectiveness and robustness of the proposed CPSO algorithm in comparison with other traditional algorithms.
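The chaotic ingredient is easy to illustrate. The sketch below uses the standard logistic map rather than the paper's improved variant; with mu = 4 the map is ergodic on (0, 1), which is the property such CPSO schemes exploit for initialization and perturbation.

```python
import numpy as np

def logistic_seq(n, x0=0.37, mu=4.0):
    """Chaotic sequence from the logistic map x <- mu * x * (1 - x)."""
    xs, x = np.empty(n), x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        xs[i] = x
    return xs

def chaotic_init(n_particles, lo, hi):
    """Map a chaotic sequence into the box [lo, hi] to seed a swarm."""
    dim = len(lo)
    seq = logistic_seq(n_particles * dim).reshape(n_particles, dim)
    return lo + seq * (hi - lo)

swarm = chaotic_init(30, lo=np.array([-5.0, -5.0]), hi=np.array([5.0, 5.0]))
```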
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
The bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representations. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
A chance-constrained stochastic approach to intermodal container routing problems.
Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony
2018-01-01
We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer times, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and on-time cargo delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of the stochastic variables and chance constraints on the optimal solution and total cost.
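Under a normality assumption on leg times (an assumption of this sketch; the paper's heuristic handles the chance constraints more generally), an on-time delivery chance constraint has a simple deterministic equivalent, mu + z_p * sigma <= deadline.

```python
from math import sqrt
from statistics import NormalDist

def route_feasible(mean_times, var_times, deadline, p=0.95):
    """Check Pr(total time <= deadline) >= p for independent normal
    leg times via the quantile reformulation."""
    mu = sum(mean_times)
    sigma = sqrt(sum(var_times))
    z = NormalDist().inv_cdf(p)
    return mu + z * sigma <= deadline

ok = route_feasible([30, 12, 45], [16, 4, 25], deadline=100, p=0.95)  # True
```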
Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak
2016-05-01
Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive formulation mixed-integer programs, problem instances can exceed available memory on a single workstation. In order to overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating Branch and Bound (B&B) methods with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.
Kucza, Witold
2013-07-25
Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and uniform dispersion (deterministic) models have been used to compute flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization method, respectively, have been applied to the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate, and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches.
Composite Particle Swarm Optimizer With Historical Memory for Function Optimization.
Li, Jie; Zhang, JunQi; Jiang, ChangJun; Zhou, MengChu
2015-10-01
Particle swarm optimization (PSO) is a population-based stochastic optimization technique. It is characterized by a collaborative search in which each particle is attracted toward the global best position (gbest) in the swarm and its own best position (pbest). However, all of a particle's historically promising pbests are lost except its current pbest. To address this problem, this paper proposes a novel composite PSO algorithm, called historical memory-based PSO (HMPSO), which uses an estimation of distribution algorithm to estimate and preserve the distribution information of particles' historically promising pbests. Each particle has three candidate positions, generated from the historical memory, the particle's current pbest, and the swarm's gbest, and the best candidate position is adopted. Experiments on 28 CEC2013 benchmark functions demonstrate the superiority of HMPSO over other algorithms.
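The estimation-of-distribution step that HMPSO adds on top of standard PSO can be sketched in a few lines; the diagonal-covariance Gaussian below is our simplification of whatever distribution model the authors fit to the archive.

```python
import numpy as np

def eda_candidate(pbest_history, rng):
    """Sample a candidate position from a Gaussian fitted to the archive
    of historical pbests (diagonal covariance for simplicity)."""
    H = np.asarray(pbest_history)
    mu, sigma = H.mean(axis=0), H.std(axis=0) + 1e-12
    return rng.normal(mu, sigma)

# Inside a PSO iteration, each particle would compare three candidates --
# the usual velocity update, a pbest/gbest-guided move, and
# eda_candidate(...) -- and adopt the best of the three.
```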
Optimal Energy Efficiency Fairness of Nodes in Wireless Powered Communication Networks.
Zhang, Jing; Zhou, Qingjie; Ng, Derrick Wing Kwan; Jo, Minho
2017-09-15
In wireless powered communication networks (WPCNs), it is essential to study energy efficiency fairness in order to evaluate the balance between nodes' information reception and energy harvesting. In this paper, we propose an efficient iterative algorithm for optimal energy efficiency proportional fairness in WPCNs. The main idea is to use stochastic geometry to derive the mean proportional-fairness utility function with respect to the user association probability and receive threshold. Subsequently, we prove that the relaxed proportional-fairness utility function is concave in the user association probability and in the receive threshold, respectively. We also propose a sub-optimal algorithm exploiting an alternating optimization approach. Through numerical simulations, we demonstrate that our sub-optimal algorithm obtains a result close to optimal energy efficiency proportional fairness with a significant reduction in computational complexity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tahvili, Sahar; Österberg, Jonas; Silvestrov, Sergei
One of the most important factors in the operations of many corporations today is the maximization of profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities through a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be they PM or CM. We explore the use of stochastic simulation, genetic algorithms, and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
NASA Technical Reports Server (NTRS)
Eyre, Francis B. (Inventor); Fink, Wolfgang (Inventor)
2011-01-01
Disclosed herein is a method of making a three-dimensional mold comprising the steps of providing a mold substrate; exposing the substrate to an electromagnetic radiation source for a period of time sufficient to render a portion of the mold substrate susceptible to a developer, to produce a modified mold substrate; and developing the modified mold with one or more developing reagents to remove the portion of the mold substrate rendered susceptible to the developer, to produce the mold having a desired mold shape, wherein the electromagnetic radiation source has a fixed position, wherein during the exposing step the mold substrate is manipulated according to a manipulation algorithm in one or more dimensions relative to the electromagnetic radiation source, and wherein the manipulation algorithm is determined using stochastic optimization computations.
Hybrid Differential Dynamic Programming with Stochastic Search
NASA Technical Reports Server (NTRS)
Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob
2016-01-01
Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, most recently with the success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static Dynamic Optimal Control algorithm used in the Mystic software. Another recently developed method, hybrid differential dynamic programming (HDDP), is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient-based method and will converge to a solution near an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation by augmenting the HDDP algorithm for a wider search of the solution space.
Semenov, Mikhail A; Terkel, Dmitri A
2003-01-01
This paper analyses the convergence of evolutionary algorithms using a technique based on a stochastic Lyapunov function and developed within martingale theory. This technique is used to investigate the convergence of a simple evolutionary algorithm with self-adaptation, which contains two types of parameters: fitness parameters, belonging to the domain of the objective function; and control parameters, responsible for the variation of fitness parameters. Although both types of parameters mutate randomly and independently, they converge to the "optimum" due to the direct (for fitness parameters) and indirect (for control parameters) selection. We show that the convergence velocity of the evolutionary algorithm with self-adaptation is asymptotically exponential, similar to the velocity of the optimal deterministic algorithm on the class of unimodal functions. Although some martingale inequalities have not been proved analytically, they have been numerically validated with 0.999 confidence using Monte Carlo simulations.
Scheduling Earth Observing Satellites with Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2003-01-01
We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.
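Since the tests favored simulated annealing, a hedged sketch of SA on a toy scheduling stand-in follows: a permutation of observations is perturbed by swaps and accepted by the Metropolis rule under a geometric cooling schedule. The lateness-style cost, deadlines, and durations are illustrative assumptions, not the paper's benchmark problems.

```python
# Toy simulated annealing over schedule permutations: swap two observations,
# accept by the Metropolis rule, cool geometrically. Cost is total lateness
# against hypothetical deadlines; the real problems have far richer constraints.
import numpy as np

rng = np.random.default_rng(12)
due = rng.uniform(0, 50, 20)             # hypothetical observation deadlines
dur = rng.uniform(1, 5, 20)              # hypothetical observation durations

def cost(perm):                          # total lateness of the schedule
    t = np.cumsum(dur[perm])
    return np.maximum(t - due[perm], 0).sum()

perm = rng.permutation(20)
best, best_c, temp = perm.copy(), cost(perm), 10.0
for _ in range(20000):
    i, j = rng.integers(0, 20, 2)
    cand = perm.copy()
    cand[[i, j]] = cand[[j, i]]                       # swap two observations
    d = cost(cand) - cost(perm)
    if d < 0 or rng.random() < np.exp(-d / temp):     # Metropolis acceptance
        perm = cand
        if cost(perm) < best_c:
            best, best_c = perm.copy(), cost(perm)
    temp *= 0.9997                                    # geometric cooling
print(best_c)
```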
Tehrani, Kayvan F.; Zhang, Yiwen; Shen, Ping; Kner, Peter
2017-01-01
Stochastic optical reconstruction microscopy (STORM) can achieve resolutions better than 20 nm when imaging single fluorescently labeled cells. However, when optical aberrations induced by larger biological samples degrade the point spread function (PSF), the localization accuracy and the number of localizations are both reduced, destroying the resolution of STORM. Adaptive optics (AO) can be used to correct the wavefront, restoring the high resolution of STORM. A challenge for AO-STORM microscopy is the development of robust optimization algorithms which can efficiently correct the wavefront from stochastic raw STORM images. Here we present the implementation of a particle swarm optimization (PSO) approach with a Fourier metric for real-time correction of wavefront aberrations during STORM acquisition. We apply our approach to imaging boutons 100 μm deep inside the central nervous system (CNS) of Drosophila melanogaster larvae, achieving a resolution of 146 nm. PMID:29188105
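The PSO loop itself is compact; the sketch below shows the standard global-best update on candidate correction coefficients. The quadratic image_metric is a hypothetical stand-in for the authors' Fourier metric on raw STORM frames, and the inertia/attraction constants are generic textbook values.

```python
# Minimal particle swarm optimization sketch: particles are candidate
# wavefront-correction coefficients, fitness is an image-quality metric.
# The metric below is a hypothetical stand-in, not the authors' Fourier metric.
import numpy as np

rng = np.random.default_rng(2)

def image_metric(coeffs):
    """Hypothetical stand-in: peaks at an unknown 'true' aberration."""
    true = np.array([0.3, -0.7, 0.1])
    return -np.sum((coeffs - true) ** 2)

def pso(dim=3, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-1, 1, (particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([image_metric(p) for p in x])
    gbest = pbest[np.argmax(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random((2, particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([image_metric(p) for p in x])
        improved = f > pbest_f                    # update personal bests
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmax(pbest_f)]         # update global best
    return gbest

print(pso())
```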
Optimal route discovery for soft QOS provisioning in mobile ad hoc multimedia networks
NASA Astrophysics Data System (ADS)
Huang, Lei; Pan, Feng
2007-09-01
In this paper, we propose an optimal route discovery algorithm for ad hoc multimedia networks whose resources keep changing. First, we use stochastic models to measure network resource availability, based on information about the location and moving pattern of the nodes, as well as the link conditions between neighboring nodes. Then, for a given multimedia packet flow to be transmitted from a source to a destination, we formulate the optimal soft-QoS provisioning problem as finding the best route that maximizes the probability of satisfying the desired QoS requirements in terms of maximum delay constraints. Based on the stochastic network resource model, we develop three approaches to solve the formulated problem: a centralized approach serving as the theoretical reference, a distributed approach that is more suitable for practical real-time deployment, and a distributed dynamic approach that utilizes updated time information to optimize the routing for each individual packet. Numerical examples demonstrate that, using the routes discovered by our distributed algorithms in a changing network environment, multimedia applications can achieve statistically better QoS.
Multivariable optimization of an auto-thermal ammonia synthesis reactor using genetic algorithm
NASA Astrophysics Data System (ADS)
Anh-Nga, Nguyen T.; Tuan-Anh, Nguyen; Tien-Dung, Vu; Kim-Trung, Nguyen
2017-09-01
The ammonia synthesis system is an important chemical process used in the manufacture of fertilizers, chemicals, explosives, fibers, plastics, and refrigeration. In the literature, many works approaching the modeling, simulation and optimization of an auto-thermal ammonia synthesis reactor can be found. However, they focus only on the optimization of the reactor length while keeping the other parameters constant. In this study, other parameters are also considered in the optimization problem, such as the temperature of the feed gas entering the catalyst zone. The optimization problem requires the maximization of a multivariable objective function subject to a number of equality constraints involving the solution of coupled differential equations, as well as inequality constraints. The solution of an optimization problem can be found through, among others, deterministic or stochastic approaches. Stochastic methods such as evolutionary algorithms (EAs), which are inspired by natural phenomena, can overcome drawbacks such as the requirement for derivatives of the objective function and/or constraints, or inefficiency on non-differentiable or discontinuous problems. The genetic algorithm (GA), a class of EA, is exceptionally simple, robust at numerical optimization, and more likely to find a true global optimum. In this study, the genetic algorithm is employed to find the optimum profit of the process. The inequality constraints are treated using a penalty method. The coupled differential equation system is solved using the fourth-order Runge-Kutta method. The results show that the presented numerical method can be applied to model the ammonia synthesis reactor. The optimum economic profit obtained in this study is also compared with results from the literature. The comparison suggests that the process should be operated with a higher feed-gas temperature in the catalyst zone and a slightly longer reactor.
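A minimal sketch of the optimization layer described, under stated assumptions: a real-coded GA maximizes a toy profit surrogate, with an inequality constraint folded in through a quadratic penalty. In the paper each candidate evaluation would instead integrate the reactor's coupled ODEs (e.g., with fourth-order Runge-Kutta); the profit function, bounds, and constraint here are placeholders.

```python
# Hedged sketch: real-coded genetic algorithm with a penalty method for an
# inequality constraint. The profit surrogate, bounds, and constraint are toy
# stand-ins for the reactor model, which would require an ODE solve per call.
import numpy as np

rng = np.random.default_rng(3)

def profit(x):            # hypothetical stand-in objective
    length, feed_temp = x
    return -(length - 6.0) ** 2 - 0.1 * (feed_temp - 700.0) ** 2

def penalized(x):
    g = x[0] - 10.0       # toy inequality constraint g(x) <= 0
    return profit(x) - 1e3 * max(0.0, g) ** 2

def ga(pop_size=40, gens=100, sigma=0.5):
    low, high = np.array([1.0, 600.0]), np.array([12.0, 800.0])
    pop = rng.uniform(low, high, (pop_size, 2))
    for _ in range(gens):
        fit = np.array([penalized(p) for p in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]       # truncation selection
        a = parents[rng.integers(0, len(parents), pop_size)]
        b = parents[rng.integers(0, len(parents), pop_size)]
        u = rng.random((pop_size, 1))
        children = u * a + (1 - u) * b                        # linear crossover
        children += sigma * rng.standard_normal(children.shape)  # mutation
        pop = np.clip(children, low, high)
    return pop[np.argmax([penalized(p) for p in pop])]

print(ga())
```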
Multi-Parent Clustering Algorithms from Stochastic Grammar Data Models
NASA Technical Reports Server (NTRS)
Mjolsness, Eric; Castano, Rebecca; Gray, Alexander
1999-01-01
We introduce a statistical data model and an associated optimization-based clustering algorithm which allows data vectors to belong to zero, one or several "parent" clusters. For each data vector the algorithm makes a discrete decision among these alternatives. Thus, a recursive version of this algorithm would place data clusters in a Directed Acyclic Graph rather than a tree. We test the algorithm with synthetic data generated according to the statistical data model. We also illustrate the algorithm using real data from large-scale gene expression assays.
Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Linderoth
2011-11-06
The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.
Random search optimization based on genetic algorithm and discriminant function
NASA Technical Reports Server (NTRS)
Kiciman, M. O.; Akgul, M.; Erarslanoglu, G.
1990-01-01
The general problem of optimization with arbitrary merit and constraint functions, which could be convex, concave, monotonic, or non-monotonic, is treated using stochastic methods. To improve the efficiency of the random search methods, a genetic algorithm for the search phase and a discriminant function for the constraint-control phase were utilized. The validity of the technique is demonstrated by comparing the results to published test problem results. Numerical experimentation indicated that for cases where a quick near optimum solution is desired, a general, user-friendly optimization code can be developed without serious penalties in both total computer time and accuracy.
Convex Optimization over Classes of Multiparticle Entanglement
NASA Astrophysics Data System (ADS)
Shang, Jiangwei; Gühne, Otfried
2018-02-01
A well-known strategy to characterize multiparticle entanglement utilizes the notion of stochastic local operations and classical communication (SLOCC), but characterizing the resulting entanglement classes is difficult. Given a multiparticle quantum state, we first show that Gilbert's algorithm can be adapted to prove separability or membership in a certain entanglement class. We then present two algorithms for convex optimization over SLOCC classes. The first algorithm uses a simple gradient approach, while the other one employs the accelerated projected-gradient method. For demonstration, the algorithms are applied to the likelihood-ratio test using experimental data on bound entanglement of a noisy four-photon Smolin state [Phys. Rev. Lett. 105, 130501 (2010), 10.1103/PhysRevLett.105.130501].
Stochastic Multi-Commodity Facility Location Based on a New Scenario Generation Technique
NASA Astrophysics Data System (ADS)
Mahootchi, M.; Fattahi, M.; Khakbazan, E.
2011-11-01
This paper extends two models for the stochastic multi-commodity facility location problem. The problem is formulated as a two-stage stochastic program. As a main contribution of this study, a new algorithm is applied to efficiently generate scenarios for uncertain, correlated customer demands. This algorithm uses Latin Hypercube Sampling (LHS) and a scenario reduction approach. The relation between customer satisfaction level and cost is considered in Model I. A risk measure using Conditional Value-at-Risk (CVaR) is embedded into optimization Model II. Here, the structure of the network contains three facility layers: plants, distribution centers, and retailers. The first-stage decisions are the number, locations, and capacities of distribution centers. In the second stage, the decisions are the production quantities and the volumes of transportation between plants and customers.
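The scenario-generation idea can be sketched in a few lines: Latin Hypercube Sampling stratifies the marginals, and a Cholesky factor of the covariance imposes the demand correlation. The means and covariance below are illustrative, and the paper's scenario-reduction step is omitted.

```python
# Sketch of LHS-based generation of correlated demand scenarios. LHS stratifies
# the marginals; a Cholesky factor then imposes the demand correlation. The
# means/covariance are illustrative; scenario reduction is not shown.
import numpy as np
from scipy.stats import norm, qmc

mean = np.array([100.0, 80.0])                    # hypothetical demand means
cov = np.array([[400.0, 120.0], [120.0, 225.0]])  # hypothetical covariance

sampler = qmc.LatinHypercube(d=2, seed=4)
u = sampler.random(n=64)              # stratified uniforms in (0, 1)^2
z = norm.ppf(u)                       # independent standard normal scores
scenarios = mean + z @ np.linalg.cholesky(cov).T  # correlated demand scenarios
print(scenarios[:5])
```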
Optimal Alignment of Structures for Finite and Periodic Systems.
Griffiths, Matthew; Niblett, Samuel P; Wales, David J
2017-10-10
Finding the optimal alignment between two structures is important for identifying the minimum root-mean-square distance (RMSD) between them and as a starting point for calculating pathways. Most current algorithms for aligning structures are stochastic, scale exponentially with the size of the structure, and can perform unreliably. We present two complementary methods for aligning structures corresponding to isolated clusters of atoms and to condensed matter described by a periodic cubic supercell. The first method (Go-PERMDIST), a branch-and-bound algorithm, locates the global minimum RMSD deterministically in polynomial time; the run time increases for larger RMSDs. The second method (FASTOVERLAP) is a heuristic algorithm that aligns structures by finding the global maximum kernel correlation between them using fast Fourier transforms (FFTs) and fast SO(3) transforms (SOFTs). For periodic systems, FASTOVERLAP scales with the square of the number of identical atoms in the system, reliably finds the best alignment between structures that are not too distant, and shows significantly better performance than existing algorithms; the expected run time for Go-PERMDIST is longer than for FASTOVERLAP on periodic systems. For finite clusters, the FASTOVERLAP algorithm is competitive with existing algorithms. The expected run time for Go-PERMDIST to find the global RMSD between two structures deterministically is generally longer than for existing stochastic algorithms; however, with an earlier exit condition, Go-PERMDIST exhibits similar or better performance.
Efficient prediction designs for random fields.
Müller, Werner G; Pronzato, Luc; Rendas, Joao; Waldl, Helmut
2015-03-01
For estimation and predictions of random fields, it is increasingly acknowledged that the kriging variance may be a poor representative of true uncertainty. Experimental designs based on more elaborate criteria that are appropriate for empirical kriging (EK) are then often non-space-filling and very costly to determine. In this paper, we investigate the possibility of using a compound criterion inspired by an equivalence theorem type relation to build designs quasi-optimal for the EK variance when space-filling designs become unsuitable. Two algorithms are proposed, one relying on stochastic optimization to explicitly identify the Pareto front, whereas the second uses the surrogate criteria as local heuristic to choose the points at which the (costly) true EK variance is effectively computed. We illustrate the performance of the algorithms presented on both a simple simulated example and a real oceanographic dataset. © 2014 The Authors. Applied Stochastic Models in Business and Industry published by John Wiley & Sons, Ltd.
López-Caraballo, C. H.; Lazzús, J. A.; Salfate, I.; Rojas, P.; Rivera, M.; Palma-Chilla, L.
2015-01-01
An artificial neural network (ANN) based on particle swarm optimization (PSO) was developed for time series prediction. The hybrid ANN+PSO algorithm was applied to the Mackey-Glass chaotic time series in the short term, x(t + 6). The prediction performance was evaluated and compared with other studies available in the literature. We also present properties of the dynamical system via the study of chaotic behaviour obtained from the predicted time series. Next, the hybrid ANN+PSO algorithm was complemented with a Gaussian stochastic procedure (called stochastic hybrid ANN+PSO) in order to obtain a new estimator of the predictions, which also allowed us to compute the uncertainties of predictions for noisy Mackey-Glass chaotic time series. Thus, we studied the impact of noise for several cases with a white noise level (σ_N) from 0.01 to 0.1. PMID:26351449
An implicit iterative algorithm with a tuning parameter for Itô Lyapunov matrix equations
NASA Astrophysics Data System (ADS)
Zhang, Ying; Wu, Ai-Guo; Sun, Hui-Jie
2018-01-01
In this paper, an implicit iterative algorithm is proposed for solving a class of Lyapunov matrix equations arising in Itô stochastic linear systems. A tuning parameter is introduced in this algorithm, and thus the convergence rate of the algorithm can be changed. Some conditions are presented such that the developed algorithm is convergent. In addition, an explicit expression is also derived for the optimal tuning parameter, which guarantees that the obtained algorithm achieves its fastest convergence rate. Finally, numerical examples are employed to illustrate the effectiveness of the given algorithm.
Planning with Continuous Resources in Stochastic Domains
NASA Technical Reports Server (NTRS)
Mausam; Benazera, Emmanuel; Brafman, Ronen; Hansen, Eric
2005-01-01
We consider the problem of optimal planning in stochastic domains with metric resource constraints. Our goal is to generate a policy whose expected sum of rewards is maximized for a given initial state. We consider a general formulation motivated by our application domain--planetary exploration--in which the choice of an action at each step may depend on the current resource levels. We adapt the forward search algorithm AO* to handle our continuous state space efficiently.
Optimization in optical systems revisited: Beyond genetic algorithms
NASA Astrophysics Data System (ADS)
Gagnon, Denis; Dumont, Joey; Dubé, Louis
2013-05-01
Designing integrated photonic devices such as waveguides, beam-splitters and beam-shapers often requires optimization of a cost function over a large solution space. Metaheuristics - algorithms based on empirical rules for exploring the solution space - are specifically tailored to those problems. One of the most widely used metaheuristics is the standard genetic algorithm (SGA), based on the evolution of a population of candidate solutions. However, the stochastic nature of the SGA sometimes prevents access to the optimal solution. Our goal is to show that a parallel tabu search (PTS) algorithm is more suited to optimization problems in general, and to photonics in particular. PTS is based on several search processes using a pool of diversified initial solutions. To assess the performance of both algorithms (SGA and PTS), we consider an integrated photonics design problem, the generation of arbitrary beam profiles using a two-dimensional waveguide-based dielectric structure. The authors acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC).
Stochastic derivative-free optimization using a trust region framework
Larson, Jeffrey; Billups, Stephen C.
2016-02-17
This study presents a trust region algorithm to minimize a function f when one has access only to noise-corrupted function values f¯. The model-based algorithm dynamically adjusts its step length, taking larger steps when the model and function agree and smaller steps when the model is less accurate. The method does not require the user to specify a fixed pattern of points used to build local models and does not repeatedly sample points. If f is sufficiently smooth and the noise is independent and identically distributed with mean zero and finite variance, we prove that our algorithm produces iterates such that the corresponding function gradients converge in probability to zero. As a result, we present a prototype of our algorithm that, while simplistic in its management of previously evaluated points, solves benchmark problems in fewer function evaluations than do existing stochastic approximation methods.
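A toy version of the adaptive step-length logic is sketched below: a local model is built from noise-corrupted evaluations, the model step is taken, and the trust-region radius grows or shrinks according to the ratio of actual to predicted decrease. This is a simplified linear-model caricature, not the authors' full model-management scheme.

```python
# Caricature of a trust-region method on noisy evaluations: fit a local linear
# model by finite differences, step to the model minimizer on the ball, then
# expand or shrink the radius by comparing predicted and observed decrease.
import numpy as np

rng = np.random.default_rng(5)

def f_noisy(x):
    return np.sum(x**2) + 0.01 * rng.standard_normal()

def trust_region(x, delta=1.0, iters=60):
    for _ in range(iters):
        # gradient estimate from forward differences at the trust-region scale
        g = np.array([(f_noisy(x + delta * e) - f_noisy(x)) / delta
                      for e in np.eye(len(x))])
        if np.linalg.norm(g) == 0:
            break
        step = -delta * g / np.linalg.norm(g)      # model minimizer on the ball
        predicted = -g @ step
        actual = f_noisy(x) - f_noisy(x + step)
        rho = actual / predicted if predicted > 0 else -1.0
        if rho > 0.1:                              # model and function agree
            x, delta = x + step, delta * 1.5
        else:                                      # poor agreement: shrink
            delta *= 0.5
    return x

print(trust_region(np.array([2.0, -1.5])))
```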
Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique
Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep
2015-01-01
In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. The standard GSA (SGSA) utilizes the best agents without any randomization, and is thus more prone to converging to suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performance to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032
Optimal control of hydroelectric facilities
NASA Astrophysics Data System (ADS)
Zhao, Guangzhi
This thesis considers a simple yet realistic model of pump-assisted hydroelectric facilities operating in a market with time-varying but deterministic power prices. Both deterministic and stochastic water inflows are considered. The fluid mechanical and engineering details of the facility are described by a model containing several parameters. We present a dynamic programming algorithm for optimizing either the total energy produced or the total cash generated by these plants. The algorithm allows us to give the optimal control strategy as a function of time and to see how this strategy, and the associated plant value, varies with water inflow and electricity price. We investigate various cases. For a single pumped storage facility experiencing deterministic power prices and water inflows, we investigate the varying behaviour for an oversimplified constant turbine- and pump-efficiency model with simple reservoir geometries. We then generalize this simple model to include more realistic turbine efficiencies, situations with more complicated reservoir geometry, and the introduction of dissipative switching costs between various control states. We find many results which reinforce our physical intuition about this complicated system as well as results which initially challenge, though later deepen, this intuition. One major lesson of this work is that the optimal control strategy does not differ much between two differing objectives of maximizing energy production and maximizing its cash value. We then turn our attention to the case of stochastic water inflows. We present a stochastic dynamic programming algorithm which can find an on-average optimal control in the face of this randomness. As the operator of a facility must be more cautious when inflows are random, the randomness destroys facility value. Following this insight we quantify exactly how much a perfect hydrological inflow forecast would be worth to a dam operator. In our final chapter we discuss the challenging problem of optimizing a sequence of two hydro dams sharing the same river system. The complexity of this problem is magnified and we just scratch its surface here. The thesis concludes with suggestions for future work in this fertile area. Keywords: dynamic programming, hydroelectric facility, optimization, optimal control, switching cost, turbine efficiency.
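The backward-recursion machinery used throughout such problems can be sketched compactly for the deterministic-price case: discretize the reservoir level, step backward in time, and at each state pick the action (pump, idle, or generate) maximizing immediate cash plus the value to go. The price curve, efficiencies, inflow, and grid below are illustrative placeholders, not the thesis's calibrated model.

```python
# Compact dynamic-programming sketch for a pumped-storage plant with a
# deterministic, time-varying price. All numbers are illustrative assumptions.
import numpy as np

T, levels = 24, 51                      # hourly horizon, reservoir grid
price = 30 + 10 * np.sin(np.arange(T) * 2 * np.pi / 24)   # deterministic price
inflow, pump_eff, gen_eff = 1, 0.75, 0.90
actions = {"pump": +3, "idle": 0, "generate": -3}          # level change/hour

V = np.zeros(levels)                    # terminal value of stored water = 0
for t in reversed(range(T)):
    V_new = np.full(levels, -np.inf)
    for s in range(levels):
        for name, dl in actions.items():
            s2 = s + dl + inflow
            if not 0 <= s2 < levels:
                continue                # reservoir bounds
            if name == "pump":
                cash = -price[t] * abs(dl) / pump_eff   # buy energy to pump
            elif name == "generate":
                cash = price[t] * abs(dl) * gen_eff     # sell generated energy
            else:
                cash = 0.0
            V_new[s] = max(V_new[s], cash + V[s2])
    V = V_new
print(V[levels // 2])                   # value of starting half full
```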
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). We also adopt several improved strategies: a stochastic disturbance factor is introduced into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced by a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through the combination of these strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with multiple extrema and multiple parameters; this is the theoretical basis of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, overcoming the shortcomings of any single algorithm and giving full play to the advantages of each. The method is validated on the standard Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms the single algorithms in the accuracy of the computed protein-sequence energy values, proving it an effective way to predict protein structure.
NASA Astrophysics Data System (ADS)
Liu, Xiaomei; Li, Shengtao; Zhang, Kanjian
2017-08-01
In this paper, we solve an optimal control problem for a class of time-invariant switched stochastic systems with multiple switching times, where the objective is to minimise a cost functional with different costs defined on the states. In particular, we focus on problems in which a pre-specified sequence of active subsystems is given and the switching times are the only control variables. Based on the calculus of variations, we derive the gradient of the cost functional with respect to the switching times in an especially simple form, which can be directly used in gradient descent algorithms to locate the optimal switching instants. Finally, a numerical example is given, highlighting the validity of the proposed methodology.
Deng, Haishan; Shang, Erxin; Xiang, Bingren; Xie, Shaofei; Tang, Yuping; Duan, Jin-ao; Zhan, Ying; Chi, Yumei; Tan, Defei
2011-03-15
The stochastic resonance algorithm (SRA) has been developed in recent years as a potential tool for amplifying and determining weak chromatographic peaks. However, the conventional SRA cannot be applied directly to ultra-performance liquid chromatography/time-of-flight mass spectrometry (UPLC/TOFMS). The obstacle lies in the fact that the narrow peaks generated by UPLC contain high-frequency components which fall beyond the restrictions of the theory of stochastic resonance. Although an algorithm already exists that allows a high-frequency weak signal to be detected, the sampling frequency of TOFMS is not fast enough to meet its requirements. Another problem is the depression of the weak peak of a compound with low concentration or weak detection response, which prevents the simultaneous determination of multi-component UPLC/TOFMS peaks. In order to lower the frequencies of the peaks, an interpolation and re-scaling frequency stochastic resonance (IRSR) is proposed, which re-scales the peak frequencies by numerically interpolating additional sample points. The re-scaled UPLC/TOFMS peaks can then be amplified significantly. By introducing an external energy field upon the UPLC/TOFMS signals, the method of energy gain was developed to simultaneously amplify and determine weak peaks from multiple components. Subsequently, a multi-component stochastic resonance algorithm was constructed for the simultaneous quantitative determination of multiple weak UPLC/TOFMS peaks based on the two methods. The optimization of parameters is discussed in detail with simulated data sets, and the applicability of the algorithm is evaluated by quantitative analysis of three alkaloids in human plasma using UPLC/TOFMS. The new algorithm behaved well in improving the signal-to-noise ratio (S/N) compared to several commonly used peak enhancement methods, including the Savitzky-Golay filter, Whittaker-Eilers smoother, and matched filtration. Copyright © 2011 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Roslund, Jonathan; Shir, Ofer M.; Bäck, Thomas; Rabitz, Herschel
2009-10-01
Optimization of quantum systems by closed-loop adaptive pulse shaping offers a rich domain for the development and application of specialized evolutionary algorithms. Derandomized evolution strategies (DESs) are presented here as a robust class of optimizers for experimental quantum control. The combination of stochastic and quasi-local search embodied by these algorithms is especially amenable to the inherent topology of quantum control landscapes. Implementation of DES in the laboratory results in efficiency gains of up to ~9 times that of the standard genetic algorithm, and thus is a promising tool for optimization of unstable or fragile systems. The statistical learning upon which these algorithms are predicated also provides the means for obtaining a control problem's Hessian matrix with no additional experimental overhead. The forced optimal covariance adaptive learning (FOCAL) method is introduced to enable retrieval of the Hessian matrix, which can reveal information about the landscape's local structure and dynamic mechanism. Exploitation of such algorithms in quantum control experiments should enhance their efficiency and provide additional fundamental insights.
NASA Technical Reports Server (NTRS)
Halyo, Nesim
1987-01-01
A combined stochastic feedforward and feedback control design methodology was developed. The objective of the feedforward control law is to track the commanded trajectory, whereas the feedback control law tries to maintain the plant state near the desired trajectory in the presence of disturbances and uncertainties about the plant. The feedforward control law design is formulated as a stochastic optimization problem and is embedded into the stochastic output feedback problem where the plant contains unstable and uncontrollable modes. An algorithm to compute the optimal feedforward is developed. In this approach, the use of error integral feedback, dynamic compensation, control rate command structures are an integral part of the methodology. An incremental implementation is recommended. Results on the eigenvalues of the implemented versus designed control laws are presented. The stochastic feedforward/feedback control methodology is used to design a digital automatic landing system for the ATOPS Research Vehicle, a Boeing 737-100 aircraft. The system control modes include localizer and glideslope capture and track, and flare to touchdown. Results of a detailed nonlinear simulation of the digital control laws, actuator systems, and aircraft aerodynamics are presented.
Fast and robust estimation of spectro-temporal receptive fields using stochastic approximations.
Meyer, Arne F; Diepenbrock, Jan-Philipp; Ohl, Frank W; Anemüller, Jörn
2015-05-15
The receptive field (RF) represents the signal preferences of sensory neurons and is the primary analysis method for understanding sensory coding. While it is essential to estimate a neuron's RF, finding numerical solutions to increasingly complex RF models can become computationally intensive, in particular for high-dimensional stimuli or when many neurons are involved. Here we propose an optimization scheme based on stochastic approximations that facilitates this task. The basic idea is to derive solutions on a random subset rather than computing the full solution on the available data set. To test this, we applied different optimization schemes based on stochastic gradient descent (SGD) to both the generalized linear model (GLM) and a recently developed classification-based RF estimation approach. Using simulated and recorded responses, we demonstrate that RF parameter optimization based on state-of-the-art SGD algorithms produces robust estimates of the spectro-temporal receptive field (STRF). Results on recordings from the auditory midbrain demonstrate that stochastic approximations preserve both the predictive power and tuning properties of STRFs. A correlation of 0.93 with the STRF derived from the full solution may be obtained in less than 10% of the full solution's estimation time. We also present an on-line algorithm that allows simultaneous monitoring of STRF properties for more than 30 neurons on a single computer. The proposed approach may not only prove helpful for large-scale recordings but also provides a more comprehensive characterization of neural tuning in experiments than standard tuning curves. Copyright © 2015 Elsevier B.V. All rights reserved.
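The core stochastic-approximation idea, fitting on random subsets rather than the full data set, reduces to a few lines for a linear-Gaussian RF model. The sketch below is plain minibatch SGD on simulated data; the paper additionally covers GLM and classification-based estimators, which this does not reproduce.

```python
# Minibatch stochastic gradient descent for a linear receptive-field model on
# simulated data: each step uses a random subset instead of the full data set.
import numpy as np

rng = np.random.default_rng(6)
n, d = 20000, 300                        # stimuli, RF dimensionality
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.5 * rng.standard_normal(n)   # simulated noisy responses

w = np.zeros(d)
for it in range(2000):
    idx = rng.integers(0, n, size=64)            # random minibatch
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
    w -= 0.01 * grad                             # fixed learning rate for brevity
print(np.corrcoef(w, w_true)[0, 1])              # recovery quality
```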
Adaptive non-linear control for cancer therapy through a Fokker-Planck observer.
Shakeri, Ehsan; Latif-Shabgahi, Gholamreza; Esmaeili Abharian, Amir
2018-04-01
In recent years, many efforts have been made to present optimal strategies for cancer therapy through the mathematical modelling of tumour-cell population dynamics and optimal control theory. In many cases, therapy effect is included in the drift term of the stochastic Gompertz model. By fitting the model with empirical data, the parameters of therapy function are estimated. The reported research works have not presented any algorithm to determine the optimal parameters of therapy function. In this study, a logarithmic therapy function is entered in the drift term of the Gompertz model. Using the proposed control algorithm, the therapy function parameters are predicted and adaptively adjusted. To control the growth of tumour-cell population, its moments must be manipulated. This study employs the probability density function (PDF) control approach because of its ability to control all the process moments. A Fokker-Planck-based non-linear stochastic observer will be used to determine the PDF of the process. A cost function based on the difference between a predefined desired PDF and PDF of tumour-cell population is defined. Using the proposed algorithm, the therapy function parameters are adjusted in such a manner that the cost function is minimised. The existence of an optimal therapy function is also proved. The numerical results are finally given to demonstrate the effectiveness of the proposed method.
Online learning in optical tomography: a stochastic approach
NASA Astrophysics Data System (ADS)
Chen, Ke; Li, Qin; Liu, Jian-Guo
2018-07-01
We study the inverse problem of the radiative transfer equation (RTE) using the stochastic gradient descent method (SGD). Mathematically, optical tomography amounts to recovering the optical parameters in the RTE using incoming–outgoing pairs of light intensity. We formulate it as a PDE-constrained optimization problem, in which the mismatch between computed and measured outgoing data is minimized with the same initial data and the RTE constraint. The memory and computation this requires, however, is typically prohibitive, especially in high-dimensional spaces. Smart iterative solvers that use only partial information in each step are therefore called for. Stochastic gradient descent is an online learning algorithm that randomly selects data for minimizing the mismatch; it requires minimal memory and computation and advances fast, so it serves the purpose perfectly. In this paper we formulate the problem in both the nonlinear and the linearized settings, apply the SGD algorithm, and analyze its convergence performance.
Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.
Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado
2017-01-01
Exact stochastic simulation is an indispensable tool for the quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire and updating the system state with a probability proportional to the reaction propensity. Two computationally expensive tasks in simulating large biochemical networks are the selection of the next reaction firing and the update of reaction propensities due to state changes. We present in this work a new exact algorithm that optimizes both of these simulation bottlenecks. Our algorithm employs composition-rejection on the propensity bounds of reactions to select the next reaction firing. The selection of next reaction firings is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. It therefore provides favorable scaling of the computational complexity in simulating large reaction networks. We benchmark our new algorithm against the state-of-the-art algorithms available in the literature to demonstrate its applicability and efficiency.
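For contrast with the constant-time selection described, the classic direct-method SSA below picks the next reaction by a linear scan over propensities, which is exactly the cost the composition-rejection scheme avoids. The birth-death model and rate constants are toy assumptions.

```python
# Classic direct-method SSA (Gillespie) on a toy birth-death process, shown as
# a baseline: next-reaction selection here is linear in the number of reactions.
import numpy as np

rng = np.random.default_rng(7)

def ssa(x0=10, k_birth=1.0, k_death=0.1, t_end=50.0):
    t, x, path = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        a = np.array([k_birth, k_death * x])     # reaction propensities
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)           # time to next firing
        r = rng.random() * a0                    # linear scan picks the reaction
        x += 1 if r < a[0] else -1
        path.append((t, x))
    return path

print(ssa()[-1])
```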
A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Liu, Shuang; Hu, Xiangyun; Liu, Tianyou
2014-07-01
Simulating natural ants' foraging behavior, the ant colony optimization (ACO) algorithm performs excellently on combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, ACO has seldom been used to invert gravitational and magnetic data. On the basis of the continuous, multi-dimensional objective function for optimization inversion of potential field data, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes, and ants directionally tour the nodes according to transition probabilities. We update the pheromone trails using a Gaussian mapping between the objective function value and the quantity of pheromone, which allows the search results to be analyzed in real time and improves the rate of convergence and precision of inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method on synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary-representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
'Extremotaxis': computing with a bacterial-inspired algorithm.
Nicolau, Dan V; Burrage, Kevin; Nicolau, Dan V; Maini, Philip K
2008-01-01
We present a general-purpose optimization algorithm inspired by "run-and-tumble", the biased random walk chemotactic swimming strategy used by the bacterium Escherichia coli to locate regions of high nutrient concentration. The method uses particles (corresponding to bacteria) that swim through the variable space (corresponding to the attractant concentration profile). By constantly performing temporal comparisons, the particles drift towards the minimum or maximum of the function of interest. We illustrate the use of our method with four examples. We also present a discrete version of the algorithm. The new algorithm is expected to be useful in combinatorial optimization problems involving many variables, where the functional landscape is apparently stochastic and has local minima, but preserves some derivative structure at intermediate scales.
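A hedged sketch of the continuous version follows: each particle keeps swimming in its current direction while the objective improves and tumbles to a fresh random direction when it worsens, so the population drifts toward the minimum. The quadratic landscape and step size are illustrative assumptions.

```python
# Run-and-tumble optimization sketch: particles run while the objective
# improves (temporal comparison) and tumble to a random direction otherwise.
import numpy as np

rng = np.random.default_rng(8)

def objective(x):                      # stand-in "attractant" landscape
    return np.sum(x**2)

def run_and_tumble(dim=2, particles=10, steps=500, dt=0.05):
    x = rng.uniform(-5, 5, (particles, dim))
    d = rng.standard_normal((particles, dim))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    f_prev = np.array([objective(p) for p in x])
    for _ in range(steps):
        x += dt * d                                    # run
        f = np.array([objective(p) for p in x])
        worse = f >= f_prev                            # temporal comparison
        d[worse] = rng.standard_normal((worse.sum(), dim))   # tumble
        d[worse] /= np.linalg.norm(d[worse], axis=1, keepdims=True)
        f_prev = f
    return x[np.argmin(f_prev)]

print(run_and_tumble())
```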
NASA Astrophysics Data System (ADS)
Siegel, J.; Siegel, Edward Carl-Ludwig
2011-03-01
Cook-Levin computational-"complexity" (C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS ("SON of TRIZ"): Category-Semantics (C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus decelerate Harel [Algorithmics (1987)]-Sipser [Introduction to the Theory of Computation (1997)] algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally (OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-science algorithmic C-C models (the Turing machine, finite-state models/automata) are identified as early-days, once-workable crutches that now impede latter-day insights.
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1975-01-01
Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.
Spiral bacterial foraging optimization method: Algorithm, evaluation and convergence analysis
NASA Astrophysics Data System (ADS)
Kasaiezadeh, Alireza; Khajepour, Amir; Waslander, Steven L.
2014-04-01
A biologically-inspired algorithm called Spiral Bacterial Foraging Optimization (SBFO) is investigated in this article. SBFO, previously proposed by the same authors, is a multi-agent, gradient-based algorithm that minimizes both the main objective function (local cost) and the distance between each agent and a temporary central point (global cost). A random jump is included normal to the connecting line of each agent to the central point, which produces a vortex around the temporary central point. This random jump is also suitable to cope with premature convergence, which is a feature of swarm-based optimization methods. The most important advantages of this algorithm are as follows: First, this algorithm involves a stochastic type of search with a deterministic convergence. Second, as gradient-based methods are employed, faster convergence is demonstrated over GA, DE, BFO, etc. Third, the algorithm can be implemented in a parallel fashion in order to decentralize large-scale computation. Fourth, the algorithm has a limited number of tunable parameters, and finally SBFO has a strong certainty of convergence which is rare in existing global optimization algorithms. A detailed convergence analysis of SBFO for continuously differentiable objective functions has also been investigated in this article.
NASA Astrophysics Data System (ADS)
Yang, Wen; Fung, Richard Y. K.
2014-06-01
This article considers an order acceptance problem in a make-to-stock manufacturing system with multiple demand classes in a finite time horizon. Demands in different periods are random variables and are independent of one another, and replenishments of inventory deviate from the scheduled quantities. The objective of this work is to maximize the expected net profit over the planning horizon by deciding the fraction of the demand that is going to be fulfilled. This article presents a stochastic order acceptance optimization model and analyses the existence of the optimal promising policies. An example of a discrete problem is used to illustrate the policies by applying the dynamic programming method. In order to solve the continuous problems, a heuristic algorithm based on stochastic approximation (HASA) is developed. Finally, the computational results of a case example illustrate the effectiveness and efficiency of the HASA approach, and make the application of the proposed model readily acceptable.
A supplier selection and order allocation problem with stochastic demands
NASA Astrophysics Data System (ADS)
Zhou, Yun; Zhao, Lei; Zhao, Xiaobo; Jiang, Jianhua
2011-08-01
We consider a system comprising a retailer and a set of candidate suppliers that operates within a finite planning horizon of multiple periods. The retailer replenishes its inventory from the suppliers and satisfies stochastic customer demands. At the beginning of each period, the retailer makes decisions on the replenishment quantity, supplier selection and order allocation among the selected suppliers. An optimisation problem is formulated to minimise the total expected system cost, which includes an outer level stochastic dynamic program for the optimal replenishment quantity and an inner level integer program for supplier selection and order allocation with a given replenishment quantity. For the inner level subproblem, we develop a polynomial algorithm to obtain optimal decisions. For the outer level subproblem, we propose an efficient heuristic for the system with integer-valued inventory, based on the structural properties of the system with real-valued inventory. We investigate the efficiency of the proposed solution approach, as well as the impact of parameters on the optimal replenishment decision with numerical experiments.
Using Markov Models of Fault Growth Physics and Environmental Stresses to Optimize Control Actions
NASA Technical Reports Server (NTRS)
Bole, Brian; Goebel, Kai; Vachtsevanos, George
2012-01-01
A generalized Markov chain representation of fault dynamics is presented for the case that available modeling of fault growth physics and future environmental stresses can be represented by two independent stochastic process models. A contrived but representatively challenging example will be presented and analyzed, in which uncertainty in the modeling of fault growth physics is represented by a uniformly distributed dice throwing process, and a discrete random walk is used to represent uncertain modeling of future exogenous loading demands to be placed on the system. A finite horizon dynamic programming algorithm is used to solve for an optimal control policy over a finite time window for the case that stochastic models representing physics of failure and future environmental stresses are known, and the states of both stochastic processes are observable by implemented control routines. The fundamental limitations of optimization performed in the presence of uncertain modeling information are examined by comparing the outcomes obtained from simulations of an optimizing control policy with the outcomes that would be achievable if all modeling uncertainties were removed from the system.
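The finite-horizon dynamic-programming step referenced here is generic value iteration run backward over the horizon, as in the sketch below for an abstract MDP. The transition tensor and cost matrix are random placeholders, not the paper's fault-growth or load models.

```python
# Finite-horizon dynamic programming for a generic MDP: backward value
# iteration producing a time-dependent policy. All arrays are placeholders.
import numpy as np

rng = np.random.default_rng(9)
S, A, H = 8, 3, 20                       # states, actions, horizon
P = rng.random((A, S, S))
P /= P.sum(axis=2, keepdims=True)        # row-stochastic transitions per action
cost = rng.random((S, A))

V = np.zeros(S)                          # terminal cost = 0
policy = np.zeros((H, S), dtype=int)
for t in reversed(range(H)):
    Q = cost + (P @ V).T                 # expected cost-to-go per (state, action)
    policy[t] = np.argmin(Q, axis=1)
    V = Q[np.arange(S), policy[t]]
print(policy[0], V)
```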
Multicriteria approaches for a private equity fund
NASA Astrophysics Data System (ADS)
Tammer, Christiane; Tannert, Johannes
2012-09-01
We develop a new model for a Private Equity Fund based on stochastic differential equations. In order to find efficient strategies for the fund manager, we formulate a multicriteria optimization problem for a Private Equity Fund. Using the ε-constraint method, we solve this multicriteria optimization problem. Furthermore, a genetic algorithm is applied in order to obtain an approximation of the efficient frontier.
Algorithms for optimizing cross-overs in DNA shuffling.
He, Lu; Friedman, Alan M; Bailey-Kellogg, Chris
2012-03-21
DNA shuffling generates combinatorial libraries of chimeric genes by stochastically recombining parent genes. The resulting libraries are subjected to large-scale genetic selection or screening to identify those chimeras with favorable properties (e.g., enhanced stability or enzymatic activity). While DNA shuffling has been applied quite successfully, it is limited by its homology-dependent, stochastic nature. Consequently, it is used only with parents of sufficient overall sequence identity, and provides no control over the resulting chimeric library. This paper presents efficient methods to extend the scope of DNA shuffling to handle significantly more diverse parents and to generate more predictable, optimized libraries. Our CODNS (cross-over optimization for DNA shuffling) approach employs polynomial-time dynamic programming algorithms to select codons for the parental amino acids, allowing for zero or a fixed number of conservative substitutions. We first present efficient algorithms to optimize the local sequence identity or the nearest-neighbor approximation of the change in free energy upon annealing, objectives that were previously optimized by computationally-expensive integer programming methods. We then present efficient algorithms for more powerful objectives that seek to localize and enhance the frequency of recombination by producing "runs" of common nucleotides either overall or according to the sequence diversity of the resulting chimeras. We demonstrate the effectiveness of CODNS in choosing codons and allocating substitutions to promote recombination between parents targeted in earlier studies: two GAR transformylases (41% amino acid sequence identity), two very distantly related DNA polymerases, Pol X and β (15%), and beta-lactamases of varying identity (26-47%). Our methods provide the protein engineer with a new approach to DNA shuffling that supports substantially more diverse parents, is more deterministic, and generates more predictable and more diverse chimeric libraries.
Stochastic simulation and analysis of biomolecular reaction networks
Frazier, John M; Chushak, Yaroslav; Foy, Brent
2009-01-01
Background: In recent years, several stochastic simulation algorithms have been developed to generate Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks. However, the effects of various stochastic simulation and data analysis conditions on the observed dynamics of complex biomolecular reaction networks have not received much attention. In order to investigate these issues, we employed a software package developed in our group, called Biomolecular Network Simulator (BNS), to simulate and analyze the behavior of such systems. The behavior of a hypothetical two-gene in vitro transcription-translation reaction network is investigated using the Gillespie exact stochastic algorithm to illustrate some of the factors that influence the analysis and interpretation of these data. Results: Specific issues affecting the analysis and interpretation of simulation data are investigated, including: (1) the effect of time interval on data presentation and time-weighted averaging of molecule numbers, (2) the effect of time averaging interval on reaction rate analysis, (3) the effect of the number of simulations on the precision of model predictions, and (4) the implications of stochastic simulations for optimization procedures. Conclusion: The two main factors affecting the analysis of stochastic simulations are: (1) the selection of time intervals to compute or average state variables and (2) the number of simulations generated to evaluate the system behavior. PMID:19534796
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions, and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
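The MLSL skeleton can be sketched as follows: sample the domain in batches, and launch a local search only from sample points with no better sample within a shrinking critical radius, collecting the distinct minima found. SciPy's default gradient-based minimizer stands in for the MADS local phase, and the multimodal objective is a toy stand-in for the transport model.

```python
# Multilevel single linkage (MLSL) skeleton: local searches start only from
# sampled points that have no better neighbor within a shrinking radius, and
# distinct local minima are collected. SciPy's minimizer replaces MADS here.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)

def f(x):                                # stand-in multimodal objective
    return np.sum(x**4 - 8 * x**2 + 3 * x)

def mlsl(dim=2, rounds=4, batch=30, radius0=2.0):
    pts, vals, minima = np.empty((0, dim)), np.empty(0), []
    for k in range(1, rounds + 1):
        new = rng.uniform(-3, 3, (batch, dim))
        pts = np.vstack([pts, new])
        vals = np.concatenate([vals, [f(p) for p in new]])
        radius = radius0 * k ** (-1.0 / dim)    # shrinking critical distance
        for i, p in enumerate(pts):
            near = np.linalg.norm(pts - p, axis=1) < radius
            if np.any(vals[near] < vals[i]):
                continue                        # a better neighbor exists: skip
            x_star = minimize(f, p).x           # local phase
            if not any(np.linalg.norm(x_star - m) < 1e-3 for m in minima):
                minima.append(x_star)
    return minima

print(len(mlsl()), "distinct local minima found")
```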
Extremal Optimization: Methods Derived from Co-Evolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boettcher, S.; Percus, A.G.
1999-07-13
We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than "breeding" better components. In contrast to Genetic Algorithms, which operate on an entire "gene pool" of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach effects an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
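A minimal sketch of the component-level move on a toy spin chain: each spin's "fitness" is how well it satisfies its bond, and each step flips a poorly fit spin chosen by the tau-EO power-law rank rule, rather than breeding whole solutions. The ring topology, couplings, and tau value are illustrative assumptions, not the paper's graph-partitioning or TSP benchmarks.

```python
# tau-EO sketch on a toy spin ring: rank spins by local fitness and flip one
# chosen with power-law bias toward the worst, tracking the best state seen.
import numpy as np

rng = np.random.default_rng(11)
n, tau = 100, 1.4
J = rng.choice([-1.0, 1.0], size=n)      # random couplings on a ring
s = rng.choice([-1, 1], size=n)

def local_fitness(s):
    return s * J * np.roll(s, -1)        # per-spin satisfaction of its bond

best, best_e = s.copy(), -local_fitness(s).sum()
for _ in range(20000):
    ranks = np.argsort(local_fitness(s))            # worst components first
    u = rng.random()
    k = min(int(u ** (1.0 / (1.0 - tau))), n) - 1   # power-law rank ~ r^-tau
    s[ranks[k]] *= -1                               # flip the selected unfit spin
    e = -local_fitness(s).sum()                     # energy of the new state
    if e < best_e:
        best, best_e = s.copy(), e
print(best_e)
```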
Solving a Class of Stochastic Mixed-Integer Programs With Branch and Price
2006-01-01
The problems are related to a two-dimensional knapsack problem; for a given m, the objective value gi does not depend on the variance index v. The models are formulated for solution by a branch-and-price algorithm (B&P). We then survey a number of examples, and use a stochastic facility-location problem (SFLP) as an illustration.
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Georgios, E-mail: garab@math.uoc.gr; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003; Katsoulakis, Markos A., E-mail: markos@math.umass.edu
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated (“coupled”) stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
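The variance-reduction idea behind coupling can be seen in a toy setting: a finite-difference sensitivity estimator whose perturbed and unperturbed chains share the same random stream, i.e., the Common Random Number baseline that the goal-oriented method improves upon. The birth-death model, rates, and sample sizes below are all illustrative.

```python
import numpy as np

def terminal_pop(k_birth, k_death=1.0, x0=20, t_end=2.0, seed=None):
    """Terminal population of a birth-death chain, run on a dedicated RNG so
    two simulations can share (couple) or not share their randomness."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    while x > 0:
        a0 = (k_birth + k_death) * x
        dt = rng.exponential(1.0 / a0)
        if t + dt > t_end:
            break
        t += dt
        x += 1 if rng.random() < k_birth / (k_birth + k_death) else -1
    return x

def fd_sensitivity(k, eps=0.05, n=2000, coupled=True):
    """Finite-difference estimate of d E[X_T]/dk, with or without common
    random numbers shared between the perturbed and unperturbed chains."""
    seeds = np.arange(n)
    up = np.array([terminal_pop(k + eps, seed=s) for s in seeds])
    lo_seeds = seeds if coupled else seeds + n      # reuse or refresh streams
    dn = np.array([terminal_pop(k - eps, seed=s) for s in lo_seeds])
    d = (up - dn) / (2 * eps)
    return d.mean(), d.std(ddof=1) / np.sqrt(n)     # estimate and std. error

for coupled in (False, True):
    est, se = fd_sensitivity(0.8, coupled=coupled)
    print(f"coupled={coupled}:  dE[X_T]/dk ~ {est:.2f} +/- {se:.2f}")
```

Running both cases shows the correlated pair giving a far smaller standard error for the same sample count, which is exactly the effect the paper's goal-oriented coupling is designed to push further.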
NASA Astrophysics Data System (ADS)
Young, Frederic; Siegel, Edward
Cook-Levin theorem algorithmic computational-complexity (C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points is exploited via Siegel FUZZYICS = CATEGORYICS = ANALOGYICS = PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS and the Aristotle "square-of-opposition" tabular list-format truth-table matrix analytics, which predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus decelerate Harel [Algorithmics (1987)] and Sipser [Introduction to the Theory of Computation (1997)] algorithmic C-C: "NIT-picking", to optimize optimization-problems optimally (OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, "NIT-picking" is "noise" power-spectrum qualitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models (Turing machines, finite-state models, finite automata, and the discrete-mathematics graph-theory equivalence to physics Feynman diagrams) are identified as early-days, once-workable but limiting crutches that now only impede new insights.
NASA Technical Reports Server (NTRS)
Manning, Robert M.
1990-01-01
A static and dynamic rain-attenuation model is presented which describes the statistics of attenuation on an arbitrarily specified satellite link for any location for which there are long-term rainfall statistics. The model may be used in the design of optimal stochastic control algorithms to mitigate the effects of attenuation and maintain link reliability. A rain-statistics data base is compiled, which makes it possible to apply the model to any location in the continental U.S. with a resolution of 0.5 degrees in latitude and longitude. The model predictions are compared with experimental observations, showing good agreement.
Digital program for solving the linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, B.
1975-01-01
A computer program is described which solves the linear stochastic optimal control and estimation (LSOCE) problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a performance index which is quadratic in state and control variables. The LSOCE problem and solution are outlined; brief descriptions are given of the solution algorithms, and complete descriptions of each subroutine, including usage information and digital listings, are provided. A test case is included, as well as information on the IBM 7090-7094 DCS time and storage requirements.
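By certainty equivalence, an LSOCE design of this kind splits into an LQR gain and a Kalman gain, each obtained from a discrete algebraic Riccati equation. A minimal modern sketch follows; the plant matrices are made up for illustration, and this is of course not the original IBM 7090-7094 code.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Plant: x[k+1] = A x + B u + w,  y = C x + v,  w ~ N(0, W), v ~ N(0, V)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
W, V = 0.01 * np.eye(2), np.array([[0.1]])
Q, R = np.eye(2), np.array([[1.0]])          # quadratic state/control weights

# LQR gain u = -K x_hat from the control Riccati equation
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Kalman gain from the dual (estimation) Riccati equation
S = solve_discrete_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)

# One step of the combined LQG controller/estimator
x_hat, y = np.zeros((2, 1)), np.array([[0.5]])
u = -K @ x_hat                               # control from the state estimate
x_pred = A @ x_hat + B @ u                   # predict
x_hat = x_pred + L @ (y - C @ x_pred)        # correct with the measurement
print("LQR gain K:", K, "\nKalman gain L:", L.ravel())
```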
Mesh Denoising based on Normal Voting Tensor and Binary Optimization.
Yadav, Sunil Kumar; Reitebuch, Ulrich; Polthier, Konrad
2017-08-17
This paper presents a two-stage mesh denoising algorithm. Unlike traditional averaging approaches, our approach uses an element-based normal voting tensor to compute smooth surfaces. By introducing a binary optimization on the proposed tensor together with a local binary neighborhood concept, our algorithm better retains sharp features and produces smoother umbilical regions than previous approaches. In addition, we provide a stochastic analysis of the different kinds of noise based on the average edge length. The quantitative results demonstrate that the performance of our method is better than that of state-of-the-art smoothing approaches.
DOT National Transportation Integrated Search
2010-10-25
Real-time information is important for travelers' routing decisions in uncertain networks by enabling online adaptation to revealed traffic conditions. Usually there are spatial and/or temporal limitations in traveler information. In this research, a...
Designing robust control laws using genetic algorithms
NASA Technical Reports Server (NTRS)
Marrison, Chris
1994-01-01
The purpose of this research is to create a method of finding practical, robust control laws. The robustness of a controller is judged by Stochastic Robustness metrics and the level of robustness is optimized by searching for design parameters that minimize a robustness cost function.
DOT National Transportation Integrated Search
2013-09-01
Decreasing the number of accidents at highway-railway grade crossings (HRGCs) is an important goal in the transportation field. The preemption of traffic signal operations at HRGCs is widely used to prevent accidents by clearing vehicles off the trac...
Efficient boundary hunting via vector quantization
NASA Astrophysics Data System (ADS)
Diamantini, Claudia; Panti, Maurizio
2001-03-01
A great amount of information about a classification problem is contained in those instances falling near the decision boundary. This intuition dates back to the earliest studies in pattern recognition, and to more recent adaptive approaches to so-called boundary hunting, such as the work of Aha et al. on Instance Based Learning and the work of Vapnik et al. on Support Vector Machines. The latter work is of particular interest, since theoretical and experimental results ensure the accuracy of boundary reconstruction. However, its optimization approach has heavy computational and memory requirements, which limits its application to huge amounts of data. In this paper we describe an alternative approach to boundary hunting based on adaptive labeled quantization architectures. The adaptation is performed by a stochastic gradient algorithm for the minimization of the error probability. Error probability minimization guarantees an accurate approximation of the optimal decision boundary, while the use of a stochastic gradient algorithm provides an efficient method to reach such an approximation. Comparisons to Support Vector Machines are also presented.
Cheema, Jitender Jit Singh; Sankpal, Narendra V; Tambe, Sanjeev S; Kulkarni, Bhaskar D
2002-01-01
This article presents two hybrid strategies for the modeling and optimization of the glucose to gluconic acid batch bioprocess. In the hybrid approaches, first a novel artificial intelligence formalism, namely, genetic programming (GP), is used to develop a process model solely from the historic process input-output data. In the next step, the input space of the GP-based model, representing process operating conditions, is optimized using two stochastic optimization (SO) formalisms, viz., genetic algorithms (GAs) and simultaneous perturbation stochastic approximation (SPSA). These SO formalisms possess certain unique advantages over the commonly used gradient-based optimization techniques. The principal advantage of the GP-GA and GP-SPSA hybrid techniques is that process modeling and optimization can be performed exclusively from the process input-output data, without invoking detailed knowledge of the process phenomenology. The GP-GA and GP-SPSA techniques have been employed for modeling and optimization of the glucose to gluconic acid bioprocess, and the optimized process operating conditions obtained thereby have been compared with those obtained using two other hybrid modeling-optimization paradigms integrating artificial neural networks (ANNs) and GA/SPSA formalisms. Finally, the overall optimized operating conditions given by the GP-GA method, when verified experimentally, resulted in a significant improvement in the gluconic acid yield. The hybrid strategies presented here are generic in nature and can be employed for modeling and optimization of a wide variety of batch and continuous bioprocesses.
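SPSA, used in the GP-SPSA hybrid, approximates the full gradient from only two noisy function evaluations per iteration, regardless of dimension, by perturbing every coordinate simultaneously with random signs. A sketch of the standard form follows; the gain sequences and the noisy quadratic test function are illustrative, not the bioprocess objective.

```python
import numpy as np

def spsa_minimize(f, theta0, iters=500, a=0.1, c=0.1, alpha=0.602, gamma=0.101,
                  rng=None):
    """Simultaneous perturbation stochastic approximation (Spall's form)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k ** alpha                  # decaying step-size gain
        ck = c / k ** gamma                  # decaying perturbation gain
        delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher signs
        # two (possibly noisy) evaluations estimate every partial derivative
        g_hat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck) / delta
        theta -= ak * g_hat
    return theta

# noisy quadratic: SPSA tolerates measurement noise in the objective
f = lambda x: np.sum((x - 3.0) ** 2) + 0.01 * np.random.randn()
print(spsa_minimize(f, np.zeros(5)))
```

The two-evaluation gradient estimate is what makes SPSA attractive when, as here, each objective evaluation means running a process model.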
NASA Astrophysics Data System (ADS)
Cody, Brent M.; Baù, Domenico; González-Nicolás, Ana
2015-09-01
Geological carbon sequestration (GCS) has been identified as having the potential to reduce increasing atmospheric concentrations of carbon dioxide (CO2). However, a global impact will only be achieved if GCS is cost-effectively and safely implemented on a massive scale. This work presents a computationally efficient methodology for identifying optimal injection strategies at candidate GCS sites having uncertainty associated with caprock permeability, effective compressibility, and aquifer permeability. A multi-objective evolutionary optimization algorithm is used to heuristically determine non-dominated solutions between the following two competing objectives: (1) maximize mass of CO2 sequestered and (2) minimize project cost. A semi-analytical algorithm is used to estimate CO2 leakage mass rather than a numerical model, enabling the study of GCS sites having vastly different domain characteristics. The stochastic optimization framework presented herein is applied to a feasibility study of GCS in a brine aquifer in the Michigan Basin (MB), USA. Eight optimization test cases are performed to investigate the impact of decision-maker (DM) preferences on Pareto-optimal objective-function values and carbon-injection strategies. This analysis shows that the feasibility of GCS at the MB test site is highly dependent upon the DM's risk-aversion preference and the degree of uncertainty associated with caprock integrity. Finally, large gains in computational efficiency achieved using parallel processing and archiving are discussed.
Fleet Assignment Using Collective Intelligence
NASA Technical Reports Server (NTRS)
Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.
2004-01-01
Product distribution theory is a new collective intelligence-based framework for analyzing and controlling distributed systems. Its usefulness in distributed stochastic optimization is illustrated here through an airline fleet assignment problem. This problem involves the allocation of aircraft to a set of flight legs in order to meet passenger demand, while satisfying a variety of linear and non-linear constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of this new stochastic optimization algorithm to a non-linear objective cold start fleet assignment problem. Results show that the optimizer can successfully solve such highly-constrained problems (130 variables, 184 constraints).
NASA Technical Reports Server (NTRS)
Sandell, N. R., Jr.; Athans, M.
1975-01-01
The development of the theory of the finite-state, finite-memory (FSFM) stochastic control problem is discussed. The sufficiency of the FSFM minimum principle (which is in general only a necessary condition) was investigated. By introducing the notion of a signaling strategy as defined in the literature on games, conditions under which the FSFM minimum principle is sufficient were determined. This result explicitly interconnects the information structure of the FSFM problem with its optimality conditions. The min-H algorithm for the FSFM problem was studied. It is demonstrated that a version of the algorithm always converges to a particular type of local minimum termed a person-by-person extremal.
Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P
2015-11-01
This paper presents an efficient approach to identifying different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, the Colliding Bodies Optimization (CBO) algorithm. The performance measures of the CBO-based optimization approach, such as precision and accuracy, are justified by the minimum output mean square error (MSE), which signifies that the amounts of bias and variance in the output domain are also the least. It is also observed that optimizing the output MSE in the presence of outliers consistently results in very close estimates of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. The optimum MSE values, computational times, and statistical properties of the MSEs are all found to be superior to those of other existing stochastic-algorithm-based approaches reported in the recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sochi, Taha
2016-09-01
Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with the energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all these algorithms for all these types of fluid agree very well with the analytically derived solutions obtained from the traditional methods, which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.
A fuzzy reinforcement learning approach to power control in wireless transmitters.
Vengerov, David; Bambos, Nicholas; Berenji, Hamid R
2005-08-01
We address the issue of power-controlled shared channel access in wireless networks supporting packetized data traffic. We formulate this problem using the dynamic programming framework and present a new distributed fuzzy reinforcement learning algorithm (ACFRL-2) capable of adequately solving a class of problems to which the power control problem belongs. Our experimental results show that the algorithm converges almost deterministically to a neighborhood of optimal parameter values, as opposed to a very noisy stochastic convergence of earlier algorithms. The main tradeoff facing a transmitter is to balance its current power level with future backlog in the presence of stochastically changing interference. Simulation experiments demonstrate that the ACFRL-2 algorithm achieves significant performance gains over the standard power control approach used in CDMA2000. Such a large improvement is explained by the fact that ACFRL-2 allows transmitters to learn implicit coordination policies, which back off under stressful channel conditions as opposed to engaging in escalating "power wars."
Siddiqui, Hasib; Bouman, Charles A
2007-03-01
Conventional halftoning methods employed in electrophotographic printers tend to produce Moiré artifacts when used for printing images scanned from printed material, such as books and magazines. We present a novel approach for descreening color scanned documents aimed at providing an efficient solution to the Moiré problem in practical imaging devices, including copiers and multifunction printers. The algorithm works by combining two nonlinear image-processing techniques, resolution synthesis-based denoising (RSD), and modified smallest univalue segment assimilating nucleus (SUSAN) filtering. The RSD predictor is based on a stochastic image model whose parameters are optimized beforehand in a separate training procedure. Using the optimized parameters, RSD classifies the local window around the current pixel in the scanned image and applies filters optimized for the selected classes. The output of the RSD predictor is treated as a first-order estimate to the descreened image. The modified SUSAN filter uses the output of RSD for performing an edge-preserving smoothing on the raw scanned data and produces the final output of the descreening algorithm. Our method does not require any knowledge of the screening method, such as the screen frequency or dither matrix coefficients, that produced the printed original. The proposed scheme not only suppresses the Moiré artifacts, but, in addition, can be trained with intrinsic sharpening for deblurring scanned documents. Finally, once optimized for a periodic clustered-dot halftoning method, the same algorithm can be used to inverse halftone scanned images containing stochastic error diffusion halftone noise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumuluru, Jaya Shankar; McCulloch, Richard Chet James
In this work a new hybrid genetic algorithm was developed which combines a rudimentary adaptive steepest ascent hill climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent), computational resources are conserved and the solution converges rapidly when compared to either algorithm alone. In genetic algorithms, natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest ascent algorithm, each variable is perturbed by a small amount and the variable that caused the most improvement is incremented by a small step. If the direction of most benefit is exactly opposite of the previous direction with the most benefit, then the step size is reduced by a factor of 2; thus the step size adapts to the terrain. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features such as bounding the solution space and weighting the objective functions individually are also built into the interface. The algorithm developed was tested to optimize the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature), pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm when compared to MATLAB's native evolutionary optimization tool.
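The adaptive step-size rule described above (halve the step when the most beneficial direction reverses) can be sketched in isolation; the objective, probe size, and initial step below are illustrative, and the evolutionary half of the hybrid is omitted.

```python
import numpy as np

def adaptive_hill_climb(f, x0, step=0.5, eps=1e-3, iters=200):
    """Adaptive steepest ascent: probe each variable, move along the most
    beneficial direction, halving the step when that direction flips sign."""
    x = np.asarray(x0, dtype=float)
    last_dir = None
    for _ in range(iters):
        base = f(x)
        gains, moves = [], []
        for i in range(len(x)):              # perturb each coordinate +/- eps
            for s in (+eps, -eps):
                trial = x.copy(); trial[i] += s
                gains.append(f(trial) - base); moves.append((i, np.sign(s)))
        best = int(np.argmax(gains))
        if gains[best] <= 0:
            break                            # local maximum at this resolution
        i, s = moves[best]
        if last_dir is not None and last_dir == (i, -s):
            step /= 2.0                      # direction reversed: shrink the step
        x[i] += s * step
        last_dir = (i, s)
    return x

# toy objective with a maximum at (1, 1, 1)
print(adaptive_hill_climb(lambda v: -np.sum((v - 1.0) ** 2), np.zeros(3)))
```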
Robust stochastic optimization for reservoir operation
NASA Astrophysics Data System (ADS)
Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin
2015-01-01
Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid and the performance of the algorithms may be undermined. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.
MARKOV: A methodology for the solution of infinite time horizon MARKOV decision processes
Williams, B.K.
1988-01-01
Algorithms are described for determining optimal policies for finite state, finite action, infinite discrete time horizon Markov decision processes. Both value-improvement and policy-improvement techniques are used in the algorithms. Computing procedures are also described. The algorithms are appropriate for processes that are either finite or infinite, deterministic or stochastic, discounted or undiscounted, in any meaningful combination of these features. Computing procedures are described in terms of initial data processing, bound improvements, process reduction, and testing and solution. Application of the methodology is illustrated with an example involving natural resource management. Management implications of certain hypothesized relationships between mallard survival and harvest rates are addressed by applying the optimality procedures to mallard population models.
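The value-improvement technique is ordinary value iteration for the discounted infinite-horizon case; a minimal sketch on a made-up two-state, two-action process (not the mallard model) follows.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P[a, s, s'] transition probabilities, R[a, s] expected rewards.
    Returns the optimal value function and a greedy stationary policy."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] * V[s']
        Q = R + gamma * P @ V
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol * (1 - gamma) / (2 * gamma):
            return V_new, Q.argmax(axis=0)   # epsilon-optimal value and policy
        V = V_new

P = np.array([[[0.9, 0.1], [0.2, 0.8]],      # action 0
              [[0.5, 0.5], [0.0, 1.0]]])     # action 1
R = np.array([[1.0, 0.0],                    # reward of action 0 in s0, s1
              [2.0, -1.0]])
V, policy = value_iteration(P, R)
print("V* =", V, "policy =", policy)
```

Policy improvement differs only in alternating a policy-evaluation solve with the greedy step, and the same loop handles discounted or (with care) undiscounted variants, matching the generality the abstract claims.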
NASA Astrophysics Data System (ADS)
Tian, Yuexin; Gao, Kun; Liu, Ying; Han, Lu
2015-08-01
Aiming at the nonlinear and non-Gaussian features of real infrared scenes, an optimal nonlinear filtering based algorithm for infrared dim target track-before-detect applications is proposed. It uses nonlinear theory to construct the state and observation models and uses the spectral separation scheme based Wiener chaos expansion method to solve the stochastic differential equation of the constructed models. In order to improve computational efficiency, the most time-consuming operations, which are independent of the observation data, are carried out in a stage before observations arrive; the remaining observation-dependent computations are then performed rapidly. Simulation results show that the algorithm possesses excellent detection performance and is more suitable for real-time processing.
Metaheuristic simulation optimisation for the stochastic multi-retailer supply chain
NASA Astrophysics Data System (ADS)
Omar, Marina; Mustaffa, Noorfa Haszlinna H.; Othman, Siti Norsyahida
2013-04-01
Supply Chain Management (SCM) is an important activity in all producing facilities and in many organizations, enabling vendors, manufacturers and suppliers to interact gainfully and to plan the flow of goods and services optimally. Simulation optimization approaches are now widely used in research on finding the best solutions for decision-making processes in SCM, which generally face considerable complexity, large sources of uncertainty and various decision factors. The metaheuristic method is the most popular simulation optimization approach. However, very few studies have applied this approach to optimizing simulation models of supply chains. Thus, this paper evaluates the performance of a metaheuristic method for stochastic supply chains in determining the flexible inventory replenishment parameters that minimize the total operating cost. The simulation optimization model is based on the Bees Algorithm (BA), which has been widely applied in engineering applications such as training neural networks for pattern recognition. BA is a recent member of the meta-heuristics family; it models the natural food-foraging behavior of honey bees, which use mechanisms such as the waggle dance to locate food sources optimally and to search for new ones. This makes them a good candidate for developing new algorithms for solving optimization problems. The model considers an outbound centralised distribution system consisting of one supplier and three identical retailers, with demand assumed to be independent and identically distributed and unlimited supply capacity at the supplier.
Orio, Patricio; Soudry, Daniel
2012-01-01
Background The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as in MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled gating particles, while the DA was modeled using uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. Main Contributions We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable – allowing an easy, transparent and efficient DA implementation, avoiding unnecessary approximations. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling. Also, the simulation efficiency of this DA method demonstrated considerable superiority over MC methods, except when short time steps or low channel numbers were used. PMID:22629320
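The flavor of a diffusion approximation is easiest to see for a single two-state gating variable, where the Langevin SDE adds to the deterministic kinetics a noise term that shrinks with the channel count N. An Euler-Maruyama sketch follows; the rates, N, and the clipping of the open fraction to [0, 1] are illustrative simplifications, not the paper's general coupled-particle derivation.

```python
import numpy as np

def gating_da(alpha=0.5, beta=0.2, N=1000, dt=0.01, t_end=50.0, rng=None):
    """Euler-Maruyama for dn = [a(1-n) - b n] dt + sqrt((a(1-n) + b n)/N) dW,
    the diffusion approximation of N independent two-state channels."""
    rng = np.random.default_rng() if rng is None else rng
    steps = int(t_end / dt)
    n = np.empty(steps + 1)
    n[0] = alpha / (alpha + beta)            # start at the deterministic steady state
    for k in range(steps):
        drift = alpha * (1 - n[k]) - beta * n[k]
        diffusion = np.sqrt(max(alpha * (1 - n[k]) + beta * n[k], 0.0) / N)
        n[k + 1] = n[k] + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
        n[k + 1] = min(max(n[k + 1], 0.0), 1.0)   # keep the open fraction in [0, 1]
    return n

traj = gating_da(rng=np.random.default_rng(0))
print("mean open fraction:", traj.mean(), " std:", traj.std())
```

One SDE step replaces N coin flips per time step, which is the source of the speedup over Markov-chain simulation noted in the abstract.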
NASA Astrophysics Data System (ADS)
Gen, Mitsuo; Lin, Lin
Many combinatorial optimization problems arising in real-world industrial engineering and operations research are very complex in nature and quite hard to solve by conventional techniques. Since the 1960s, there has been an increasing interest in imitating living beings to solve such hard combinatorial optimization problems. Simulating the natural evolutionary process of human beings results in stochastic optimization techniques called evolutionary algorithms (EAs), which can often outperform conventional optimization methods when applied to difficult real-world problems. In this survey paper, we provide a comprehensive survey of the current state-of-the-art in the use of EAs in manufacturing and logistics systems. In order to demonstrate that EAs are powerful and broadly applicable stochastic search and optimization techniques, we deal with the following engineering design problems: transportation planning models, layout design models and two-stage logistics models in logistics systems; and job-shop scheduling and resource-constrained project scheduling in manufacturing systems.
Coupled stochastic soil moisture simulation-optimization model of deficit irrigation
NASA Astrophysics Data System (ADS)
Alizadeh, Hosein; Mousavi, S. Jamshid
2013-07-01
This study presents an explicit stochastic optimization-simulation model of short-term deficit irrigation management for large-scale irrigation districts. The model, which is a nonlinear nonconvex program with an economic objective function, is built on an agrohydrological simulation component. The simulation component integrates (1) an explicit stochastic model of soil moisture dynamics of the crop-root zone considering the interaction of stochastic rainfall and irrigation with shallow water table effects, (2) a conceptual root zone salt balance model, and (3) the FAO crop yield model. A Particle Swarm Optimization algorithm, linked to the simulation component, solves the resulting nonconvex program with significantly better computational performance than a Monte Carlo-based implicit stochastic optimization model. The model has been tested first by applying it to single-crop irrigation problems, through which the effects of the severity of water deficit on the objective function (net benefit), root-zone water balance, and irrigation water needs have been assessed. Then, the model has been applied to the Dasht-e-Abbas and Ein-khosh Fakkeh Irrigation Districts (DAID and EFID) of the Karkheh Basin in the southwest of Iran. While the maximum net benefit has been obtained for a stress-avoidance (SA) irrigation policy, the highest water profitability has resulted when only about 60% of the water used in the SA policy is applied. The DAID, with respectively 33% of the total cultivated area and 37% of the total applied water, has produced only 14% of the total net benefit due to low-valued crops and adverse soil and shallow water table conditions.
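A bare-bones global-best PSO of the kind linked to such simulation components is sketched below; the inertia and acceleration coefficients are standard textbook values, and the Rastrigin test function stands in for the agrohydrological simulator.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                 rng=None):
    """Global-best particle swarm optimization over box bounds (lo, hi)."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = (np.asarray(b, dtype=float) for b in bounds)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()              # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                    # keep particles in the box
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Rastrigin test function: many local minima, global minimum at the origin
rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
print(pso_minimize(rastrigin, (np.full(4, -5.12), np.full(4, 5.12))))
```

In the coupled setting of the study, each call to `f` would run the soil-moisture simulation, so the particle count and iteration budget dominate the total cost.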
NASA Astrophysics Data System (ADS)
Abdul Rani, Khairul Najmy; Abdulmalek, Mohamedfareq; A. Rahim, Hasliza; Siew Chin, Neoh; Abd Wahab, Alawiyah
2017-04-01
This research proposes various versions of a modified cuckoo search (MCS) metaheuristic algorithm deploying the strength Pareto evolutionary algorithm (SPEA) multiobjective (MO) optimization technique in rectangular array geometry synthesis. Specifically, the MCS algorithm is proposed by incorporating the Roulette wheel selection operator to choose the initial host nests (individuals) that give better results, an adaptive inertia weight to control the position exploration of the potential best host nests (solutions), and a dynamic discovery rate to manage the fraction probability of finding the best host nests in the 3-dimensional search space. In addition, the MCS algorithm is hybridized with the particle swarm optimization (PSO) and hill climbing (HC) stochastic techniques along with the standard strength Pareto evolutionary algorithm (SPEA), forming the MCSPSOSPEA and MCSHCSPEA, respectively. All the proposed MCS-based algorithms are examined to perform MO optimization on Zitzler-Deb-Thiele's (ZDT's) test functions. Pareto optimum trade-offs are made to generate a set of three non-dominated solutions, which are the locations, excitation amplitudes, and excitation phases of the array elements, respectively. Overall, the simulations demonstrate that the proposed MCSPSOSPEA outperforms other compatible competitors in gaining a high antenna directivity, small half-power beamwidth (HPBW), low average side lobe level (SLL) suppression, and/or significant predefined nulls mitigation, simultaneously.
NASA Astrophysics Data System (ADS)
Tavakkoli-Moghaddam, Reza; Vazifeh-Noshafagh, Samira; Taleizadeh, Ata Allah; Hajipour, Vahid; Mahmoudi, Amin
2017-01-01
This article presents a new multi-objective model for a facility location problem with congestion and pricing policies. The model considers situations in which immobile service facilities are congested by stochastic demand following M/M/m/k queues. The presented model belongs to the class of mixed-integer nonlinear programming models and NP-hard problems. To solve such a hard model, a new multi-objective optimization algorithm based on vibration theory, namely multi-objective vibration damping optimization (MOVDO), is developed. In order to tune the algorithm's parameters, the Taguchi approach using a response metric is implemented. The computational results are compared with those of the non-dominated ranking genetic algorithm and the non-dominated sorting genetic algorithm. The outputs demonstrate the robustness of the proposed MOVDO on large-sized problems.
Dräger, Andreas; Kronfeld, Marcel; Ziller, Michael J; Supper, Jochen; Planatscher, Hannes; Magnus, Jørgen B; Oldiges, Marco; Kohlbacher, Oliver; Zell, Andreas
2009-01-01
Background To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e.g., the selection of approximate rate laws in step two, as specific equations are often unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process, with its numerous choices and the mutual influence between them, makes it hard to single out the best modeling approach for a given problem. Results We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten and convenience kinetics as well as the stochastic Langevin equation. In addition, we introduce two modeling approaches for feedback inhibition to the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) coarse-grained comparison of the algorithms on all models and (2) fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis. Conclusion A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach, followed by a reversible generalized mass action kinetics model. A Langevin model is advisable for taking stochastic effects into account. To estimate the model parameters, three algorithms are particularly useful: for first attempts, the settings-free Tribes algorithm yields valuable results. Particle swarm optimization and differential evolution provide significantly better results with appropriate settings. PMID:19144170
Global optimization methods for engineering design
NASA Technical Reports Server (NTRS)
Arora, Jasbir S.
1990-01-01
The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality; however, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, existence of a global minimum is guaranteed. However, in view of the fact that no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy the Inequality. The exhaustive search can be organized in such a way that the entire design space need not be searched for the solution, which somewhat reduces the computational burden. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods; more testing is needed, and a general, robust, and efficient local minimizer is required. All numerical calculations used IDESIGN, which is based on a sequential quadratic programming algorithm; since the feasible set keeps shrinking, a good algorithm for finding an initial feasible point is also required. Such algorithms need to be developed and evaluated.
Application of IFT and SPSA to servo system control.
Rădac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M; Preitl, Stefan
2011-12-01
This paper treats the application of two data-based model-free gradient-based stochastic optimization techniques, i.e., iterative feedback tuning (IFT) and simultaneous perturbation stochastic approximation (SPSA), to servo system control. The representative case of controlled processes modeled by second-order systems with an integral component is discussed. New IFT and SPSA algorithms are suggested to tune the parameters of the state feedback controllers with an integrator in the linear-quadratic-Gaussian (LQG) problem formulation. An implementation case study concerning the LQG-based design of an angular position controller for a direct current servo system laboratory equipment is included to highlight the pros and cons of IFT and SPSA from an application's point of view. The comparison of IFT and SPSA algorithms is focused on an insight into their implementation.
Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor
2012-01-01
A methodology for the solution of Painlevé equation-I is presented using a computational intelligence technique based on neural networks and particle swarm optimization hybridized with an active set algorithm. The mathematical model of the equation is developed with the help of a linear combination of feed-forward artificial neural networks that defines the unsupervised error of the model. This error is minimized subject to the availability of appropriate weights of the networks. The learning of the weights is carried out using the particle swarm optimization algorithm, a viable method for global search, hybridized with the active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed based on a large number of independent runs and their comprehensive statistical analysis. Comparative studies of the results obtained are made with MATHEMATICA solutions, as well as with the variational iteration method and the homotopy perturbation method. PMID:22919371
Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li
2017-03-01
The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
Multi-period project portfolio selection under risk considerations and stochastic income
NASA Astrophysics Data System (ADS)
Tofighian, Ali Asghar; Moezzi, Hamid; Khakzar Barfuei, Morteza; Shafiee, Mahmood
2018-02-01
This paper deals with the multi-period project portfolio selection problem. In this problem, the available budget is invested in the best portfolio of projects in each period such that the net profit is maximized. We also consider more realistic assumptions to cover a wider range of applications than those reported in previous studies. A novel mathematical model is presented to solve the problem, considering risks, stochastic incomes, and the possibility of investing extra budget in each time period. Due to the complexity of the problem, an effective meta-heuristic method hybridized with a local search procedure is presented to solve the problem. The algorithm is based on the genetic algorithm (GA), which is a prominent method for solving this type of problem. The GA is enhanced by a new solution representation and well-selected operators, and is hybridized with a local search mechanism to obtain better solutions in a shorter time. The performance of the proposed algorithm is then compared with well-known algorithms, such as the basic genetic algorithm (GA), particle swarm optimization (PSO), and the electromagnetism-like algorithm (EM-like), by means of some prominent indicators. The computational results show the superiority of the proposed algorithm in terms of accuracy, robustness and computation time. Finally, the proposed algorithm is combined with PSO to improve the computing time considerably.
Time-ordered product expansions for computational stochastic system biology.
Mjolsness, Eric
2013-06-01
The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie's stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems.
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
Structural designs generated by the traditional method, the optimization method, and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design, and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest, and some variation was noticed in the designs, which may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, and weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of loads and material properties remained a challenge.
Bioinspired Concepts: Unified Theory for Complex Biological and Engineering Systems
2006-01-01
Data flows of finite size arrive at the system randomly. For such a system, we propose a modified dual scheduling algorithm that stabilizes ... We compute the efficiency of the controller over finite and infinite time intervals, and since the controller is optimal, this yields hard limits ... and highly optimized tolerance.
Review: Optimization methods for groundwater modeling and management
NASA Astrophysics Data System (ADS)
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
Stochastic DG Placement for Conservation Voltage Reduction Based on Multiple Replications Procedure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhaoyu; Chen, Bokan; Wang, Jianhui
2015-06-01
Conservation voltage reduction (CVR) and distributed-generation (DG) integration are popular strategies implemented by utilities to improve energy efficiency. This paper investigates the interactions between CVR and DG placement to minimize load consumption in distribution networks, while keeping the lowest voltage level within the predefined range. The optimal placement of DG units is formulated as a stochastic optimization problem considering the uncertainty of DG outputs and load consumptions. A sample average approximation algorithm-based technique is developed to solve the formulated problem effectively. A multiple replications procedure is developed to test the stability of the solution and calculate the confidence interval of the gap between the candidate solution and optimal solution. The proposed method has been applied to the IEEE 37-bus distribution test system with different scenarios. The numerical results indicate that the implementations of CVR and DG, if combined, can achieve significant energy savings.
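The sample average approximation with a multiple replications check can be sketched on a toy newsvendor problem: each replication solves the problem on its own sample, and the spread of the replication optima yields a confidence statement about the optimality gap. All numbers below are illustrative; the paper applies the procedure to the DG-placement program, not to a newsvendor.

```python
import numpy as np

def saa_newsvendor(price=5.0, cost=3.0, n=500, reps=20, rng=None):
    """Multiple SAA replications for max_q E[price*min(q, D)] - cost*q,
    with demand D ~ N(100, 20). Each replication yields its own optimum."""
    rng = np.random.default_rng() if rng is None else rng
    q_grid = np.arange(50, 151)
    opt_vals, opt_qs = [], []
    for _ in range(reps):
        demand = rng.normal(100.0, 20.0, n)
        # sample-average objective for every candidate order quantity
        profit = price * np.minimum(q_grid[:, None], demand[None, :]) \
                 - cost * q_grid[:, None]
        avg = profit.mean(axis=1)
        opt_qs.append(q_grid[avg.argmax()]); opt_vals.append(avg.max())
    opt_vals = np.array(opt_vals)
    half_width = 1.96 * opt_vals.std(ddof=1) / np.sqrt(reps)
    # the replication mean over-estimates the true optimum (upper statistical
    # bound); evaluating one fixed candidate q on fresh samples would supply
    # the matching lower bound of the gap estimate
    print(f"SAA optimum ~ {opt_vals.mean():.2f} +/- {half_width:.2f}, "
          f"typical q* = {np.median(opt_qs):.0f}")

saa_newsvendor(rng=np.random.default_rng(7))
```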
Optimum Parameters of a Tuned Liquid Column Damper in a Wind Turbine Subject to Stochastic Load
NASA Astrophysics Data System (ADS)
Alkmim, M. H.; de Morais, M. V. G.; Fabro, A. T.
2017-12-01
Parameter optimization for tuned liquid column dampers (TLCD), a class of passive structural control, have been previously proposed in the literature for reducing vibration in wind turbines, and several other applications. However, most of the available work consider the wind excitation as either a deterministic harmonic load or random load with white noise spectra. In this paper, a global direct search optimization algorithm to reduce vibration of a tuned liquid column damper (TLCD), a class of passive structural control device, is presented. The objective is to find optimized parameters for the TLCD under stochastic load from different wind power spectral density. A verification is made considering the analytical solution of undamped primary system under white noise excitation by comparing with result from the literature. Finally, it is shown that different wind profiles can significantly affect the optimum TLCD parameters.
Fuel management optimization using genetic algorithms and code independence
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1994-12-31
Fuel management optimization is a hard problem for traditional optimization techniques. Loading pattern optimization is a large combinatorial problem without analytical derivative information. Therefore, methods designed for continuous functions, such as linear programming, do not always work well. Genetic algorithms (GAs) address these problems and, therefore, appear ideal for fuel management optimization. They do not require derivative information and work well with combinatorial functions. GAs are a stochastic method based on concepts from biological genetics. They take a group of candidate solutions, called the population, and use selection, crossover, and mutation operators to create the next generation of better solutions. The selection operator is a "survival-of-the-fittest" operation and chooses the solutions for the next generation. The crossover operator is analogous to biological mating, where children inherit a mixture of traits from their parents, and the mutation operator makes small random changes to the solutions.
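The three operators named above compose into a very small GA; the binary encoding, tournament selection, one-point crossover, and bit-flip mutation below are generic textbook choices, not a loading-pattern encoding.

```python
import numpy as np

def genetic_algorithm(fitness, n_bits=20, pop_size=40, gens=100,
                      p_cross=0.9, p_mut=0.01, rng=None):
    """Minimal GA: tournament selection, one-point crossover, bit mutation."""
    rng = np.random.default_rng() if rng is None else rng
    pop = rng.integers(0, 2, (pop_size, n_bits))
    for _ in range(gens):
        fit = np.array([fitness(ind) for ind in pop])
        # selection: each slot filled by the fitter of two random candidates
        a, b = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(fit[a] >= fit[b], a, b)]
        # crossover: splice each consecutive pair at a random cut point
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                cut = rng.integers(1, n_bits)
                children[i, cut:], children[i + 1, cut:] = \
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy()
        # mutation: flip each bit with small probability
        flips = rng.random(children.shape) < p_mut
        pop = np.where(flips, 1 - children, children)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[fit.argmax()], fit.max()

# one-max toy problem: fitness is simply the number of 1 bits
best, score = genetic_algorithm(lambda ind: ind.sum())
print(best, score)
```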
Hu, Cong; Li, Zhi; Zhou, Tian; Zhu, Aijun; Xu, Chuanpei
2016-01-01
We propose a new meta-heuristic algorithm named the Levy flights multi-verse optimizer (LFMVO), which incorporates Levy flights into the multi-verse optimizer (MVO) algorithm to solve numerical and engineering optimization problems. The original MVO easily falls into stagnation when wormholes stochastically re-span a number of universes (solutions) around the best universe achieved over the course of iterations. Since Levy flights are superior in exploring unknown, large-scale search spaces, they are integrated into the previous best universe to force MVO out of stagnation. We test this method on three sets of 23 well-known benchmark test functions and an NP-complete problem of test scheduling for Network-on-Chip (NoC). Experimental results prove that the proposed LFMVO is more competitive than its peers in both the quality of the resulting solutions and convergence speed. PMID:27926946
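The heavy-tailed steps that pull a stalled population away from the incumbent best are commonly generated with Mantegna's algorithm; the sketch below uses the conventional index beta = 1.5, an assumption rather than a detail taken from the LFMVO paper.

```python
import numpy as np
from math import gamma, pi, sin

def levy_steps(size, beta=1.5, rng=None):
    """Mantegna's algorithm: step = u / |v|^(1/beta) with u, v Gaussian gives
    increments whose tails follow a Levy-stable law of index beta."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

# occasional very long jumps among many short ones: the flight pattern that
# lets a stalled population re-explore far from the current best solution
steps = levy_steps(10000, rng=np.random.default_rng(3))
print("median |step|:", np.median(np.abs(steps)),
      " max |step|:", np.abs(steps).max())
```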
NASA Astrophysics Data System (ADS)
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of hydrological simulation at large scale and high precision has elaborated the spatial descriptions and hydrological behaviors. Meanwhile, this trend is accompanied by increases in model complexity and in the number of parameters, which bring new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), which couples the Monte Carlo method with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling method of prior parameters adopted by GLUE appears inefficient, especially in high dimensional parameter spaces. Heuristic optimization algorithms utilizing iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain the parameter sets of large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
Kumar, Manjeet; Rawat, Tarun Kumar; Aggarwal, Apoorva
2017-03-01
In this paper, a new meta-heuristic optimization technique, called the interior search algorithm (ISA) with Lévy flight, is proposed and applied to determine the optimal parameters of an unknown infinite impulse response (IIR) system for the system identification problem. ISA is based on aesthetics, which is commonly used in interior design and decoration processes. In ISA, a composition phase and a mirror phase are applied for addressing nonlinear and multimodal system identification problems. System identification using the modified-ISA (M-ISA) based method involves faster convergence and single parameter tuning, and does not require derivative information because it uses a stochastic random search based on the concepts of Lévy flight. A proper tuning of the control parameter has been performed in order to achieve a balance between the intensification and diversification phases. In order to evaluate the performance of the proposed method, mean square error (MSE), computation time and percentage improvement are considered as the performance measures. To validate the performance of the M-ISA based method, simulations have been carried out for three benchmarked IIR systems using same-order and reduced-order models. Genetic algorithm (GA), particle swarm optimization (PSO), cat swarm optimization (CSO), cuckoo search algorithm (CSA), differential evolution using wavelet mutation (DEWM), firefly algorithm (FFA), craziness based particle swarm optimization (CRPSO), harmony search (HS) algorithm, opposition based harmony search (OHS) algorithm, hybrid particle swarm optimization-gravitational search algorithm (HPSO-GSA) and ISA are also used to model the same examples, and the simulation results are compared. The obtained results confirm the efficiency of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zatarain-Salazar, J.; Reed, P. M.; Herman, J. D.; Giuliani, M.; Castelletti, A.
2014-12-01
Globally, reservoir operations provide fundamental services for water supply, energy generation, recreation, and ecosystems. The pressures of expanding populations, climate change, and increased energy demands are motivating significant investment in re-operationalizing existing reservoirs or defining operations for new ones. Recent work has highlighted the potential benefits of exploiting advances in many-objective optimization and direct policy search (DPS) to address these systems' multi-sector demand tradeoffs. This study contributes a comprehensive diagnostic assessment of the efficiency, effectiveness, reliability, and controllability of multi-objective evolutionary algorithms (MOEAs) when supporting DPS for the Conowingo dam in the Lower Susquehanna River Basin. The Lower Susquehanna River is an interstate water body that has been subject to intensive water management efforts due to the system's competing demands from urban water supply, nuclear power plant cooling, hydropower production, and federally regulated environmental flows. Seven benchmark and state-of-the-art MOEAs are tested on deterministic and stochastic instances of the Susquehanna test case. In the deterministic formulation, the operating objectives are evaluated over the historical realization of the hydroclimatic variables (i.e., inflows and evaporation rates). In the stochastic formulation, the same objectives are instead evaluated over an ensemble of stochastic inflow and evaporation-rate realizations. The algorithms are evaluated on their ability to support DPS in discovering reservoir operations that compose the tradeoffs for six multi-sector performance objectives with thirty-two decision variables. Our diagnostic results highlight that many-objective DPS is very challenging for modern MOEAs and that epsilon dominance is critical for attaining high levels of performance. The epsilon-dominance algorithms epsilon-MOEA, epsilon-NSGAII, and the auto-adaptive Borg MOEA are statistically superior on the six-objective Susquehanna instance of this important class of problems. Additionally, shifting from deterministic history-based DPS to stochastic DPS significantly increases the difficulty of the problem.
Essays on variational approximation techniques for stochastic optimization problems
NASA Astrophysics Data System (ADS)
Deride Silva, Julio A.
This dissertation presents five essays on approximation and modeling techniques, based on variational analysis, applied to stochastic optimization problems. It is divided into two parts: the first is devoted to equilibrium problems and maxinf optimization, and the second corresponds to two essays in statistics and uncertainty modeling. Stochastic optimization lies at the core of this research, as we were interested in relevant equilibrium applications that contain an uncertain component, and in the design of a solution strategy. In addition, every stochastic optimization problem relies heavily on the underlying probability distribution that models the uncertainty. We studied these distributions, in particular their design process and theoretical properties such as their convergence. Finally, the last aspect of stochastic optimization that we covered is the scenario creation problem, for which we described a procedure based on a probabilistic model to create scenarios for the applied problem of power estimation of renewable energies. In the first part, Equilibrium problems and maxinf optimization, we considered three Walrasian equilibrium problems: from economics, we studied a stochastic general equilibrium problem in a pure exchange economy, described in Chapter 3, and a stochastic general equilibrium with financial contracts, in Chapter 4; finally, from engineering, we studied an infrastructure planning problem in Chapter 5. We stated these problems as belonging to the maxinf optimization class and, in each instance, provided an approximation scheme based on the notion of lopsided convergence and non-concave duality. This strategy is the foundation of the augmented Walrasian algorithm, whose convergence is guaranteed by lopsided convergence and which was implemented computationally, obtaining numerical results for relevant examples. The second part, Essays on statistics and uncertainty modeling, contains two essays covering a convergence problem for a sequence of estimators and a problem of creating probabilistic scenarios for renewable energy estimation. In Chapter 7 we revisit one of the "folk theorems" of statistics, in which a family of Bayes estimators under 0-1 loss functions is claimed to converge to the maximum a posteriori estimator. This assertion is studied under the scope of hypo-convergence theory, and the density functions are included in the class of upper semicontinuous functions. We conclude the chapter with an example in which the convergence does not hold, and we provide sufficient conditions that guarantee convergence. The last chapter, Chapter 8, addresses the important topic of creating probabilistic scenarios for solar power generation. Scenarios are a fundamental input for the stochastic optimization problem of energy dispatch, especially when incorporating renewables. We proposed a model designed to capture the constraints induced by the physical characteristics of the variables, based on the application of epi-spline density estimation along with copula estimation, in order to account for partial correlations between variables.
NASA Astrophysics Data System (ADS)
Jia, Ningning; Y Lam, Edmund
2010-04-01
Inverse lithography technology (ILT) synthesizes photomasks by solving an inverse imaging problem through optimization of an appropriate functional. Much effort on ILT is dedicated to deriving superior masks at a nominal process condition. However, the lower k1 factor causes the mask to be more sensitive to process variations. Robustness to major process variations, such as focus and dose variations, is desired. In this paper, we consider the focus variation as a stochastic variable, and treat the mask design as a machine learning problem. The stochastic gradient descent approach, which is a useful tool in machine learning, is adopted to train the mask design. Compared with previous work, simulation shows that the proposed algorithm is effective in producing robust masks.
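As a rough illustration of treating focus as a stochastic variable inside a gradient loop, the toy sketch below replaces the lithography imaging model with a Gaussian blur whose width is resampled every iteration; the forward model, learning rate, and clipping are assumptions, not the paper's simulator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def robust_mask_sgd(target, n_iter=500, lr=0.1, focus_std=0.3):
    # Minimize E_focus || blur(mask; focus) - target ||^2 by SGD: each step
    # samples one focus condition and descends the gradient for that sample.
    mask = target.astype(float).copy()
    for _ in range(n_iter):
        width = 1.0 + abs(np.random.normal(0.0, focus_std))  # sampled defocus
        resid = gaussian_filter(mask, width) - target
        grad = 2.0 * gaussian_filter(resid, width)  # Gaussian blur is self-adjoint
        mask = np.clip(mask - lr * grad, 0.0, 1.0)  # keep transmission in [0, 1]
    return mask
```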
StochKit2: software for discrete stochastic simulation of biochemical systems with events.
Sanft, Kevin R; Wu, Sheng; Roh, Min; Fu, Jin; Lim, Rone Kwei; Petzold, Linda R
2011-09-01
StochKit2 is the first major upgrade of the popular StochKit stochastic simulation software package. StochKit2 provides highly efficient implementations of several variants of Gillespie's stochastic simulation algorithm (SSA), and tau-leaping with automatic step size selection. StochKit2 features include automatic selection of the optimal SSA method based on model properties, event handling, and automatic parallelism on multicore architectures. The underlying structure of the code has been completely updated to provide a flexible framework for extending its functionality. StochKit2 runs on Linux/Unix, Mac OS X and Windows. It is freely available under GPL version 3 and can be downloaded from http://sourceforge.net/projects/stochkit/. petzold@engineering.ucsb.edu.
Optimal GENCO bidding strategy
NASA Astrophysics Data System (ADS)
Gao, Feng
Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example, Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, and Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation. The stochastic property makes evolutionary algorithms robust and adaptive enough to solve a non-convex optimization problem. This research implements GA, EP, and PS algorithms for economic dispatch with Combined Cycle units and compares them with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared with traditional models. This research identifies a proper SFE model, which can be applied to a multiple-period situation. The equilibrium condition using discrete-time optimal control is then developed for fuel resource constraints. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators in understanding market performance and making better decisions. A traditional optimization model may not be enough to consider the distributed, large-scale, and complex energy market. This research compares the performance and search paths of different artificial life techniques such as the Genetic Algorithm (GA), Evolutionary Programming (EP), and Particle Swarm (PS), and looks for a proper method to emulate Generation Companies' (GENCOs) bidding strategies. After deregulation, GENCOs face risk and uncertainty associated with the fast-changing market environment. A profit-based bidding decision support system is critical for GENCOs to keep a competitive position in the new environment. Most past research does not pay special attention to the piecewise staircase characteristic of generator offer curves. This research proposes an optimal bidding strategy based on Parametric Linear Programming. The proposed algorithm is able to handle actual piecewise staircase energy offer curves. The proposed method is then extended to incorporate incomplete information based on Decision Analysis. Finally, the author develops an optimal bidding tool (GenBidding) and applies it to the RTS96 test system.
Understanding and Optimizing Asynchronous Low-Precision Stochastic Gradient Descent
De Sa, Christopher; Feldman, Matthew; Ré, Christopher; Olukotun, Kunle
2018-01-01
Stochastic gradient descent (SGD) is one of the most popular numerical algorithms used in machine learning and other domains. Since this is likely to continue for the foreseeable future, it is important to study techniques that can make it run fast on parallel hardware. In this paper, we provide the first analysis of a technique called Buckwild! that uses both asynchronous execution and low-precision computation. We introduce the DMGC model, the first conceptualization of the parameter space that exists when implementing low-precision SGD, and show that it provides a way to both classify these algorithms and model their performance. We leverage this insight to propose and analyze techniques to improve the speed of low-precision SGD. First, we propose software optimizations that can increase throughput on existing CPUs by up to 11×. Second, we propose architectural changes, including a new cache technique we call an obstinate cache, that increase throughput beyond the limits of current-generation hardware. We also implement and analyze low-precision SGD on the FPGA, which is a promising alternative to the CPU for future SGD systems. PMID:29391770
Sensor management in RADAR/IRST track fusion
NASA Astrophysics Data System (ADS)
Hu, Shi-qiang; Jing, Zhong-liang
2004-07-01
In this paper, a novel radar management strategy suitable for RADAR/IRST track fusion, based on the Fisher information matrix (FIM) and a fuzzy stochastic decision approach, is put forward. First, the optimal scheduling of radar measurements is obtained by maximizing the determinant of the Fisher information matrix of the radar and IRST measurements, managed by an expert system. Then, a "pseudo sensor" is proposed to predict the possible target position with a polynomial method based on the radar and IRST measurements; the "pseudo sensor" model estimates the target position even when the radar is turned off. Finally, based on the tracking performance and the state of the target maneuver, a fuzzy stochastic decision is used to adjust the optimal radar scheduling and retrieve the model parameters of the "pseudo sensor". The experimental results indicate that the algorithm not only limits radar activity effectively but also maintains the tracking accuracy of the active/passive system well. The algorithm also eliminates the drawback of traditional radar management methods, in which radar activity is fixed and not easy to control.
Robust Algorithms for Detecting a Change in a Stochastic Process with Infinite Memory
1988-03-01
The robustness analysis is based on the breakdown point and the influence function, under the additional assumption of φ-mixing on the nominal measures, following Huber's approach. The nominal observations are i.i.d. sequences of Gaussian random variables with identical variance σ². The contamination z is chosen so that it leads to the worst case, i.e., the earliest breakdown, while the influence function measures the sensitivity of the optimal algorithm's structure to deviations from the nominal model.
A hyperbolastic type-I diffusion process: Parameter estimation by means of the firefly algorithm.
Barrera, Antonio; Román-Román, Patricia; Torres-Ruiz, Francisco
2018-01-01
A stochastic diffusion process, whose mean function is a hyperbolastic curve of type I, is presented. The main characteristics of the process are studied and the problem of maximum likelihood estimation for the parameters of the process is considered. To this end, the firefly metaheuristic optimization algorithm is applied after bounding the parametric space by a stagewise procedure. Some examples based on simulated sample paths and real data illustrate this development. Copyright © 2017 Elsevier B.V. All rights reserved.
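For readers unfamiliar with the firefly metaheuristic used here, a minimal continuous-variable version is sketched below; to mimic the spirit of the paper one would pass the negative log-likelihood of the diffusion process as the objective obj and the stagewise parameter bounds as lb and ub. All parameter values are illustrative defaults, not the paper's settings.

```python
import numpy as np

def firefly_minimize(obj, lb, ub, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.2):
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    x = lb + (ub - lb) * np.random.rand(n, dim)      # initial swarm
    f = np.array([obj(xi) for xi in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:                      # j is brighter: move i toward j
                    beta = beta0 * np.exp(-gamma * np.sum((x[i] - x[j]) ** 2))
                    x[i] += beta * (x[j] - x[i]) + alpha * (np.random.rand(dim) - 0.5)
                    x[i] = np.clip(x[i], lb, ub)
                    f[i] = obj(x[i])
    k = int(np.argmin(f))
    return x[k], f[k]
```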
Fully probabilistic control design in an adaptive critic framework.
Herzallah, Randa; Kárný, Miroslav
2011-12-01
An optimal stochastic controller pushes the closed-loop behavior as close as possible to the desired one. The fully probabilistic design (FPD) uses a probabilistic description of the desired closed loop and minimizes the Kullback-Leibler divergence of the closed-loop description from the desired one. Practical exploitation of fully probabilistic design control theory continues to be hindered by the computational complexities involved in numerically solving the associated stochastic dynamic programming problem, in particular the very hard multivariate integration and approximate interpolation of the involved multivariate functions. This paper proposes a new fully probabilistic control algorithm that uses adaptive critic methods to circumvent the need for explicitly evaluating the optimal value function, thereby dramatically reducing computational requirements. This is the main contribution of this paper. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Dong, Bing; Ren, De-Qing; Zhang, Xi
2011-08-01
An adaptive optics (AO) system based on a stochastic parallel gradient descent (SPGD) algorithm is proposed to reduce the speckle noise in the optical system of a stellar coronagraph in order to further improve the contrast. The principle of the SPGD algorithm is described briefly, and a metric suitable for point-source imaging optimization is given. The feasibility and good performance of the SPGD algorithm are demonstrated with an experimental system featuring a 140-actuator deformable mirror and a Hartmann-Shack wavefront sensor. The SPGD-based AO is then applied to a liquid crystal array (LCA) based coronagraph to improve the contrast. The LCA can modulate the incoming light to generate a pupil apodization mask of any pattern. A circular stepped pattern is used in our preliminary experiment, and the image contrast shows improvement from 10^-3 to 10^-4.5 at an angular distance of 2λ/D after correction by the SPGD-based AO.
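The SPGD update itself is compact: perturb all actuators simultaneously with a random Bernoulli vector, probe the metric on both sides, and step along the perturbation scaled by the observed metric change. The sketch below is a generic version under assumed gain and perturbation amplitudes, not the authors' experimental settings.

```python
import numpy as np

def spgd_maximize(metric, n_act, iters=1000, gain=0.5, delta=0.05):
    # u: deformable-mirror actuator commands; metric(u) is the image-quality
    # metric to be maximized (e.g., a point-source sharpness measure).
    u = np.zeros(n_act)
    for _ in range(iters):
        du = delta * np.random.choice([-1.0, 1.0], n_act)  # parallel perturbation
        dJ = metric(u + du) - metric(u - du)               # two-sided probe
        u += gain * dJ * du                                # stochastic gradient step
    return u
```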
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Zhou, Xinyang; Liu, Zhiyuan
This paper considers distribution networks with distributed energy resources and discrete-rate loads, and designs an incentive-based algorithm that allows the network operator and the customers to pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. Four major challenges include: (1) the non-convexity from discrete decision variables, (2) the non-convexity due to a Stackelberg game structure, (3) unavailable private information from customers, and (4) different update frequencies for the two types of devices. In this paper, we first make a convex relaxation for the discrete variables, then reformulate the non-convex structure into a convex optimization problem together with a pricing/reward signal design, and propose a distributed stochastic dual algorithm for solving the reformulated problem while restoring feasible power rates for the discrete devices. By doing so, we are able to statistically achieve the solution of the reformulated problem without exposure of any private information from customers. Stability of the proposed schemes is analytically established and numerically corroborated.
Faster PET reconstruction with a stochastic primal-dual hybrid gradient method
NASA Astrophysics Data System (ADS)
Ehrhardt, Matthias J.; Markiewicz, Pawel; Chambolle, Antonin; Richtárik, Peter; Schott, Jonathan; Schönlieb, Carola-Bibiane
2017-08-01
Image reconstruction in positron emission tomography (PET) is computationally challenging due to Poisson noise, constraints and potentially non-smooth priors, let alone the sheer size of the problem. An algorithm that can cope well with the first three of the aforementioned challenges is the primal-dual hybrid gradient algorithm (PDHG) studied by Chambolle and Pock in 2011. However, PDHG updates all variables in parallel and is therefore computationally demanding on the large problem sizes encountered with modern PET scanners, where the number of dual variables easily exceeds 100 million. In this work, we numerically study the use of SPDHG, a stochastic extension of PDHG that is still guaranteed to converge to a solution of the deterministic optimization problem with similar rates as PDHG. Numerical results on a clinical data set show that by introducing randomization into PDHG, results similar to those of the deterministic algorithm can be achieved using only around 10% of the operator evaluations, thus making significant progress towards the feasibility of sophisticated mathematical models in a clinical setting.
NASA Astrophysics Data System (ADS)
Ivanova, Violeta M.; Sousa, Rita; Murrihy, Brian; Einstein, Herbert H.
2014-06-01
This paper presents results from research conducted at MIT during 2010-2012 on modeling of natural rock fracture systems with the GEOFRAC three-dimensional stochastic model. Following a background summary of discrete fracture network models and a brief introduction of GEOFRAC, the paper provides a thorough description of the newly developed mathematical and computer algorithms for fracture intensity, aperture, and intersection representation, which have been implemented in MATLAB. The new methods optimize, in particular, the representation of fracture intensity in terms of cumulative fracture area per unit volume, P32, via the Poisson-Voronoi Tessellation of planes into polygonal fracture shapes. In addition, fracture apertures now can be represented probabilistically or deterministically whereas the newly implemented intersection algorithms allow for computing discrete pathways of interconnected fractures. In conclusion, results from a statistical parametric study, which was conducted with the enhanced GEOFRAC model and the new MATLAB-based Monte Carlo simulation program FRACSIM, demonstrate how fracture intensity, size, and orientations influence fracture connectivity.
Helaers, Raphaël; Milinkovitch, Michel C
2010-07-15
The development, in the last decade, of stochastic heuristics implemented in robust application software has made large phylogeny inference a key step in most comparative studies involving molecular sequences. Still, the choice of phylogeny inference software is often dictated by a combination of parameters not related to the raw performance of the implemented algorithm(s) but rather by practical issues such as ergonomics and/or the availability of specific functionalities. Here, we present MetaPIGA v2.0, a robust implementation of several stochastic heuristics for large phylogeny inference (under maximum likelihood), including a Simulated Annealing algorithm, a classical Genetic Algorithm, and the Metapopulation Genetic Algorithm (metaGA), together with complex substitution models, discrete Gamma rate heterogeneity, and the possibility to partition data. MetaPIGA v2.0 also implements the Likelihood Ratio Test, the Akaike Information Criterion, and the Bayesian Information Criterion for automated selection of substitution models that best fit the data. Heuristics and substitution models are highly customizable through manual batch files and command line processing. However, MetaPIGA v2.0 also offers an extensive graphical user interface for parameter setting, generating and running batch files, following run progress, and manipulating result trees. MetaPIGA v2.0 uses standard formats for data sets and trees, is platform independent, runs on 32- and 64-bit systems, and takes advantage of multiprocessor and multicore computers. The metaGA resolves the major problem inherent to classical Genetic Algorithms by maintaining high inter-population variation even under strong intra-population selection. Implementation of the metaGA together with additional stochastic heuristics in a single software package allows rigorous optimization of each heuristic as well as a meaningful comparison of performances among these algorithms. MetaPIGA v2.0 gives access both to high customization for the phylogeneticist and to an ergonomic interface and functionalities assisting the non-specialist in sound inference of large phylogenetic trees using nucleotide sequences. MetaPIGA v2.0 and its extensive user manual are freely available to academics at http://www.metapiga.org.
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel
2016-04-01
This contribution presents a methodology for defining optimal seasonal operating rules in multireservoir systems, coupling expert criteria and stochastic optimization. Both sources of information are combined using fuzzy logic. The structure of the operating rules is defined based on expert criteria via a joint expert-technician framework consisting of a series of meetings, workshops, and surveys carried out between reservoir managers and modelers. As a result, the decision-making process used by managers can be assessed and expressed using fuzzy logic: fuzzy rule-based systems are employed to represent the operating rules, and fuzzy regression procedures are used for forecasting future inflows. Once this is done, a stochastic optimization algorithm can be used to define optimal decisions and transform them into fuzzy rules. Finally, the optimal fuzzy rules and the inflow prediction scheme are combined into a Decision Support System for making seasonal forecasts and simulating the effect of different alternatives in response to the initial system state and the foreseen inflows. The approach presented has been applied to the Jucar River Basin (Spain). Reservoir managers explained how the system is operated, taking into account the reservoirs' states at the beginning of the irrigation season and the inflows foreseen during that season. According to the information given by them, the Jucar River Basin operating policies were expressed via two fuzzy rule-based (FRB) systems that estimate the amount of water to be allocated to the users and how the reservoir storages should be balanced to guarantee those deliveries. A stochastic optimization model using Stochastic Dual Dynamic Programming (SDDP) was developed to define optimal decisions, which are transformed into optimal operating rules by embedding them into the two FRBs previously created. As a benchmark, historical records are used to develop alternative operating rules. A fuzzy linear regression procedure was employed to forecast future inflows depending on present and past hydrological and meteorological variables actually used by the reservoir managers to define likely inflow scenarios. A Decision Support System (DSS) was created coupling the FRB systems and the inflow prediction scheme in order to give the user a set of possible optimal releases in response to the reservoir states at the beginning of the irrigation season and the fuzzy inflow projections made using hydrological and meteorological information. The results show that the optimal DSS created using the FRB operating policies is able to increase the amount of water allocated to the users by 20 to 50 Mm3 per irrigation season with respect to the current policies. Consequently, the mechanism used to define optimal operating rules and transform them into a DSS is able to increase water deliveries in the Jucar River Basin, combining expert criteria and optimization algorithms in an efficient way. This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) and FEDER funds. It has also received funding from the European Union's Horizon 2020 research and innovation programme under the IMPREX project (grant agreement no: 641.811).
NASA Astrophysics Data System (ADS)
Nourifar, Raheleh; Mahdavi, Iraj; Mahdavi-Amiri, Nezam; Paydar, Mohammad Mahdi
2017-09-01
Decentralized supply chain management is significantly relevant in today's competitive markets. Production and distribution planning is posed as an important optimization problem in supply chain networks. Here, we propose a multi-period decentralized supply chain network model with uncertainty. The imprecision related to uncertain parameters such as demand and the price of the final product is represented by stochastic and fuzzy numbers. We provide a mathematical formulation of the problem as a bi-level mixed integer linear programming model. Due to the problem's complexity, a solution structure is developed that incorporates a novel heuristic algorithm based on the Kth-best algorithm, a fuzzy approach, and a chance constraint approach. Ultimately, a numerical example is constructed and worked through to demonstrate the applicability of the optimization model. A sensitivity analysis is also provided.
2011-09-01
artificially creating enough baseline to enable triangulation. This motion comes at the expense of the primary mission, unless the entire purpose... control of a sUAS for surveillance and other missions. Completely autonomous UAS control for surveillance missions is still an on-the-horizon... work, xapp, was correspondingly set to 2 m. Since the test platform for the algorithm was a helicopter vice a fixed-wing UAS, an aggressive flare segment
Developing a new stochastic competitive model regarding inventory and price
NASA Astrophysics Data System (ADS)
Rashid, Reza; Bozorgi-Amiri, Ali; Seyedhoseini, S. M.
2015-09-01
Within the competition in today's business environment, the design of supply chains becomes more complex than before. This paper deals with the retailer's location problem when customers choose their vendors and inventory costs are considered for retailers. In a competitive location problem, the price and location of facilities affect customer demand; consequently, simultaneous optimization of the location and inventory system is needed. To obtain a realistic model, demand and lead time are assumed to be stochastic parameters, and queuing theory has been used to develop a comprehensive mathematical model. Due to the complexity of the problem, a branch and bound algorithm has been developed, and its performance has been validated on several numerical examples, which indicated the effectiveness of the algorithm. A real case is also presented to demonstrate the performance of the model in the real world.
Uncertainty Reduction for Stochastic Processes on Complex Networks
NASA Astrophysics Data System (ADS)
Radicchi, Filippo; Castellano, Claudio
2018-05-01
Many real-world systems are characterized by stochastic dynamical rules where a complex network of interactions among individual elements probabilistically determines their state. Even with full knowledge of the network structure and of the stochastic rules, the ability to predict system configurations is generally characterized by a large uncertainty. Selecting a fraction of the nodes and observing their state may help to reduce the uncertainty about the unobserved nodes. However, choosing these points of observation in an optimal way is a highly nontrivial task, depending on the nature of the stochastic process and on the structure of the underlying interaction pattern. In this paper, we introduce a computationally efficient algorithm to determine quasioptimal solutions to the problem. The method leverages network sparsity to reduce computational complexity from exponential to almost quadratic, thus allowing the straightforward application of the method to mid-to-large-size systems. Although the method is exact only for equilibrium stochastic processes defined on trees, it turns out to be effective also for out-of-equilibrium processes on sparse loopy networks.
Barnett, Jason; Watson, Jean -Paul; Woodruff, David L.
2016-11-27
Progressive hedging (PH), though an effective heuristic for solving stochastic mixed integer programs (SMIPs), is not guaranteed to converge in the mixed-integer case. Here, we describe BBPH, a branch and bound algorithm that uses PH at each node in the search tree such that, given sufficient time, it will always converge to a globally optimal solution. In addition to providing a theoretically convergent "wrapper" for PH applied to SMIPs, computational results demonstrate that for some difficult problem instances branch and bound can find improved solutions after exploring only a few nodes.
A Stochastic Total Least Squares Solution of Adaptive Filtering Problem
Ahmad, Noor Atinah
2014-01-01
An efficient, computationally linear algorithm is derived for the total least squares solution of the adaptive filtering problem, when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm is designed by recursively computing an optimal solution of the adaptive TLS problem by minimizing the instantaneous value of a weighted cost function. A convergence analysis of the algorithm is given to show its global convergence, provided that the step-size parameter is appropriately chosen. The TLMS algorithm is computationally simpler than other TLS algorithms and demonstrates better performance than the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It provides minimum mean square deviation by exhibiting better convergence in misalignment for unknown system identification under noisy inputs. PMID:24688412
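For contrast with the TLMS recursion described above, a plain LMS adaptive filter (one of the baselines in the comparison) can be written in a few lines; the filter length and step size below are illustrative.

```python
import numpy as np

def lms(x, d, taps=8, mu=0.01):
    # Adapt weights w so that the filter output tracks the desired signal d;
    # e is the instantaneous error driving the stochastic gradient update.
    w = np.zeros(taps)
    e = np.zeros(len(x))
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]     # most recent inputs, newest first
        e[n] = d[n] - w @ u
        w += mu * e[n] * u
    return w, e
```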
Stochastic optimization of broadband reflecting photonic structures.
Estrada-Wiese, D; Del Río-Chanona, E A; Del Río, J A
2018-01-19
Photonic crystals (PCs) are built to control the propagation of light within their structure. They can be used for an assortment of applications where custom-designed devices are of interest. Among them, one-dimensional PCs can be produced to achieve reflection over specific and broad wavelength ranges. However, their design and fabrication are challenging because each PC needs a different periodic arrangement and layer configuration. In this study, we present a framework to design highly reflecting PCs for any desired wavelength range. Our method combines three stochastic optimization algorithms (Random Search, Particle Swarm Optimization, and Simulated Annealing) along with a reduced search-space methodology to obtain a custom, optimized PC configuration. The optimization procedure is evaluated through theoretical reflectance spectra calculated using the Equispaced Thickness Method, which improves the simulations by accounting for incoherent light transmission. We prove the viability of our procedure by fabricating different reflecting PCs made of porous silicon and obtain good agreement between experiment and theory using a merit function. With this methodology, diverse reflecting PCs can be designed for a variety of applications and fabricated with different materials.
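Of the three stochastic optimizers combined in this framework, simulated annealing is the simplest to sketch. The version below maximizes a user-supplied band-averaged reflectance over a vector of layer thicknesses; the cooling schedule, proposal scale, and bound handling are assumptions, not the paper's settings.

```python
import numpy as np

def anneal_reflectance(reflectance, x0, lb, ub, T0=1.0, cooling=0.995, iters=5000):
    # reflectance(x) should return the mean reflectance of the layer-thickness
    # vector x over the target band (e.g., from a transfer-matrix or ETM model).
    x, fx, T = np.asarray(x0, float), reflectance(x0), T0
    best, fbest = x.copy(), fx
    for _ in range(iters):
        cand = np.clip(x + T * 0.05 * (ub - lb) * np.random.normal(size=x.size), lb, ub)
        fc = reflectance(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if fc > fx or np.random.rand() < np.exp((fc - fx) / max(T, 1e-12)):
            x, fx = cand, fc
            if fx > fbest:
                best, fbest = x.copy(), fx
        T *= cooling
    return best, fbest
```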
Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel
2014-12-12
The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the "server-relay-client" framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.
Optimizing ion channel models using a parallel genetic algorithm on graphical processors.
Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon
2012-01-01
We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80 node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.
AESS: Accelerated Exact Stochastic Simulation
NASA Astrophysics Data System (ADS)
Jenkins, David D.; Peterson, Gregory D.
2011-12-01
The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results.
Program summary:
Program title: AESS
Catalogue identifier: AEJW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: University of Tennessee copyright agreement
No. of lines in distributed program, including test data, etc.: 10 861
No. of bytes in distributed program, including test data, etc.: 394 631
Distribution format: tar.gz
Programming language: C for processors, CUDA for NVIDIA GPUs
Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs. The system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators.
Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS
Classification: 3, 16.12
Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME.
Solution method: The Accelerated Exact Stochastic Simulation (AESS) tool provides implementations of a wide variety of popular variations on the Gillespie method. Users can select the specific algorithm considered most appropriate. Comparisons between the methods and with other available implementations indicate that AESS provides the fastest known implementation of Gillespie's method for a variety of test models. Users may wish to execute ensembles of simulations to sweep parameters or to obtain better statistical results, so AESS supports acceleration of ensembles of simulations using parallel processing with MPI, SSE vector units on x86 processors, and/or NVIDIA GPUs with CUDA.
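The core of Gillespie's direct method, which all of the AESS variants accelerate in one way or another, fits in a short routine. The sketch below is a textbook version for orientation, not AESS code; the birth-death example in the comments is hypothetical.

```python
import numpy as np

def ssa_direct(x0, stoich, propensity, t_end):
    # x0: initial species counts; stoich[r]: state change of reaction r;
    # propensity(x): array of reaction rates at state x.
    t, x = 0.0, np.asarray(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensity(x)
        a0 = a.sum()
        if a0 <= 0.0:
            break                                   # no reaction can fire
        t += np.random.exponential(1.0 / a0)        # time to next reaction
        r = np.searchsorted(np.cumsum(a), np.random.rand() * a0)
        x = x + stoich[r]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Example (hypothetical birth-death process):
#   stoich = [np.array([1.0]), np.array([-1.0])]
#   propensity = lambda x: np.array([10.0, 0.1 * x[0]])
#   times, states = ssa_direct([0.0], stoich, propensity, t_end=100.0)
```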
EXTENDING THE REALM OF OPTIMIZATION FOR COMPLEX SYSTEMS: UNCERTAINTY, COMPETITION, AND DYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shanbhag, Uday V; Basar, Tamer; Meyn, Sean
The research reported addressed the following topics: the development of analytical and algorithmic tools for distributed computation of Nash equilibria; synchronization in mean-field oscillator games, with an emphasis on learning and efficiency analysis; questions that combine learning and computation, including stochastic and mean-field games; and modeling and control in the context of power markets.
Optimal sensor placement for spatial lattice structure based on genetic algorithms
NASA Astrophysics Data System (ADS)
Liu, Wei; Gao, Wei-cheng; Sun, Yi; Xu, Min-jian
2008-10-01
Optimal sensor placement plays a key role in the structural health monitoring of spatial lattice structures. This paper considers the problem of locating sensors on a spatial lattice structure with the aim of maximizing the data information so that the structural dynamic behavior can be fully characterized. Based on the criterion of optimal sensor placement for modal tests, an improved genetic algorithm is introduced to find the optimal placement of sensors. The modal strain energy (MSE) and the modal assurance criterion (MAC) are taken as fitness functions, respectively, so that three placement designs are produced. A decimal two-dimensional array coding method is proposed to code the solutions instead of binary coding. A forced mutation operator is introduced when identical genes appear during the crossover procedure. A computational simulation of a 12-bay plain truss model has been implemented to demonstrate the feasibility of the three optimal algorithms above. The optimal sensor placements obtained using the improved genetic algorithm are compared with those produced by the existing genetic algorithm using the binary coding method. Furthermore, a comparison criterion based on the mean square error between the finite element method (FEM) mode shapes and the Guyan expansion mode shapes identified by the data-driven stochastic subspace identification (SSI-DATA) method is employed to demonstrate the advantage of the different fitness functions. The results show that the innovations in the genetic algorithm proposed in this paper enlarge the gene storage and improve the convergence of the algorithm. More importantly, all three optimal sensor placement methods provide reliable results and accurately identify the vibration characteristics of the 12-bay plain truss model.
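A minimal genetic algorithm with integer-set (decimal) coding and a duplicate-repairing mutation, in the spirit of the improvements described above, might look as follows; the selection scheme, rates, and repair rule are illustrative assumptions, and fitness stands for a user-supplied MSE- or MAC-based score.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_placement(fitness, n_dof, n_sensors, pop=40, gens=200, pm=0.1):
    def repair(ind):
        ind = np.unique(ind)                    # drop duplicate genes
        while ind.size < n_sensors:             # refill with random locations
            ind = np.unique(np.append(ind, rng.integers(n_dof)))
        return np.sort(ind)

    P = [np.sort(rng.choice(n_dof, n_sensors, replace=False)) for _ in range(pop)]
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in P])
        elite = [P[i] for i in np.argsort(-scores)[: pop // 2]]  # keep best half
        P = list(elite)
        while len(P) < pop:
            a = elite[rng.integers(len(elite))]
            b = elite[rng.integers(len(elite))]
            child = rng.choice(np.concatenate([a, b]), n_sensors, replace=False)
            if rng.random() < pm:               # forced mutation: move one sensor
                child[rng.integers(n_sensors)] = rng.integers(n_dof)
            P.append(repair(child))
    scores = np.array([fitness(ind) for ind in P])
    return P[int(np.argmax(scores))]
```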
Using Ant Colony Optimization for Routing in VLSI Chips
NASA Astrophysics Data System (ADS)
Arora, Tamanna; Moses, Melanie
2009-04-01
Rapid advances in VLSI technology have increased the number of transistors that fit on a single chip to about two billion. A frequent problem in the design of such high performance and high density VLSI layouts is that of routing wires that connect such large numbers of components. Most wire-routing problems are computationally hard. The quality of any routing algorithm is judged by the extent to which it satisfies routing constraints and design objectives. Some of the broader design objectives include minimizing total routed wire length, and minimizing total capacitance induced in the chip, both of which serve to minimize power consumed by the chip. Ant Colony Optimization algorithms (ACO) provide a multi-agent framework for combinatorial optimization by combining memory, stochastic decision and strategies of collective and distributed learning by ant-like agents. This paper applies ACO to the NP-hard problem of finding optimal routes for interconnect routing on VLSI chips. The constraints on interconnect routing are used by ants as heuristics which guide their search process. We found that ACO algorithms were able to successfully incorporate multiple constraints and route interconnects on suite of benchmark chips. On an average, the algorithm routed with total wire length 5.5% less than other established routing algorithms.
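The pheromone mechanics behind such a router can be illustrated on a tour-construction abstraction, where each ant builds a route and trails are evaporated and reinforced in proportion to solution quality; a real VLSI router would work on a routing grid with wire-length and capacitance terms folded into the heuristic. The parameters below are conventional defaults, not the paper's.

```python
import numpy as np

def aco_route(cost, n_ants=20, iters=100, alpha=1.0, beta=2.0, rho=0.1, Q=1.0):
    # cost: (n, n) matrix of edge costs; returns the best node ordering found.
    n = cost.shape[0]
    tau = np.ones((n, n))                        # pheromone trails
    eta = 1.0 / (cost + 1e-12)                   # heuristic desirability
    best_tour, best_len = None, np.inf
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [int(np.random.randint(n))]
            while len(tour) < n:
                i = tour[-1]
                mask = np.ones(n, bool)
                mask[tour] = False               # forbid revisits
                p = (tau[i] ** alpha) * (eta[i] ** beta) * mask
                tour.append(int(np.random.choice(n, p=p / p.sum())))
            length = sum(cost[tour[k], tour[k + 1]] for k in range(n - 1))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= 1.0 - rho                         # evaporation
        for tour, length in tours:
            for k in range(n - 1):               # reinforce good edges
                tau[tour[k], tour[k + 1]] += Q / length
    return best_tour, best_len
```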
Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application
NASA Astrophysics Data System (ADS)
Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin
2007-12-01
We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that end, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented using stochastic approximation theory. We also apply this theory to the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.
A model of interaction between anticorruption authority and corruption groups
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neverova, Elena G.; Malafeyef, Oleg A.
The paper provides a model of interaction between an anticorruption unit and corruption groups. The main policy functions of the anticorruption unit involve reducing corrupt practices in some entities through an optimal approach to resource allocation and effective anticorruption policy. We develop a model based on a Markov decision process and use Howard's policy-improvement algorithm to solve for an optimal decision strategy. We examine the assumption that corruption groups retaliate against the anticorruption authority to protect themselves. This model was implemented as a stochastic game.
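Howard's policy-improvement algorithm alternates exact policy evaluation with greedy improvement until the policy is stable. A generic discounted-MDP version is sketched below; the model here (transition matrices P[a] and reward vectors R[a]) is a stand-in, not the paper's corruption model.

```python
import numpy as np

def howard_policy_iteration(P, R, gamma=0.95):
    # P[a]: (n, n) transition matrix under action a; R[a]: (n,) reward vector.
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        r_pi = np.array([R[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: act greedily with respect to v.
        q = np.array([R[a] + gamma * P[a] @ v for a in range(n_actions)])
        new_policy = np.argmax(q, axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy
```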
On the effect of response transformations in sequential parameter optimization.
Wagner, Tobias; Wessing, Simon
2012-01-01
Parameter tuning of evolutionary algorithms (EAs) is attracting more and more interest. In particular, the sequential parameter optimization (SPO) framework for the model-assisted tuning of stochastic optimizers has resulted in established parameter tuning algorithms. In this paper, we enhance the SPO framework by introducing transformation steps before the response aggregation and before the actual modeling. Based on design-of-experiments techniques, we empirically analyze the effect of integrating different transformations. We show that in particular, a rank transformation of the responses provides significant improvements. A deeper analysis of the resulting models and additional experiments with adaptive procedures indicates that the rank and the Box-Cox transformation are able to improve the properties of the resultant distributions with respect to symmetry and normality of the residuals. Moreover, model-based effect plots document a higher discriminatory power obtained by the rank transformation.
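The two transformations found most useful here, rank and Box-Cox, are one-liners with standard tooling; the response values below are made up for illustration.

```python
import numpy as np
from scipy.stats import rankdata, boxcox

y = np.array([12.0, 0.3, 0.5, 90.0, 1.1, 0.4])   # aggregated responses (fake data)
y_rank = rankdata(y)          # rank transform: robust to outliers and skew
y_bc, lam = boxcox(y)         # Box-Cox transform: requires strictly positive data
```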
Li, Desheng
2014-01-01
This paper proposes a novel variant of the cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm, called CQPSO-DVSA-LFD, with two mechanisms to reduce the search space and avoid stagnation. One mechanism, called Dynamic Varying Search Area (DVSA), is in charge of limiting the range of the particles' activity to a reduced area. The other, used to escape local optima, employs Lévy flights to generate stochastic disturbances in the movement of the particles. To test the performance of CQPSO-DVSA-LFD, numerical experiments are conducted to compare the proposed algorithm with different variants of PSO. According to the experimental results, the proposed method performs better than other variants of PSO on both benchmark test functions and a combinatorial optimization problem, namely the job-shop scheduling problem.
NASA Technical Reports Server (NTRS)
Mehra, R. K.; Rouhani, R.; Jones, S.; Schick, I.
1980-01-01
A model to assess the value of improved information regarding the inventories, production, exports, and imports of crops on a worldwide basis is discussed. A previously proposed model is interpreted in a stochastic control setting and the underlying assumptions of the model are revealed. In solving the stochastic optimization problem, the Markov programming approach is much more powerful and exact than the dynamic programming-simulation approach of the original model. The convergence of a dual-variable Markov programming algorithm is shown to be fast and efficient. A computer program for the general multicountry, multiperiod model is developed. As an example, the case of one country and two periods is treated and the results are presented in detail. A comparison with the original model's results reveals certain interesting aspects of the algorithms and the dependence of the value of information on the incremental cost function.
Applying a Genetic Algorithm to Reconfigurable Hardware
NASA Technical Reports Server (NTRS)
Wells, B. Earl; Weir, John; Trevino, Luis; Patrick, Clint; Steincamp, Jim
2004-01-01
This paper investigates the feasibility of applying genetic algorithms to solve optimization problems that are implemented entirely in reconfigurable hardware. The paper highlights the performance/design space trade-offs that must be understood to effectively implement a standard genetic algorithm within a modern Field Programmable Gate Array (FPGA) reconfigurable hardware environment and presents a case study where this stochastic search technique is applied to standard test-case problems taken from the technical literature. In this research, the targeted FPGA-based platform and high-level design environment was the Starbridge Hypercomputing platform, which incorporates multiple Xilinx Virtex II FPGAs, and the Viva graphical hardware description language.
Conditioning of Model Identification Task in Immune Inspired Optimizer SILO
NASA Astrophysics Data System (ADS)
Wojdan, K.; Swirski, K.; Warchol, M.; Maciorowski, M.
2009-10-01
Methods that provide good conditioning of the model identification task in the immune-inspired, steady-state controller SILO (Stochastic Immune Layer Optimizer) are presented in this paper. These methods are implemented in a model-based optimization algorithm. The first method uses a safe model to assure that the gains of the process model can be estimated. The second method is responsible for eliminating potential linear dependences between columns of the observation matrix. Moreover, new results from one SILO implementation in a Polish power plant are presented. They confirm the high efficiency of the presented solution in solving technical problems.
Approximate dynamic programming for optimal stationary control with control-dependent noise.
Jiang, Yu; Jiang, Zhong-Ping
2011-12-01
This brief studies the stochastic optimal control problem via reinforcement learning and approximate/adaptive dynamic programming (ADP). A policy iteration algorithm is derived in the presence of both additive and multiplicative noise using Itô calculus. The expectation of the approximated cost matrix is guaranteed to converge to the solution of some algebraic Riccati equation that gives rise to the optimal cost value. Moreover, the covariance of the approximated cost matrix can be reduced by increasing the length of time interval between two consecutive iterations. Finally, a numerical example is given to illustrate the efficiency of the proposed ADP methodology.
Fast stochastic algorithm for simulating evolutionary population dynamics
NASA Astrophysics Data System (ADS)
Tsimring, Lev; Hasty, Jeff; Mather, William
2012-02-01
Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
Stochastic fluctuations and the detectability limit of network communities.
Floretta, Lucio; Liechti, Jonas; Flammini, Alessandro; De Los Rios, Paolo
2013-12-01
We have analyzed the detectability limits of network communities in the framework of the popular Girvan and Newman benchmark. By carefully taking into account the inevitable stochastic fluctuations that affect the construction of each and every instance of the benchmark, we come to the conclusion that the native, putative partition of the network is completely lost even before the in-degree/out-degree ratio becomes equal to that of a structureless Erdős-Rényi network. We develop a simple iterative scheme, analytically well described by an infinite branching process, to provide an estimate of the true detectability limit. Using various algorithms based on modularity optimization, we show that all of them behave (semiquantitatively) in the same way, with the same functional form of the detectability threshold as a function of the network parameters. Because the same behavior has also been found with further modularity-optimization methods and with methods based on different heuristics, we conclude that a correct definition of the detectability limit must indeed take into account the stochastic fluctuations of the network construction.
NASA Astrophysics Data System (ADS)
Hasegawa, Manabu; Hiramatsu, Kotaro
2013-10-01
The effectiveness of the Metropolis algorithm (MA) (constant-temperature simulated annealing) in optimization by the method of search-space smoothing (SSS) (potential smoothing) is studied on two types of random traveling salesman problems. The optimization mechanism of this hybrid approach (MASSS) is investigated by analyzing the exploration dynamics observed in the rugged landscape of the cost function (energy surface). The results show that the MA can be successfully utilized as a local search algorithm in the SSS approach. It is also clarified that the optimization characteristics of the two constituent methods improve in a mutually beneficial manner in the MASSS run. Specifically, the relaxation dynamics generated by employing the MA work effectively even in a smoothed landscape, and fuller advantage is taken of the guiding function at the heart of the SSS idea; this mechanism operates adaptively in the de-smoothing process, and the MASSS method therefore maintains its optimization function over a wider temperature range than the MA.
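A minimal sketch of the MA component alone: constant-temperature Metropolis search with 2-opt moves on a random Euclidean TSP instance. The landscape-smoothing half of MASSS is omitted, and the instance size and temperature are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.random((40, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

def tour_len(tour):
    return dist[tour, np.roll(tour, -1)].sum()

tour, T = rng.permutation(40), 0.05        # fixed temperature, unlike annealing
cost = tour_len(tour)
for _ in range(20000):
    i, j = sorted(rng.choice(40, 2, replace=False))
    cand = tour.copy()
    cand[i:j] = cand[i:j][::-1]            # 2-opt: reverse a segment
    delta = tour_len(cand) - cost
    if delta < 0 or rng.random() < np.exp(-delta / T):  # Metropolis acceptance rule
        tour, cost = cand, cost + delta
print(cost)
```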
NASA Astrophysics Data System (ADS)
Gilani, Seyed-Omid; Sattarvand, Javad
2016-02-01
Meeting production targets in terms of ore quantity and quality is critical for a successful mining operation. In-situ grade uncertainty causes both deviations from production targets and general financial deficits. A new stochastic optimization algorithm based on the ant colony optimization (ACO) approach is developed herein to integrate geological uncertainty described through a series of simulated ore bodies. Two strategies were developed, based respectively on a single predefined probability value (Prob) and on multiple probability values (Probnt), in order to improve the initial solutions created by the deterministic ACO procedure. Application at the Sungun copper mine in northwest Iran demonstrates the ability of the stochastic approach to create a single schedule, control the risk of deviating from production targets over time, and increase the project value. A comparison of the two strategies with the traditional approach illustrates that the multiple-probability strategy produces better schedules; however, the single predefined probability is more practical in projects requiring a high degree of flexibility.
Stochastic reaction-diffusion algorithms for macromolecular crowding
NASA Astrophysics Data System (ADS)
Sturrock, Marc
2016-06-01
Compartment-based (lattice-based) reaction-diffusion algorithms are often used for studying complex stochastic spatio-temporal processes inside cells. In this paper the influence of macromolecular crowding on stochastic reaction-diffusion simulations is investigated. Reaction-diffusion processes are considered on two different kinds of compartmental lattice, a cubic lattice and a hexagonal close packed lattice, and solved using two different algorithms, the stochastic simulation algorithm and the spatiocyte algorithm (Arjunan and Tomita 2010 Syst. Synth. Biol. 4, 35-53). Obstacles (modelling macromolecular crowding) are shown to have substantial effects on the mean squared displacement and average number of molecules in the domain but the nature of these effects is dependent on the choice of lattice, with the cubic lattice being more susceptible to the effects of the obstacles. Finally, improvements for both algorithms are presented.
Computationally efficient stochastic optimization using multiple realizations
NASA Astrophysics Data System (ADS)
Bayer, P.; Bürger, C. M.; Finkel, M.
2008-02-01
The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. Using a problem typical of water-supply well-field design, several variants of this "stack ordering" approach are tested. The results are statistically assessed in terms of optimality and nominal reliability. This study demonstrates that simple ordering of a given number of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save more than 97% of the computational effort that would be required if the entire number of realizations were considered. The findings herein are promising for similar problems of water management and reliability-based design in general, and particularly for non-convex problems that require heuristic search techniques.
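A minimal sketch of the stack-ordering idea, assuming a hypothetical per-realization model run `simulate(design, realization)` that returns True when the design performs acceptably on that realization:

```python
def meets_reliability(design, stack, simulate, min_pass_frac=0.9):
    """Test a design against an ordered stack of realizations, stopping early.

    The stack is reordered in place so that critical (failing) realizations
    move to the front and are tried first for the next candidate design.
    """
    fails_allowed = int((1.0 - min_pass_frac) * len(stack))
    failed = []
    ok = True
    for k in range(len(stack)):
        if not simulate(design, stack[k]):
            failed.append(k)
            if len(failed) > fails_allowed:
                ok = False
                break                      # early exit saves the remaining model runs
    # critical realizations first; untested ones keep their relative order
    order = failed + [k for k in range(len(stack)) if k not in failed]
    stack[:] = [stack[k] for k in order]
    return ok
```

The savings come from the early exit: once enough realizations have failed, a candidate design can be rejected without running the model on the rest of the stack.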
Tra, Viet; Kim, Jaeyoung; Kim, Jong-Myon
2017-01-01
This paper presents a novel method for diagnosing incipient bearing defects under variable operating speeds using convolutional neural networks (CNNs) trained via the stochastic diagonal Levenberg-Marquardt (S-DLM) algorithm. The CNNs utilize the spectral energy maps (SEMs) of the acoustic emission (AE) signals as inputs and automatically learn the optimal features, which yield the best discriminative models for diagnosing incipient bearing defects under variable operating speeds. The SEMs are two-dimensional maps that show the distribution of energy across different bands of the AE spectrum. It is hypothesized that variation of a bearing's speed would not alter the overall shape of the AE spectrum; rather, it may only scale and translate it. Thus, at different speeds, the same defect would yield SEMs that are scaled and shifted versions of each other. This hypothesis is confirmed by the experimental results, where CNNs trained using the S-DLM algorithm yield significantly better diagnostic performance under variable operating speeds compared to existing methods. In this work, the performance of different training algorithms is also evaluated to select the best training algorithm for the CNNs. The proposed method is used to diagnose both single and compound defects at six different operating speeds. PMID:29211025
A fast optimization approach for treatment planning of volumetric modulated arc therapy.
Yan, Hui; Dai, Jian-Rong; Li, Ye-Xiong
2018-05-30
Volumetric modulated arc therapy (VMAT) is widely used in clinical practice. It not only significantly reduces treatment time, but also produces high-quality treatment plans. Current optimization approaches rely heavily on stochastic algorithms, which are time-consuming and less repeatable. In this study, a novel approach is proposed to provide a highly efficient optimization algorithm for VMAT treatment planning. A progressive sampling strategy is employed for the beam arrangement of VMAT planning. Initial, equally spaced beams are added to the plan at a coarse sampling resolution, and fluence-map optimization and leaf sequencing are performed for these beams. Then, the coefficients of the fluence-map optimization algorithm are adjusted according to the known fluence maps of these beams. In the next round, the sampling resolution is doubled and more beams are added. This process continues until the total number of beams is reached. The performance of the VMAT optimization algorithm was evaluated using three clinical cases and compared to that of a commercial planning system. The dosimetric quality of the VMAT plans is equal to or better than that of the corresponding IMRT plans for the three clinical cases. The maximum dose to critical organs is reduced considerably for VMAT plans compared with the IMRT plans, especially in the head and neck case. The total numbers of segments and monitor units are also reduced for VMAT plans. For the three clinical cases, VMAT optimization took less than 5 min with the proposed approach, 3-4 times less than the commercial system. The proposed VMAT optimization algorithm is able to produce high-quality VMAT plans efficiently and consistently, and presents a new way to accelerate the current optimization process of VMAT planning.
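The progressive sampling loop described above might look roughly as follows; `optimize_fluence` and `tune_coefficients` are hypothetical placeholders for the fluence-map optimization/leaf-sequencing and coefficient-adjustment steps, and the coarse starting resolution is an assumption.

```python
def progressive_vmat(total_beams, optimize_fluence, tune_coefficients):
    """Coarse-to-fine beam sampling: double the angular resolution each round."""
    n = 4                                            # illustrative coarse start
    beams = []
    while len(beams) < total_beams:
        step = 360.0 / n
        beams = [i * step for i in range(n)]         # equally spaced beam angles
        fluence_maps = optimize_fluence(beams)       # fluence-map opt. + leaf sequencing
        tune_coefficients(fluence_maps)              # adjust optimizer from known maps
        n *= 2                                       # double the sampling resolution
    return beams
```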
Lück, Anja; Klimmasch, Lukas; Großmann, Peter; Germerodt, Sebastian; Kaleta, Christoph
2018-01-10
Organisms need to adapt to changing environments and they do so by using a broad spectrum of strategies. These strategies include finding the right balance between expressing genes before or when they are needed, and adjusting the degree of noise inherent in gene expression. We investigated the interplay between different nutritional environments and the inhabiting organisms' metabolic and genetic adaptations by applying an evolutionary algorithm to an agent-based model of a concise bacterial metabolism. Our results show that constant environments and rapidly fluctuating environments produce similar adaptations in the organisms, making the predictability of the environment a major factor in determining optimal adaptation. We show that exploitation of expression noise occurs only in some types of fluctuating environment and is strongly dependent on the quality and availability of nutrients: stochasticity is generally detrimental in fluctuating environments and beneficial only at equal periods of nutrient availability and above a threshold environmental richness. Moreover, depending on the availability and nutritional value of nutrients, nutrient-dependent and stochastic expression are both strategies used to deal with environmental changes. Overall, we comprehensively characterize the interplay between the quality and periodicity of an environment and the resulting optimal deterministic and stochastic regulation strategies of nutrient-catabolizing pathways.
Simulated maximum likelihood method for estimating kinetic rates in gene expression.
Tian, Tianhai; Xu, Songlin; Gao, Junbin; Burrage, Kevin
2007-01-01
Kinetic rates in gene expression are a key measurement of the stability of gene products and give important information for the reconstruction of genetic regulatory networks. Recent developments in experimental technologies have made it possible to measure the numbers of transcripts and protein molecules in single cells. Although estimation methods based on deterministic models have been proposed for evaluating kinetic rates from experimental observations, these methods cannot tackle noise in gene expression that may arise from discrete processes of gene expression, small numbers of mRNA transcripts, fluctuations in the activity of transcription factors, and variability in the experimental environment. In this paper, we develop effective methods for estimating kinetic rates in genetic regulatory networks. The simulated maximum likelihood method is used to evaluate parameters in stochastic models described by either stochastic differential equations or discrete biochemical reactions. Different types of non-parametric density functions are used to measure the transitional probability of experimental observations. For stochastic models described by biochemical reactions, we propose to use the simulated frequency distribution to evaluate the transitional density, based on the discrete nature of stochastic simulations. A genetic optimization algorithm is used as an efficient tool to search for the optimal reaction rates. Numerical results indicate that the proposed methods give robust estimates of kinetic rates with good accuracy.
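A minimal sketch of the simulated-maximum-likelihood step for scalar observations: the transition density is approximated by a Gaussian kernel estimate over forward simulations, and a search routine (such as the genetic algorithm mentioned above) would then maximize the returned log-likelihood over the rate parameters. `simulate_step` is a hypothetical one-interval integrator of the stochastic model.

```python
import numpy as np

def log_likelihood(theta, observations, simulate_step, n_sim=500, rng=None):
    """Approximate log L(theta) by kernel-smoothing simulated one-step transitions."""
    rng = rng or np.random.default_rng()
    ll = 0.0
    for y0, y1 in zip(observations[:-1], observations[1:]):
        # simulate n_sim forward paths of one sampling interval from y0
        samples = np.array([simulate_step(y0, theta, rng) for _ in range(n_sim)])
        # Silverman's rule-of-thumb bandwidth, floored to avoid degeneracy
        h = max(1.06 * samples.std() * n_sim ** (-0.2), 1e-6)
        kernel = np.exp(-0.5 * ((y1 - samples) / h) ** 2) / (h * np.sqrt(2 * np.pi))
        ll += np.log(kernel.mean() + 1e-300)   # kernel estimate of p(y1 | y0; theta)
    return ll
```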
Taming the Wild: A Unified Analysis of Hogwild!-Style Algorithms.
De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher
2015-12-01
Stochastic gradient descent (SGD) is a ubiquitous algorithm for a variety of machine learning problems. Researchers and industry have developed several techniques to optimize SGD's runtime performance, including asynchronous execution and reduced precision. Our main result is a martingale-based analysis that enables us to capture the rich noise models that may arise from such techniques. Specifically, we use our new analysis in three ways: (1) we derive convergence rates for the convex case (Hogwild!) with relaxed assumptions on the sparsity of the problem; (2) we analyze asynchronous SGD algorithms for non-convex matrix problems including matrix completion; and (3) we design and analyze an asynchronous SGD algorithm, called Buckwild!, that uses lower-precision arithmetic. We show experimentally that our algorithms run efficiently for a variety of problems on modern hardware.
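A minimal sketch of the Hogwild!-style access pattern on a sparse least-squares toy problem: several threads update one shared weight vector with no locks. Python's GIL serializes the bytecode, so this illustrates the pattern rather than the hardware-level speedup; sizes and rates are illustrative assumptions.

```python
import threading
import numpy as np

rng = np.random.default_rng(0)
d, n = 100, 5000
X = (rng.random((n, d)) < 0.05) * rng.normal(size=(n, d))   # sparse-ish rows
w_true = rng.normal(size=d)
y = X @ w_true
w = np.zeros(d)                                 # shared weights, updated without locking

def worker(seed, steps=20000, lr=0.01):
    r = np.random.default_rng(seed)
    for _ in range(steps):
        i = r.integers(n)
        grad = (X[i] @ w - y[i]) * X[i]         # touches few coordinates per step
        np.subtract(w, lr * grad, out=w)        # lock-free in-place update

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(np.linalg.norm(w - w_true))               # small residual despite races
```

The sparsity of each row is what makes unsynchronized updates mostly conflict-free, which is the regime the convergence analysis above addresses.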
Zhu, Feng; Aziz, H. M. Abdul; Qian, Xinwu; ...
2015-01-31
Our study develops a novel reinforcement learning algorithm for the challenging coordinated signal control problem. Traffic signals are modeled as intelligent agents interacting with the stochastic traffic environment. The model is built on the framework of coordinated reinforcement learning. The Junction Tree Algorithm (JTA) based reinforcement learning is proposed to obtain an exact inference of the best joint actions for all the coordinated intersections. Moreover, the algorithm is implemented and tested with a network containing 18 signalized intersections in VISSIM. Finally, our results show that the JTA based algorithm outperforms independent learning (Q-learning), real-time adaptive learning, and fixed timing plans in terms of average delay, number of stops, and vehicular emissions at the network level.
Primal-dual techniques for online algorithms and mechanisms
NASA Astrophysics Data System (ADS)
Liaghat, Vahid
An offline algorithm is one that knows the entire input in advance. An online algorithm, however, processes its input in a serial fashion. In contrast to offline algorithms, an online algorithm works in a local fashion and has to make irrevocable decisions without having the entire input. Online algorithms are often not optimal, since their irrevocable decisions may turn out to be inefficient after receiving the rest of the input. For a given online problem, the goal is to design algorithms which are competitive against the offline optimal solutions. In a classical offline scenario, it is common to see a dual analysis of problems that can be formulated as a linear or convex program. Primal-dual and dual-fitting techniques have been successfully applied to many such problems. Unfortunately, the usual tricks come up short in an online setting, since an online algorithm must make decisions without knowing even the whole program. In this thesis, we study the competitive analysis of fundamental problems in the literature, such as different variants of online matching and online Steiner connectivity, via online dual techniques. Although there are many generic tools for solving an optimization problem in the offline paradigm, much less is known, in comparison, about tackling online problems. The main focus of this work is to design generic techniques for solving integral linear optimization problems where the solution space is restricted via a set of linear constraints. A general family of these problems is online packing/covering problems. Our work shows that for several seemingly unrelated problems, primal-dual techniques can be successfully applied as a unifying approach for analyzing them. We believe this leads to generic algorithmic frameworks for solving online problems. In the first part of the thesis, we show the effectiveness of our techniques in stochastic settings and their applications in Bayesian mechanism design. In particular, we introduce new techniques for solving a fundamental linear optimization problem, namely the stochastic generalized assignment problem (GAP). This packing problem generalizes various problems such as online matching, ad allocation, and bin packing. We furthermore show applications of such results in mechanism design by introducing Prophet Secretary, a novel Bayesian model for online auctions. In the second part of the thesis, we focus on covering problems. We develop the framework of "Disk Painting" for a general class of network design problems that can be characterized by proper functions. This class generalizes the node-weighted and edge-weighted variants of several well-known Steiner connectivity problems. We furthermore design a generic technique for solving the prize-collecting variants of these problems when a dual analysis exists for the non-prize-collecting counterparts. Hence, we solve the online prize-collecting variants of several network design problems for the first time. Finally, we focus on designing techniques for online problems with mixed packing/covering constraints. We initiate the study of degree-bounded graph optimization problems in the online setting by designing an online algorithm with a tight competitive ratio for the degree-bounded Steiner forest problem. We hope these techniques establish a starting point for the analysis of the important class of online degree-bounded optimization problems on graphs.
Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs
Gade, Dinakar; Hackebeil, Gabriel; Ryan, Sarah M.; ...
2016-04-02
We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm contemporaneously. The lower bounds can be computed in any iteration of the algorithm by using dual prices that are calculated during execution of the standard PHA. In conclusion, we report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.
Yang, Xin; Zeng, Zhenxiang; Wang, Ruidong; Sun, Xueshan
2016-01-01
This paper presents a novel method for the optimization of the bi-objective Flexible Job-shop Scheduling Problem (FJSP) under stochastic processing times. The robust counterpart model and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) are used to solve the bi-objective FJSP with consideration of the completion time and the total energy consumption under stochastic processing times. The case study on GM Corporation verifies that the NSGA-II used in this paper is effective and has advantages in solving the proposed model compared with HPSO and PSO+SA. The idea and method of the paper can be generalized widely in the manufacturing industry, because it can reduce the energy consumption of energy-intensive manufacturing enterprises with little investment when the new approach is applied to existing systems.
Ultimate open pit stochastic optimization
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Caron, Josiane
2013-02-01
Classical open pit optimization (the maximum closure problem) is performed on block estimates, without directly considering the uncertainty of block grades. We propose an alternative approach of stochastic optimization, in which the optimal pit is computed on block expected profits, rather than expected grades, obtained from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach, and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than with the classical or simulated pit. The main factor controlling the relative gain of the stochastic optimization over the classical approach is shown to be the information level as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with treatment costs but decrease with mining costs; its gains over the simulated pit approach increase with both treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical or simulated pit approaches for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
Algorithm refinement for stochastic partial differential equations: II. Correlated systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, Francis J.; Garcia, Alejandro L.; Tartakovsky, Daniel M.
2005-08-10
We analyze a hybrid particle/continuum algorithm for a hydrodynamic system with long ranged correlations. Specifically, we consider the so-called train model for viscous transport in gases, which is based on a generalization of the random walk process for the diffusion of momentum. This discrete model is coupled with its continuous counterpart, given by a pair of stochastic partial differential equations. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass and momentum conservation. This methodology is an extension of our stochastic Algorithm Refinement (AR) hybrid for simple diffusion [F. Alexander, A. Garcia, D. Tartakovsky, Algorithm refinement for stochastic partial differential equations: I. Linear diffusion, J. Comput. Phys. 182 (2002) 47-66]. Results from a variety of numerical experiments are presented for steady-state scenarios. In all cases the mean and variance of density and velocity are captured correctly by the stochastic hybrid algorithm. For a non-stochastic version (i.e., using only deterministic continuum fluxes) the long-range correlations of velocity fluctuations are qualitatively preserved but at reduced magnitude.
Li, Desheng
2014-01-01
This paper proposes a novel variant of the cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm, called CQPSO-DVSA-LFD, with two mechanisms to reduce the search space and avoid stagnation. The first mechanism, called Dynamic Varying Search Area (DVSA), restricts particle activity to a reduced search area. The second uses Lévy flights to generate stochastic disturbances in the movement of particles in order to escape local optima. To test the performance of CQPSO-DVSA-LFD, numerical experiments are conducted comparing the proposed algorithm with different variants of PSO. According to the experimental results, the proposed method performs better than other PSO variants on both benchmark test functions and a combinatorial optimization problem, namely the job-shop scheduling problem. PMID:24851085
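The paper's exact Lévy-flight generator is not spelled out here, so the following is an assumption: a sketch of the commonly used Mantegna scheme for producing the heavy-tailed disturbance steps.

```python
from math import gamma, sin, pi
import numpy as np

def levy_steps(dim, beta=1.5, rng=None):
    """Mantegna's algorithm: heavy-tailed steps with Levy index beta (illustrative)."""
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)      # occasional very large jumps

# e.g. perturb a particle position x toward escaping a local optimum:
# x = x + 0.01 * levy_steps(x.size) * (x - x_best)
```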
Aerodynamic optimization of supersonic compressor cascade using differential evolution on GPU
NASA Astrophysics Data System (ADS)
Aissa, Mohamed Hasanine; Verstraete, Tom; Vuik, Cornelis
2016-06-01
Differential Evolution (DE) is a powerful stochastic optimization method. Compared to gradient-based algorithms, DE is able to avoid local minima but at the same time requires more function evaluations. In turbomachinery applications, function evaluations are performed with time-consuming CFD simulations, which results in a long, unaffordable design cycle. Modern high performance computing systems, especially Graphic Processing Units (GPUs), are able to alleviate this inconvenience by accelerating the design evaluation itself. In this work we present a validated CFD solver running on GPUs, able to accelerate the design evaluation and thus the entire design process. An achieved speedup of 20x to 30x enabled the DE algorithm to run on a high-end computer instead of a costly large cluster. The GPU-enhanced DE was used to optimize the aerodynamics of a supersonic compressor cascade, achieving an aerodynamic loss minimization of 20%.
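A minimal sketch of the DE/rand/1/bin scheme underlying such an optimizer, with the expensive CFD evaluation replaced by a cheap test function; population size, F, and CR are illustrative textbook values.

```python
import numpy as np

def de(f, bounds, pop=30, gens=200, F=0.6, CR=0.9, rng=None):
    rng = rng or np.random.default_rng()
    lo, hi = bounds
    X = rng.uniform(lo, hi, (pop, len(lo)))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # mutation: combine three distinct other members (DE/rand/1)
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover, forcing at least one gene from the mutant
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)                   # the expensive evaluation (CFD in the paper)
            if ft <= fit[i]:                # greedy one-to-one selection
                X[i], fit[i] = trial, ft
    return X[fit.argmin()], fit.min()

best, val = de(lambda x: np.sum(x ** 2), (np.array([-5.0] * 4), np.array([5.0] * 4)))
```

Because each generation's trial evaluations are independent, the inner loop is exactly the part that a GPU-resident solver can evaluate in parallel.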
Control system estimation and design for aerospace vehicles with time delay
NASA Technical Reports Server (NTRS)
Allgaier, G. R.; Williams, T. L.
1972-01-01
The problems of estimation and control of discrete, linear, time-varying systems are considered. Previous solutions to these problems involved approximate techniques, open-loop control solutions, or results that required excessive computation. The estimation problem is solved by two different methods, both of which yield the identical algorithm for determining the optimal filter; the partitioned results, however, achieve a substantial reduction in computation time and storage requirements over the expanded solution. The results reduce to the Kalman filter when no delays are present in the system. The control problem is also solved by two different methods, both of which yield identical algorithms for determining the optimal control gains. The stochastic control is shown to be identical to the deterministic control, thus extending the separation principle to time-delay systems. The results obtained reduce to the familiar optimal control solution when no time delays are present in the system.
Quadruped Robot Locomotion using a Global Optimization Stochastic Algorithm
NASA Astrophysics Data System (ADS)
Oliveira, Miguel; Santos, Cristina; Costa, Lino; Ferreira, Manuel
2011-09-01
The problem of tuning the parameters of nonlinear dynamical systems, such that the attained results are considered good ones, is a relevant one. This article describes the development of a gait optimization system that allows a fast but stable robot quadruped crawl gait. We combine bio-inspired Central Pattern Generators (CPGs) and Genetic Algorithms (GA). CPGs are modelled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. The GA finds parameterizations of the CPG parameters which attain good gaits in terms of speed, vibration and stability. Moreover, two constraint-handling techniques based on tournament selection and a repairing mechanism are embedded in the GA to solve the proposed constrained optimization problem and make the search more efficient. The experimental results, performed on a simulated Aibo robot, demonstrate that our approach allows low vibration with a high velocity and a wide stability margin for a quadruped slow crawl gait.
NASA Astrophysics Data System (ADS)
Hu, Weifei; Park, Dohyun; Choi, DongHoon
2013-12-01
A composite blade structure for a 2 MW horizontal axis wind turbine is optimally designed. The design requirements are simultaneously minimizing material cost and blade weight while satisfying constraints on stress ratio, tip deflection, fatigue life and laminate layup requirements. The stress ratio and tip deflection under extreme gust loads and the fatigue life under a stochastic normal wind load are evaluated. A blade element wind load model is proposed to account for the wind pressure difference due to blade height change during rotor rotation. For fatigue life evaluation, the stress result of an implicit nonlinear dynamic analysis under a time-varying fluctuating wind is converted to histograms of the mean and amplitude of the maximum stress ratio using the rainflow counting algorithm. Miner's rule is employed to predict the fatigue life. After integrating and automating the whole analysis procedure, an evolutionary algorithm is used to solve the discrete optimization problem.
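The fatigue step reduces to a small calculation once the rainflow histogram is available; here is a sketch with an assumed power-law S-N curve and an assumed mean-stress correction (the study's material constants are not given here).

```python
def miner_damage(rainflow_bins, S_f=400.0, m=10.0):
    """Accumulate fatigue damage per Miner's rule from (mean, amplitude, count) bins.

    S-N curve N(S) = (S_f / S)**m and the 0.3 mean-stress factor are
    illustrative assumptions, not the paper's material data.
    """
    damage = 0.0
    for mean, amplitude, count in rainflow_bins:
        S = amplitude + 0.3 * mean       # crude mean-stress correction (assumption)
        N_allow = (S_f / S) ** m         # cycles to failure at effective stress S
        damage += count / N_allow        # linear damage accumulation
    return damage                         # failure predicted when damage >= 1

# e.g. with per-year rainflow bins, predicted life in years is:
life_years = 1.0 / miner_damage([(50.0, 80.0, 1e5), (60.0, 120.0, 2e3)])
```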
NASA Astrophysics Data System (ADS)
Otake, Y.; Leonard, S.; Reiter, A.; Rajan, P.; Siewerdsen, J. H.; Ishii, M.; Taylor, R. H.; Hager, G. D.
2015-03-01
We present a system for registering the coordinate frame of an endoscope to pre- or intra-operatively acquired CT data based on optimizing the similarity metric between an endoscopic image and an image predicted via rendering of CT. Our method is robust and semi-automatic because it takes into account physical constraints, specifically collisions between the endoscope and the anatomy, to initialize and constrain the search. The proposed optimization method is based on a stochastic optimization algorithm that evaluates a large number of similarity metric functions in parallel on a graphics processing unit. Images from a cadaver and a patient were used for evaluation. The registration error was 0.83 mm and 1.97 mm for the cadaver and patient images, respectively. The average registration time over 60 trials was 4.4 seconds. The patient study demonstrated the robustness of the proposed algorithm against moderate anatomical deformation.
Solving multistage stochastic programming models of portfolio selection with outstanding liabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edirisinghe, C.
1994-12-31
Models for portfolio selection in the presence of an outstanding liability have received significant attention, for example, models for pricing options. The problem may be described briefly as follows: given a set of risky securities (and a riskless security such as a bond), and given a set of cash flows, i.e., an outstanding liability, to be met at some future date, determine an initial portfolio and a dynamic trading strategy for the underlying securities such that the initial cost of the portfolio is within a prescribed wealth level and the expected cash surplus arising from trading is maximized. While the trading strategy should be self-financing, there may also be other restrictions such as leverage and short-sale constraints. Usually the treatment is limited to binomial evolution of uncertainty (of the stock price), with possible extensions for developing computational bounds for multinomial generalizations. Posed as stochastic programming models of decision making, we investigate alternative efficient solution procedures under continuous evolution of uncertainty, for discrete-time economies. We point out an important moment problem arising in the portfolio selection problem, the solution of (or bounds on) which provides the basis for developing efficient computational algorithms. While the underlying stochastic program may be computationally tedious even for a modest number of trading opportunities (i.e., time periods), the derived algorithms may be used to solve problems whose sizes are beyond those considered within stochastic optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kok Foong; Patterson, Robert I.A.; Wagner, Wolfgang
2015-12-15
This paper introduces stochastic weighted particle algorithms for the solution of multi-compartment population balance equations. In particular, it presents a class of fragmentation weight transfer functions which are constructed such that the number of computational particles stays constant during fragmentation events. The weight transfer functions are constructed based on systems of weighted computational particles, and each of them leads to a stochastic particle algorithm for the numerical treatment of population balance equations. Besides fragmentation, the algorithms also consider physical processes such as coagulation and the exchange of mass with the surroundings. The numerical properties of the three stochastic weighted algorithms are compared to those of the direct simulation algorithm and an existing method for the fragmentation of weighted particles, with the numerical errors of the stochastic solutions assessed as a function of the fragmentation rate. It is found that the new algorithms show better numerical performance than the two existing methods, especially for systems with a significant number of large particles and high fragmentation rates. The algorithms are also applied to a multi-dimensional granulation model.
Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows
NASA Astrophysics Data System (ADS)
Srivastav, R. K.; Srinivasan, K.; Sudheer, K.
2009-05-01
Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The kinds of stochastic models adopted for multi-season streamflow generation in hydrology are: (i) parametric models, which hypothesize the form of the periodic dependence structure and the distributional form a priori (examples are PAR and PARMA), along with disaggregation models that aim to preserve the correlation structure at the periodic level and the aggregated annual level; (ii) nonparametric models (examples are bootstrap/kernel-based methods such as k-nearest neighbor (k-NN) and matched block bootstrap (MABB), and nonparametric disaggregation), which characterize the laws of chance describing the streamflow process without prior assumptions as to the form or structure of these laws; and (iii) hybrid models, which blend parametric and nonparametric models advantageously to model streamflows effectively. Despite these developments over the last four decades, accurate prediction of storage and critical drought characteristics has posed a persistent challenge to the stochastic modeler. This is partly because the stochastic streamflow model parameters are usually estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS) estimation), and the efficacy of the models is subsequently validated on the accuracy with which the water-use characteristics are predicted, which requires a large number of trial simulations and inspection of many plots and tables; even then, accurate prediction of storage and critical drought characteristics may not be ensured. In this study, a multi-objective optimization framework is proposed to find the optimal hybrid model (a blend of a simple parametric PAR(1) model and the matched block bootstrap (MABB)) based on the explicit objectives of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching over a multi-dimensional parameter space, involving simultaneous exploration of the parametric (PAR(1)) and nonparametric (MABB) components. This is achieved using an efficient evolutionary-search-based optimization tool, the non-dominated sorting genetic algorithm II (NSGA-II). This approach reduces the drudgery involved in manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution, and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and River Weber in the USA. For both rivers, the proposed GA-based hybrid model, in which the parametric and nonparametric components are explored simultaneously, yields a much better prediction of storage capacity than the MLE-based hybrid models, in which model selection is done in two stages and thus probably results in a sub-optimal model. This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales.
A stochastic framework for spot-scanning particle therapy.
Robini, Marc; Yuemin Zhu; Wanyu Liu; Magnin, Isabelle
2016-08-01
In spot-scanning particle therapy, inverse treatment planning is usually limited to finding the optimal beam fluences given the beam trajectories and energies. We address the much more challenging problem of jointly optimizing the beam fluences, trajectories and energies. For this purpose, we design a simulated annealing algorithm with an exploration mechanism that balances the conflicting demands of a small mixing time at high temperatures and a reasonable acceptance rate at low temperatures. Numerical experiments substantiate the relevance of our approach and open new horizons to spot-scanning particle therapy.
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e., the probability density function based ACO (PDF-ACO) algorithm, the region ACO (RACO) algorithm, the stochastic ACO (SACO) algorithm, and the homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e., the Rosin-Rammler (R-R), normal (N-N), and logarithmic normal (L-N) distribution functions, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows reasonable agreement between the original and the general distribution functions when only the length of the rotational semi-axis is varied.
Efficient experimental design of high-fidelity three-qubit quantum gates via genetic programming
NASA Astrophysics Data System (ADS)
Devra, Amit; Prabhu, Prithviraj; Singh, Harpreet; Arvind; Dorai, Kavita
2018-03-01
We have designed efficient quantum circuits for the three-qubit Toffoli (controlled-controlled-NOT) and Fredkin (controlled-SWAP) gates, optimized via genetic programming methods. The gates thus obtained were experimentally implemented on a three-qubit NMR quantum information processor with high fidelity. Toffoli and Fredkin gates, in conjunction with single-qubit Hadamard gates, form a universal gate set for quantum computing and are an essential component of several quantum algorithms. Genetic algorithms are stochastic search algorithms based on the logic of natural selection and biological genetics and have been widely used for quantum information processing applications. We devised a new selection mechanism within the genetic algorithm framework, which we call the "Luck-Choose" mechanism, and achieved faster convergence to a solution with it than with existing selection mechanisms. The optimization was performed under the constraint that the experimentally implemented pulses are of short duration and can be implemented with high fidelity. We demonstrate the advantage of our pulse sequences by comparing our results with existing experimental schemes and other numerical optimization methods.
Optimal Search Strategy for the Definition of a DNAPL Source
2009-08-01
[Only figure and reference fragments survive extraction here: Fig. 29 shows flow-field results for the stochastic model (colored contours) against a potentiometric map created by a hydrogeologist using well water-level measurements (black contours); Section 5.1.3 (source search algorithm) refers to Fig. 30; and a truncated citation mentions "Forecasting piezometric head levels in the Floridian aquifer: A Kalman filtering approach", Water Resources Research, 29(11).]
NASA Astrophysics Data System (ADS)
Marchetti, Luca; Priami, Corrado; Thanh, Vo Hong
2016-07-01
This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm that relies on propensity bounds to select the next reaction firing and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating the performance and accuracy of HRSSA against other state-of-the-art algorithms.
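A minimal sketch of the rejection step at the core of RSSA, on which HRSSA builds: candidates are drawn from upper propensity bounds, and the exact propensity is evaluated only when a cheap lower-bound test fails. The bound arrays are assumed to come from a fluctuation interval around the current state, which is maintained elsewhere.

```python
import numpy as np

def rssa_step(a_lo, a_hi, exact_propensity, rng):
    """One RSSA firing: returns (reaction index, time advance).

    a_lo, a_hi: lower/upper propensity bounds per reaction;
    exact_propensity(j): evaluates the true propensity of reaction j.
    """
    total_hi = a_hi.sum()
    tau = 0.0
    while True:
        tau += rng.exponential(1.0 / total_hi)          # each trial consumes time
        j = rng.choice(len(a_hi), p=a_hi / total_hi)    # candidate from the bounds
        u = rng.random()
        if u <= a_lo[j] / a_hi[j]:
            return j, tau                               # accepted; exact a_j never computed
        if u <= exact_propensity(j) / a_hi[j]:
            return j, tau                               # accepted after exact evaluation
        # otherwise rejected: draw a new candidate
```

The saving is that most acceptances pass the lower-bound test, so exact propensities (and hence propensity updates) are computed rarely.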
Lampoudi, Sotiria; Gillespie, Dan T; Petzold, Linda R
2009-03-07
The Inhomogeneous Stochastic Simulation Algorithm (ISSA) is a variant of the stochastic simulation algorithm in which the spatially inhomogeneous volume of the system is divided into homogeneous subvolumes, and the chemical reactions in those subvolumes are augmented by diffusive transfers of molecules between adjacent subvolumes. The ISSA can be prohibitively slow when the system is such that diffusive transfers occur much more frequently than chemical reactions. In this paper we present the Multinomial Simulation Algorithm (MSA), which is designed to, on the one hand, outperform the ISSA when diffusive transfer events outnumber reaction events, and on the other, to handle small reactant populations with greater accuracy than deterministic-stochastic hybrid algorithms. The MSA treats reactions in the usual ISSA fashion, but uses appropriately conditioned binomial random variables for representing the net numbers of molecules diffusing from any given subvolume to a neighbor within a prescribed distance. Simulation results illustrate the benefits of the algorithm.
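A minimal sketch of the MSA's central move for one species on a 1-D row of subvolumes: the emigrants over a step dt are drawn binomially instead of simulating individual hops. The leaving probability and the reflecting boundaries are simplifying assumptions for illustration.

```python
import numpy as np

def msa_diffuse(n, d, dt, rng):
    """One diffusion update. n: molecule counts per subvolume; d: hop rate per side."""
    p_hop = 1.0 - np.exp(-2.0 * d * dt)      # prob. a molecule leaves its subvolume
    leaving = rng.binomial(n, p_hop)         # total emigrants, one draw per subvolume
    to_right = rng.binomial(leaving, 0.5)    # split emigrants between the two neighbours
    to_left = leaving - to_right
    new = n - leaving
    new[1:] += to_right[:-1]                 # arrivals from the left neighbour
    new[:-1] += to_left[1:]                  # arrivals from the right neighbour
    new[0] += to_left[0]                     # reflecting boundaries: edge emigrants
    new[-1] += to_right[-1]                  #   bounce back into their subvolume
    return new

rng = np.random.default_rng(0)
n = np.zeros(50, dtype=np.int64); n[25] = 10000
for _ in range(100):
    n = msa_diffuse(n, d=1.0, dt=0.01, rng=rng)   # reactions would be interleaved here
```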
Selected-node stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Duso, Lorenzo; Zechner, Christoph
2018-04-01
Stochastic simulations of biochemical networks are of vital importance for understanding complex dynamics in cells and tissues. However, existing methods to perform such simulations are associated with computational difficulties and addressing those remains a daunting challenge to the present. Here we introduce the selected-node stochastic simulation algorithm (snSSA), which allows us to exclusively simulate an arbitrary, selected subset of molecular species of a possibly large and complex reaction network. The algorithm is based on an analytical elimination of chemical species, thereby avoiding explicit simulation of the associated chemical events. These species are instead described continuously in terms of statistical moments derived from a stochastic filtering equation, resulting in a substantial speedup when compared to Gillespie's stochastic simulation algorithm (SSA). Moreover, we show that statistics obtained via snSSA profit from a variance reduction, which can significantly lower the number of Monte Carlo samples needed to achieve a certain performance. We demonstrate the algorithm using several biological case studies for which the simulation time could be reduced by orders of magnitude.
NASA Astrophysics Data System (ADS)
Li, Hechao
An accurate knowledge of the complex microstructure of a heterogeneous material is crucial for establishing quantitative structure-property relations and for predicting and optimizing the material's performance. X-ray tomography has provided a non-destructive means for microstructure characterization in both 3D and 4D (i.e., structural evolution over time). Traditional reconstruction algorithms such as the filtered-back-projection (FBP) method or algebraic reconstruction techniques (ART) require a huge number of tomographic projections and a segmentation step before microstructural quantification can be carried out, which can be quite time-consuming and computationally intensive. In this thesis, a novel procedure is first presented that allows one to directly extract key structural information, in the form of spatial correlation functions, from limited X-ray tomography data. The key component of the procedure is the computation of a "probability map", which provides the probability that an arbitrary point in the material system belongs to a specific phase. The correlation functions of interest are then readily computed from the probability map. Using effective medium theory, accurate predictions of physical properties (e.g., elastic moduli) can be obtained. Secondly, a stochastic optimization procedure is presented that enables one to accurately reconstruct material microstructure from a small number of X-ray tomographic projections (e.g., 20-40). Moreover, a stochastic procedure for multi-modal data fusion is proposed, in which both X-ray projections and correlation functions computed from limited 2D optical images are fused to accurately reconstruct complex heterogeneous materials in 3D. This multi-modal reconstruction algorithm proves able to integrate the complementary data in an effective optimization procedure, indicating its high efficiency in using limited structural information. Finally, the accuracy of the stochastic reconstruction procedure using limited X-ray projection data is ascertained by analyzing the microstructural degeneracy and the roughness of the energy landscape associated with different numbers of projections. The ground-state degeneracy of a microstructure is found to decrease with an increasing number of projections, indicating a higher probability that the reconstructed configurations match the actual microstructure. The roughness of the energy landscape also provides information about the complexity and convergence behavior of the reconstruction for a given microstructure and projection number.
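As an illustration of the correlation functions such a procedure targets, here is a sketch computing the two-point correlation function S2(r) of a binary microstructure along one axis; a random test image stands in for real tomography data.

```python
import numpy as np

rng = np.random.default_rng(0)
img = (rng.random((128, 128)) < 0.4).astype(float)   # phase-1 indicator field

def s2_x(img, r_max=40):
    """P(both ends of a length-r x-axis segment lie in phase 1), periodic BCs."""
    return np.array([np.mean(img * np.roll(img, r, axis=1))
                     for r in range(r_max)])

s2 = s2_x(img)
# s2[0] equals the phase-1 volume fraction; for an uncorrelated field it
# decays toward (volume fraction)**2 at large r.
```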
NASA Astrophysics Data System (ADS)
Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.
2016-03-01
Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper, a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking, and it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted the PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm to calculating the sidereal rotational angular velocity of the solar corona. To validate the results, we compare them with published manual results performed by an expert.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehrotra, Sanjay
2016-09-07
The support from this grant resulted in seven published papers and a technical report. Two papers are published in SIAM J. on Optimization [87, 88]; two papers are published in IEEE Transactions on Power Systems [77, 78]; one paper is published in Smart Grid [79]; one paper is published in Computational Optimization and Applications [44] and one in INFORMS J. on Computing [67]. The works in [44, 67, 87, 88] were funded primarily by this DOE grant. The applied papers in [77, 78, 79] were also supported through a subcontract from the Argonne National Lab. We start by presenting our main research results on the scenario generation problem in Sections 1–2. We present our algorithmic results on interior point methods for convex optimization problems in Section 3. We describe a new 'central' cutting surface algorithm developed for solving large scale convex programming problems (as is the case with our proposed research) with a semi-infinite number of constraints in Section 4. In Sections 5–6 we present our work on two application problems of interest to DOE.
NASA Astrophysics Data System (ADS)
Gui, Chun; Zhang, Ruisheng; Zhao, Zhili; Wei, Jiaxuan; Hu, Rongjing
In order to deal with the stochasticity of center-node selection and the instability of community detection in the label propagation algorithm, this paper proposes an improved label propagation algorithm, named the label propagation algorithm based on community belonging degree (LPA-CBD), which employs the community belonging degree to determine the number and centers of communities. The general process of LPA-CBD is that an initial community is identified by the nodes with the maximum degree, and then optimized or expanded using the community belonging degree. After obtaining the rough community structure of the network, the remaining nodes are labeled using the label propagation algorithm. The experimental results on 10 real-world networks and three synthetic networks show that LPA-CBD achieves a reasonable community number, better accuracy, and higher modularity compared with four other prominent algorithms. Moreover, the proposed algorithm not only has lower algorithmic complexity and higher community detection quality, but also improves the stability of the original label propagation algorithm.
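For reference, here is a minimal sketch of the plain label-propagation core that LPA-CBD stabilizes: each node repeatedly adopts the most frequent label among its neighbours, with ties broken at random (the stochastic step responsible for the instability discussed above).

```python
import random
from collections import Counter

def label_propagation(adj, max_iter=100, seed=0):
    """adj: dict mapping node -> list of neighbour nodes."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}                # every node starts in its own community
    for _ in range(max_iter):
        changed = False
        order = list(adj)
        rng.shuffle(order)                      # asynchronous, random update order
        for v in order:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            best = rng.choice([l for l, c in counts.items() if c == top])  # random tie-break
            if labels[v] != best:
                labels[v], changed = best, True
        if not changed:
            break
    return labels

# e.g. two cliques joined by nothing -> two labels:
# label_propagation({0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3]})
```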
Rayleigh wave dispersion curve inversion by using particle swarm optimization and genetic algorithm
NASA Astrophysics Data System (ADS)
Buyuk, Ersin; Zor, Ekrem; Karaman, Abdullah
2017-04-01
Inversion of surface wave dispersion curves, with its highly nonlinear nature, poses difficulties for traditional linear inverse methods due to strong dependence on the initial model, the possibility of trapping in local minima, and the evaluation of partial derivatives. Modern global optimization methods, such as the genetic algorithm (GA) and particle swarm optimization (PSO), can overcome these difficulties in surface wave analysis. GA is based on biological evolution and consists of reproduction, crossover, and mutation operations, while the PSO algorithm, developed after GA, is inspired by the social behaviour of bird flocks or fish schools. The utility of these methods requires a plausible convergence rate, acceptable relative error, and optimum computation cost, which are important for modelling studies. Even though the PSO and GA processes appear similar, the crossover operation of GA is not used in PSO, and mutation in GA is a stochastic process for changing genes within chromosomes. Unlike in GA, the particles in the PSO algorithm change their positions with velocities set according to each particle's own experience and the swarm's experience. In this study, we applied the PSO algorithm to estimate the S-wave velocities and thicknesses of a layered earth model from a Rayleigh wave dispersion curve, compared these results with GA, and emphasize the advantage of using the PSO algorithm for geophysical modelling studies, considering its rapid convergence, low misfit error, and low computation cost.
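A minimal sketch of the PSO update described above, where each particle's velocity mixes inertia, a cognitive pull toward its own best position, and a social pull toward the swarm best. In the dispersion-inversion setting, x would hold layer S-wave velocities and thicknesses and f the dispersion-curve misfit; the coefficients are common textbook values, not the study's.

```python
import numpy as np

def pso(f, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, rng=None):
    rng = rng or np.random.default_rng()
    x = rng.uniform(lo, hi, (n, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])   # personal bests
    g = pbest[pval.argmin()]                              # swarm best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # inertia + cognitive + social
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()]
    return g, pval.min()

best, misfit = pso(lambda m: np.sum((m - 1.0) ** 2),      # stand-in for dispersion misfit
                   lo=np.zeros(6), hi=np.full(6, 3.0))
```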
NASA Astrophysics Data System (ADS)
Biyanto, T. R.; Matradji; Syamsi, M. N.; Fibrianto, H. Y.; Afdanny, N.; Rahman, A. H.; Gunawan, K. S.; Pratama, J. A. D.; Malwindasari, A.; Abdillah, A. I.; Bethiana, T. N.; Putra, Y. A.
2017-11-01
Green building has been developing in both design and quality, but its adoption has been limited by expensive investment. A green building can reduce the energy usage inside the building, especially in the utilization of the cooling system. The external load plays a major role in the usage of the cooling system and is affected by the type of wall sheathing, glass, and roof; proper selection of the wall, glass type, and roof material is therefore very important for reducing the external load. Hence, optimization of energy efficiency and conservation in green building design is required. Since this optimization consists of integer and non-linear equations, the problem falls into mixed-integer non-linear programming (MINLP), which requires global optimization techniques such as stochastic optimization algorithms. In this paper, the optimized variables, i.e., the type of glass and roof, were chosen using the Duelist, Killer-Whale, and Rain-Water Algorithms to obtain the optimum energy while considering minimal investment. The optimization results exhibited that single Planibel-G glass of 3.2 mm thickness with glass wool insulation provided a maximum ROI of 36.8486%, an EUI reduction of 54 kWh/m2·year, a CO2 emission reduction of 486.8971 tons/year, and a reduced investment of 4,078,905,465 IDR.
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yunlong; Wang, Aiping; Guo, Lei
This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error is minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
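As context for the density-estimation step, the sketch below computes a Parzen-window (Gaussian kernel) estimate of Renyi's quadratic error entropy, a quantity commonly minimized in error-entropy control. The paper's exact recursive formulation is not reproduced here, and the kernel width `sigma` is an assumed tuning parameter.

```python
# A minimal sketch of Renyi's quadratic error entropy estimated by Parzen
# windowing with Gaussian kernels; `sigma` is an assumed kernel width.
import numpy as np

def quadratic_error_entropy(errors, sigma=0.5):
    e = np.asarray(errors, dtype=float)[:, None]
    d2 = (e - e.T) ** 2                       # pairwise squared error gaps
    # Gaussian kernel of variance 2*sigma^2 (kernel convolved with itself)
    info_potential = (np.exp(-d2 / (4 * sigma**2)).mean()
                      / np.sqrt(4 * np.pi * sigma**2))
    return -np.log(info_potential)            # Renyi quadratic entropy H2
```

A controller driving this quantity downward concentrates the error density around a single sharp peak, which is the intuition behind error-entropy minimization.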
An efficient hybrid method for stochastic reaction-diffusion biochemical systems with delay
NASA Astrophysics Data System (ADS)
Sayyidmousavi, Alireza; Ilie, Silvana
2017-12-01
Many chemical reactions, such as gene transcription and translation in living cells, need a certain time to finish once they are initiated. Simulating stochastic models of reaction-diffusion systems with delay can be computationally expensive. In the present paper, a novel hybrid algorithm is proposed to accelerate the stochastic simulation of delayed reaction-diffusion systems. The delayed reactions may be of consuming or non-consuming delay type. The algorithm is designed for moderately stiff systems in which the events can be partitioned into slow and fast subsets according to their propensities. The proposed algorithm is applied to three benchmark problems and the results are compared with those of the delayed Inhomogeneous Stochastic Simulation Algorithm. The numerical results show that the new hybrid algorithm achieves considerable speed-up in the run time and very good accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marchetti, Luca, E-mail: marchetti@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; University of Trento, Department of Mathematics
This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating performance and accuracy of HRSSA against other state of the art algorithms.
Stochastic P-bifurcation and stochastic resonance in a noisy bistable fractional-order system
NASA Astrophysics Data System (ADS)
Yang, J. H.; Sanjuán, Miguel A. F.; Liu, H. G.; Litak, G.; Li, X.
2016-12-01
We investigate the stochastic response of a noisy bistable fractional-order system when the fractional order lies in the interval (0, 2]. We focus mainly on the stochastic P-bifurcation and the phenomenon of stochastic resonance. We compare the generalized Euler algorithm and the predictor-corrector approach, which are commonly used for numerical calculations of fractional-order nonlinear equations. Based on the predictor-corrector approach, the stochastic P-bifurcation and the stochastic resonance are investigated. Both the fractional-order value and the noise intensity can induce a stochastic P-bifurcation. The fractional order may lead the stationary probability density function to turn from a single-peak mode to a double-peak mode, whereas the noise intensity may transform it from a double-peak mode to a single-peak mode. The stochastic resonance is investigated thoroughly, according to the linear and the nonlinear response theory. In the linear response theory, the optimal stochastic resonance may occur when the value of the fractional order is larger than one; in previous works, the fractional order is usually limited to the interval (0, 1]. Moreover, the stochastic resonance at the subharmonic frequency and the superharmonic frequency are investigated, respectively, using the nonlinear response theory. When it occurs at the subharmonic frequency, the resonance may be strong and cannot be ignored; at the superharmonic frequency, the resonance is weak. We believe that the results in this paper might be useful for the signal processing of nonlinear systems.
A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation
Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao
2016-01-01
The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to adaptive Kalman filtering algorithms, the H-infinity filter is able to address interference in the stochastic model by minimizing the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter to form a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation data were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms. PMID:27999361
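For reference, a minimal linear Kalman filter step is sketched below; the paper's contribution replaces this gain computation with an adaptive, H-infinity (worst-case) criterion, which is not reproduced here. The matrices F, H, Q, R follow the usual state-space naming and are assumptions of the sketch.

```python
# One predict/update cycle of a standard linear Kalman filter, for context.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # predict: propagate state estimate and covariance through the model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update: correct with the measurement z
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```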
Uncertainty-based Optimization Algorithms in Designing Fractionated Spacecraft
Ning, Xin; Yuan, Jianping; Yue, Xiaokui
2016-01-01
A fractionated spacecraft is an innovative application of distributed space systems. To fully understand the impact of various uncertainties on its development, launch and in-orbit operation, we use the stochastic mission-cycle cost to comprehensively evaluate the survivability, flexibility, reliability and economy of different module-division strategies for various configurations of fractionated spacecraft. We systematically describe the concept, review the evaluation and optimal design methods proposed in recent years, and propose the stochastic mission-cycle cost for comprehensive evaluation. We also establish cost models for module development, launch and deployment, together with the impacts of their respective uncertainties. Finally, we carry out Monte Carlo simulations of the complete mission-cycle costs of various configurations of fractionated spacecraft under various uncertainties, and compare the probability density distributions and statistical characteristics of the stochastic mission-cycle cost under two strategies: timed module replacement and non-timed module replacement. The simulation results verify the effectiveness of the comprehensive evaluation method and show that it can evaluate the adaptability of a fractionated spacecraft under different technical and mission conditions. PMID:26964755
Calculation of a double reactive azeotrope using stochastic optimization approaches
NASA Astrophysics Data System (ADS)
Mendes Platt, Gustavo; Pinheiro Domingos, Roberto; Oliveira de Andrade, Matheus
2013-02-01
A homogeneous reactive azeotrope is a thermodynamic coexistence condition of two phases under chemical and phase equilibrium, where the compositions of both phases (in the Ung-Doherty sense) are equal. This kind of nonlinear phenomenon arises in real-world situations and has applications in the chemical and petrochemical industries. The modeling of reactive azeotrope calculation is represented by a nonlinear algebraic system with phase equilibrium, chemical equilibrium and azeotropy equations. This nonlinear system can exhibit more than one solution, corresponding to a double reactive azeotrope. The robust calculation of reactive azeotropes can be conducted by several approaches, such as interval-Newton/generalized bisection algorithms and hybrid stochastic-deterministic frameworks. In this paper, we investigate the numerical aspects of the calculation of reactive azeotropes using two metaheuristics: the Luus-Jaakola adaptive random search and the Firefly algorithm. Moreover, we present results for a system of industrial interest with more than one azeotrope, the system isobutene/methanol/methyl-tert-butyl-ether (MTBE). We present convergence patterns for both algorithms, illustrating, in a bidimensional subdomain, the identification of reactive azeotropes. A strategy for the calculation of multiple roots in nonlinear systems is also applied. The results indicate that both algorithms are suitable and robust when applied to reactive azeotrope calculations for this challenging nonlinear system.
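A minimal sketch of the Luus-Jaakola adaptive random search named above: sample uniformly inside a region around the incumbent solution and contract the region on each outer pass. The residual function `f`, the initial region sizes, and the contraction rate are illustrative assumptions, not the paper's settings.

```python
# Luus-Jaakola adaptive random search, sketched for a generic residual f.
import numpy as np

def luus_jaakola(f, x0, radius, n_outer=50, n_inner=100, contraction=0.95):
    rng = np.random.default_rng(1)
    x = np.asarray(x0, dtype=float)
    r = np.asarray(radius, dtype=float)
    fx = f(x)
    for _ in range(n_outer):
        for _ in range(n_inner):
            candidate = x + rng.uniform(-r, r)   # uniform draw in current region
            fc = f(candidate)
            if fc < fx:                          # keep improving points only
                x, fx = candidate, fc
        r = r * contraction                      # shrink the search region
    return x, fx
```

For azeotrope calculations, `f` would be the norm of the equilibrium/azeotropy residuals, so every root of the nonlinear system is a global minimizer with value zero.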
Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks
NASA Astrophysics Data System (ADS)
Sun, Wei; Chang, K. C.
2005-05-01
Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution under a time constraint. Several simulation methods are currently available: logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, then we propose an improved importance sampling algorithm called linear Gaussian importance sampling for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
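To illustrate the likelihood weighting idea reviewed above, the toy sketch below estimates P(A=1 | B=1) on a two-node network A -> B: non-evidence nodes are sampled forward and each sample is weighted by the likelihood of the evidence. The CPT numbers are invented for the example, not drawn from the paper.

```python
# Likelihood weighting on a toy network A -> B with evidence B = 1.
import random

def likelihood_weighting(n=100_000, p_a=0.3, p_b1_given_a=(0.1, 0.8)):
    num = den = 0.0
    for _ in range(n):
        a = 1 if random.random() < p_a else 0   # sample the non-evidence node
        w = p_b1_given_a[a]                     # weight by P(B=1 | A=a)
        num += w * a
        den += w
    return num / den

# Analytically: 0.3*0.8 / (0.3*0.8 + 0.7*0.1) ~= 0.774
print(likelihood_weighting())
```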
Adaptive Bayes classifiers for remotely sensed data
NASA Technical Reports Server (NTRS)
Raulston, H. S.; Pace, M. O.; Gonzalez, R. C.
1975-01-01
An algorithm is developed for a learning, adaptive, statistical pattern classifier for remotely sensed data. The estimation procedure consists of two steps: (1) an optimal stochastic approximation of the parameters of interest, and (2) a projection of the parameters in time and space. The results reported are for Gaussian data in which the mean vector of each class may vary with time or position after the classifier is trained.
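In its simplest form, the stochastic-approximation step described above is a Robbins-Monro running update of a class statistic; the decaying gain below is an illustrative choice satisfying the usual step-size conditions, not necessarily the paper's exact schedule.

```python
# A minimal Robbins-Monro update of a class mean vector, as a sketch of
# the "optimal stochastic approximation" step; the gain 1/(t+1) is an
# illustrative schedule (sum of gains diverges, sum of squares converges).
import numpy as np

def adapt_mean(mu, x_t, t):
    gain = 1.0 / (t + 1)
    return mu + gain * (np.asarray(x_t) - np.asarray(mu))
```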
NASA Astrophysics Data System (ADS)
Schönborn, Jan Boyke; Saalfrank, Peter; Klamroth, Tillmann
2016-01-01
We combine the stochastic pulse optimization (SPO) scheme with the time-dependent configuration interaction singles method in order to control the high frequency response of a simple molecular model system to a tailored femtosecond laser pulse. For this purpose, we use H2 treated in the fixed nuclei approximation. The SPO scheme, like similar genetic algorithms, is especially suited to control highly non-linear processes, which we consider here in the context of high harmonic generation. We demonstrate that SPO can be used to realize a "non-harmonic" response of H2 to a laser pulse. Specifically, we show how adding low intensity side frequencies to the dominant carrier frequency of the laser pulse and stochastically optimizing their contribution can create a high-frequency spectral signal of significant intensity, not harmonic to the carrier frequency. At the same time, it is possible to suppress the harmonic signals in the same spectral region, although the carrier frequency is kept dominant during the optimization.
A disturbance based control/structure design algorithm
NASA Technical Reports Server (NTRS)
Mclaren, Mark D.; Slater, Gary L.
1989-01-01
Some authors take a classical approach to the simultaneous structure/control optimization by attempting to simultaneously minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization will be based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response to excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, will be considered. These are full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is inserted by the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results will be obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Tong; Gu, YuanTong, E-mail: yuantong.gu@qut.edu.au
As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the length scale of soft matters in the modeling of mechanical behaviors. However, the classical thermostat algorithm in highly coarse-grained molecular dynamics would underestimate the thermodynamic behaviors of soft matters (e.g. microfilaments in cells), which can weaken the ability of materials to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated by all-atom MD simulation. This new stochastic thermostat algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matters.
Optimal Control for Stochastic Delay Evolution Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Qingxin, E-mail: mqx@hutc.zj.cn; Shen, Yang, E-mail: skyshen87@gmail.com
2016-08-15
In this paper, we investigate a class of infinite-dimensional optimal control problems, where the state equation is given by a stochastic delay evolution equation with random coefficients, and the corresponding adjoint equation is given by an anticipated backward stochastic evolution equation. We first prove the continuous dependence theorems for stochastic delay evolution equations and anticipated backward stochastic evolution equations, and show the existence and uniqueness of solutions to anticipated backward stochastic evolution equations. Then we establish necessary and sufficient conditions for optimality of the control problem in the form of Pontryagin's maximum principles. To illustrate the theoretical results, we apply stochastic maximum principles to study two examples, an infinite-dimensional linear-quadratic control problem with delay and an optimal control of a Dirichlet problem for a stochastic partial differential equation with delay. Further applications of the two examples to a Cauchy problem for a controlled linear stochastic partial differential equation and an optimal harvesting problem are also considered.
Unification Theory of Optimal Life Histories and Linear Demographic Models in Internal Stochasticity
Oizumi, Ryo
2014-01-01
Life history of organisms is exposed to uncertainty generated by internal and external stochasticities. Internal stochasticity is generated by randomness in each individual life history, such as randomness in food intake, genetic character and size growth rate, whereas external stochasticity is due to the environment. For instance, it is known that external stochasticity tends to affect population growth rate negatively. A recent theoretical study using path-integral formulation in structured linear demographic models has shown that internal stochasticity can affect population growth rate positively or negatively. However, internal stochasticity has not been the main subject of research. Taking into account the effect of internal stochasticity on population growth rate, the fittest organism has the optimal control of life history affected by the stochasticity in its habitat. The study of this control is known as the optimal life schedule problem. In order to analyze optimal control under internal stochasticity, we need to make use of stochastic control theory in the optimal life schedule problem. There is, however, no such theory unifying optimal life history and internal stochasticity. This study focuses on an extension of optimal life schedule problems that unifies the control theory of internal stochasticity with linear demographic models. First, we show the relationship between general age-states linear demographic models and stochastic control theory via several mathematical formulations, such as path-integral, integral equation, and transition matrix. Secondly, we apply our theory to a two-resource utilization model for two different breeding systems: semelparity and iteroparity. Finally, we show that the diversity of resources is important for species in one of the cases. Our study shows that this unification theory can address risk hedges of life history in general age-states linear demographic models. PMID:24945258
Brain tumor segmentation in 3D MRIs using an improved Markov random field model
NASA Astrophysics Data System (ADS)
Yousefi, Sahar; Azmi, Reza; Zahedi, Morteza
2011-10-01
Markov Random Field (MRF) models have recently been suggested for MRI brain segmentation by a large number of researchers. By exploiting the Markov property, which captures local dependence, MRF models are able to solve a global optimization problem locally. But they still carry a heavy computational burden, especially when they use stochastic relaxation schemes such as Simulated Annealing (SA). In this paper, a new 3D-MRF model is put forward to speed up convergence. The search procedure of SA is fairly localized, which prevents it from exploring a wide diversity of solutions, so it suffers from several limitations. In comparison, the Genetic Algorithm (GA) has good global search capability but is weak at hill climbing. Our proposed algorithm combines SA with an improved GA (IGA) to optimize the solution, which speeds up the computation. Moreover, the proposed algorithm outperforms the traditional 2D-MRF in solution quality.
Modeling Self-Healing of Concrete Using Hybrid Genetic Algorithm-Artificial Neural Network.
Ramadan Suleiman, Ahmed; Nehdi, Moncef L
2017-02-07
This paper presents an approach to predicting the intrinsic self-healing in concrete using a hybrid genetic algorithm-artificial neural network (GA-ANN). A genetic algorithm was implemented in the network as a stochastic optimizing tool for the initial optimal weights and biases. This approach can assist the network in achieving a global optimum and avoid the possibility of the network getting trapped at local optima. The proposed model was trained and validated using a purpose-built database assembled from various experimental studies retrieved from the open literature. The model inputs include the cement content, water-to-cement ratio (w/c), type and dosage of supplementary cementitious materials, bio-healing materials, and both expansive and crystalline additives. Self-healing, indicated by means of crack width, is the model output. The results showed that the proposed GA-ANN model is capable of capturing the complex effects of various self-healing agents (e.g., biochemical material, silica-based additive, expansive and crystalline components) on the self-healing performance in cement-based materials.
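A sketch of the GA seeding step described above: evolve candidate weight vectors against a training loss and hand the best one to subsequent network training. The `loss` callable, population size, crossover scheme, and mutation scale are illustrative assumptions, not the paper's configuration.

```python
# GA sketch for choosing initial neural-network weights; `loss` is an
# assumed callable mapping a flat weight vector to training error.
import numpy as np

def ga_init_weights(loss, dim, pop=40, gens=100, mut_sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    population = rng.normal(0, 1, (pop, dim))        # candidate weight vectors
    for _ in range(gens):
        fitness = np.array([loss(w) for w in population])
        parents = population[np.argsort(fitness)[: pop // 2]]  # truncation selection
        # uniform crossover between randomly paired parents
        a, b = parents[rng.integers(0, len(parents), (2, pop))]
        mask = rng.random((pop, dim)) < 0.5
        children = np.where(mask, a, b)
        population = children + rng.normal(0, mut_sigma, (pop, dim))  # mutation
    return population[np.argmin([loss(w) for w in population])]
```

The returned vector would then serve as the starting point for conventional gradient-based training, which is how such hybrids avoid poor local optima from random initialization.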
Malikopoulos, Andreas
2015-01-01
The increasing urgency to extract additional efficiency from hybrid propulsion systems has led to the development of advanced power management control algorithms. In this paper we address the problem of online optimization of the supervisory power management control in parallel hybrid electric vehicles (HEVs). We model HEV operation as a controlled Markov chain and we show that the control policy yielding the Pareto optimal solution minimizes online the long-run expected average cost per unit time criterion. The effectiveness of the proposed solution is validated through simulation and compared to the solution derived with dynamic programming using the average cost criterion. Both solutions achieved the same cumulative fuel consumption, demonstrating that the online Pareto control policy is an optimal control policy.
A two-stage stochastic rule-based model to determine pre-assembly buffer content
NASA Astrophysics Data System (ADS)
Gunay, Elif Elcin; Kula, Ufuk
2018-01-01
This study considers the instant decision-making needs of automobile manufacturers for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second stage recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by the sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model; (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences when recovering the scrambled sequence; (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases; (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms the heuristic model; (v) as expected, the rule-based model holds more inventory than the optimization model.
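To show the SAA mechanics in miniature, the sketch below chooses a first-stage buffer size by averaging second-stage shortfall costs over sampled defect scenarios; the cost coefficients, defect model, and sequence length are invented stand-ins for the paper's problem data.

```python
# Sample average approximation (SAA) sketch: pick the buffer size whose
# first-stage holding cost plus average sampled second-stage shortfall
# penalty is smallest. All rates and costs here are illustrative.
import numpy as np

def saa_buffer_size(candidates, n_scenarios=2000, defect_rate=0.1,
                    holding=1.0, miss_penalty=20.0, seq_len=50, seed=0):
    rng = np.random.default_rng(seed)
    # scenario draws: number of defective vehicles per scheduled sequence
    defects = rng.binomial(seq_len, defect_rate, n_scenarios)
    best, best_cost = None, np.inf
    for b in candidates:
        shortfall = np.maximum(defects - b, 0)
        cost = holding * b + miss_penalty * shortfall.mean()  # SAA objective
        if cost < best_cost:
            best, best_cost = b, cost
    return best, best_cost

print(saa_buffer_size(range(0, 21)))
```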
Simulation of quantum dynamics based on the quantum stochastic differential equation.
Li, Ming
2013-01-01
The quantum stochastic differential equation derived from the Lindblad-form quantum master equation is investigated. The general formulation in terms of environment operators representing the quantum state diffusion is given. A numerical simulation algorithm for the stochastic process of direct photodetection of a driven two-level system is proposed for predicting the dynamical behavior. The effectiveness and superiority of the algorithm are verified by analyzing the accuracy and the computational cost in comparison with the classical Runge-Kutta algorithm.
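For orientation, sample paths of an Ito SDE dX = a(X)dt + b(X)dW can be generated with the basic Euler-Maruyama scheme sketched below; the paper's photodetection unraveling adds stochastic jump events on top of such diffusive updates, which are not reproduced here.

```python
# Euler-Maruyama integration of a generic scalar Ito SDE; the drift `a`
# and diffusion `b` callables are illustrative placeholders.
import numpy as np

def euler_maruyama(a, b, x0, t_end, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    dt = t_end / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))          # Wiener increment
        x[k + 1] = x[k] + a(x[k]) * dt + b(x[k]) * dW
    return x

# e.g. an Ornstein-Uhlenbeck path:
# path = euler_maruyama(lambda x: -x, lambda x: 0.3, 1.0, 5.0, 500)
```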
Off-diagonal expansion quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Portfolio Optimization with Stochastic Dividends and Stochastic Volatility
ERIC Educational Resources Information Center
Varga, Katherine Yvonne
2015-01-01
We consider an optimal investment-consumption portfolio optimization model in which an investor receives stochastic dividends. As a first problem, we allow the drift of stock price to be a bounded function. Next, we consider a stochastic volatility model. In each problem, we use the dynamic programming method to derive the Hamilton-Jacobi-Bellman…
Samant, Asawari; Ogunnaike, Babatunde A; Vlachos, Dionisios G
2007-05-24
The fundamental role that intrinsic stochasticity plays in cellular functions has been shown via numerous computational and experimental studies. In the face of such evidence, it is important that intracellular networks are simulated with stochastic algorithms that can capture molecular fluctuations. However, separation of time scales and disparity in species populations, two common features of intracellular networks, make stochastic simulation of such networks computationally prohibitive. While recent work has addressed each of these challenges separately, a generic algorithm that can simultaneously tackle disparities in time scales and population scales in stochastic systems is currently lacking. In this paper, we propose the hybrid, multiscale Monte Carlo (HyMSMC) method that fills this void. The proposed HyMSMC method blends stochastic singular perturbation concepts, to deal with potential stiffness, with a hybrid of exact and coarse-grained stochastic algorithms, to cope with separation in population sizes. In addition, we introduce the computational singular perturbation (CSP) method as a means of systematically partitioning fast and slow networks and computing relaxation times for convergence. We also propose a new criterion for convergence of fast networks to stochastic low-dimensional manifolds, which further accelerates the algorithm. We use several prototype and biological examples, including a gene expression model displaying bistability, to demonstrate the efficiency, accuracy and applicability of the HyMSMC method. Bistable models serve as stringent tests for the success of multiscale MC methods and illustrate the limitations of some literature methods.
Adaptive statistical pattern classifiers for remotely sensed data
NASA Technical Reports Server (NTRS)
Gonzalez, R. C.; Pace, M. O.; Raulston, H. S.
1975-01-01
A technique for the adaptive estimation of nonstationary statistics necessary for Bayesian classification is developed. The basic approach to the adaptive estimation procedure consists of two steps: (1) an optimal stochastic approximation of the parameters of interest and (2) a projection of the parameters in time or position. A divergence criterion is developed to monitor algorithm performance. Comparative results of adaptive and nonadaptive classifier tests are presented for simulated four dimensional spectral scan data.
NASA Astrophysics Data System (ADS)
Han, Fei; Cheng, Lin
2017-04-01
The tradable credit scheme (TCS) outperforms congestion pricing in terms of social equity and revenue neutrality, while matching its performance on congestion mitigation. This article investigates the effectiveness and efficiency of TCS in enhancing transportation network capacity within a stochastic user equilibrium (SUE) modelling framework. First, the SUE and credit market equilibrium conditions are presented; then an equivalent general SUE model with TCS is established by virtue of two constructed functions, which can be further simplified under a specific probability distribution. To enhance the network capacity by utilizing TCS, a bi-level mathematical programming model is established for the optimal TCS design problem, with the upper level maximizing network reserve capacity and the lower level being the proposed SUE model. A heuristic sensitivity-analysis-based algorithm is developed to solve the bi-level model. Three numerical examples are provided to illustrate the improvement effect of TCS on the network in different scenarios.
NASA Astrophysics Data System (ADS)
Son, J.; Medina-Cetina, Z.
2017-12-01
We discuss the comparison between deterministic and stochastic optimization approaches to the nonlinear geophysical full-waveform inverse problem, based on seismic survey data from Mississippi Canyon in the Northern Gulf of Mexico. Since subsea engineering and offshore construction projects actively require reliable ground models from various site investigations, the primary goal of this study is to reconstruct accurate subsurface information on the soil and rock material profiles under the seafloor. The shallow sediment layers are naturally formed heterogeneous formations which may cause unwanted marine landslides or foundation failures of underwater infrastructure. We chose the quasi-Newton method and simulated annealing as the deterministic and stochastic optimization algorithms, respectively. Seismic forward modeling, based on a finite difference method with absorbing boundary conditions, implements the iterative simulations in the inverse modeling. We briefly report on numerical experiments using synthetic data as an offshore ground model which contains shallow artificial target profiles of geomaterials under the seafloor. We apply seismic migration processing and generate a Voronoi tessellation on the two-dimensional space domain to improve the computational efficiency of the stratigraphic velocity model reconstruction. We then report on the details of a field data implementation, which shows the complex geologic structures in the Northern Gulf of Mexico. Lastly, we compare the new inverted image of subsurface site profiles in the space domain with the previously processed seismic image in the time domain at the same location. Overall, stochastic optimization for seismic inversion with migration and Voronoi tessellation shows significant promise for improving the subsurface imaging of ground models and the computational efficiency required for full waveform inversion. We anticipate that improving the inversion of shallow layers from geophysical data will better support offshore site investigation.
Enhanced intelligent water drops algorithm for multi-depot vehicle routing problem.
Ezugwu, Absalom E; Akutsah, Francis; Olusanya, Micheal O; Adewumi, Aderemi O
2018-01-01
The intelligent water drop algorithm is a swarm-based metaheuristic algorithm, inspired by the characteristics of water drops in the river and the environmental changes resulting from the action of the flowing river. Since its appearance as an alternative stochastic optimization method, the algorithm has found applications in solving a wide range of combinatorial and functional optimization problems. This paper presents an improved intelligent water drop algorithm for solving multi-depot vehicle routing problems. A simulated annealing algorithm was introduced into the proposed algorithm as a local search metaheuristic to prevent the intelligent water drop algorithm from getting trapped into local minima and also improve its solution quality. In addition, some of the potential problematic issues associated with using simulated annealing that include high computational runtime and exponential calculation of the probability of acceptance criteria, are investigated. The exponential calculation of the probability of acceptance criteria for the simulated annealing based techniques is computationally expensive. Therefore, in order to maximize the performance of the intelligent water drop algorithm using simulated annealing, a better way of calculating the probability of acceptance criteria is considered. The performance of the proposed hybrid algorithm is evaluated by using 33 standard test problems, with the results obtained compared with the solutions offered by four well-known techniques from the subject literature. Experimental results and statistical tests show that the new method possesses outstanding performance in terms of solution quality and runtime consumed. In addition, the proposed algorithm is suitable for solving large-scale problems.
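On the acceptance-probability cost discussed above, one standard trick is to avoid the exponential entirely: since u < exp(-d/T) if and only if d < -T ln u, the Metropolis test can be run with a single logarithm, and downhill moves need no random draw at all. This is one common reformulation; whether it matches the paper's own method is not stated in the abstract.

```python
# Simulated-annealing acceptance test without calling exp(): accept a move
# of cost increase `delta` at temperature T iff delta < -T * ln(u), which
# is mathematically equivalent to u < exp(-delta / T).
import math
import random

def accept(delta, temperature):
    if delta <= 0:                       # downhill moves: always accept
        return True
    u = 1.0 - random.random()            # u in (0, 1], keeps log() finite
    return delta < -temperature * math.log(u)
```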
Energy-optimal path planning by stochastic dynamically orthogonal level-set optimization
NASA Astrophysics Data System (ADS)
Subramani, Deepak N.; Lermusiaux, Pierre F. J.
2016-04-01
A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. Based on partial differential equations, the methodology rigorously leverages the level-set equation that governs time-optimal reachability fronts for a given relative vehicle-speed function. To set up the energy optimization, the relative vehicle speed and headings are considered to be stochastic, and new stochastic Dynamically Orthogonal (DO) level-set equations are derived. Their solution provides the distribution of time-optimal reachability fronts and the corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. Numerical schemes to solve the reduced stochastic DO level-set equations are obtained, and accuracy and efficiency considerations are discussed. These reduced equations are first shown to be efficient at solving the governing stochastic level-sets, in part by comparisons with direct Monte Carlo simulations. To validate the methodology and illustrate its accuracy, comparisons with semi-analytical energy-optimal path solutions are then completed. In particular, we consider the energy-optimal crossing of a canonical steady front and set up its semi-analytical solution using an energy-time nested nonlinear double-optimization scheme. We then showcase the inner workings and nuances of the energy-optimal path planning, considering different mission scenarios. Finally, we study and discuss results of energy-optimal missions in a wind-driven barotropic quasi-geostrophic double-gyre ocean circulation.
Chakrabartty, Shantanu; Shaga, Ravi K; Aono, Kenji
2013-04-01
Analog circuits that are calibrated using digital-to-analog converters (DACs) use a digital signal processor-based algorithm for real-time adaptation and programming of system parameters. In this paper, we first show that this conventional framework for adaptation yields suboptimal calibration properties because of artifacts introduced by quantization noise. We then propose a novel online stochastic optimization algorithm called noise-shaping or ΣΔ gradient descent, which can shape the quantization noise out of the frequency regions spanning the parameter adaptation trajectories. As a result, the proposed algorithms demonstrate superior parameter search properties compared to floating-point gradient methods and better convergence properties than conventional quantized gradient methods. In the second part of this paper, we apply the ΣΔ gradient descent algorithm to two examples of real-time digital calibration: 1) balancing and tracking of bias currents, and 2) frequency calibration of a band-pass Gm-C biquad filter biased in weak inversion. For each of these examples, the circuits have been prototyped in a 0.5-μm complementary metal-oxide-semiconductor process, and we demonstrate that the proposed algorithm is able to find the optimal solution even in the presence of spurious local minima, which are introduced by the nonlinear and non-monotonic response of calibration DACs.
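A conceptual sketch of the noise-shaping idea: the quantization error left over after each DAC-constrained update is accumulated and fed back into the next desired update, shaping the quantization noise away from the slow adaptation trajectory in first-order ΣΔ fashion. The step size and quantizer resolution below are illustrative assumptions, not the paper's circuit parameters.

```python
# Sigma-delta (noise-shaped) quantized gradient descent sketch: quantize
# each update to the DAC grid, but carry the quantization residual forward.
import numpy as np

def sd_gradient_descent(grad, x0, lr=0.05, q_step=0.01, n_iter=500):
    x = np.asarray(x0, dtype=float)
    residual = np.zeros_like(x)                 # sigma-delta error accumulator
    for _ in range(n_iter):
        desired = -lr * grad(x) + residual      # ideal update + fed-back error
        applied = q_step * np.round(desired / q_step)   # DAC-quantized update
        residual = desired - applied            # error to be shaped out of band
        x = x + applied
    return x
```

Compared with plain rounding, the error feedback prevents the quantizer from stalling the descent when individual updates fall below one DAC step.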
NASA Astrophysics Data System (ADS)
Zatarain-Salazar, J.; Reed, P. M.; Quinn, J.; Giuliani, M.; Castelletti, A.
2016-12-01
As we confront the challenges of managing river basin systems with a large number of reservoirs and increasingly uncertain tradeoffs impacting their operations (due to, e.g. climate change, changing energy markets, population pressures, ecosystem services, etc.), evolutionary many-objective direct policy search (EMODPS) solution strategies will need to address the computational demands associated with simulating more uncertainties and therefore optimizing over increasingly noisy objective evaluations. Diagnostic assessments of state-of-the-art many-objective evolutionary algorithms (MOEAs) to support EMODPS have highlighted that search time (or number of function evaluations) and auto-adaptive search are key features for successful optimization. Furthermore, auto-adaptive MOEA search operators are themselves sensitive to having a sufficient number of function evaluations to learn successful strategies for exploring complex spaces and for escaping from local optima when stagnation is detected. Fortunately, recent parallel developments allow coordinated runs that enhance auto-adaptive algorithmic learning and can handle scalable and reliable search with limited wall-clock time, but at the expense of the total number of function evaluations. In this study, we analyze this tradeoff between parallel coordination and depth of search using different parallelization schemes of the Multi-Master Borg on a many-objective stochastic control problem. We also consider the tradeoff between better representing uncertainty in the stochastic optimization, and simplifying this representation to shorten the function evaluation time and allow for greater search. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple competing objectives for hydropower production, urban water supply, recreation and environmental flows need to be balanced. Our results provide guidance for balancing exploration, uncertainty, and computational demands when using the EMODPS framework to discover key tradeoffs within the LSRB system.
Dynamic remapping of parallel computations with varying resource demands
NASA Technical Reports Server (NTRS)
Nicol, D. M.; Saltz, J. H.
1986-01-01
A large class of computational problems is characterized by frequent synchronization and computational requirements which change as a function of time. When such a problem must be solved on a message passing multiprocessor machine, the combination of these characteristics leads to system performance which decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggest that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
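A minimal sketch of the decision rule implied above: track the per-step degradation since the last remap, form W(n) with the remap cost amortized over the n steps, and trigger a remap once the estimate of W(n) starts to rise, i.e. its single minimum has been passed. The observable per-step degradation inputs are assumed available; this is an illustration of the statistic, not the paper's exact estimator.

```python
# Decision sketch for the remapping statistic W(n) = (sum of degradations
# over n steps + remap cost) / n, exploiting its single-minimum property.
def should_remap(degradations, remap_cost):
    n = len(degradations)
    if n < 2:
        return False
    w = lambda k: (sum(degradations[:k]) + remap_cost) / k
    return w(n) > w(n - 1)   # W started rising: its minimum has been passed
```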
NASA Astrophysics Data System (ADS)
Blöcher, Johanna; Kuraz, Michal
2017-04-01
In this contribution we propose implementations of the dual permeability model with different inter-domain exchange descriptions, together with metaheuristic optimization algorithms for parameter identification and mesh optimization. We compare variants of the coupling term with different numbers of parameters to test whether a reduction of parameters is feasible. This can reduce parameter uncertainty in inverse modeling, but also allows for different conceptual models of the domain and matrix coupling. The different variants of the dual permeability model are implemented in 1D and 2D in the open-source library DRUtES, written in Fortran 2003/2008. For parameter identification we use adaptations of particle swarm optimization (PSO) and teaching-learning-based optimization (TLBO), which are population-based metaheuristics with different learning strategies. These are high-level stochastic search algorithms that do not require gradient information or a convex search space. Despite increasing computing power and parallel processing, an overly fine mesh is not feasible for parameter identification. This creates the need to find a mesh that optimizes both accuracy and simulation time. We use a bi-objective PSO algorithm to generate a Pareto front of optimal meshes to account for both objectives. The dual permeability model and the optimization algorithms were tested on virtual data and field TDR sensor readings. The TDR sensor readings showed a very steep increase during rapid rainfall events and a subsequent steep decrease. This was theorized to be an effect of artificial macroporous envelopes surrounding the TDR sensors, creating an anomalous region with distinct local soil hydraulic properties. One of our objectives is to test how well the dual permeability model can describe this infiltration behavior and which coupling term is most suitable.
Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes
NASA Technical Reports Server (NTRS)
Williams Colin P.
1999-01-01
Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long term behavior of such processes is only tractable for very simple types of stochastic processes such as Markovian processes. However, in real world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(√N) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup in comparison to the current state of the art.
Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.
Dhar, Amrit; Minin, Vladimir N
2017-05-01
Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.
Roh, Min K; Gillespie, Dan T; Petzold, Linda R
2010-11-07
The weighted stochastic simulation algorithm (wSSA) was developed by Kuwahara and Mura [J. Chem. Phys. 129, 165101 (2008)] to efficiently estimate the probabilities of rare events in discrete stochastic systems. The wSSA uses importance sampling to enhance the statistical accuracy in the estimation of the probability of the rare event. The original algorithm biases the reaction selection step with a fixed importance sampling parameter. In this paper, we introduce a novel method where the biasing parameter is state-dependent. The new method features improved accuracy, efficiency, and robustness.
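A minimal sketch of the biased reaction-selection step with its likelihood-ratio bookkeeping; the toy propensities and bias parameters (gamma) are assumptions for illustration. In the state-dependent method described above, gamma would be a function of the current state rather than fixed.

```python
import numpy as np

def wssa_step(a, gamma, weight, rng):
    """One reaction-selection step of a weighted SSA.

    a     : true propensities in the current state
    gamma : importance-sampling biases (fixed in the original wSSA;
            state-dependent in the method described above)
    Returns (chosen reaction index, updated trajectory weight)."""
    b = gamma * a                          # biased propensities
    j = rng.choice(len(a), p=b / b.sum())
    # likelihood ratio keeps the rare-event probability estimate unbiased
    weight *= (a[j] / a.sum()) / (b[j] / b.sum())
    return j, weight

rng = np.random.default_rng(0)
a = np.array([0.1, 5.0])        # rare channel 0, common channel 1
gamma = np.array([10.0, 1.0])   # push sampling toward the rare channel
print(wssa_step(a, gamma, 1.0, rng))
```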
Adaptive, Distributed Control of Constrained Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Bieniawski, Stefan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory was recently developed as a broad framework for analyzing and optimizing distributed systems. Here we demonstrate its use for adaptive distributed control of Multi-Agent Systems (MASs), i.e., for distributed stochastic optimization using MASs. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution on the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. One common way to find that equilibrium is to have each agent run a Reinforcement Learning (RL) algorithm. PD theory reveals this to be a particular type of search algorithm for minimizing the Lagrangian. Typically that algorithm is quite inefficient. A more principled alternative is to use a variant of Newton's method to minimize the Lagrangian. Here we compare this alternative to RL-based search in three sets of computer experiments. These are the N Queens problem and bin-packing problem from the optimization literature, and the Bar problem from the distributed RL literature. Our results confirm that the PD-theory-based approach outperforms the RL-based scheme in all three domains.
Stochastic inversion of ocean color data using the cross-entropy method.
Salama, Mhd Suhyb; Shen, Fang
2010-01-18
Improving the inversion of ocean color data is an ever-continuing effort to increase the accuracy of derived inherent optical properties. In this paper we present a stochastic inversion algorithm to derive inherent optical properties from ocean color, ship and space borne data. The inversion algorithm is based on the cross-entropy method, where sets of inherent optical properties are generated and converged to the optimal set using an iterative process. The algorithm is validated against four data sets: simulated, noisy simulated, in-situ measured and satellite match-up data sets. Statistical analysis of the validation results is based on model-II regression using five goodness-of-fit indicators; only R2 and the root mean square error (RMSE) are mentioned hereafter. Accurate values of the total absorption coefficient are derived with R2 > 0.91 and RMSE, of log-transformed data, less than 0.55. Reliable values of the total backscattering coefficient are also obtained with R2 > 0.7 (after removing outliers) and RMSE < 0.37. The developed algorithm has the ability to derive reliable results from noisy data, with R2 above 0.96 for the total absorption and above 0.84 for the backscattering coefficients. The algorithm is self-contained and easy to implement and modify to derive the variability of chlorophyll-a absorption that may correspond to different phytoplankton species. It gives consistently accurate results and is therefore worth considering for ocean color global products.
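The iterative core of the cross-entropy method can be sketched generically. This toy version, with a quadratic misfit standing in for the ocean-color forward model, is an assumption-laden illustration rather than the authors' implementation: sample candidate parameter sets, keep the elite fraction, refit the sampling distribution, and repeat until it concentrates.

```python
import numpy as np

def cross_entropy_min(cost, dim, n_samples=200, n_elite=20, n_iter=50, seed=0):
    """Generic cross-entropy minimizer: sample, keep the elite set,
    refit the Gaussian sampling distribution, repeat."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(n_iter):
        samples = rng.normal(mu, sigma, size=(n_samples, dim))
        elite = samples[np.argsort([cost(s) for s in samples])[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-9
    return mu

# toy stand-in for the IOP retrieval misfit (not a radiative-transfer model)
target = np.array([0.3, 1.2, 0.05])
print(cross_entropy_min(lambda x: np.sum((x - target) ** 2), dim=3))
```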
Low-complexity stochastic modeling of wall-bounded shear flows
NASA Astrophysics Data System (ADS)
Zare, Armin
Turbulent flows are ubiquitous in nature and appear in many engineering applications. Transition to turbulence, in general, increases skin-friction drag in air/water vehicles, compromising their fuel efficiency, and reduces the efficiency and longevity of wind turbines. While traditional flow control techniques combine physical intuition with costly experiments, their effectiveness can be significantly enhanced by control design based on low-complexity models and optimization. In this dissertation, we develop a theoretical and computational framework for the low-complexity stochastic modeling of wall-bounded shear flows. Part I of the dissertation is devoted to the development of a modeling framework which incorporates data-driven techniques to refine physics-based models. We consider the problem of completing partially known sample statistics in a way that is consistent with underlying stochastically driven linear dynamics. Neither the statistics nor the dynamics are precisely known. Thus, our objective is to reconcile the two in a parsimonious manner. To this end, we formulate optimization problems to identify the dynamics and directionality of input excitation in order to explain and complete available covariance data. For problem sizes that general-purpose solvers cannot handle, we develop customized optimization algorithms based on alternating direction methods. The solution to the optimization problem provides information about critical directions that have maximal effect in bringing model and statistics in agreement. In Part II, we employ our modeling framework to account for statistical signatures of turbulent channel flow using low-complexity stochastic dynamical models. We demonstrate that white-in-time stochastic forcing is not sufficient to explain turbulent flow statistics and develop models for colored-in-time forcing of the linearized Navier-Stokes equations. We also examine the efficacy of stochastically forced linearized NS equations and their parabolized equivalents in the receptivity analysis of velocity fluctuations to external sources of excitation, as well as in capturing the effect of the slowly varying base flow on streamwise streaks and Tollmien-Schlichting waves. In Part III, we develop a model-based approach to design surface actuation of turbulent channel flow in the form of streamwise traveling waves. This approach is capable of identifying the drag-reducing trends of traveling waves in a simulation-free manner. We also use the stochastically forced linearized NS equations to examine the Reynolds-number-independent effects of spanwise wall oscillations on drag reduction in turbulent channel flows. This allows us to extend the predictive capability of our simulation-free approach to high Reynolds numbers.
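A toy sketch of the covariance-completion idea in Part I, using an off-the-shelf convex solver (cvxpy) instead of the customized alternating-direction algorithms the dissertation develops; the drift matrix, the set of "known" statistics, and the nuclear-norm objective are illustrative simplifications of the published framework.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import solve_lyapunov

n = 6
rng = np.random.default_rng(0)
A = -np.eye(n) + 0.3 * rng.standard_normal((n, n))   # toy stable drift (assumed)
B = rng.standard_normal((n, 2))
X_true = solve_lyapunov(A, -B @ B.T)                 # covariance of a "true" system

X = cp.Variable((n, n), PSD=True)        # completed state covariance
Z = cp.Variable((n, n), symmetric=True)  # forcing contribution to be identified
constraints = [A @ X + X @ A.T + Z == 0]                    # dynamical consistency
constraints += [X[i, i] == X_true[i, i] for i in range(n)]  # known partial statistics
# the nuclear norm is a convex surrogate for low-complexity input excitation
problem = cp.Problem(cp.Minimize(cp.normNuc(Z)), constraints)
problem.solve()
print(problem.status, np.round(np.diag(X.value), 3))
```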
Evaluation of stochastic differential equation approximation of ion channel gating models.
Bruce, Ian C
2009-04-01
Fox and Lu derived an algorithm based on stochastic differential equations for approximating the kinetics of ion channel gating that is simpler and faster than "exact" algorithms for simulating Markov process models of channel gating. However, the approximation may not be sufficiently accurate to predict statistics of action potential generation in some cases. The objective of this study was to develop a framework for analyzing the inaccuracies and determining their origin. Simulations of a patch of membrane with voltage-gated sodium and potassium channels were performed using an exact algorithm for the kinetics of channel gating and the approximate algorithm of Fox & Lu. The Fox & Lu algorithm assumes that channel gating particle dynamics have a stochastic term that is uncorrelated, zero-mean Gaussian noise, whereas the results of this study demonstrate that in many cases the stochastic term in the Fox & Lu algorithm should be correlated and non-Gaussian noise with a non-zero mean. The results indicate that: (i) the source of the inaccuracy is that the Fox & Lu algorithm does not adequately describe the combined behavior of the multiple activation particles in each sodium and potassium channel, and (ii) the accuracy does not improve with increasing numbers of channels.
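For reference, a minimal sketch of a Fox & Lu-style Langevin approximation for a single potassium gating particle at a clamped voltage, with the uncorrelated zero-mean Gaussian noise term the study scrutinizes. The rate functions are standard Hodgkin-Huxley forms, and the single-particle simplification deliberately ignores the multi-particle combination identified above as the source of inaccuracy.

```python
import numpy as np

def fox_lu_gate(v=-40.0, n_channels=1000, dt=0.01, t_end=50.0, seed=0):
    """Langevin (Fox & Lu-style) simulation of the HH potassium gating
    variable n at clamped voltage v (mV); time in ms. The noise term is
    uncorrelated zero-mean Gaussian -- exactly the assumption examined above."""
    rng = np.random.default_rng(seed)
    alpha = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))  # ms^-1
    beta = 0.125 * np.exp(-(v + 65.0) / 80.0)                        # ms^-1
    n = alpha / (alpha + beta)        # start at the steady-state open fraction
    trace = np.empty(int(t_end / dt))
    for k in range(trace.size):
        drift = alpha * (1.0 - n) - beta * n
        var = (alpha * (1.0 - n) + beta * n) / n_channels  # gating-noise variance
        n += drift * dt + np.sqrt(max(var, 0.0) * dt) * rng.standard_normal()
        n = min(max(n, 0.0), 1.0)     # keep the open fraction in [0, 1]
        trace[k] = n
    return trace

print(fox_lu_gate().std())            # size of channel-noise fluctuations
```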
Relative frequencies of constrained events in stochastic processes: An analytical approach.
Rusconi, S; Akhmatskaya, E; Sokolovski, D; Ballard, N; de la Cal, J C
2015-10-01
The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of the PDFs are difficult to specify in advance. When the shapes of the PDFs are known and experimental data are available, different optimization schemes can be applied in order to evaluate the probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding, and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor on the order of the sample size (at least ≈10^4). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, and this makes the method useful for various applications.
Stochastic search, optimization and regression with energy applications
NASA Astrophysics Data System (ADS)
Hannah, Lauren A.
Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem to avoid the sequential decision process associated with the multi-stage problem. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values---which depend on the selected portfolio---to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet Process mixtures of Generalized Linear Models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples for when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response. We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate the DP-GLM on several data sets, comparing it to modern methods of nonparametric regression like CART, Bayesian trees and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable which affects the shape of the objective function. Currently, there is no general-purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel-based weights and, more generally, that nonparametric estimation methods provide good solutions to otherwise intractable problems.
Optimality, stochasticity, and variability in motor behavior
Guigon, Emmanuel; Baraduc, Pierre; Desmurget, Michel
2008-01-01
Recent theories of motor control have proposed that the nervous system acts as a stochastically optimal controller, i.e. it plans and executes motor behaviors taking into account the nature and statistics of noise. Detrimental effects of noise are converted into a principled way of controlling movements. Attractive aspects of such theories are their ability to explain not only characteristic features of single motor acts, but also statistical properties of repeated actions. Here, we present a critical analysis of stochastic optimality in motor control which reveals several difficulties with this hypothesis. We show that stochastic control may not be necessary to explain the stochastic nature of motor behavior, and we propose an alternative framework, based on the action of a deterministic controller coupled with an optimal state estimator, which relieves drawbacks of stochastic optimality and appropriately explains movement variability. PMID:18202922
Incorporating uncertainty and motion in Intensity Modulated Radiation Therapy treatment planning
NASA Astrophysics Data System (ADS)
Martin, Benjamin Charles
In radiation therapy, one seeks to destroy a tumor while minimizing the damage to surrounding healthy tissue. Intensity Modulated Radiation Therapy (IMRT) uses overlapping beams of x-rays that add up to a high dose within the target and a lower dose in the surrounding healthy tissue. IMRT relies on optimization techniques to create high quality treatments. Unfortunately, the achievable conformality is limited by the need to ensure coverage even if there is organ movement or deformation. Currently, margins are added around the tumor to ensure coverage based on an assumed motion range. This approach does not ensure high quality treatments. In the standard IMRT optimization problem, an objective function measures the deviation of the dose from the clinical goals. The optimization then finds the beamlet intensities that minimize the objective function. When modeling uncertainty, the dose delivered from a given set of beamlet intensities is a random variable, and thus the objective function is also a random variable. In our stochastic formulation we minimize the expected value of this objective function. We developed a problem formulation that is both flexible and fast enough for use on real clinical cases. While working on accelerating the stochastic optimization, we developed a technique of voxel sampling. Voxel sampling is a randomized-algorithms approach to a steepest descent problem, based on estimating the gradient by calculating the dose to only a fraction of the voxels within the patient. When combined with an automatic sampling rate adaptation technique, voxel sampling produced an order of magnitude speedup in IMRT optimization. We also develop extensions of our results to Intensity Modulated Proton Therapy (IMPT). Due to the physics of proton beams, the stochastic formulation yields visibly different and better plans than normal optimization. The results of our research have been incorporated into a software package, OPT4D, an IMRT and IMPT optimization tool that we developed.
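A hedged sketch of the voxel-sampling idea on a toy quadratic dose objective: each step estimates the gradient from a random fraction of voxels. The automatic sampling-rate adaptation described above is omitted, and the dose-influence matrix and prescription are synthetic stand-ins.

```python
import numpy as np

def voxel_sampled_descent(dose, target, n_iter=500, frac=0.1, lr=0.02, seed=0):
    """Gradient descent on a quadratic dose objective where each gradient is
    estimated from a random fraction of voxels (the 'voxel sampling' idea)."""
    rng = np.random.default_rng(seed)
    n_vox, n_beam = dose.shape
    x = np.zeros(n_beam)
    for _ in range(n_iter):
        idx = rng.choice(n_vox, size=max(1, int(frac * n_vox)), replace=False)
        resid = dose[idx] @ x - target[idx]
        grad = 2.0 * dose[idx].T @ resid / len(idx)  # noisy but unbiased estimate
        x = np.maximum(x - lr * grad, 0.0)           # beamlet intensities >= 0
    return x

rng = np.random.default_rng(1)
D, t = rng.random((500, 40)), rng.random(500)  # synthetic dose matrix, prescription
print(np.round(voxel_sampled_descent(D, t)[:5], 3))
```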
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiu, Dongbin
2017-03-03
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately resolve, in high-dimensional spaces, stochastic problems with limited smoothness, even those containing discontinuities.
Wang, Lingling; Fu, Li
2018-01-01
In order to decrease the velocity sculling error in vibration environments, a new sculling error compensation algorithm for the strapdown inertial navigation system (SINS), using angular rate and specific force measurements as inputs, is proposed in this paper. First, the sculling error formula in the incremental velocity update is analytically derived in terms of the angular rate and specific force. Next, two-time-scale perturbation models of the angular rate and specific force are constructed. The new sculling correction term is derived, and a gravitational search optimization method is used to determine the parameters of the two-time-scale perturbation models. Finally, the performance of the proposed algorithm is evaluated in a stochastic real sculling environment, unlike conventional algorithms, which are simulated in a pure sculling circumstance. A series of test results demonstrates that the new sculling compensation algorithm achieves balanced real/pseudo sculling correction performance during the velocity update, with the advantage of a lower computational load compared with conventional algorithms. PMID:29346323
NASA Astrophysics Data System (ADS)
Kloss, Sebastian; Schuetze, Niels; Schmitz, Gerd H.
2010-05-01
The strong competition for fresh water to fulfill the increasing worldwide demand for food has led to renewed interest in techniques that improve water use efficiency (WUE), such as controlled deficit irrigation. Furthermore, as the implementation of crop models in complex decision support systems becomes more and more common, it is imperative to reliably predict WUE as the ratio of yield to water consumption. The objective of this paper is to assess the problems that crop models - FAO-33, DAISY, and APSIM in this study - face when maximizing WUE. We applied these crop models to calculate the risk of yield reduction in view of different sources of uncertainty (e.g. climate), employing a stochastic framework for decision support in the planning of water supply for irrigation. The stochastic framework consists of: (i) a weather generator for simulating regional impacts of climate change; (ii) a new tailor-made evolutionary optimization algorithm for optimal irrigation scheduling with limited water supply; and (iii) the above-mentioned models for simulating water transport and crop growth in a sound manner. The results present stochastic crop water production functions (SCWPF) for different crops, which can be used as basic tools for assessing the impact of climate variability on the risk to potential yield. Case studies from India, Oman, Malawi, and France are presented to assess the differences in modeling water stress and yield response for the different crop models.
A tool for simulating parallel branch-and-bound methods
NASA Astrophysics Data System (ADS)
Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail
2016-01-01
The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution. Therefore the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, search tree sizes, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
Alien Genetic Algorithm for Exploration of Search Space
NASA Astrophysics Data System (ADS)
Patel, Narendra; Padhiyar, Nitin
2010-10-01
Genetic Algorithm (GA) is a widely accepted population-based stochastic optimization technique used for single- and multi-objective optimization problems. Various modifications of GA have been proposed in the last three decades, mainly addressing two issues: increasing the convergence rate and increasing the probability of finding the global minimum. These two goals conflict: while addressing the first issue, GA tends to converge to a local optimum, and addressing the second issue entails large computational effort. Thus, to reduce the contradictory effects of these two aspects, we propose a modification of GA that adds an alien member to the population at every generation. Addition of an alien member to the current population at every generation increases the probability of obtaining the global minimum while maintaining a higher convergence rate. With two test cases, we demonstrate the efficacy of the proposed GA by comparing it with the conventional GA.
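A minimal real-coded GA sketch with the proposed alien member injected each generation; the selection, crossover and mutation operators and the Himmelblau test function are illustrative choices, not the paper's exact setup.

```python
import numpy as np

def alien_ga(cost, bounds, pop_size=30, n_gen=100, seed=0):
    """Real-coded GA with one 'alien' (fresh random) member added each
    generation to keep exploring while the population converges."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(n_gen):
        fitness = np.array([cost(p) for p in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]  # truncation selection
        kids = []
        for _ in range(pop_size - 1):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = 0.5 * (a + b)                        # arithmetic crossover
            child += rng.normal(0, 0.05 * (hi - lo))     # Gaussian mutation
            kids.append(np.clip(child, lo, hi))
        kids.append(rng.uniform(lo, hi))                 # the alien member
        pop = np.array(kids)
    return pop[np.argmin([cost(p) for p in pop])]

lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
himmelblau = lambda x: (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2
print(np.round(alien_ga(himmelblau, (lo, hi)), 3))  # a minimum lies near (3, 2)
```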
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Bailian; Reynolds, Albert C.
2018-03-11
We report that CO2 water-alternating-gas (WAG) injection is an enhanced oil recovery method designed to improve sweep efficiency during CO2 injection, with the injected water controlling the mobility of CO2 and stabilizing the gas front. Optimization of CO2-WAG injection is widely regarded as a viable technique for controlling the CO2 and oil miscible process. Poor recovery from CO2-WAG injection can be caused by inappropriately designed WAG parameters. In a previous study (Chen and Reynolds, 2016), we proposed an algorithm to optimize the well controls which maximize the life-cycle net-present-value (NPV). However, the effect of injection half-cycle lengths for each injector on oil recovery or NPV has not been well investigated. In this paper, an optimization framework based on the augmented Lagrangian method and the newly developed stochastic-simplex-approximate-gradient (StoSAG) algorithm is proposed to explore the possibility of simultaneous optimization of the WAG half-cycle lengths together with the well controls. Finally, the proposed framework is demonstrated with three reservoir examples.
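An illustrative sketch of a StoSAG-style gradient estimate: objective differences are regressed on an ensemble of random control perturbations. The reservoir-simulator objective and the augmented-Lagrangian outer loop are replaced here by a toy quadratic, so this is an assumption-laden sketch of the gradient step only.

```python
import numpy as np

def simplex_gradient(objective, u, n_perturb=20, sigma=0.05, seed=0):
    """Stochastic-simplex-approximate-gradient sketch: least-squares fit of
    objective differences against an ensemble of control perturbations."""
    rng = np.random.default_rng(seed)
    dU = sigma * rng.standard_normal((n_perturb, len(u)))   # perturbations
    dJ = np.array([objective(u + d) - objective(u) for d in dU])
    return np.linalg.pinv(dU) @ dJ                          # approximate gradient

u = np.array([0.2, 0.8, 0.5])
grad = simplex_gradient(lambda x: np.sum((x - 0.4) ** 2), u)
print(np.round(grad, 2), np.round(2 * (u - 0.4), 2))  # compare to exact gradient
```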
Kernel-Based Approximate Dynamic Programming Using Bellman Residual Elimination
2010-02-01
framework is the ability to utilize stochastic system models, thereby allowing the system to make sound decisions even if there is randomness in the system ... approximate policy when a system model is unavailable. We present theoretical analysis of all BRE algorithms, proving convergence to the optimal policy in ... policies based on MDPs is that there may be parameters of the system model that are poorly known and/or vary with time as the system operates.
NASA Technical Reports Server (NTRS)
Trosset, Michael W.
1999-01-01
Comprehensive computational experiments to assess the performance of algorithms for numerical optimization require (among other things) a practical procedure for generating pseudorandom nonlinear objective functions. We propose a procedure that is based on the convenient fiction that objective functions are realizations of stochastic processes. This report details the calculations necessary to implement our procedure for the case of certain stationary Gaussian processes and presents a specific implementation in the statistical programming language S-PLUS.
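A minimal sketch of the report's core idea, with Python and numpy standing in for the S-PLUS implementation: one realization of a stationary Gaussian process with a squared-exponential covariance becomes a pseudorandom test objective. The kernel, grid, and jitter are illustrative assumptions.

```python
import numpy as np

def gp_test_objective(n_grid=200, length_scale=0.1, seed=0):
    """Draw one realization of a stationary Gaussian process on a 1-D grid
    and return it as a deterministic pseudorandom test objective."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_grid)
    cov = np.exp(-0.5 * ((x[:, None] - x[None, :]) / length_scale) ** 2)
    jitter = 1e-6 * np.eye(n_grid)           # numerical regularization
    values = np.linalg.cholesky(cov + jitter) @ rng.standard_normal(n_grid)
    return x, values

x, f = gp_test_objective()
print(x[np.argmin(f)], f.min())   # location and value of this realization's minimum
```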
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk considerations presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Constraints can similarly be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).
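For the risk term, an empirical CVaR estimator over simulated execution costs is straightforward; the normal cost distributions below are toy stand-ins for the Monte Carlo cost samples that a candidate parametric strategy would produce.

```python
import numpy as np

def cvar(costs, alpha=0.95):
    """Empirical CVaR: mean cost in the worst (1 - alpha) tail."""
    costs = np.sort(np.asarray(costs))
    return costs[int(np.ceil(alpha * len(costs))):].mean()

rng = np.random.default_rng(0)
# toy execution costs for two hypothetical parametric strategies
costs_a = rng.normal(1.00, 0.30, size=100_000)
costs_b = rng.normal(1.05, 0.10, size=100_000)
for name, c in [("a", costs_a), ("b", costs_b)]:
    print(name, round(c.mean(), 3), round(cvar(c), 3))
# strategy b has a higher mean cost but a much smaller (better) CVaR
```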
Evolutionary algorithms for the optimization of advective control of contaminated aquifer zones
NASA Astrophysics Data System (ADS)
Bayer, Peter; Finkel, Michael
2004-06-01
Simple genetic algorithms (SGAs) and derandomized evolution strategies (DESs) are employed to adapt well capture zones for the hydraulic optimization of pump-and-treat systems. A hypothetical contaminant site in a heterogeneous aquifer serves as an application template. On the basis of the results from numerical flow modeling, particle tracking is applied to delineate the pathways of the contaminants. The objective is to find the minimum pumping rate of up to eight recharge wells within a downgradient well placement area. Both the well coordinates and the pumping rates are subject to optimization, leading to a mixed discrete-continuous problem. This article discusses the ideal formulation of the objective function, for which the number of particles and the total pumping rate are used as decision criteria. Boundary updating is introduced, which enables the reorganization of the decision space limits by incorporating experience from previous optimization runs. Throughout the study the algorithms' capabilities are evaluated in terms of the number of model runs needed to identify optimal and suboptimal solutions. Despite the complexity of the problem, both evolutionary algorithm variants prove to be suitable for finding suboptimal solutions. The DES with weighted recombination turns out to be the ideal algorithm for finding optimal solutions. Though it works with real-coded decision parameters, it proves to be suitable for adjusting discrete well positions. In principle, the representation of well positions as binary strings in the SGA is ideal. However, even if the SGA takes advantage of bookkeeping, the vital high discretization of pumping rates results in long binary strings, which escalates the number of model runs needed to find an optimal solution. Since the SGA string lengths increase with the number of wells, the DES gains superiority, particularly for an increasing number of wells. As the DES is a self-adaptive algorithm, it proves to be a more robust optimization method for the selected advective control problem than the SGA variants of this study, exhibiting a less stochastic search, which is reflected in the lower variability of the solutions found.
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Li, Jun
2002-09-01
In this paper a class of stochastic multiple-objective programming problems with one quadratic objective function, several linear objective functions and linear constraints is introduced. The model is transformed into a deterministic multiple-objective nonlinear programming model by introducing the expectations of the random variables. The reference direction approach is used to deal with the linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using weighted sums. The quadratic problem is transformed into a linear (parametric) complementary problem, the basic formula for the proposed approach. Sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on reference directions and weighted sums. By varying the parameter vector on the right-hand side of the model, the decision maker (DM) can freely search the efficient frontier. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized, besides expectation and risk. The interactive approach is illustrated with a practical example.
Drawert, Brian; Lawson, Michael J; Petzold, Linda; Khammash, Mustafa
2010-02-21
We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm.
NASA Astrophysics Data System (ADS)
Roy, Soumen; Sengupta, Anand S.; Thakor, Nilay
2017-05-01
Astrophysical compact binary systems consisting of neutron stars and black holes are an important class of gravitational wave (GW) sources for advanced LIGO detectors. Accurate theoretical waveform models from the inspiral, merger, and ringdown phases of such systems are used to filter detector data under the template-based matched-filtering paradigm. An efficient grid over the parameter space at a fixed minimal match has a direct impact on the overall time taken by these searches. We present a new hybrid geometric-random template placement algorithm for signals described by parameters of two masses and one spin magnitude. Such template banks could potentially be used in GW searches from binary neutron stars and neutron star-black hole systems. The template placement is robust and is able to automatically accommodate curvature and boundary effects with no fine-tuning. We also compare these banks against vanilla stochastic template banks and show that while both are equally efficient in the fitting-factor sense, the bank sizes are ~25% larger in the stochastic method. Further, we show that the generation of the proposed hybrid banks can be sped up by nearly an order of magnitude over the stochastic bank. Generic issues related to optimal implementation are discussed in detail. These improvements are expected to directly reduce the computational cost of gravitational wave searches.
Koh, Wonryull; Blackwell, Kim T
2011-04-21
Stochastic simulation of reaction-diffusion systems enables the investigation of stochastic events arising from the small numbers and heterogeneous distribution of molecular species in biological cells. Stochastic variations in intracellular microdomains and in diffusional gradients play a significant part in the spatiotemporal activity and behavior of cells. Although an exact stochastic simulation that simulates every individual reaction and diffusion event gives a most accurate trajectory of the system's state over time, it can be too slow for many practical applications. We present an accelerated algorithm for discrete stochastic simulation of reaction-diffusion systems designed to improve the speed of simulation by reducing the number of time-steps required to complete a simulation run. This method is unique in that it employs two strategies that have not been incorporated in existing spatial stochastic simulation algorithms. First, diffusive transfers between neighboring subvolumes are based on concentration gradients. This treatment necessitates sampling of only the net or observed diffusion events from higher to lower concentration gradients rather than sampling all diffusion events regardless of local concentration gradients. Second, we extend the non-negative Poisson tau-leaping method that was originally developed for speeding up nonspatial or homogeneous stochastic simulation algorithms. This method calculates each leap time in a unified step for both reaction and diffusion processes while satisfying the leap condition that the propensities do not change appreciably during the leap and ensuring that leaping does not cause molecular populations to become negative. Numerical results are presented that illustrate the improvement in simulation speed achieved by incorporating these two new strategies.
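As background, a minimal (non-negativity-unaware) tau-leaping step for a well-mixed toy system; the paper's algorithm extends this leaping to reaction-diffusion systems with gradient-based diffusive transfers and the non-negative Poisson variant. The stoichiometry and rates below are assumptions for illustration.

```python
import numpy as np

def tau_leap_step(x, stoich, propensities, tau, rng):
    """One basic tau-leap: fire each reaction channel a Poisson number of
    times during tau (without the non-negativity safeguards noted above)."""
    a = propensities(x)
    k = rng.poisson(a * tau)          # firing counts per reaction channel
    return x + stoich.T @ k

# toy reversible isomerization: A -> B, B -> A
stoich = np.array([[-1, 1], [1, -1]])  # rows: reactions, cols: species
rates = np.array([0.5, 0.3])
prop = lambda x: rates * x             # mass-action propensities
x = np.array([1000, 0])
rng = np.random.default_rng(0)
for _ in range(100):
    x = tau_leap_step(x, stoich, prop, tau=0.05, rng=rng)
print(x)   # should hover near the 3:5 (A:B) equilibrium, i.e. about (375, 625)
```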
Energy Optimal Path Planning: Integrating Coastal Ocean Modelling with Optimal Control
NASA Astrophysics Data System (ADS)
Subramani, D. N.; Haley, P. J., Jr.; Lermusiaux, P. F. J.
2016-02-01
A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. To set up the energy optimization, the relative vehicle speed and headings are considered to be stochastic, and new stochastic Dynamically Orthogonal (DO) level-set equations that govern their stochastic time-optimal reachability fronts are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. The accuracy and efficiency of the DO level-set equations for solving the governing stochastic level-set reachability fronts are quantitatively assessed, including comparisons with independent semi-analytical solutions. Energy-optimal missions are studied in wind-driven barotropic quasi-geostrophic double-gyre circulations, and in realistic data-assimilative re-analyses of multiscale coastal ocean flows. The latter re-analyses are obtained from multi-resolution 2-way nested primitive-equation simulations of tidal-to-mesoscale dynamics in the Middle Atlantic Bight and Shelfbreak Front region. The effects of tidal currents, strong wind events, coastal jets, and shelfbreak fronts on the energy-optimal paths are illustrated and quantified. Results showcase the opportunities for longer-duration missions that intelligently utilize the ocean environment to save energy, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.
Structural factoring approach for analyzing stochastic networks
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Shier, Douglas R.
1991-01-01
The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to significantly reduce the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
Issues and Strategies in Solving Multidisciplinary Optimization Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya
2013-01-01
Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. The accumulated multidisciplinary design activity is collected under a testbed entitled COMETBOARDS. Several issues were encountered during the solution of the problems. Four issues and the strategies adapted for their resolution are discussed. This is followed by a discussion of analytical methods that is limited to structural design applications. An optimization process can lead to an inefficient local solution. This deficiency was encountered during the design of an engine component. The limitation was overcome through an augmentation of animation into optimization. Optimum solutions obtained were infeasible for aircraft and airbreathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through a set of problems: design of an engine component, synthesis of a subsonic aircraft, operation optimization of a supersonic engine, design of a wave-rotor-topping device, profile optimization of a cantilever beam, and design of a cylindrical shell. This chapter provides a cursory account of the issues; cited references provide detailed discussion of the topics. The design of a structure can also be generated by the traditional method and by the stochastic design concept. Merits and limitations of the three methods (traditional method, optimization method and stochastic concept) are illustrated. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions can be produced by all three methods. The variation in the weight calculated by the methods was found to be modest. Some variation was noticed in the designs calculated by the methods; the variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when the simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.
Clustered-dot halftoning with direct binary search.
Goyal, Puneet; Gupta, Madhur; Staelin, Carl; Fischer, Mani; Shacham, Omri; Allebach, Jan P
2013-02-01
In this paper, we present a new algorithm for aperiodic clustered-dot halftoning based on direct binary search (DBS). The DBS optimization framework has been modified for designing clustered-dot texture, by using filters with different sizes in the initialization and update steps of the algorithm. Following an intuitive explanation of how the clustered-dot texture results from this modified framework, we derive a closed-form cost metric which, when minimized, equivalently generates stochastic clustered-dot texture. An analysis of the cost metric and its influence on the texture quality is presented, which is followed by a modification to the cost metric to reduce computational cost and to make it more suitable for screen design.
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
Detectability Thresholds and Optimal Algorithms for Community Structure in Dynamic Networks
NASA Astrophysics Data System (ADS)
Ghasemian, Amir; Zhang, Pan; Clauset, Aaron; Moore, Cristopher; Peel, Leto
2016-07-01
The detection of communities within a dynamic network is a common means for obtaining a coarse-grained view of a complex system and for investigating its underlying processes. While a number of methods have been proposed in the machine learning and physics literature, we lack a theoretical analysis of their strengths and weaknesses, or of the ultimate limits on when communities can be detected. Here, we study the fundamental limits of detecting community structure in dynamic networks. Specifically, we analyze the limits of detectability for a dynamic stochastic block model where nodes change their community memberships over time, but where edges are generated independently at each time step. Using the cavity method, we derive a precise detectability threshold as a function of the rate of change and the strength of the communities. Below this sharp threshold, we claim that no efficient algorithm can identify the communities better than chance. We then give two algorithms that are optimal in the sense that they succeed all the way down to this threshold. The first uses belief propagation, which gives asymptotically optimal accuracy, and the second is a fast spectral clustering algorithm, based on linearizing the belief propagation equations. These results extend our understanding of the limits of community detection in an important direction, and introduce new mathematical tools for similar extensions to networks with other types of auxiliary information.
NASA Astrophysics Data System (ADS)
Mallick, S.; Kar, R.; Mandal, D.; Ghoshal, S. P.
2016-07-01
This paper proposes a novel hybrid optimisation algorithm which combines the recently proposed evolutionary algorithm Backtracking Search Algorithm (BSA) with another widely accepted evolutionary algorithm, namely Differential Evolution (DE). The proposed algorithm, called BSA-DE, is employed for the optimal design of two commonly used analogue circuits, namely a Complementary Metal Oxide Semiconductor (CMOS) differential amplifier circuit with current mirror load and a CMOS two-stage operational amplifier (op-amp) circuit. BSA has a simple structure that is effective, fast and capable of solving multimodal problems. DE is a stochastic, population-based heuristic approach with the capability to solve global optimisation problems. In this paper, the transistors' sizes are optimised using the proposed BSA-DE to minimise the areas occupied by the circuits and to improve the performances of the circuits. The simulation results justify the superiority of BSA-DE in global convergence properties and fine-tuning ability, and prove it to be a promising candidate for the optimal design of analogue CMOS amplifier circuits. The simulation results obtained for both amplifier circuits prove the effectiveness of the proposed BSA-DE-based approach over DE, harmony search (HS), artificial bee colony (ABC) and particle swarm optimisation (PSO) in terms of convergence speed, design specifications and design parameters. It is shown that the BSA-DE-based design technique yields the least MOS transistor area for each amplifier circuit, and each designed circuit is shown to have the best performance parameters, such as gain and power dissipation, as compared with those reported in the recent literature.
Shi, Yue; Queener, Hope M.; Marsack, Jason D.; Ravikumar, Ayeswarya; Bedell, Harold E.; Applegate, Raymond A.
2013-01-01
Dynamic registration uncertainty of a wavefront-guided correction with respect to the underlying wavefront error (WFE) inevitably decreases retinal image quality. A partial correction may improve average retinal image quality and visual acuity in the presence of registration uncertainties. The purpose of this paper is to (a) develop an algorithm to optimize wavefront-guided corrections that improve visual acuity given registration uncertainty and (b) test the hypothesis that these corrections provide improved visual performance in the presence of these uncertainties as compared to a full-magnitude correction or a correction by Guirao, Cox, and Williams (2002). A stochastic parallel gradient descent (SPGD) algorithm was used to optimize the partial-magnitude correction for three keratoconic eyes based on measured scleral contact lens movement. Given its high correlation with logMAR acuity, the retinal image quality metric log visual Strehl was used as a predictor of visual acuity. Predicted values of visual acuity with the optimized corrections were validated by regressing measured acuity loss against predicted loss. Measured loss was obtained from normal subjects viewing acuity charts that were degraded by the residual aberrations generated by the movement of the full-magnitude correction, the correction by Guirao, and the optimized SPGD correction. Partial-magnitude corrections optimized with an SPGD algorithm provide at least one line of improvement in average visual acuity over the full-magnitude correction and the correction by Guirao, given the registration uncertainty. This study demonstrates that it is possible to improve average visual acuity by optimizing the wavefront-guided correction in the presence of registration uncertainty. PMID:23757512
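A generic SPGD sketch, assuming a synthetic image-quality metric in place of the log visual Strehl computation: all correction coefficients are perturbed in parallel and updated in proportion to the measured metric change. The gain, perturbation size, and target coefficients are illustrative assumptions.

```python
import numpy as np

def spgd(metric, n_coeffs, gain=5.0, delta=0.02, n_iter=2000, seed=0):
    """Stochastic parallel gradient descent (maximization form): perturb all
    coefficients at once and step along the perturbation scaled by the
    two-sided metric change."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_coeffs)
    for _ in range(n_iter):
        d = delta * rng.choice([-1.0, 1.0], size=n_coeffs)  # Bernoulli perturbation
        dj = metric(u + d) - metric(u - d)                  # two-sided measurement
        u += gain * dj * d
    return u

# toy metric peaked at a hypothetical optimal correction
target = np.array([0.3, -0.1, 0.2, 0.05])
metric = lambda u: -np.sum((u - target) ** 2)
print(np.round(spgd(metric, n_coeffs=4), 3))   # should approach `target`
```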
Modeling multilayer x-ray reflectivity using genetic algorithms
NASA Astrophysics Data System (ADS)
Sánchez del Río, M.; Pareschi, G.; Michetschläger, C.
2000-06-01
The x-ray reflectivity of a multilayer is a non-linear function of many parameters (materials, layer thickness, density, roughness). Non-linear fitting of experimental data with simulations requires the use of initial values sufficiently close to the optimum value. This is a difficult task when the topology of the space of the variables is highly structured. We apply global optimization methods to fit multilayer reflectivity. Genetic algorithms are stochastic methods based on the model of natural evolution: the improvement of a population along successive generations. A complete set of initial parameters constitutes an individual. The population is a collection of individuals. Each generation is built from the parent generation by applying some operators (selection, crossover, mutation, etc.) on the members of the parent generation. The pressure of selection drives the population to include "good" individuals. For a large number of generations, the best individuals will approximate the optimum parameters. Some results on fitting experimental hard x-ray reflectivity data for Ni/C and W/Si multilayers using genetic algorithms are presented. This method can also be applied to design multilayers optimized for a target application.
A hierarchical exact accelerated stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Orendorff, David; Mjolsness, Eric
2012-12-01
A new algorithm, "HiER-leap" (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled "blocks" and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms.
Ma, Xingkun; Huang, Lei; Bian, Qi; Gong, Mali
2014-09-10
The wavefront correction ability of a deformable mirror with a multireflection waveguide was investigated and compared via simulations. By dividing a conventional actuator array into a multireflection waveguide that consisted of single-actuator units, an arbitrary actuator pattern could be achieved. A stochastic parallel perturbation algorithm was proposed to find the optimal actuator pattern for a particular aberration. Compared with a conventional actuator array, the multireflection waveguide showed significant advantages in the correction of higher order aberrations.
Gazijahani, Farhad Samadi; Ravadanegh, Sajad Najafi; Salehi, Javad
2018-02-01
The inherent volatility and unpredictable nature of renewable generation and load demand pose considerable challenges for energy exchange optimization of microgrids (MGs). To address these challenges, this paper proposes a new risk-based multi-objective energy exchange optimization for networked MGs from economic and reliability standpoints under load consumption and renewable power generation uncertainties. In doing so, three risk-based strategies are distinguished by using the conditional value at risk (CVaR) approach. The proposed model has two distinct objective functions. The first minimizes the operation and maintenance costs, the cost of power transactions between the upstream network and the MGs, and the power loss cost, whereas the second minimizes the energy not supplied (ENS) value. Furthermore, a stochastic scenario-based approach is incorporated to handle the uncertainty, and the Kantorovich distance scenario reduction method is implemented to reduce the computational burden. Finally, the non-dominated sorting genetic algorithm (NSGA-II) is applied to minimize the objective functions simultaneously, and the best solution is extracted by a fuzzy satisfying method with respect to the risk-based strategies. To demonstrate the performance of the proposed model, it is applied to the modified IEEE 33-bus distribution system, and the obtained results show that the presented approach can be considered an efficient tool for optimal energy exchange optimization of MGs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
Stochastic Models of Polymer Systems
2016-01-01
[Report documentation page; only fragments of the abstract are recoverable.] The stochastic gradient descent algorithm is now the "algorithm of choice" for very large machine learning problems... information about the behavior of the algorithm. At the same time, we were also able to formulate various acceleration techniques in precise mathematical terms...
Optimal reservoir operation policies using novel nested algorithms
NASA Astrophysics Data System (ADS)
Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri
2015-04-01
Historically, the two most widely practiced methods for optimal reservoir operation have been dynamic programming (DP) and stochastic dynamic programming (SDP). These two methods suffer from the so-called "dual curse", which prevents them from being used in reasonably complex water systems. The first is the "curse of dimensionality", denoting an exponential growth of the computational complexity with the state-decision space dimension. The second is the "curse of modelling", which requires an explicit model of each component of the water system to anticipate the effect of each system transition. We address the problem of optimal reservoir operation concerning multiple objectives related to 1) reservoir releases to satisfy several downstream users competing for water with dynamically varying demands, 2) deviations from the target minimum and maximum reservoir water levels, and 3) hydropower production, which is a combination of the reservoir water level and the reservoir releases. Addressing such a problem with the classical methods (DP and SDP) requires a reasonably fine discretization of the reservoir storage volume, which in combination with the release discretization required for meeting the demands of downstream users leads to computationally expensive formulations and causes the curse of dimensionality. We present a novel approach, named "nested", that is implemented in DP, SDP and reinforcement learning (RL); correspondingly, three new algorithms are developed, named nested DP (nDP), nested SDP (nSDP) and nested RL (nRL). Each nested algorithm is composed of two algorithms: 1) DP, SDP or RL and 2) a nested optimization algorithm. Depending on how the objective function related to deficits in the allocation problem is formulated, two nested methods are implemented: 1) the simplex method for linear allocation problems, and 2) a quadratic knapsack method for nonlinear problems. The novel idea is to include the nested optimization algorithm in the state transition, which lowers the starting problem dimension and alleviates the curse of dimensionality; a sketch of this nesting appears below. The algorithms can solve multi-objective optimization problems without significantly increasing the complexity and the computational expense, can handle dense and irregular variable discretization, and are coded in Java as prototype applications. The three algorithms were tested at the multipurpose reservoir Knezevo of the Zletovica hydro-system located in the Republic of Macedonia, with eight objectives, including urban water supply, agriculture, ensuring ecological flow, and generation of hydropower. Because the Zletovica hydro-system is relatively complex, the novel algorithms were pushed to their limits, demonstrating their capabilities and limitations. The nSDP and nRL derived/learned the optimal reservoir policy using 45 years (1951-1995) of historical data. The nSDP and nRL optimal reservoir policies were tested on 10 years (1995-2005) of historical data and compared with nDP optimal reservoir operation over the same period. The nested algorithms and the optimal reservoir operation results are analysed and explained.
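The nesting idea can be made concrete with a toy backward recursion in which an allocation subproblem is solved inside every state transition. Everything in this sketch is a placeholder assumption: the storage grid, inflow, demands, the quadratic deficit penalty, and the greedy-by-priority allocation standing in for the simplex step of the paper.

```python
import numpy as np

storages = np.arange(0, 101, 10)          # discretized reservoir storage levels
inflow, demands = 25.0, [10.0, 8.0, 5.0]  # per-stage inflow; user demands by priority
T = 12                                    # planning stages

def allocate(release, demands):
    """Nested allocation: greedy by priority, a stand-in for the simplex solver."""
    left, total_deficit = release, 0.0
    for d in demands:
        served = min(d, left)
        left -= served
        total_deficit += d - served
    return total_deficit

value = np.zeros(len(storages))           # terminal value function
for t in range(T):                        # backward recursion (stationary stage here)
    new_value = np.empty_like(value)
    for i, s in enumerate(storages):
        best = np.inf
        for release in np.arange(0, s + inflow + 1, 5.0):
            s_next = min(max(s + inflow - release, 0.0), storages[-1])
            j = int(np.argmin(np.abs(storages - s_next)))  # nearest storage node
            deficit = allocate(release, demands)           # nested optimization
            best = min(best, deficit**2 + value[j])        # penalize squared deficit
        new_value[i] = best
    value = new_value
print(value)
```

Because the allocation is resolved inside the transition, the DP state keeps only the storage dimension, which is exactly how the nesting alleviates the curse of dimensionality.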
An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors
Li, Jian; Wei, Xinguo; Zhang, Guangjun
2017-01-01
Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper an extended Kalman filter-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system, with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according to the predicted image motion are accessed using a catalog partition table to speed up the tracking, a step called star mapping. Software simulation and a night-sky experiment were performed to validate the efficiency and reliability of the proposed method. PMID:28825684
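The predict/update cycle at the heart of such a tracker can be sketched generically. The sketch assumes the caller supplies the dynamics f, the measurement function h, and their Jacobians F and H; none of this is the paper's specific star-sensor model, whose state is the attitude quaternion plus angular velocity.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One extended Kalman filter cycle: nonlinear predict, linearized update."""
    x_pred = f(x)                            # propagate state through the dynamics
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q             # propagate covariance via the Jacobian
    y = z - h(x_pred)                        # innovation: observed minus predicted
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R             # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

# tiny demo with near-linear dynamics and direct observation of the first state
f = lambda x: np.array([x[0] + 0.1 * x[1], x[1]])
F = lambda x: np.array([[1.0, 0.1], [0.0, 1.0]])
h = lambda x: x[:1]
H = lambda x: np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, np.array([0.3]), f, F, h, H, 0.01 * np.eye(2), np.array([[0.05]]))
print(x)
```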
NASA Astrophysics Data System (ADS)
Arias, E.; Florez, E.; Pérez-Torres, J. F.
2017-06-01
A new algorithm for the determination of equilibrium structures suitable for metal nanoclusters is proposed. The algorithm performs a stochastic search of the minima associated with the nuclear potential energy function restricted to a sphere (similar to the Thomson problem) in order to guess configurations of the nuclear positions. Subsequently, the guessed configurations are further optimized, driven by the total energy function, using the conventional gradient descent method. This methodology is equivalent to using the valence shell electron pair repulsion model to guess initial configurations in traditional molecular quantum chemistry. The framework is illustrated on several clusters of increasing complexity: Cu7, Cu9, and Cu11 as benchmark systems, and Cu38 and Ni9 as novel systems. New equilibrium structures for Cu9, Cu11, Cu38, and Ni9 are reported.
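The two-phase strategy, a stochastic search restricted to a sphere followed by unrestricted gradient descent on the energy, can be sketched as follows. The Lennard-Jones pair potential here is only a stand-in for the total energy function (which in the paper comes from electronic-structure calculations), and the step sizes and iteration counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(x):
    """Lennard-Jones stand-in (reduced units) for the cluster's total energy."""
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    r = d[np.triu_indices(len(x), k=1)]
    return np.sum(r**-12 - 2.0 * r**-6)

def sphere_guess(n, steps=2000, sigma=0.1):
    """Phase 1: Thomson-like stochastic search restricted to the unit sphere."""
    x = rng.normal(size=(n, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    e = energy(x)
    for _ in range(steps):
        y = x + sigma * rng.normal(size=x.shape)
        y /= np.linalg.norm(y, axis=1, keepdims=True)   # project back onto the sphere
        ey = energy(y)
        if ey < e:                                      # keep only improving moves
            x, e = y, ey
    return x

def descend(x, lr=1e-3, steps=2000, eps=1e-6):
    """Phase 2: plain gradient descent on the unrestricted energy (numerical gradient)."""
    for _ in range(steps):
        g = np.zeros_like(x)
        for i in range(len(x)):
            for k in range(3):
                xp, xm = x.copy(), x.copy()
                xp[i, k] += eps
                xm[i, k] -= eps
                g[i, k] = (energy(xp) - energy(xm)) / (2 * eps)
        x = x - lr * g
    return x

x0 = sphere_guess(7)                  # e.g. a 7-atom cluster guess
print(energy(descend(x0)))
```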
Multiscale Hy3S: hybrid stochastic simulation for supercomputers.
Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N
2006-02-24
Stochastic simulation has become a useful tool for studying natural biological systems and designing new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset with a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translational elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of the NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems and analyze data. We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.
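The "original stochastic simulation algorithm" referred to above is Gillespie's direct method, whose cost per reaction event is what motivates the hybrid approach. A minimal sketch for a one-species birth-death system follows; the rate constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie(x, t_end, k_birth=10.0, k_death=0.1):
    """Gillespie's direct method for the birth-death system 0 -> X, X -> 0."""
    t, traj = 0.0, [(0.0, x)]
    while t < t_end:
        a = np.array([k_birth, k_death * x])  # reaction propensities
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)        # exponential waiting time to next event
        if rng.random() < a[0] / a0:          # pick which reaction fires
            x += 1
        else:
            x -= 1
        traj.append((t, x))
    return traj

print(gillespie(0, 5.0)[-1])   # final (time, state) pair of the trajectory
```

Every reaction event costs one loop iteration, which is exactly why systems with thousands of fast reactions push simulators toward the hybrid representations described above.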
NASA Astrophysics Data System (ADS)
Zhu, Z. W.; Zhang, W. D.; Xu, J.
2014-03-01
The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential terms were introduced to interpret the hysteretic phenomena of the GMF, and the non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The conditions for stochastic Hopf bifurcation and noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed using the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases as the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were confirmed by experiments. The results are helpful for engineering applications of GMF.
Optimal Control of Stochastic Inventory with Deteriorating Production
NASA Astrophysics Data System (ADS)
Affandi, Pardi
2018-01-01
In this paper, we use an optimal control approach to determine the optimal production rate. Most inventory production models deal with a single item. We first build the stochastic mathematical model of the inventory; in this model we also assume that the items are kept in the same store. The mathematical model of the inventory problem can be deterministic or stochastic. This research discusses how to formulate the stochastic model and how to solve the inventory model using optimal control techniques. The main tool for deriving the necessary optimality conditions is the Pontryagin maximum principle, which involves the Hamiltonian function. In this way we obtain the optimal production rate in a production-inventory system where items are subject to deterioration.
Multidimensional stochastic approximation using locally contractive functions
NASA Technical Reports Server (NTRS)
Lawton, W. M.
1975-01-01
A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
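In its simplest scalar form, a Robbins-Monro iteration for a fixed point x* = f(x*) from noisy evaluations looks like the sketch below; the contractive map and noise level are illustrative stand-ins for the paper's multidimensional locally contractive setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def robbins_monro(f_noisy, x0, n_iter=10000):
    """Find the fixed point x* = f(x*) from noisy evaluations of f."""
    x = x0
    for n in range(1, n_iter + 1):
        a_n = 1.0 / n                    # gains: sum a_n = inf, sum a_n^2 < inf
        x = x + a_n * (f_noisy(x) - x)   # move toward the (noisy) image of x
    return x

# illustrative contractive map f(x) = 0.5 x + 1, with fixed point x* = 2
print(robbins_monro(lambda x: 0.5 * x + 1.0 + rng.normal(scale=0.1), x0=0.0))
```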
NASA Astrophysics Data System (ADS)
Serrat-Capdevila, A.; Valdes, J. B.
2005-12-01
An optimization approach for the operation of international multi-reservoir systems is presented. The approach uses Stochastic Dynamic Programming (SDP) algorithms, both steady-state and real-time, to develop two models. In the first model, the reservoirs and flows of the system are aggregated to yield an equivalent reservoir, and the obtained operating policies are disaggregated using a non-linear optimization procedure for each reservoir and for each nation's water balance. In the second model a multi-reservoir approach is applied, disaggregating the releases for each country's water share in each reservoir. The non-linear disaggregation algorithm uses SDP-derived operating policies as boundary conditions for a local time-step optimization. Finally, the performance of the different approaches and methods is compared. These models are applied to the Amistad-Falcon International Reservoir System as part of a binational dynamic modeling effort to develop a decision support system tool for better management of the water resources in the Lower Rio Grande Basin, currently enduring a severe drought.
A path following algorithm for the graph matching problem.
Zaslavskiy, Mikhail; Bach, Francis; Vert, Jean-Philippe
2009-12-01
We propose a convex-concave programming approach for the labeled weighted graph matching problem. The convex-concave programming formulation is obtained by rewriting the weighted graph matching problem as a least-squares problem on the set of permutation matrices and relaxing it to two different optimization problems: a quadratic convex and a quadratic concave optimization problem on the set of doubly stochastic matrices. The concave relaxation has the same global minimum as the initial graph matching problem, but the search for its global minimum is also a hard combinatorial problem. We therefore construct an approximation of the concave problem solution by following a solution path of a convex-concave problem obtained by linear interpolation of the convex and concave formulations, starting from the convex relaxation. This method allows the information on graph label similarities to be easily integrated into the optimization problem, and therefore enables labeled weighted graph matching. The algorithm is compared with some of the best performing graph matching methods on four data sets: simulated graphs, QAPLib, retina vessel images, and handwritten Chinese characters. In all cases, the results are competitive with the state of the art.
Othman, Faridah; Taghieh, Mahmood
2016-01-01
Optimal operation of water resources in multiple, multipurpose reservoirs is very complicated. This is because of the number of dams, each dam's location (series or parallel), conflicts among objectives, and the stochastic nature of the inflow of water into the system. In this paper, performance optimization of the system of Karun and Dez reservoir dams has been studied and investigated, with the purposes of generating hydroelectric energy and meeting water demand across 6 dams. On the Karun River, 5 dams have been built in series, and the Dez dam has been built parallel to those 5 dams. One of the main achievements of this research is the implementation of the hydroelectric energy production structure as a matrix function in MATLAB software. The results show that the structure of the objective function for generating hydroelectric energy plays a more important role in the weighting-method algorithm than water supply. Nonetheless, by implementing the ε-constraint method algorithm, we can both increase hydroelectric power generation and supply around 85% of agricultural and industrial demands. PMID:27248152
A Design Problem of Assembly Line Systems using Genetic Algorithm under the BTO Environment
NASA Astrophysics Data System (ADS)
Abe, Kazuaki; Yamada, Tetsuo; Matsui, Masayuki
Under the BTO environment, stochastic assembly lines require design methods that shorten not only the production lead time but also the ready time for the line design. We propose a design method for the Assembly Line Systems (ALS) of Yamada et al. (2001) using a Genetic Algorithm (GA) and an Adam-Eve GA, in which all design variables are determined in consideration of constraints such as the line length related to the production lead time. First, an ALS model with a line length constraint is introduced, and an optimal design problem is set to maximize the net reward under shorter lead time. Next, a simulation optimization method is developed using the Adam-Eve GA and the traditional GA. Finally, an optimal design example is shown and discussed by comparing the 2-stage design of Yamada et al. (2001) with both GA designs. It is shown that the Adam-Eve GA is superior to the traditional GA design in terms of computational time, though there is only a slight difference in net reward.
Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J
2016-12-01
This study presents the development of an alternative noise current term and a novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus, which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour, or microscopic fluctuations, of the node of Ranvier's voltage-dependent sodium channels, whose intensity is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: they are independent of membrane voltage, and they are unable to inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining its low computational cost and ease of implementation compared with other conductance-based and Markovian-based stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics, such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter, and a decrease in latency with an increase in stimulus intensity.
NASA Astrophysics Data System (ADS)
Molde, H.; Zwick, D.; Muskulus, M.
2014-12-01
Support structures for offshore wind turbines contribute a large part of the total project cost, and a cost saving of a few percent would have considerable impact. At present, support structures are designed with simplified methods, e.g., spreadsheet analysis, before more detailed load calculations are performed. Due to the large number of load cases, only a few semi-manual design iterations are typically executed. Computer-assisted optimization algorithms could help to further explore design limits and avoid unnecessary conservatism. In this study the simultaneous perturbation stochastic approximation (SPSA) method developed by Spall in the 1990s was assessed with respect to its suitability for support structure optimization. The method depends on a few parameters and an objective function that need to be chosen carefully. In each iteration the structure is evaluated by time-domain analyses, and joint fatigue lifetimes and ultimate strength utilization are computed from stress concentration factors. A pseudo-gradient is determined from only two analysis runs, and the design is adjusted in the direction that improves it the most. The algorithm is able to generate considerably improved designs, compared to other methods, in a few hundred iterations, which is demonstrated for the NOWITECH 10 MW reference turbine.
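Spall's simultaneous perturbation trick, a pseudo-gradient from just two objective evaluations per iteration regardless of dimension, is compact enough to sketch. The quadratic test objective and the gain-sequence constants below are common illustrative defaults, not the fatigue-driven objective of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def spsa(loss, theta, n_iter=500, a=0.1, c=0.1, alpha=0.602, gamma=0.101):
    """Simultaneous perturbation stochastic approximation (Spall)."""
    for k in range(1, n_iter + 1):
        a_k = a / k**alpha                  # decaying step size
        c_k = c / k**gamma                  # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher directions
        # pseudo-gradient from exactly two loss evaluations, whatever the dimension
        g = (loss(theta + c_k * delta) - loss(theta - c_k * delta)) / (2.0 * c_k * delta)
        theta = theta - a_k * g
    return theta

# illustrative objective: squared distance to a target design vector
target = np.array([1.0, -2.0, 0.5])
print(spsa(lambda th: np.sum((th - target) ** 2), np.zeros(3)))
```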
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.
2013-07-25
This paper presents four algorithms to generate random forecast error time series, and compares their performance. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation, in order to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all methods generate satisfactory results. One method may preserve one or two required statistical characteristics better than the other methods, but may not preserve other statistical characteristics as well. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capability of each algorithm to preserve the characteristics of the historical forecast data sets.
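As an illustration of one of the four generators, a plain ARMA(1,1) error model can be simulated in a few lines; the coefficients, error scale, and flat placeholder forecast below are assumptions chosen only to show the mechanics, not values fitted to any data set.

```python
import numpy as np

rng = np.random.default_rng(0)

def arma_errors(n, phi=0.8, theta=0.3, sigma=1.0):
    """ARMA(1,1) error series: e_t = phi*e_{t-1} + w_t + theta*w_{t-1}."""
    e = np.zeros(n)
    w = rng.normal(scale=sigma, size=n)   # white-noise innovations
    for t in range(1, n):
        e[t] = phi * e[t - 1] + w[t] + theta * w[t - 1]
    return e

errors = arma_errors(24 * 365)                  # one year of hourly DA errors
forecast = np.full(errors.shape, 100.0)         # placeholder DA load forecast (MW)
synthetic_actuals = forecast + errors           # synthetic "actuals" for studies
# check the statistics the generators are meant to preserve
print(errors.std(), np.corrcoef(errors[:-1], errors[1:])[0, 1])
```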
An improved stochastic fractal search algorithm for 3D protein structure prediction.
Zhou, Changjun; Sun, Chuan; Wang, Bin; Wang, Xiaojun
2018-05-03
Protein structure prediction (PSP) is a significant area for biological information research, disease treatment, and drug development. In this paper, three-dimensional structures of proteins are predicted from the known amino acid sequences, and the structure prediction problem is transformed into a typical NP problem via the AB off-lattice model. This work applies a novel improved Stochastic Fractal Search algorithm (ISFS) to solve the problem. The Stochastic Fractal Search algorithm (SFS) is an effective evolutionary algorithm that performs well in exploring the search space but sometimes falls into local minima. To avoid this weakness, Lévy flights and internal feedback information are introduced in ISFS. In the experiments, simulations are conducted with the ISFS algorithm on Fibonacci sequences and real peptide sequences. Experimental results prove that ISFS performs more efficiently and robustly in terms of finding the global minimum and avoiding getting stuck in local minima.
NASA Astrophysics Data System (ADS)
Zakynthinaki, M. S.; Barakat, R. O.; Cordente Martínez, C. A.; Sampedro Molinuevo, J.
2011-03-01
The stochastic optimization method ALOPEX IV has been successfully applied to the problem of detecting possible changes in maternal heart rate kinetics during pregnancy. To this end, maternal heart rate data were recorded before, during and after gestation, during sessions of exercise of constant mild intensity; ALOPEX IV stochastic optimization was used to calculate the parameter values that optimally fit a dynamical systems model to the experimental data. The results not only demonstrate the effectiveness of ALOPEX IV stochastic optimization, but also have important implications in the area of exercise physiology, as they reveal important changes in maternal cardiovascular dynamics as a result of pregnancy.
Optimizing an experimental design for an electromagnetic experiment
NASA Astrophysics Data System (ADS)
Roux, Estelle; Garcia, Xavier
2013-04-01
Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion on the acquisition of suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of a given design via the objective function) and stochastic optimization methods such as genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well-distributed observations has the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.
Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems
Fonseca Guerra, Gabriel A.; Furber, Steve B.
2017-01-01
Constraint satisfaction problems (CSP) are at the core of numerous scientific and technological applications. However, CSPs belong to the NP-complete complexity class, for which the existence (or not) of efficient algorithms remains a major unsolved question in computational complexity theory. In the face of this fundamental difficulty, heuristics and approximation methods are used to approach instances of NP problems (e.g., decision and hard optimization problems). The human brain efficiently handles CSPs both in perception and behavior using spiking neural networks (SNNs), and recent studies have demonstrated that the noise embedded within an SNN can be used as a computational resource to solve CSPs. Here, we provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware, further demonstrating their potential to implement a stochastic search that solves instances of P and NP problems expressed as CSPs. This facilitates the exploration of new optimization strategies and the understanding of the computational abilities of SNNs. We demonstrate the basic principles of the framework by solving difficult instances of the Sudoku puzzle and of the map color problem, and explore its application to spin glasses. The solver works as a stochastic dynamical system, which is attracted by the configuration that solves the CSP. The noise allows an optimal exploration of the space of configurations, looking for the satisfiability of all the constraints; if applied discontinuously, it can also force the system to leap to a new random configuration, effectively causing a restart. PMID:29311791
An Approach to Economic Dispatch with Multiple Fuels Based on Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Sriyanyong, Pichet
2011-06-01
Particle Swarm Optimization (PSO), a stochastic optimization technique, shows superiority over other evolutionary computation techniques in terms of lower computation time, easy implementation with high-quality solutions, stable convergence characteristics, and independence from initialization. For this reason, this paper proposes the application of PSO to the Economic Dispatch (ED) problem, which occurs in the operational planning of power systems. In this study, the ED problem is categorized according to the characteristics of its cost function: the ED problem with a smooth cost function and the ED problem with multiple fuels. Taking multiple fuels into account makes the problem more realistic. The experimental results show that the proposed PSO algorithm is more efficient than the previous approaches under consideration, as well as highly promising for real-world applications.
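A minimal global-best PSO loop is sketched below on a toy smooth-cost ED-style objective; the inertia weight, acceleration constants, quadratic fuel-cost coefficients, and demand-balance penalty are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(cost, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=10.0):
    """Global-best particle swarm optimization."""
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(cost, 1, x)          # personal best costs
    g = pbest[np.argmin(pbest_f)]                      # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(cost, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

def ed_cost(p, demand=15.0):
    """Toy quadratic fuel-cost curves for three units, demand met via penalty."""
    a, b = np.array([0.10, 0.12, 0.08]), np.array([2.0, 1.8, 2.2])
    return np.sum(a * p**2 + b * p) + 100.0 * (p.sum() - demand) ** 2

print(pso(ed_cost, dim=3))
```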
Removing Barriers for Effective Deployment of Intermittent Renewable Generation
NASA Astrophysics Data System (ADS)
Arabali, Amirsaman
The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with a hybrid renewable generation (including wind and PV) and energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system, including the cost of renewable generation and storage, subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint, which shows to what degree the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and an energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load-shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated through different scenarios. Using a compromise-solution method, the best compromise between reliability and cost is realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine the storage application and its optimal placement for the social cost and transmission congestion relief of wind integration. Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. In order to mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment and reliability of the system as the objective functions. A Non-dominated Sorting Genetic Algorithm (NSGA-II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker's preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Test System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.
NASA Astrophysics Data System (ADS)
Zhou, Pu; Wang, Xiaolin; Li, Xiao; Chen, Zilum; Xu, Xiaojun; Liu, Zejin
2009-10-01
Coherent summation of fibre laser beams, which can be scaled to a relatively large number of elements, is simulated using the stochastic parallel gradient descent (SPGD) algorithm. The applicability of this algorithm to coherent summation is analysed, and its optimisation parameters and bandwidth limitations are studied.
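The SPGD update itself takes only a few lines. In the sketch below the metric being maximised is the combined on-axis intensity of N unit-amplitude phasors, a toy stand-in for the photodetector signal of a real combining experiment; the gain and dither amplitude are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def metric(phases):
    """Combined on-axis intensity of N unit-amplitude beams (to be maximised)."""
    return np.abs(np.sum(np.exp(1j * phases))) ** 2

def spgd(n_beams=8, n_iter=2000, gain=0.05, dither=0.1):
    """Stochastic parallel gradient descent on the phase of each beam."""
    u = rng.uniform(0.0, 2.0 * np.pi, n_beams)          # control phases
    for _ in range(n_iter):
        du = dither * rng.choice([-1.0, 1.0], n_beams)  # parallel random dither
        dJ = metric(u + du) - metric(u - du)            # two-sided metric change
        u = u + gain * dJ * du                          # step all channels at once
    return metric(u) / n_beams**2                       # 1.0 means perfect phasing

print(spgd())
```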
Blakes, Jonathan; Twycross, Jamie; Romero-Campero, Francisco Jose; Krasnogor, Natalio
2011-12-01
The Infobiotics Workbench is an integrated software suite incorporating model specification, simulation, parameter optimization and model checking for Systems and Synthetic Biology. A modular model specification allows for straightforward creation of large-scale models containing many compartments and reactions. Models are simulated either using stochastic simulation or numerical integration, and visualized in time and space. Model parameters and structure can be optimized with evolutionary algorithms, and model properties calculated using probabilistic model checking. Source code and binaries for Linux, Mac and Windows are available at http://www.infobiotics.org/infobiotics-workbench/; released under the GNU General Public License (GPL) version 3. Natalio.Krasnogor@nottingham.ac.uk.
On Distributed PV Hosting Capacity Estimation, Sensitivity Study, and Improvement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Mather, Barry
This paper first studies the estimated distributed PV hosting capacities of seventeen utility distribution feeders using the Monte Carlo simulation based stochastic analysis, and then analyzes the sensitivity of PV hosting capacity to both feeder and photovoltaic system characteristics. Furthermore, an active distribution network management approach is proposed to maximize PV hosting capacity by optimally switching capacitors, adjusting voltage regulator taps, managing controllable branch switches and controlling smart PV inverters. The approach is formulated as a mixed-integer nonlinear optimization problem and a genetic algorithm is developed to obtain the solution. Multiple simulation cases are studied and the effectiveness of the proposed approach on increasing PV hosting capacity is demonstrated.
Vesapogu, Joshi Manohar; Peddakotla, Sujatha; Kuppa, Seetha Rama Anjaneyulu
2013-01-01
With the advancements in semiconductor technology, high-power medium voltage (MV) drives are extensively used in numerous industrial applications. A challenging technical requirement of MV drives is to control a multilevel inverter (MLI) with low total harmonic distortion (%THD), satisfying the IEEE Standard 519-1992 harmonic guidelines, and with low switching losses. Among all modulation control strategies for MLIs, the selective harmonic elimination (SHE) technique is one of the traditionally preferred modulation control techniques at fundamental switching frequency, with a better harmonic profile. On the other hand, the equations formed by the SHE technique are highly non-linear in nature and may have multiple, single, or even no solutions at a particular modulation index (MI). However, in some MV drive applications, it is required to operate over a range of MI. Providing analytical solutions for the SHE equations over the whole range of MI from 0 to 1 has been a challenging task for researchers. In this paper, an attempt is made to solve the SHE equations using deterministic and stochastic optimization methods, and a comparative harmonic analysis has been carried out. An effective algorithm which minimizes the %THD with the least computational effort among all the optimization algorithms considered is presented. To validate the effectiveness of the proposed MPSO technique, an experiment is carried out on a low-power prototype of a three-phase CHB 11-level inverter using an FPGA-based Xilinx Spartan-3A DSP controller. The experimental results prove that the MPSO technique has successfully solved the SHE equations over the whole range of MI from 0 to 1, and the %THD obtained over the major range of MI also satisfies the IEEE 519-1992 harmonic guidelines.
Exact and approximate stochastic simulation of intracellular calcium dynamics.
Wieder, Nicolas; Fink, Rainer H A; Wegner, Frederic von
2011-01-01
In simulations of chemical systems, the main task is to find an exact or approximate solution of the chemical master equation (CME) that satisfies certain constraints with respect to computation time and accuracy. While Brownian motion simulations of single molecules are often too time consuming to represent the mesoscopic level, the classical Gillespie algorithm is a stochastically exact algorithm that provides satisfying results in the representation of calcium microdomains. Gillespie's algorithm can be approximated via the tau-leap method and the chemical Langevin equation (CLE). Both methods lead to a substantial acceleration in computation time and a relatively small decrease in accuracy. Elimination of the noise terms leads to the classical, deterministic reaction rate equations (RRE). For complex multiscale systems, hybrid simulations are increasingly proposed to combine the advantages of stochastic and deterministic algorithms. A frequently used exemplary cell type in this context is the striated muscle cell (e.g., cardiac and skeletal muscle cells). The properties of these cells are well described, and they express many common calcium-dependent signaling pathways. The purpose of the present paper is to provide an overview of the aforementioned simulation approaches and their mutual relationships in the spectrum ranging from stochastic to deterministic algorithms.
A random optimization approach for inherent optic properties of nearshore waters
NASA Astrophysics Data System (ADS)
Zhou, Aijun; Hao, Yongshuai; Xu, Kuo; Zhou, Heng
2016-10-01
Traditional water quality sampling is time-consuming and costly, and cannot meet current monitoring needs. Hyperspectral remote sensing technology offers good temporal resolution, broad spatial coverage, and rich spectral information, and therefore has good potential for water quality supervision. Via a semi-analytical method, remote sensing information can be related to water quality. The inherent optical properties are used to quantify the water quality, and an optical model of the water column is established to analyse the features of the water. Using the stochastic optimization algorithm Threshold Accepting, a global optimum of the unknown model parameters can be determined to obtain the distribution of chlorophyll, dissolved organic matter, and suspended particles in the water. Improving the search step of the optimization algorithm markedly reduces the processing time and creates more room for increasing the number of parameters. Redefining the optimization steps and acceptance criteria makes the whole inversion process more targeted, thus improving the accuracy of the inversion. Application results for simulated data provided by IOCCG and field data provided by NASA show that the model is continuously improved and enhanced. Finally, a low-cost, effective retrieval model of water quality from hyperspectral remote sensing can be achieved.
Probabilistic DHP adaptive critic for nonlinear stochastic control systems.
Herzallah, Randa
2013-06-01
Following the recently developed algorithms for fully probabilistic control design for general dynamic stochastic systems (Herzallah & Kárný, 2011; Kárný, 1996), this paper presents the solution to the probabilistic dual heuristic programming (DHP) adaptive critic method (Herzallah & Kárný, 2011) and a randomized control algorithm for stochastic nonlinear dynamical systems. The purpose of the randomized control input design is to make the joint probability density function of the closed loop system as close as possible to a predetermined ideal joint probability density function. This paper completes the previous work (Herzallah & Kárný, 2011; Kárný, 1996) by formulating and solving the fully probabilistic control design problem for the more general case of nonlinear stochastic discrete time systems. A simulated example is used to demonstrate the use of the algorithm, and encouraging results have been obtained. Copyright © 2013 Elsevier Ltd. All rights reserved.
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
[Report documentation page; only fragments of the abstract are recoverable.] Density estimation is needed for the generation of input densities to simulation and stochastic optimization models, in the analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum... an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables...
Computational singular perturbation analysis of stochastic chemical systems with stiffness
NASA Astrophysics Data System (ADS)
Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; Najm, Habib N.
2017-04-01
Computational singular perturbation (CSP) is a useful method for the analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum and deterministic level. On the other hand, CSP is not directly applicable to chemical reaction systems at the micro or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and an associated algorithm, that can be used not only to construct accurately and efficiently the numerical solutions of stiff stochastic chemical reaction systems, but also to analyze the dynamics of the reduced stochastic reaction systems. The algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.
Wu, Sheng; Li, Hong; Petzold, Linda R.
2015-01-01
The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy. PMID:26609185
Predictability of the Lagrangian Motion in the Upper Ocean
NASA Astrophysics Data System (ADS)
Piterbarg, L. I.; Griffa, A.; Griffa, A.; Mariano, A. J.; Ozgokmen, T. M.; Ryan, E. H.
2001-12-01
The complex non-linear dynamics of the upper ocean leads to chaotic behavior of drifter trajectories in the ocean. Our study is focused on estimating the predictability limit for the position of an individual Lagrangian particle or a particle cluster based on the knowledge of mean currents and observations of nearby particles (predictors). The Lagrangian prediction problem, besides being a fundamental scientific problem, is also of great importance for practical applications such as search and rescue operations and for modeling the spread of fish larvae. A stochastic multi-particle model for the Lagrangian motion has been rigorously formulated and is a generalization of the well-known "random flight" model for a single particle. Our model is mathematically consistent and includes a few easily interpreted parameters, such as the Lagrangian velocity decorrelation time scale, the turbulent velocity variance, and the velocity decorrelation radius, that can be estimated from data. The top Lyapunov exponent for an isotropic version of the model is explicitly expressed as a function of these parameters, enabling us to approximate the predictability limit to first order. Lagrangian prediction errors for two new prediction algorithms are evaluated against simple algorithms and each other and are used to test the predictability limits of the stochastic model for isotropic turbulence. The first algorithm is based on a Kalman filter and uses the developed stochastic model. Its implementation for drifter clusters in both the Tropical Pacific and the Adriatic Sea showed good prediction skill over a period of 1-2 weeks. The prediction error is primarily a function of the data density, defined as the number of predictors within a velocity decorrelation spatial scale from the particle to be predicted. The second algorithm is model independent and is based on spatial regression considerations. Preliminary results, based on simulated as well as real data, indicate that it performs better than the Kalman-based algorithm in strong shear flows. An important component of our research is the optimal predictor location problem: where should floats be launched in order to minimize the Lagrangian prediction error? Preliminary Lagrangian sampling results for different flow scenarios will be presented.
A computational framework for prime implicants identification in noncoherent dynamic systems.
Di Maio, Francesco; Baronchelli, Samuele; Zio, Enrico
2015-01-01
Dynamic reliability methods aim at complementing the capability of traditional static approaches (e.g., event trees [ETs] and fault trees [FTs]) by accounting for the system dynamic behavior and its interactions with the system state transition process. For this, the system dynamics is here described by a time-dependent model that includes the dependencies with the stochastic transition events. In this article, we present a novel computational framework for dynamic reliability analysis whose objectives are i) accounting for discrete stochastic transition events and ii) identifying the prime implicants (PIs) of the dynamic system. The framework entails adopting a multiple-valued logic (MVL) to consider stochastic transitions at discretized times. Then, PIs are originally identified by a differential evolution (DE) algorithm that looks for the optimal MVL solution of a covering problem formulated for MVL accident scenarios. For testing the feasibility of the framework, a dynamic noncoherent system composed of five components that can fail at discretized times has been analyzed, showing the applicability of the framework to practical cases. © 2014 Society for Risk Analysis.
Fast cooling for a system of stochastic oscillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yongxin, E-mail: chen2468@umn.edu; Georgiou, Tryphon T., E-mail: tryphon@umn.edu; Pavon, Michele, E-mail: pavon@math.unipd.it
2015-11-15
We study feedback control of coupled nonlinear stochastic oscillators in a force field. We first consider the problem of asymptotically driving the system to a desired steady state corresponding to reduced thermal noise. Among the feedback controls achieving the desired asymptotic transfer, we find that the most efficient one from an energy point of view is characterized by time-reversibility. We also extend the theory of Schrödinger bridges to this model, thereby steering the system in finite time and with minimum effort to a target steady-state distribution. The system can then be maintained in this state through the optimal steady-state feedback control. The solution, in the finite-horizon case, involves a space-time harmonic function φ, and −logφ plays the role of an artificial, time-varying potential in which the desired evolution occurs. This framework appears extremely general and flexible and can be viewed as a considerable generalization of existing active control strategies such as macromolecular cooling. In the case of a quadratic potential, the results assume a form particularly attractive from the algorithmic viewpoint as the optimal control can be computed via deterministic matricial differential equations. An example involving inertial particles illustrates both transient and steady state optimal feedback control.
Wind farm optimization using evolutionary algorithms
NASA Astrophysics Data System (ADS)
Ituarte-Villarreal, Carlos M.
In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing wind-farm design and siting and in determining whether a project is economically feasible. In the present work, three (3) different optimization methods are proposed for the solution of the WFLO: (i) a modified Viral System Algorithm applied to the optimization of the proper location of the components in a wind farm to maximize the energy output for a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalization for the lack of system reliability. The viral system algorithm utilized in this research solves three (3) well-known problems in the wind-energy literature; (ii) a new multiple-objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation, and the objective functions considered are the maximization of power output, the minimization of wind farm cost and the maximization of system reliability. The final solution to this multiple-objective problem is presented as a set of Pareto solutions; and (iii) a hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm, with the introduction of the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to describe its behavior and to account for the stochastic behavior of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a variable number of system components and wind turbines with different operating characteristics and sizes, yielding a more heterogeneous model that can deal with changes in the layout and in the power generation requirements over time. Moreover, the approach evaluates the impact of the wind-wake effect of the wind turbines upon one another to describe and evaluate the reduction in the power production capacity of the system depending on the layout distribution of the wind turbines.
Optimal management strategies in variable environments: Stochastic optimal control methods
Williams, B.K.
1985-01-01
Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results were then used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both the discount rate and the climatic patterns on optimal harvest strategies. In general, decreases in either the discount rate or in the frequency of favorable weather patterns led to a more conservative defoliation policy. This did not hold, however, for plants in states of low vigor. Optimal control for shadscale and winterfat tended to stabilize on a policy of heavy defoliation stress, followed by one or more seasons of rest. Big sagebrush required a policy of heavy summer defoliation when sufficient active shoot material is present at the beginning of the growing season. The comparison of fixed and optimal strategies indicated considerable improvement in defoliation yields when optimal strategies are followed. The superior performance was attributable to increased defoliation of plants in states of high vigor. Improvements were found for both discounted and undiscounted yields.
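The Markov decision machinery behind such policies can be illustrated with generic value iteration over a finite state-action space; the transition tensor and reward matrix below are random placeholders, not the shrub production model, and the discount factor is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

n_s, n_a, beta = 5, 3, 0.95                         # states, actions, discount factor
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))    # P[s, a] = next-state distribution
R = rng.uniform(0.0, 10.0, (n_s, n_a))              # expected yield for (state, action)

V = np.zeros(n_s)
for _ in range(1000):                               # value iteration to convergence
    Q = R + beta * P @ V                            # Q[s, a] = now + discounted future
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=1)                           # best action in each state
print(V, policy)
```

Pushing the discount factor beta toward 1 shifts the criterion from discounted toward time-averaged yields, mirroring the two objective formulations discussed above.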
Visual Tracking via Sparse and Local Linear Coding.
Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan
2015-11-01
The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably among the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The search is modeled as an optimization problem, which can be efficiently solved by either convex sparse coding or locality-constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient searching mechanism of the algorithm and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against state-of-the-art methods in dynamic scenes.
Stochastic Optimization for an Analytical Model of Saltwater Intrusion in Coastal Aquifers
Stratis, Paris N.; Karatzas, George P.; Papadopoulou, Elena P.; Zakynthinaki, Maria S.; Saridakis, Yiannis G.
2016-01-01
The present study implements a stochastic optimization technique to optimally manage freshwater pumping from coastal aquifers. Our simulations utilize the well-known sharp-interface model for saltwater intrusion in coastal aquifers together with its known analytical solution. The objective is to maximize the total volume of freshwater pumped by the wells from the aquifer while, at the same time, protecting the aquifer from saltwater intrusion. To address this problem in real time, the ALOPEX stochastic optimization method is used to optimize the pumping rates of the wells, coupled with a penalty-based strategy that keeps the saltwater front at a safe distance from the wells. Several numerical optimization results that simulate a known real aquifer case are presented. The results explore the computational performance of the chosen stochastic optimization method as well as its ability to manage freshwater pumping in real aquifer environments. PMID:27689362
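ALOPEX is gradient-free: each parameter is nudged in the direction given by the correlation between its own most recent change and the most recent change in the objective, plus noise. A minimal sketch with a toy pumping objective, where a crude penalty stands in for the sharp-interface intrusion model (well count, step size, noise temperature and the budget constant are all invented):

    import numpy as np

    rng = np.random.default_rng(2)

    def objective(q):
        # Toy stand-in: reward total pumping, penalize exceeding a 'safe' budget.
        total = q.sum()
        return total - 100.0 * max(0.0, total - 10.0) ** 2

    # ALOPEX: step each rate along the sign of its change/objective correlation.
    q = rng.random(4)                        # pumping rates of 4 wells
    dq = 0.01 * rng.standard_normal(4)
    f_prev = objective(q)
    step, temp = 0.05, 0.1                   # step size and noise 'temperature'
    for it in range(2000):
        q_new = np.clip(q + dq, 0.0, None)   # rates stay non-negative
        f_new = objective(q_new)
        corr = dq * (f_new - f_prev)         # per-parameter correlation
        dq = step * np.sign(corr) + temp * rng.standard_normal(4)
        q, f_prev = q_new, f_new

    print("pumping rates:", q.round(3), "objective:", round(f_prev, 3))

In the paper's setting, the objective evaluation would instead query the analytical sharp-interface solution, penalizing pumping patterns that draw the saltwater front too close to a well.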
Scope of Gradient and Genetic Algorithms in Multivariable Function Optimization
NASA Technical Reports Server (NTRS)
Shaykhian, Gholam Ali; Sen, S. K.
2007-01-01
Global optimization of a multivariable function - constrained by bounds specified on each variable and also unconstrained - is an important problem with several real-world applications. Deterministic methods such as gradient algorithms as well as randomized methods such as genetic algorithms may be employed to solve these problems. In fact, there are optimization problems where a genetic algorithm/an evolutionary approach is preferable, at least from the point of view of the quality (accuracy) of the results. From the cost (complexity) point of view, both gradient and genetic approaches are usually polynomial-time, so there are no serious differences between them in terms of computational complexity. However, for certain types of problems, such as those with unacceptably erroneous numerical partial derivatives and those with physically amplified analytical partial derivatives whose numerical evaluation involves undesirable errors and/or is messy, a genetic (stochastic) approach should be a better choice. We present here the pros and cons of both approaches so that the concerned reader/user can decide which approach is best suited for the problem at hand. Also, for a function that is known in tabular form instead of an analytical form, as is often the case in an experimental environment, we attempt to provide an insight into the approaches, focusing our attention on accuracy. Such an insight will help one to decide which method, out of several available methods, should be employed to obtain the best (least-error) output.
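The point about erroneous numerical partial derivatives can be made concrete for tabulated functions: a central difference divides tabulation noise by 2h, so fine spacing amplifies noise while coarse spacing increases truncation error. A small sketch under assumed uniform spacing and an invented noise level:

    import numpy as np

    h = 0.1
    x = np.arange(0.0, 5.0, h)
    rng = np.random.default_rng(3)
    f_tab = np.sin(x) + 1e-3 * rng.standard_normal(x.size)  # noisy tabulated values

    # Central differences on the interior points: error ~ noise/(2h) + (h**2/6)*|f'''|
    df = (f_tab[2:] - f_tab[:-2]) / (2 * h)
    err = np.abs(df - np.cos(x[1:-1]))
    print("max derivative error:", err.max())  # 1e-3 noise can show up as ~1e-2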
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute-force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
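The level-splitting idea is independent of the biochemical setting: P(g(X) > b) is factored into a chain of conditional probabilities of roughly p0 each, with samples for each new level produced by a component-wise (modified) Metropolis chain constrained to the current domain. A minimal sketch on a toy Gaussian response; the SSA-to-standard-normal mapping that the paper relies on is not reproduced:

    import numpy as np

    rng = np.random.default_rng(4)
    dim, b, n, p0 = 10, 12.0, 1000, 0.1  # exact P = 1 - Phi(12/sqrt(10)) ~ 7e-5

    def g(x):
        return x.sum(axis=-1)            # toy response; rare event is g(X) > b

    def modified_metropolis(seed, level, steps=5):
        # Component-wise MH that keeps the chain inside the current domain.
        x = seed.copy()
        for _ in range(steps):
            cand = x.copy()
            for j in range(dim):         # per-component proposal/acceptance
                c = x[j] + rng.uniform(-1.0, 1.0)
                if np.exp(-0.5 * (c**2 - x[j]**2)) > rng.random():
                    cand[j] = c
            if g(cand) > level:          # reject moves that leave the domain
                x = cand
        return x

    samples = rng.standard_normal((n, dim))
    p_estimate = 1.0
    for _ in range(20):                  # at most 20 splitting levels
        vals = g(samples)
        level = np.quantile(vals, 1.0 - p0)
        if level >= b:                   # final level: count actual failures
            p_estimate *= np.mean(vals > b)
            break
        p_estimate *= p0
        seeds = samples[vals > level]
        samples = np.array([modified_metropolis(seeds[i % len(seeds)], level)
                            for i in range(n)])

    print("subset-simulation estimate:", p_estimate)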
NASA Technical Reports Server (NTRS)
Johnson, E. H.
1975-01-01
The optimal design of simple structures subjected to dynamic loads was investigated, with constraints on the structures' responses. Optimal designs were examined for one-dimensional structures excited by harmonically oscillating loads, similar structures excited by white noise, and a wing in the presence of continuous atmospheric turbulence. The first has constraints on the maximum allowable stress, while the last two place bounds on the probability of failure of the structure. Approximations were made to replace the time parameter with a frequency parameter. For the first problem, this involved the steady-state response, and in the remaining cases, power spectral techniques were employed to find the root-mean-square values of the responses. Optimal solutions were found by using computer algorithms which combined finite element methods with optimization techniques based on mathematical programming. It was found that the inertial loads for these dynamic problems result in optimal structures that are radically different from those obtained for structures loaded statically by forces of comparable magnitude.
Estimator banks: a new tool for direction-of-arrival estimation
NASA Astrophysics Data System (ADS)
Gershman, Alex B.; Boehme, Johann F.
1997-10-01
A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using a so-called estimator bank containing multiple 'parallel' underlying DOA estimators which are based on pseudorandom resampling of the MUSIC spatial spectrum for a given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failure in the preliminarily estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique which enables the application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as a basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold, namely, it has threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.
Control design for robust stability in linear regulators: Application to aerospace flight control
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1986-01-01
Time-domain stability robustness analysis and design for linear multivariable uncertain systems with bounded uncertainties is the central theme of the research. After reviewing the recently developed upper bounds on the linear elemental (structured), time-varying perturbation of an asymptotically stable linear time-invariant regulator, it is shown that it is possible to further improve these bounds by employing state transformations. Then, introducing a quantitative measure called the stability robustness index, a state feedback control design algorithm is presented for a general linear regulator problem and then specialized to the case of modal systems as well as matched systems. The extension of the algorithm to stochastic systems with a Kalman filter as the state estimator is presented. Finally, an algorithm for robust dynamic compensator design is presented using a Parameter Optimization (PO) procedure. Applications in aircraft control and flexible structure control are presented along with a comparison with other existing methods.
Feedback control for unsteady flow and its application to the stochastic Burgers equation
NASA Technical Reports Server (NTRS)
Choi, Haecheon; Temam, Roger; Moin, Parviz; Kim, John
1993-01-01
The study applies mathematical methods of control theory to the problem of control of fluid flow, with the long-range objective of developing effective methods for the control of turbulent flows. Model problems are employed, through the formalism and language of control theory, to show how to cast the problem of controlling turbulence into a problem in optimal control theory. Methods of calculus of variations through the adjoint state and gradient algorithms are used to present a suboptimal control and feedback procedure for stationary and time-dependent problems. Two types of controls are investigated: distributed and boundary controls. Several cases of both controls are numerically simulated to investigate the performance of the control algorithm. Most cases considered show significant reductions of the costs to be minimized. The dependence of the control algorithm on the time-discretization method is discussed.
An unbiased risk estimator for image denoising in the presence of mixed poisson-gaussian noise.
Le Montagner, Yoann; Angelini, Elsa D; Olivo-Marin, Jean-Christophe
2014-03-01
The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and are generally chosen to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator in the case when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and different characteristics of the Poisson-Gaussian noise mixture.
Intelligent automated control of life support systems using proportional representations.
Wu, Annie S; Garibay, Ivan I
2004-06-01
Effective automatic control of Advanced Life Support Systems (ALSS) is a crucial component of space exploration. An ALSS is a coupled dynamical system which can be extremely sensitive and difficult to predict. As a result, such systems can be difficult to control using deliberative and deterministic methods. We investigate the performance of two machine learning algorithms, a genetic algorithm (GA) and a stochastic hill-climber (SH), on the problem of learning how to control an ALSS, and compare the impact of two different types of problem representations on the performance of both algorithms. We perform experiments on three ALSS optimization problems using five strategies with multiple variations of a proportional representation for a total of 120 experiments. Results indicate that although a proportional representation can effectively boost GA performance, it does not necessarily have the same effect on other algorithms such as SH. Results also support previous conclusions that multivector control strategies are an effective method for control of coupled dynamical systems.
NASA Astrophysics Data System (ADS)
Atanassov, E.; Dimitrov, D.; Gurov, T.
2015-10-01
The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.
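The pricing kernels themselves are not given in the abstract; as a minimal illustration of why low-discrepancy sequences are worth testing, the sketch below prices a European call under geometric Brownian motion with both pseudorandom and scrambled Sobol points, assuming SciPy's scipy.stats.qmc module is available (the contract parameters are invented):

    import numpy as np
    from scipy.stats import norm, qmc

    S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
    n = 2**14

    def discounted_payoff(z):
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
        return np.exp(-r * T) * np.maximum(ST - K, 0.0)

    # Pseudorandom Monte Carlo versus a scrambled Sobol sequence mapped to
    # normal variates through the inverse CDF.
    z_mc = np.random.default_rng(5).standard_normal(n)
    z_qmc = norm.ppf(qmc.Sobol(d=1, scramble=True, seed=5).random(n)).ravel()

    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    print("pseudorandom:", discounted_payoff(z_mc).mean())
    print("Sobol       :", discounted_payoff(z_qmc).mean())
    print("closed form :", S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2))

On smooth low-dimensional integrands like this one, the Sobol estimate typically lands closer to the closed-form Black-Scholes value for the same sample count.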
Stochastic optimization of GeantV code by use of genetic algorithms
NASA Astrophysics Data System (ADS)
Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.
2017-10-01
GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
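As a sketch of the black-box tuning loop described, the following runs a small (mu, lambda) evolution strategy over three hypothetical throughput parameters; the objective is a cheap stand-in, whereas each real evaluation would run a full simulation benchmark, and the multivariate-analysis operator mentioned above is not attempted:

    import numpy as np

    rng = np.random.default_rng(6)

    def throughput(params):
        # Cheap stand-in for a benchmark run; higher is better. The optimum
        # (16, 8, 4) and the parameter names are purely hypothetical.
        basket, vector_size, cache = params
        return -(basket - 16)**2 - 0.5 * (vector_size - 8)**2 - (cache - 4)**2

    # (mu, lambda) evolution strategy over three tuning parameters.
    mu, lam, sigma = 4, 16, 4.0
    parents = rng.uniform(1, 32, size=(mu, 3))
    for gen in range(50):
        children = parents[rng.integers(mu, size=lam)] \
                   + sigma * rng.standard_normal((lam, 3))
        children = np.clip(children, 1, 64)
        fitness = np.array([throughput(c) for c in children])
        parents = children[np.argsort(fitness)[-mu:]]  # keep the best mu
        sigma *= 0.95                                  # simple step-size decay

    print("best parameters found:", parents[-1].round(2))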
NASA Astrophysics Data System (ADS)
Dai, C.; Qin, X. S.; Chen, Y.; Guo, H. C.
2018-06-01
A Gini-coefficient based stochastic optimization (GBSO) model was developed by integrating the hydrological model, water balance model, Gini coefficient and chance-constrained programming (CCP) into a general multi-objective optimization modeling framework for supporting water resources allocation at a watershed scale. The framework was advantageous in reflecting the conflicting equity and benefit objectives for water allocation, maintaining the water balance of watershed, and dealing with system uncertainties. GBSO was solved by the non-dominated sorting Genetic Algorithms-II (NSGA-II), after the parameter uncertainties of the hydrological model have been quantified into the probability distribution of runoff as the inputs of CCP model, and the chance constraints were converted to the corresponding deterministic versions. The proposed model was applied to identify the Pareto optimal water allocation schemes in the Lake Dianchi watershed, China. The optimal Pareto-front results reflected the tradeoff between system benefit (αSB) and Gini coefficient (αG) under different significance levels (i.e. q) and different drought scenarios, which reveals the conflicting nature of equity and efficiency in water allocation problems. A lower q generally implies a lower risk of violating the system constraints and a worse drought intensity scenario corresponds to less available water resources, both of which would lead to a decreased system benefit and a less equitable water allocation scheme. Thus, the proposed modeling framework could help obtain the Pareto optimal schemes under complexity and ensure that the proposed water allocation solutions are effective for coping with drought conditions, with a proper tradeoff between system benefit and water allocation equity.
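The equity side of the model rests on the Gini coefficient of the allocation. A minimal sketch of that single ingredient, with an invented allocation vector and none of the NSGA-II or chance-constraint machinery:

    import numpy as np

    def gini(x):
        # Gini coefficient of an allocation vector (0 = perfect equity).
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

    allocation = np.array([12.0, 8.0, 30.0, 5.0])  # water delivered per sub-area
    print("Gini coefficient:", round(gini(allocation), 3))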
Efficient simulation of intrinsic, extrinsic and external noise in biochemical systems
Pischel, Dennis; Sundmacher, Kai; Flassig, Robert J.
2017-01-01
Motivation: Biological cells operate in a noisy regime influenced by intrinsic, extrinsic and external noise, which leads to large differences of individual cell states. Stochastic effects must be taken into account to characterize biochemical kinetics accurately. Since the exact solution of the chemical master equation, which governs the underlying stochastic process, cannot be derived for most biochemical systems, approximate methods are used to obtain a solution. Results: In this study, a method to efficiently simulate the various sources of noise simultaneously is proposed and benchmarked on several examples. The method relies on the combination of the sigma point approach to describe extrinsic and external variability and the τ-leaping algorithm to account for the stochasticity due to probabilistic reactions. The comparison of our method to extensive Monte Carlo calculations demonstrates an immense computational advantage while losing an acceptable amount of accuracy. Additionally, the application to parameter optimization problems in stochastic biochemical reaction networks is shown, which is rarely applied due to its huge computational burden. To give further insight, a MATLAB script is provided including the proposed method applied to a simple toy example of gene expression. Availability and implementation: MATLAB code is available at Bioinformatics online. Contact: flassig@mpi-magdeburg.mpg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881987
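The τ-leaping half of the combination can be sketched in isolation: instead of simulating every reaction event, each reaction fires a Poisson number of times over a fixed leap interval. The two-reaction gene-expression model and all rate constants below are illustrative, and the sigma-point layer for extrinsic and external variability is omitted:

    import numpy as np

    rng = np.random.default_rng(7)

    # Toy gene expression: DNA -> DNA + mRNA (rate k1), mRNA -> 0 (rate k2).
    k1, k2, tau, t_end = 10.0, 0.5, 0.05, 20.0
    mrna, t = 0, 0.0
    while t < t_end:
        a = np.array([k1, k2 * mrna])     # propensities at the current state
        firings = rng.poisson(a * tau)    # each channel fires Poisson-many times
        mrna = max(0, mrna + firings[0] - firings[1])
        t += tau

    print("final mRNA copy number:", mrna)  # fluctuates around k1/k2 = 20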
Xu, Zhijing; Zu, Zhenghu; Zheng, Tao; Zhang, Wendou; Xu, Qing; Liu, Jinjie
2014-01-01
The high incidence of emerging infectious diseases has highlighted the importance of effective immunization strategies, especially stochastic algorithms based on locally available network information. Present stochastic strategies are mainly evaluated on classical network models, such as scale-free networks and small-world networks, and such evaluations are thus insufficient. Three frequently cited stochastic immunization strategies, acquaintance immunization, community-bridge immunization, and ring vaccination, were analyzed in this work. The optimal immunization ratios for the acquaintance immunization and community-bridge immunization strategies were investigated, and the effectiveness of these three strategies in controlling the spread of epidemics was analyzed based on realistic social contact networks. The results show all the strategies decreased the coverage of the epidemics compared to the baseline scenario (no control measures). However, the effectiveness of acquaintance immunization and community-bridge immunization is very limited, with acquaintance immunization slightly outperforming community-bridge immunization. Ring vaccination significantly outperforms acquaintance immunization and community-bridge immunization, and the sensitivity analysis shows it could be applied to controlling epidemics with a wide infectivity spectrum. The effectiveness of several classical stochastic immunization strategies was evaluated on realistic contact networks for the first time in this study. These results could have important significance for epidemic control research and practice.
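Acquaintance immunization is simple to state: repeatedly pick a node at random and vaccinate one of its neighbours, which biases protection toward high-degree hubs without requiring global knowledge of the network. A minimal sketch on an invented random graph (the realistic contact networks used in the study are not reproduced):

    import random

    random.seed(8)

    # Invented contact network as an adjacency dict.
    n_nodes, n_edges = 200, 600
    edges = set()
    while len(edges) < n_edges:
        u, v = random.sample(range(n_nodes), 2)
        edges.add((min(u, v), max(u, v)))
    adj = {i: [] for i in range(n_nodes)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def acquaintance_immunization(adj, ratio):
        # Vaccinate a random neighbour of each randomly chosen node; random
        # neighbours are disproportionately high-degree hubs.
        target = int(ratio * len(adj))
        immunized = set()
        while len(immunized) < target:
            node = random.randrange(len(adj))
            if adj[node]:
                immunized.add(random.choice(adj[node]))
        return immunized

    protected = acquaintance_immunization(adj, 0.2)
    print("immunized", len(protected), "of", n_nodes, "nodes")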
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy
NASA Astrophysics Data System (ADS)
Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.
2011-08-01
The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
A Globally Optimal Particle Tracking Technique for Stereo Imaging Velocimetry Experiments
NASA Technical Reports Server (NTRS)
McDowell, Mark
2008-01-01
An important phase of any Stereo Imaging Velocimetry experiment is particle tracking. Particle tracking seeks to identify and characterize the motion of individual particles entrained in a fluid or air experiment. We analyze a cylindrical chamber filled with water and seeded with density-matched particles. In every four-frame sequence, we identify a particle track by assigning a unique track label for each camera image. The conventional approach to particle tracking is to use an exhaustive tree-search method utilizing greedy algorithms to reduce search times. However, these types of algorithms are not optimal due to a cascade effect of incorrect decisions upon adjacent tracks. We examine the use of a guided evolutionary neural net with simulated annealing to arrive at a globally optimal assignment of tracks. The net is guided both by the minimization of the search space through the use of prior limiting assumptions about valid tracks and by a strategy which seeks to avoid high-energy intermediate states which can trap the net in a local minimum. A stochastic search algorithm is used in place of back-propagation of error to further reduce the chance of being trapped in an energy well. Global optimization is achieved by minimizing an objective function, which includes both track smoothness and particle-image utilization parameters. In this paper we describe our model and present our experimental results. We compare our results with a nonoptimizing, predictive tracker and obtain an average increase in valid track yield of 27 percent.
Spatio-Temporal Process Variability in Watershed Scale Wetland Restoration Planning
NASA Astrophysics Data System (ADS)
Evenson, G. R.
2012-12-01
Watershed scale restoration decision making processes are increasingly informed by quantitative methodologies providing site-specific restoration recommendations - sometimes referred to as "systematic planning." The more advanced of these methodologies are characterized by a coupling of search algorithms and ecological models to discover restoration plans that optimize environmental outcomes. Yet while these methods have exhibited clear utility as decision support toolsets, they may be critiqued for flawed evaluations of spatio-temporally variable processes fundamental to watershed scale restoration. Hydrologic and non-hydrologic mediated process connectivity along with post-restoration habitat dynamics, for example, are commonly ignored yet known to appreciably affect restoration outcomes. This talk will present a methodology to evaluate such spatio-temporally complex processes in the production of watershed scale wetland restoration plans. Using the Tuscarawas Watershed in Eastern Ohio as a case study, a genetic algorithm will be coupled with the Soil and Water Assessment Tool (SWAT) to reveal optimal wetland restoration plans as measured by their capacity to maximize nutrient reductions. Then, a so-called "graphical" representation of the optimization problem will be implemented in-parallel to promote hydrologic and non-hydrologic mediated connectivity amongst existing wetlands and sites selected for restoration. Further, various search algorithm mechanisms will be discussed as a means of accounting for temporal complexities such as post-restoration habitat dynamics. Finally, generalized patterns of restoration plan optimality will be discussed as an alternative and possibly superior decision support toolset given the complexity and stochastic nature of spatio-temporal process variability.
GillesPy: A Python Package for Stochastic Model Building and Simulation.
Abel, John H; Drawert, Brian; Hellander, Andreas; Petzold, Linda R
2016-09-01
GillesPy is an open-source Python package for model construction and simulation of stochastic biochemical systems. GillesPy consists of a Python framework for model building and an interface to the StochKit2 suite of efficient simulation algorithms based on the Gillespie stochastic simulation algorithms (SSA). To enable intuitive model construction and seamless integration into the scientific Python stack, we present an easy to understand, action-oriented programming interface. Here, we describe the components of this package and provide a detailed example relevant to the computational biology community.
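The solver layer beneath such packages is Gillespie's direct-method SSA, which can be sketched generically; the snippet below is the generic algorithm applied to a reversible isomerization with invented rate constants, not GillesPy's API:

    import math
    import random

    random.seed(9)

    # Reversible isomerization A <-> B with invented rates k_f, k_r.
    k_f, k_r, t_end = 1.0, 0.5, 50.0
    A, B, t = 100, 0, 0.0
    while t < t_end:
        a1, a2 = k_f * A, k_r * B                   # reaction propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += -math.log(1.0 - random.random()) / a0  # exponential waiting time
        if random.random() * a0 < a1:               # pick the firing reaction
            A, B = A - 1, B + 1
        else:
            A, B = A + 1, B - 1

    print("A:", A, "B:", B)  # fluctuates near detailed balance, A ~ 33, B ~ 67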
Stochastic quasi-Newton molecular simulations
NASA Astrophysics Data System (ADS)
Chau, C. D.; Sevink, G. J. A.; Fraaije, J. G. E. M.
2010-08-01
We report a new and efficient factorized algorithm for the determination of the adaptive compound mobility matrix B in a stochastic quasi-Newton method (S-QN) that does not require additional potential evaluations. For one-dimensional and two-dimensional test systems, we previously showed that S-QN gives rise to efficient configurational space sampling with good thermodynamic consistency [C. D. Chau, G. J. A. Sevink, and J. G. E. M. Fraaije, J. Chem. Phys. 128, 244110 (2008); doi:10.1063/1.2943313]. Potential applications of S-QN are quite ambitious, and include structure optimization, analysis of correlations and automated extraction of cooperative modes. However, the potential can only be fully exploited if the computational and memory requirements of the original algorithm are significantly reduced. In this paper, we consider a factorized mobility matrix B = JJ^T and focus on the nontrivial fundamentals of an efficient algorithm for updating the noise multiplier J. The new algorithm requires O(n^2) multiplications per time step instead of the O(n^3) multiplications in the original scheme due to Cholesky decomposition. In a recursive form, the update scheme circumvents matrix storage and enables limited-memory implementation, in the spirit of the well-known limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method, allowing for a further reduction of the computational effort to O(n). We analyze in detail the performance of the factorized (FSU) and limited-memory (L-FSU) algorithms in terms of convergence and (multiscale) sampling, for an elementary but relevant system that involves multiple time and length scales. Finally, we use this analysis to formulate conditions for the simulation of the complex high-dimensional potential energy landscapes of interest.
NASA Astrophysics Data System (ADS)
Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi
2017-01-01
This paper discusses the design of the optimal preview controller for a linear continuous-time stochastic control system in a finite-time horizon, using the method of the augmented error system. First, an assistant system is introduced for state shifting. Then, in order to overcome the difficulty that the state equation of the stochastic control system cannot be differentiated because of Brownian motion, an integrator is introduced. Thus, the augmented error system, which contains the integrator vector, control input, reference signal, error vector and state of the system, is reconstructed. This transforms the tracking problem of optimal preview control of the linear stochastic system into the optimal output tracking problem of the augmented error system. With the method of dynamic programming from the theory of stochastic control, the optimal controller with previewable signals for the augmented error system, which equals the controller of the original system, is obtained. Finally, numerical simulations show the effectiveness of the controller.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deschaine, L.M.; Chalmers Univ. of Technology, Dept. of Physical Resources, Complex Systems Group, Goteborg
2008-07-01
The global impact to human health and the environment from large-scale chemical/radionuclide releases is well documented. Examples are the widespread release of radionuclides from the Chernobyl nuclear reactors, the mobilization of arsenic in Bangladesh, the formation of Environmental Protection Agencies in the United States, Canada and Europe, and the like. The fiscal costs of addressing and remediating these issues on a global scale are astronomical, but then so are the fiscal and human health costs of ignoring them. An integrated methodology for optimizing the response(s) to these issues is needed. This work addresses the development of optimal policy design for large-scale, complex, environmental issues. It discusses the development, capabilities, and application of a hybrid system of algorithms that optimizes the environmental response. It is important to note that 'optimization' does not singularly refer to cost minimization, but to the effective and efficient balance of cost, performance, risk, management, and societal priorities along with uncertainty analysis. This tool integrates all of these elements into a single decision framework. It provides a consistent approach to designing optimal solutions that are tractable, traceable, and defensible. The system is modular and scalable. It can be applied either as individual components or in total. By developing the approach in a complex systems framework, the solution methodology represents a significant improvement over the non-optimal 'trial and error' approach to environmental response(s). Subsurface environmental processes are represented by linear and non-linear, elliptic and parabolic equations. The state equations solved using numerical methods include multi-phase flow (water, soil gas, NAPL) and multicomponent transport (radionuclides, heavy metals, volatile organics, explosives, etc.). Genetic programming is used to generate the simulators either when simulation models do not exist, or to extend the accuracy of them. The uncertainty and sparse nature of information in earth science simulations necessitate stochastic representations. For discussion purposes, the solution to these site-wide challenges is divided into three sub-components: plume finding, long-term monitoring, and site-wide remediation. Plume finding is the optimal estimation of the plume fringe(s) at a specified time. It is optimized by fusing geo-stochastic flow and transport simulations with the information content of data using a Kalman filter. The result is an optimal monitoring sensor network; the decision variable is the location(s) of sensors in three dimensions. Long-term monitoring extends this concept and integrates the spatial-time correlations to optimize the decision variables of where to sample and when to sample over the project life cycle. Optimization of location and timing of samples to meet the desired accuracy of temporal plume movement is accomplished using enumeration or genetic algorithms. The remediation optimization solves the multi-component, multiphase system of equations and incorporates constraints on life-cycle costs, maximum annual costs, maximum allowable annual discharge (for assessing the monitored natural attenuation solution) and constraints on where remedial system component(s) can be located; management overrides to force certain solutions to be chosen are incorporated for solution design.
It uses a suite of optimization techniques, including the outer approximation method, Lipschitz global optimization, genetic algorithms, and the like. The automated optimal remedial design algorithm requires that a stable simulator be available for the simulated process. This is commonly the case for all of the above specifications except true three-dimensional multiphase flow. Much work is currently being conducted in the industry to develop stable 3D, three-phase simulators. If needed, an interim heuristic algorithm is available to get close to optimal for these conditions. This system provides the full capability to optimize multi-source, multiphase, and multicomponent sites. The results of applying just components of these algorithms have produced predicted savings of as much as $90,000,000 (US) when compared to alternative solutions. Investment in a pilot program to test the model saved 100% of the $20,000,000 predicted for the smaller test implementation. This was done without loss of effectiveness, and received an award from the Vice President - and now Nobel Peace Prize winner - Al Gore of the United States. (authors)
Modelling Evolutionary Algorithms with Stochastic Differential Equations.
Heredia, Jorge Pérez
2017-11-20
There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) by more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis becomes greatly facilitated due to the existence of well established methods. However, this typically comes at the cost of disregarding information about the process. Here, we introduce the use of stochastic differential equations (SDEs) for the study of EAs. SDEs can produce simple analytical results for the dynamics of stochastic processes, unlike Markov chains which can produce rigorous but unwieldy expressions about the dynamics. On the other hand, unlike ordinary differential equations (ODEs), they do not discard information about the stochasticity of the process. We show that these are especially suitable for the analysis of fixed budget scenarios and present analogues of the additive and multiplicative drift theorems from runtime analysis. In addition, we derive a new more general multiplicative drift theorem that also covers non-elitist EAs. This theorem simultaneously allows for positive and negative results, providing information on the algorithm's progress even when the problem cannot be optimised efficiently. Finally, we provide results for some well-known heuristics namely Random Walk (RW), Random Local Search (RLS), the (1+1) EA, the Metropolis Algorithm (MA), and the Strong Selection Weak Mutation (SSWM) algorithm.
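The contrast drawn above, SDEs retain the stochasticity that ODEs discard, is easy to demonstrate with an Euler-Maruyama integration of a multiplicative-drift toy model of the distance to the optimum. The coefficients and the interpretation of X as a fitness distance are illustrative, not taken from the paper:

    import numpy as np

    rng = np.random.default_rng(10)

    # Multiplicative-drift toy SDE for the distance X to the optimum:
    #   dX = -delta * X dt + sigma * X dW    (coefficients are illustrative)
    delta, sigma, x0, dt, t_end = 0.1, 0.05, 100.0, 0.01, 50.0
    x = np.full(1000, x0)                 # 1000 independent runs
    for _ in range(int(t_end / dt)):
        dw = np.sqrt(dt) * rng.standard_normal(x.size)
        x = x - delta * x * dt + sigma * x * dw   # Euler-Maruyama step

    print("mean X(T):", x.mean())   # the ODE predicts x0 * exp(-delta * T) ~ 0.67
    print("std  X(T):", x.std())    # the spread is what an ODE model discards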
Energy and operation management of a microgrid using particle swarm optimization
NASA Astrophysics Data System (ADS)
Radosavljević, Jordan; Jevtić, Miroljub; Klimenta, Dardan
2016-05-01
This article presents an efficient algorithm based on particle swarm optimization (PSO) for energy and operation management (EOM) of a microgrid including different distributed generation units and energy storage devices. The proposed approach employs PSO to minimize the total energy and operating cost of the microgrid via optimal adjustment of the control variables of the EOM, while satisfying various operating constraints. Owing to the stochastic nature of energy produced from renewable sources, i.e. wind turbines and photovoltaic systems, as well as load uncertainties and market prices, a probabilistic approach in the EOM is introduced. The proposed method is examined and tested on a typical grid-connected microgrid including fuel cell, gas-fired microturbine, wind turbine, photovoltaic and energy storage devices. The obtained results prove the efficiency of the proposed approach to solve the EOM of the microgrids.
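A minimal global-best PSO loop of the kind employed can be sketched as follows; the quadratic operating-cost function and the unit limits are invented stand-ins for the microgrid model, and the probabilistic treatment of renewables and prices is omitted:

    import numpy as np

    rng = np.random.default_rng(11)

    def operating_cost(p):
        # Invented stand-in for the microgrid cost of a dispatch vector.
        return np.sum((p - np.array([30.0, 20.0, 15.0, 10.0]))**2) + 0.1 * p.sum()

    # Global-best PSO with inertia and cognitive/social attraction terms.
    n_particles, dim, w, c1, c2 = 30, 4, 0.7, 1.5, 1.5
    x = rng.uniform(0, 50, (n_particles, dim))     # candidate dispatch schedules
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_cost = np.array([operating_cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()]

    for it in range(200):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0, 50)                  # respect unit limits
        cost = np.array([operating_cost(p) for p in x])
        better = cost < pbest_cost
        pbest[better], pbest_cost[better] = x[better], cost[better]
        gbest = pbest[pbest_cost.argmin()]

    print("best dispatch:", gbest.round(2), "cost:", round(pbest_cost.min(), 2))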
The importance of environmental variability and management control error to optimal harvest policies
Hunter, C.M.; Runge, M.C.
2004-01-01
State-dependent strategies (SDSs) are the most general form of harvest policy because they allow the harvest rate to depend, without constraint, on the state of the system. State-dependent strategies that provide an optimal harvest rate for any system state can be calculated, and stochasticity can be appropriately accommodated in this optimization. Stochasticity poses 2 challenges to harvest policies: (1) the population will never be at the equilibrium state; and (2) stochasticity induces uncertainty about future states. We investigated the effects of 2 types of stochasticity, environmental variability and management control error, on SDS harvest policies for a white-tailed deer (Odocoileus virginianus) model, and contrasted these with a harvest policy based on maximum sustainable yield (MSY). Increasing stochasticity resulted in more conservative SDSs; that is, higher population densities were required to support the same harvest rate, but these effects were generally small. As stochastic effects increased, SDSs performed much better than MSY. Both deterministic and stochastic SDSs maintained maximum mean annual harvest yield (AHY) and optimal equilibrium population size (Neq) in a stochastic environment, whereas an MSY policy could not. We suggest 3 rules of thumb for harvest management of long-lived vertebrates in stochastic systems: (1) an SDS is advantageous over an MSY policy, (2) using an SDS rather than an MSY is more important than whether a deterministic or stochastic SDS is used, and (3) for SDSs, rankings of the variability in management outcomes (e.g., harvest yield) resulting from parameter stochasticity can be predicted by rankings of the deterministic elasticities.
Optimal Stochastic Modeling and Control of Flexible Structures
1988-09-01
Optimal Control of Stochastic Systems Driven by Fractional Brownian Motions
2014-10-09
Optimal control problems for stochastic partial differential equations driven by fractional Brownian motions are explicitly solved; the control of continuous-time linear systems driven by Brownian motion and of discrete-time linear systems with white Gaussian noise and costs is also treated. Keywords: stochastic optimal control; fractional Brownian motion.
NASA Astrophysics Data System (ADS)
Shi, Xizhi; He, Chaoyu; Pickard, Chris J.; Tang, Chao; Zhong, Jianxin
2018-01-01
A method is introduced to stochastically generate crystal structures with defined structural characteristics. Reasonable quotient graphs for symmetric crystals are constructed using a random strategy combined with space group and graph theory. Our algorithm enables the search for large-size and complex crystal structures with a specified connectivity, such as threefold sp2 carbons, fourfold sp3 carbons, as well as mixed sp2-sp3 carbons. To demonstrate the method, we randomly construct initial structures adhering to space groups from 75 to 230 and a range of lattice constants, and we identify 281 new sp3 carbon crystals. First-principles optimization of these structures shows that most of them are dynamically and mechanically stable and are energetically comparable to those previously proposed. Some of the new structures can be considered as candidates to explain the experimental cold compression of graphite.
FERN - a Java framework for stochastic simulation and evaluation of reaction networks.
Erhard, Florian; Friedel, Caroline C; Zimmer, Ralf
2008-08-29
Stochastic simulation can be used to illustrate the development of biological systems over time and the stochastic nature of these processes. Currently available programs for stochastic simulation, however, are limited in that they either a) do not provide the most efficient simulation algorithms and are difficult to extend, b) cannot be easily integrated into other applications or c) do not allow the user to monitor and intervene during the simulation process in an easy and intuitive way. Thus, in order to use stochastic simulation in innovative high-level modeling and analysis approaches, more flexible tools are necessary. In this article, we present FERN (Framework for Evaluation of Reaction Networks), a Java framework for the efficient simulation of chemical reaction networks. FERN is subdivided into three layers for network representation, simulation and visualization of the simulation results, each of which can be easily extended. It provides efficient and accurate state-of-the-art stochastic simulation algorithms for well-mixed chemical systems and a powerful observer system, which makes it possible to track and control the simulation progress on every level. To illustrate how FERN can be easily integrated into other systems biology applications, plugins to Cytoscape and CellDesigner are included. These plugins make it possible to run simulations and to observe the simulation progress in a reaction network in real time from within the Cytoscape or CellDesigner environment. FERN addresses shortcomings of currently available stochastic simulation programs in several ways. First, it provides a broad range of efficient and accurate algorithms both for exact and approximate stochastic simulation and a simple interface for extending to new algorithms. FERN's implementations are considerably faster than the C implementations of gillespie2 or the Java implementations of ISBJava. Second, it can be used in a straightforward way both as a stand-alone program and within new systems biology applications. Finally, complex scenarios requiring intervention during the simulation progress can be modelled easily with FERN.
Inversion of Robin coefficient by a spectral stochastic finite element approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin Bangti; Zou Jun
2008-03-01
This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for the steady-state heat conduction. The problem is formulated into an optimization problem, and mathematical properties relevant to its numerical computations are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.
Acceleration of discrete stochastic biochemical simulation using GPGPU.
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
Optimal Real-time Dispatch for Integrated Energy Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Firestone, Ryan Michael
This report describes the development and application of a dispatch optimization algorithm for integrated energy systems (IES) comprised of on-site cogeneration of heat and electricity, energy storage devices, and demand response opportunities. This work is intended to aid commercial and industrial sites in making use of modern computing power and optimization algorithms to make informed, near-optimal decisions under significant uncertainty and complex objective functions. The optimization algorithm uses a finite set of randomly generated future scenarios to approximate the true, stochastic future; constraints are included that prevent solutions to this approximate problem from deviating from solutions to the actual problem. The algorithm is then expressed as a mixed integer linear program, to which a powerful commercial solver is applied. A case study of United States Postal Service Processing and Distribution Centers (P&DC) in four cities and under three different electricity tariff structures is conducted to (1) determine the added value of optimal control to a cogeneration system over current, heuristic control strategies; (2) determine the value of limited electric load curtailment opportunities, with and without cogeneration; and (3) determine the trade-off between least-cost and least-carbon operations of a cogeneration system. Key results for the P&DC sites studied include (1) in locations where the average electricity and natural gas prices suggest a marginally profitable cogeneration system, optimal control can add up to 67% to the value of the cogeneration system; optimal control adds less value in locations where cogeneration is more clearly profitable; (2) optimal control under real-time pricing is (a) more complicated than under typical time-of-use tariffs and (b) at times necessary to make cogeneration economic at all; (3) limited electric load curtailment opportunities can be more valuable as a complement to the cogeneration system than alone; and (4) most of the trade-off between least-cost and least-carbon IES is determined during the system design stage; for the IES system considered, there is little difference between least-cost control and least-carbon control.
A study of optimization techniques in HDR brachytherapy for the prostate
NASA Astrophysics Data System (ADS)
Pokharel, Ghana Shyam
Several studies carried out thus far favor dose escalation to the prostate gland for better local control of the disease, but the optimal way to deliver higher doses of radiation therapy to the prostate without harming neighboring critical structures is still debatable. In this study, we proposed that real-time high dose rate (HDR) brachytherapy with highly efficient and effective optimization could be an alternative means of precise delivery of such higher doses. This delivery approach eliminates critical issues, such as treatment setup uncertainties and target localization, that arise in external beam radiation therapy. Likewise, dosimetry in HDR brachytherapy is not influenced by organ edema and potential source migration, as in permanent interstitial implants. Moreover, recent reports of radiobiological parameters further strengthen the argument for using hypofractionated HDR brachytherapy in the management of prostate cancer. First, we studied the essential features and requirements of a real-time HDR brachytherapy treatment planning system. Automated catheter reconstruction with fast editing tools, a fast yet accurate dose engine, and robust, fast optimization and evaluation engines are among the essential requirements for such procedures. In most of the cases we performed, treatment plan optimization took a significant fraction of the overall procedure time, so making it automatic or semi-automatic with sufficient speed and accuracy was the goal of the remainder of the project. Second, we studied the role of the optimization function and constraints in the overall quality of the optimized plan. We studied a gradient-based deterministic algorithm with dose volume histogram (DVH) based and more conventional variance-based objective functions. In this optimization strategy, the relative weight of a particular objective in the aggregate objective function signifies its importance with respect to the other objectives. In our study, the DVH-based objective function performed better than the traditional variance-based objective function in creating a clinically acceptable plan when executed under identical conditions. Third, we studied a multiobjective optimization strategy using both DVH and variance-based objective functions. The strategy was to create several Pareto optimal solutions by scanning the clinically relevant part of the Pareto front. This decouples optimization from decision-making, so that the user can select the final solution from a pool of alternatives based on his or her clinical goals. The overall quality of the treatment plan improved with this approach compared to the traditional class-solution approach; in fact, the final optimized plan selected using the decision engine with the DVH-based objective was comparable to a typical clinical plan created by an experienced physicist. Next, we studied a hybrid technique comprising both stochastic and deterministic algorithms to optimize dwell positions and dwell times. A simulated annealing algorithm was used to find an optimal catheter distribution, and the DVH-based algorithm was used to optimize the 3D dose distribution for the given catheter distribution. This treatment planning and optimization tool was capable of producing clinically acceptable, highly reproducible treatment plans in a clinically reasonable time.
Because this algorithm can automatically create clinically acceptable plans within a clinically reasonable time, it is appealing for real-time procedures. Finally, we studied the feasibility of multiobjective optimization using an evolutionary algorithm for real-time HDR brachytherapy of the prostate. With properly tuned algorithm-specific parameters, the algorithm created clinically acceptable plans within a clinically reasonable time. However, it was allowed to run for only a limited number of generations, fewer than is generally considered optimal for such algorithms, in order to stay within a time window suitable for real-time procedures. Further study under improved conditions is therefore required to realize the algorithm's full potential.
A variable-gain output feedback control design methodology
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Moerder, Daniel D.; Broussard, John R.; Taylor, Deborah B.
1989-01-01
A digital control system design technique is developed in which the control system gain matrix varies with the plant operating point parameters. The design technique is obtained by formulating the problem as an optimal stochastic output feedback control law with variable gains. This approach provides a control theory framework within which the operating range of a control law can be significantly extended. Furthermore, the approach avoids the major shortcomings of the conventional gain-scheduling techniques. The optimal variable gain output feedback control problem is solved by embedding the Multi-Configuration Control (MCC) problem, previously solved at ICS. An algorithm to compute the optimal variable gain output feedback control gain matrices is developed. The algorithm is a modified version of the MCC algorithm improved so as to handle the large dimensionality which arises particularly in variable-gain control problems. The design methodology developed is applied to a reconfigurable aircraft control problem. A variable-gain output feedback control problem was formulated to design a flight control law for an AFTI F-16 aircraft which can automatically reconfigure its control strategy to accommodate failures in the horizontal tail control surface. Simulations of the closed-loop reconfigurable system show that the approach produces a control design which can accommodate such failures with relative ease. The technique can be applied to many other problems including sensor failure accommodation, mode switching control laws and super agility.
Computational singular perturbation analysis of stochastic chemical systems with stiffness
Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; ...
2017-01-25
Computational singular perturbation (CSP) is a useful method for the analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum, deterministic level. On the other hand, CSP is not directly applicable to chemical reaction systems at the micro or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and an associated algorithm, that can be used not only to accurately and efficiently construct the numerical solutions to stiff stochastic chemical reaction systems, but also to analyze the dynamics of the reduced stochastic reaction systems. The algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.
Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease
NASA Astrophysics Data System (ADS)
Marsden, Alison
2009-11-01
Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid-structure interaction. However, inputs to these simulations, including medical image data, catheter-derived pressures, and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on the outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficiently account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries, and layer these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Stochastic Spectral Descent for Discrete Graphical Models
Carlson, David; Hsieh, Ya-Ping; Collins, Edo; ...
2015-12-14
Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten-∞ norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We both provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.
The Automation of Stochastization Algorithm with Use of SymPy Computer Algebra Library
NASA Astrophysics Data System (ADS)
Demidova, Anastasya; Gevorkyan, Migran; Kulyabov, Dmitry; Korolkova, Anna; Sevastianov, Leonid
2018-02-01
SymPy computer algebra library is used for automatic generation of ordinary and stochastic systems of differential equations from the schemes of kinetic interaction. Schemes of this type are used not only in chemical kinetics but also in biological, ecological and technical models. This paper describes the automatic generation algorithm with an emphasis on application details.
Vandenberg, Wim; Duwé, Sam; Leutenegger, Marcel; Moeyaert, Benjamien; Krajnik, Bartosz; Lasser, Theo; Dedecker, Peter
2016-01-01
Stochastic optical fluctuation imaging (SOFI) is a super-resolution fluorescence imaging technique that makes use of stochastic fluctuations in the emission of the fluorophores. During a SOFI measurement, multiple fluorescence images are acquired from the sample, followed by the calculation of the spatiotemporal cumulants of the intensities observed at each position. Compared to other techniques, SOFI works well under conditions of low signal-to-noise ratio, high background, or high emitter density. However, it can be difficult to unambiguously determine the reliability of images produced by any super-resolution imaging technique. In this work we present a strategy that enables the estimation of the variance or uncertainty associated with each pixel in the SOFI image. In addition to estimating the image quality or reliability, we show that this can be used to optimize the signal-to-noise ratio (SNR) of SOFI images by including multiple pixel combinations in the cumulant calculation. We present an algorithm to perform this optimization, which automatically takes all relevant instrumental, sample, and probe parameters into account. Depending on the optical magnification of the system, this strategy can be used to improve the SNR of a SOFI image by 40% to 90%. This gain in information is entirely free, in the sense that it does not require additional effort or complications. Alternatively, our approach can be applied to reduce the number of fluorescence images needed to meet a particular quality level by about 30% to 50%, strongly improving the temporal resolution of SOFI imaging. PMID:26977356
On Social Optima of Non-Cooperative Mean Field Games
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Sen; Zhang, Wei; Zhao, Lin
This paper studies social optima in noncooperative mean-field games for a large population of agents with heterogeneous stochastic dynamic systems. Each agent seeks to maximize an individual utility functional, and the utility functionals of different agents are coupled through a mean-field term that depends on the mean of the population states/controls. The paper makes the following contributions. First, we derive a set of control strategies for the agents that possess the ε-Nash equilibrium property and converge to the mean-field Nash equilibrium as the population size goes to infinity. Second, we study the social optimum in the mean-field game: we derive the conditions, termed the socially optimal conditions, under which the ε-Nash equilibrium of the mean-field game maximizes the social welfare. Third, a primal-dual algorithm is proposed to compute the ε-Nash equilibrium of the mean-field game. Since the ε-Nash equilibrium is socially optimal, we can compute it by solving the social welfare maximization problem, which can be addressed by a decentralized primal-dual algorithm. Numerical simulations are presented to demonstrate the effectiveness of the proposed approach.
Real-Time Control of an Ensemble of Heterogeneous Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, Andrey; Bouman, Niek J.; Le Boudec, Jean-Yves
This paper focuses on the problem of controlling an ensemble of heterogeneous resources connected to an electrical grid at the same point of common coupling (PCC). The controller receives an aggregate power setpoint for the ensemble in real time and tracks this setpoint by issuing individual optimal setpoints to the resources. The resources can be continuous or discrete in nature (e.g., heating systems consisting of a finite number of heaters that can each be either switched on or off) and/or can be highly uncertain (e.g., photovoltaic (PV) systems or residential loads). A naive approach would lead to a stochastic mixed-integer optimization problem to be solved at the controller at each time step, which might be infeasible in real time. Instead, we allow the controller to solve a continuous convex optimization problem and compensate for the errors at the resource level by using a variant of the well-known error diffusion algorithm. We give conditions guaranteeing that our algorithm tracks the power setpoint at the PCC on average while issuing optimal setpoints to individual resources. We illustrate the approach numerically by controlling a collection of batteries, PV systems, and discrete loads.
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.
2013-12-18
This paper presents four algorithms to generate random forecast error time series: a truncated-normal distribution model, a state-space based Markov model, a seasonal autoregressive moving average (ARMA) model, and a stochastic-optimization based model. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets, for use in variable generation integration studies. A comparison is made using historical DA load forecast and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics. This paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
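As an illustration of the third generator family named above, the sketch below draws a synthetic forecast-error series from an ARMA(1,1) recursion and superimposes it on a toy actual-load series; the coefficients are illustrative placeholders, not values fitted in the paper.

```python
import numpy as np

def arma_error_series(n, phi=0.8, theta=0.2, sigma=0.05, seed=0):
    """Forecast-error series from e[t] = phi*e[t-1] + w[t] + theta*w[t-1],
    with w ~ N(0, sigma^2). The coefficients would normally be fitted to
    historical forecast-minus-actual data; these are placeholders."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, sigma, n)
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + w[t] + theta * w[t - 1]
    return e

# Day-ahead forecast = actual load scaled by (1 + correlated relative error).
hours = 24 * 7
actual = 1000.0 + 100.0 * np.sin(np.linspace(0.0, 8.0 * np.pi, hours))
forecast_da = actual * (1.0 + arma_error_series(hours))
```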
Weighted SGD for ℓp Regression with Randomized Preconditioning
Yang, Jiyan; Chow, Yin-Lam; Ré, Christopher; Mahoney, Michael W.
2018-01-01
In recent years, stochastic gradient descent (SGD) methods and randomized linear algebra (RLA) algorithms have been applied to many large-scale problems in machine learning and data analysis. SGD methods are easy to implement and applicable to a wide range of convex optimization problems. In contrast, RLA algorithms provide much stronger performance guarantees but are applicable to a narrower class of problems. We aim to bridge the gap between these two methods in solving constrained overdetermined linear regression problems—e.g., ℓ2 and ℓ1 regression problems. We propose a hybrid algorithm named pwSGD that uses RLA techniques for preconditioning and constructing an importance sampling distribution, and then performs an SGD-like iterative process with weighted sampling on the preconditioned system. By rewriting a deterministic ℓp regression problem as a stochastic optimization problem, we connect pwSGD to several existing ℓp solvers, including RLA methods with algorithmic leveraging (RLA for short). We prove that pwSGD inherits faster convergence rates that depend only on the lower dimension of the linear system, while maintaining low computational complexity. Such SGD convergence rates are superior to those of other related SGD algorithms, such as the weighted randomized Kaczmarz algorithm. In particular, when solving ℓ1 regression of size n by d, pwSGD returns an approximate solution with ε relative error in the objective value in 𝒪(log n·nnz(A)+poly(d)/ε²) time. This complexity is uniformly better than that of RLA methods in terms of both ε and d when the problem is unconstrained. In the presence of constraints, pwSGD only has to solve a sequence of much simpler and smaller optimization problems over the same constraints, which in general is more efficient than solving the constrained subproblem required in RLA. For ℓ2 regression, pwSGD returns an approximate solution with ε relative error in the objective value and in the solution vector measured in prediction norm in 𝒪(log n·nnz(A)+poly(d) log(1/ε)/ε) time. We show that for unconstrained ℓ2 regression, this complexity is comparable to that of RLA and is asymptotically better than several state-of-the-art solvers in the regime where the desired accuracy ε, high dimension n, and low dimension d satisfy d ≥ 1/ε and n ≥ d²/ε. We also provide lower bounds on the coreset complexity for more general regression problems, indicating that new ideas will still be needed to extend similar RLA preconditioning ideas to weighted SGD algorithms for more general regression problems. Finally, the effectiveness of such algorithms is illustrated numerically on both synthetic and real datasets, and the results are consistent with our theoretical findings and demonstrate that pwSGD converges to a medium-precision solution, e.g., ε = 10⁻³, more quickly. PMID:29782626
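The overall pwSGD pattern described above — precondition with an RLA sketch, build an importance-sampling distribution from the preconditioned rows, then run weighted SGD — can be sketched for unconstrained ℓ2 regression as follows. This is a simplified illustration (Gaussian sketch, 1/√k step sizes), not the paper's exact algorithm or tuning.

```python
import numpy as np

def pwsgd_l2(A, b, n_iters=2000, sketch_size=None, step=1.0, seed=0):
    """Sketch of RLA-preconditioned weighted SGD for min ||Ax - b||_2:
    (1) precondition via a random sketch + QR, (2) sample rows with
    probability proportional to the squared row norms of the
    preconditioned matrix, (3) run weighted SGD in the new coordinates."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    s = sketch_size or 4 * d
    S = rng.normal(size=(s, n)) / np.sqrt(s)   # Gaussian sketching matrix
    _, R = np.linalg.qr(S @ A)                 # A @ inv(R) is well-conditioned
    U = A @ np.linalg.inv(R)
    p = np.sum(U**2, axis=1)
    p /= p.sum()                               # importance-sampling distribution
    y = np.zeros(d)                            # iterate in preconditioned coords
    for k in range(1, n_iters + 1):
        i = rng.choice(n, p=p)
        g = (U[i] @ y - b[i]) * U[i] / (n * p[i])   # unbiased weighted gradient
        y -= (step / np.sqrt(k)) * g
    return np.linalg.solve(R, y)               # map back: x = inv(R) @ y

A = np.random.default_rng(1).normal(size=(500, 10))
x_true = np.arange(10.0)
b = A @ x_true + 0.01 * np.random.default_rng(2).normal(size=500)
x_hat = pwsgd_l2(A, b)
```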
Identification and stochastic control of helicopter dynamic modes
NASA Technical Reports Server (NTRS)
Molusis, J. A.; Bar-Shalom, Y.
1983-01-01
A general treatment of parameter identification and stochastic control for use on helicopter dynamic systems is presented. Rotor dynamic models, including specific applications to rotor blade flapping and the helicopter ground resonance problem, are emphasized. Dynamic systems governed by periodic coefficients as well as constant-coefficient models are addressed. The dynamic systems are modeled by linear state variable equations which are used in the identification and stochastic control formulation. The pure identification problem as well as the stochastic control problem, which includes combined identification and control for dynamic systems, is addressed. The stochastic control problem includes the effect of parameter uncertainty on the solution and the concept of learning and how this is affected by the control's dual effect. The identification formulation requires algorithms suitable for online use, and thus recursive identification algorithms are considered. The applications presented use the recursive extended Kalman filter for parameter identification, which has excellent convergence for systems without process noise.
Kernel-based least squares policy iteration for reinforcement learning.
Xu, Xin; Hu, Dewen; Lu, Xicheng
2007-07-01
In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge of the dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that eliminate the main difficulties of existing results. One is the better convergence and (near) optimality guarantee obtained by using the KLSTD-Q algorithm for high-precision policy evaluation. The other is automatic feature selection using ALD-based kernel sparsification. The KLSPI algorithm therefore provides a general RL method with generalization performance and convergence guarantees for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called the acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information about uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating an initial controller to ensure online performance.
Characterization of the probabilistic traveling salesman problem.
Bowler, Neill E; Fink, Thomas M A; Ball, Robin C
2003-09-01
We show that stochastic annealing can be successfully applied to gain new results on the probabilistic traveling salesman problem. The probabilistic "traveling salesman" must decide on an a priori order in which to visit n cities (randomly distributed over a unit square) before learning that some cities can be omitted. We find the optimized average length of the pruned tour follows E(L_pruned) = √(np) (0.872 − 0.105p) f(np), where p is the probability of a city needing to be visited, and f(np) → 1 as np → ∞. The average length of the a priori tour (before omitting any cities) is found to follow E(L_a priori) = √(n/p) β(p), where β(p) = 1/[1.25 − 0.82 ln(p)] is measured for 0.05 ≤ p ≤ 0.6. Scaling arguments and indirect measurements suggest that β(p) tends towards a constant for p < 0.03. Our stochastic annealing algorithm is based on limited sampling of the pruned tour lengths, exploiting the sampling error to provide the analog of thermal fluctuations in simulated (thermal) annealing. The method has general application to the optimization of functions whose cost to evaluate rises with the precision required.
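The fitted scaling laws reported above can be evaluated directly; the snippet below uses an arbitrary example of n = 1000 cities and p = 0.5 (not a case from the paper), taking the finite-size factor f(np) ≈ 1.

```python
import math

def expected_pruned_length(n, p):
    # E(L_pruned) = sqrt(n*p) * (0.872 - 0.105*p) * f(np), f(np) ~ 1 for large np
    return math.sqrt(n * p) * (0.872 - 0.105 * p)

def expected_a_priori_length(n, p):
    # E(L_a_priori) = sqrt(n/p) * beta(p), beta(p) = 1/(1.25 - 0.82*ln(p)),
    # measured for 0.05 <= p <= 0.6
    return math.sqrt(n / p) / (1.25 - 0.82 * math.log(p))

print(expected_pruned_length(1000, 0.5))    # ~18.3
print(expected_a_priori_length(1000, 0.5))  # ~24.6
```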
Stochastic modelling of microstructure formation in solidification processes
NASA Astrophysics Data System (ADS)
Nastac, Laurentiu; Stefanescu, Doru M.
1997-07-01
To relax many of the assumptions used in continuum approaches, a general stochastic model has been developed. The stochastic model can be used not only for an accurate description of the evolution of the fraction of solid, and therefore accurate cooling curves, but also for simulation of microstructure formation in castings. The advantage of the stochastic approach is that it gives a time- and space-dependent description of solidification processes. Time- and space-dependent processes can also be described by partial differential equations; however, unlike a differential formulation, which in most cases has to be transformed into a difference equation and solved numerically, the stochastic approach is essentially a direct numerical algorithm. The stochastic model is comprehensive, since the competition between various phases is considered. Furthermore, grain impingement is directly included through the structure of the model. In the present research, all grain morphologies are simulated with this procedure. The relevance of the stochastic approach is that the simulated microstructures can be directly compared with microstructures obtained from experiments: the computer becomes a "dynamic metallographic microscope". A comparison between deterministic and stochastic approaches has been performed. An important objective of this research was to answer the following general questions: (1) would fully deterministic approaches continue to be useful in solidification modelling? and (2) would stochastic algorithms be capable of entirely replacing purely deterministic models?
Expected p-values in light of an ROC curve analysis applied to optimal multiple testing procedures.
Vexler, Albert; Yu, Jihnhee; Zhao, Yang; Hutson, Alan D; Gurevich, Gregory
2017-01-01
Many statistical studies report p-values for inferential purposes. In several scenarios, the stochastic aspect of p-values is neglected, which may contribute to drawing wrong conclusions in real data experiments. The stochastic nature of p-values makes it difficult to use them to examine the performance of given testing procedures or associations between investigated factors. We turn our focus to the modern statistical literature to address the expected p-value (EPV) as a measure of the performance of decision-making rules. In the course of our study, we prove that the EPV can be considered in the context of receiver operating characteristic (ROC) curve analysis, a well-established biostatistical methodology. The ROC-based framework provides a new and efficient methodology for investigating and constructing statistical decision-making procedures, including: (1) evaluation and visualization of properties of the testing mechanisms, considering, e.g., partial EPVs; (2) development of optimal tests via the minimization of EPVs; and (3) creation of novel methods for optimally combining multiple test statistics. We demonstrate that the proposed EPV-based approach allows us to maximize the integrated power of testing algorithms with respect to various significance levels. In an application, we use the proposed method to construct the optimal test and analyze a myocardial infarction disease dataset. We outline the usefulness of the "EPV/ROC" technique for evaluating different decision-making procedures, their construction, and their properties with an eye towards practical applications.
Optimisation multi-objectif des systemes energetiques
NASA Astrophysics Data System (ADS)
Dipama, Jean
The increasing demand for energy and the environmental concerns related to greenhouse gas emissions are leading more and more private and public utilities to turn to nuclear energy as an alternative for the future. Nuclear power plants are therefore expected to undergo large expansion in the coming years, and improved technologies will be put in place to support their development. This thesis considers the optimization of the thermodynamic cycle of the secondary loop of the Gentilly-2 nuclear power plant in terms of output power and thermal efficiency. Investigations are carried out to determine the optimal operating conditions of steam power cycles through the judicious combination of steam extractions at the different stages of the turbines. Whether for superheating or regeneration, we are confronted in every case with an optimization problem involving two conflicting objectives, since increasing the efficiency implies decreasing the mechanical work and vice versa. Solving this kind of problem does not lead to a unique solution, but to a set of solutions that are trade-offs between the conflicting objectives. To find all of these solutions, called Pareto optimal solutions, an appropriate optimization algorithm is required. Before starting the optimization of the secondary loop, we developed a thermodynamic model of the loop which includes models for the main thermal components (e.g., turbine, moisture separator-superheater, condenser, feedwater heater, and deaerator). This model is used to calculate the thermodynamic state of the steam and water at the different points of the installation. The thermodynamic model was developed in Matlab and validated by comparing its predictions with the operating data provided by the engineers of the power plant. The optimizer, developed in VBA (Visual Basic for Applications), uses an optimization algorithm based on the principle of genetic algorithms, a stochastic optimization method that is very robust and widely used to solve problems usually difficult to handle with traditional methods. Genetic algorithms (GAs) were used in previous research and proved efficient in optimizing heat exchanger networks (HEN) (Dipama et al., 2008), where HENs were synthesized to recover the maximum heat in an industrial process; the optimization problem formulated in that work consisted of a single objective, namely the maximization of energy recovery. The optimization algorithm developed in this thesis extends the ability of GAs by taking several objectives into account simultaneously. It introduces an innovative way of finding optimal solutions: the solution space is partitioned into parallel grids called "watching corridors", which specify the areas (the observation corridors) in which the most promising feasible solutions are found and used to guide the search towards optimal solutions. A measure of the progress of the search is incorporated into the optimization algorithm to make it self-adaptive, through the use of appropriate genetic operators at each stage of the optimization process. The proposed method allows fast convergence and ensures a diversity of solutions; moreover, it gives the algorithm the ability to overcome difficulties associated with optimization problems that have complex Pareto front landscapes (e.g., discontinuity, disjunction, etc.).
The multi-objective optimization algorithm was first validated on numerical test problems from the literature as well as on energy system optimization problems. Finally, it was applied to the optimization of the secondary loop of the Gentilly-2 nuclear power plant, and a set of solutions was found that allows the power plant to operate under optimal conditions. (Abstract shortened by UMI.)
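As a concrete illustration of the Pareto optimality notion used throughout this abstract, a minimal dominance filter extracts the non-dominated trade-off set; the (power, efficiency) candidate pairs below are made up for the example, not values from the thesis.

```python
def pareto_front(points):
    """Non-dominated subset when all objectives are maximized: q dominates p
    if q >= p in every objective and q != p; keep the undominated points."""
    return [p for p in points
            if not any(q != p and all(qo >= po for qo, po in zip(q, p))
                       for q in points)]

# Made-up (output power MW, thermal efficiency %) candidates.
candidates = [(660, 33.1), (655, 33.4), (662, 32.8), (650, 33.4), (661, 33.0)]
print(pareto_front(candidates))
# -> [(660, 33.1), (655, 33.4), (662, 32.8), (661, 33.0)]
```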
Application of tabu search to deterministic and stochastic optimization problems
NASA Astrophysics Data System (ADS)
Gurtuna, Ozgur
During the past two decades, advances in computer science and operations research have produced many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a versatile optimization heuristic that can be used to solve many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well established and significant progress has been made on the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem is deterministic: finding the optimal servicing tours that minimize the energy and/or duration of missions for servicing satellites in Earth orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems concern decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, the Tabu Search Monte Carlo (TSMC) method, is developed. The theoretical underpinnings of the TSMC method and the flow of the algorithm are explained, and its performance is compared to that of other existing methods for financial option valuation. In the third and final problem, the TSMC method is used to determine the conditions of feasibility for hybrid electric vehicles and fuel cell vehicles. There are many uncertainties related to the technologies and markets associated with new-generation passenger vehicles; these uncertainties are analyzed in order to determine the conditions under which new-generation vehicles can compete with established technologies.
A Multialgorithm Approach to Land Surface Modeling of Suspended Sediment in the Colorado Front Range
Stewart, J. R.; Kasprzyk, J. R.; Rajagopalan, B.; Minear, J. T.; Raseman, W. J.
2017-01-01
A new paradigm of simulating suspended sediment load (SSL) with a Land Surface Model (LSM) is presented here. Five erosion and SSL algorithms were applied within a common LSM framework to quantify uncertainties and evaluate predictability in two steep, forested catchments (>1,000 km²). The algorithms were chosen from among widely used sediment models: empirically based, the monovariate rating curve (MRC) and the Modified Universal Soil Loss Equation (MUSLE); stochastically based, the Load Estimator (LOADEST); conceptually based, the Hydrologic Simulation Program—Fortran (HSPF); and physically based, the Distributed Hydrology Soil Vegetation Model (DHSVM). The algorithms were driven by the hydrologic fluxes and meteorological inputs generated from the Variable Infiltration Capacity (VIC) LSM. A multiobjective calibration was applied to each algorithm, and the optimized parameter sets were validated over an excluded period, as well as in a transfer experiment to a nearby catchment to explore parameter robustness. Algorithm performance showed consistent decreases when parameter sets were applied to periods with greatly differing SSL variability relative to the calibration period. Also of interest was a joint calibration of all sediment-algorithm and streamflow parameters simultaneously, which revealed trade-offs between streamflow performance and the partitioning of runoff and base flow to optimize SSL timing, decreasing the flexibility and robustness of the streamflow simulation in adapting to different time periods. Parameter transferability to another catchment was most successful for the more process-oriented algorithms, the HSPF and the DHSVM. This first-of-its-kind multialgorithm sediment scheme offers a unique capability to portray acute episodic loading while quantifying trade-offs and uncertainties across a range of algorithm structures. PMID:29399268
Binomial tau-leap spatial stochastic simulation algorithm for applications in chemical kinetics.
Marquez-Lago, Tatiana T; Burrage, Kevin
2007-09-14
In cell biology, cell signaling pathway problems are often tackled with deterministic temporal models, well-mixed stochastic simulators, and/or hybrid methods. But, in fact, three-dimensional stochastic spatial modeling of reactions happening inside the cell is needed in order to fully understand these cell signaling pathways, because noise effects, low molecular concentrations, and spatial heterogeneity can all affect the cellular dynamics. However, important effects can be accounted for without resorting to highly resolved spatial simulators (such as single-particle software), reducing the overall computation time significantly. We present a new coarse-grained, modified version of the next subvolume method that allows the user to consider both diffusion and reaction events over relatively long simulation time spans compared with the original method and other commonly used fully stochastic computational methods. Benchmarking of the simulation algorithm was performed through comparison with the next subvolume method and well-mixed models (MATLAB), as well as stochastic particle reaction and transport simulations (CHEMCELL, Sandia National Laboratories). Additionally, we construct a model based on a set of chemical reactions in the epidermal growth factor receptor pathway. For this application and for a bistable chemical system example, we analyze and outline the advantages of the presented binomial tau-leap spatial stochastic simulation algorithm, in terms of efficiency and accuracy, in scenarios of both molecular homogeneity and heterogeneity.
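For orientation, the core idea of a binomial tau-leap can be shown in a generic, well-mixed form (the paper's method additionally partitions space into subvolumes and handles diffusion events, which this sketch omits): each channel's firing count over a leap is binomial, capped so populations never go negative.

```python
import numpy as np

def binomial_tau_leap_step(x, stoich, propensity, tau, rng):
    """One generic binomial tau-leap update (well-mixed simplification).
    For each reaction channel, the firing count over [t, t+tau) is drawn
    from Binomial(n_max, a_j*tau/n_max), where n_max is the largest number
    of firings the current populations allow."""
    a = propensity(x)
    x_new = x.copy()
    for j, aj in enumerate(a):
        if aj <= 0:
            continue
        consumed = -stoich[j]                       # molecules used per firing
        limits = [x_new[i] // c for i, c in enumerate(consumed) if c > 0]
        n_max = int(min(limits)) if limits else 10**9
        if n_max == 0:
            continue
        p = min(aj * tau / n_max, 1.0)              # binomial success probability
        x_new += rng.binomial(n_max, p) * stoich[j] # apply the sampled firings
    return x_new

# Hypothetical dimerization 2A -> B with propensity k*A*(A-1)/2.
rng = np.random.default_rng(0)
x = np.array([1000, 0])
x_next = binomial_tau_leap_step(
    x, np.array([[-2, 1]]),
    lambda s: np.array([0.001 * s[0] * (s[0] - 1) / 2]), 0.01, rng)
```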
Barber, Jared; Tanase, Roxana; Yotov, Ivan
2016-06-01
Several Kalman filter algorithms are presented for data assimilation and parameter estimation for a nonlinear diffusion model of epithelial cell migration. These include the ensemble Kalman filter with Monte Carlo sampling and a stochastic collocation (SC) Kalman filter with structured sampling. Further, two types of noise are considered: uncorrelated noise, resulting in one stochastic dimension for each element of the spatial grid, and correlated noise parameterized by the Karhunen-Loeve (KL) expansion, resulting in one stochastic dimension for each KL term. The efficiency and accuracy of the four methods are investigated for two cases with synthetic data, with and without noise, as well as for data from a laboratory experiment. While all algorithms perform reasonably well in matching the target solution and estimating the diffusion coefficient and the growth rate, the algorithms that employ SC and the KL expansion are computationally more efficient, as they require fewer ensemble members for comparable accuracy. For the SC methods, this is due to improved approximation in stochastic space compared to Monte Carlo sampling; for the KL methods, the parameterization of the noise results in a stochastic space of smaller dimension. The most efficient method is the one combining SC and the KL expansion.
Svatos, M.; Zankowski, C.; Bednarz, B.
2016-01-01
Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and "4π" delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC-based treatment planning optimization leverages the accuracy of the MC dose calculation and the efficiency of well-developed optimization methods by precalculating the fluence-to-dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions in computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of "concurrent" Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces the computation time wasted on beamlets that contribute weakly to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent, were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors applied their method to two simple geometrical phantoms and one clinical patient geometry to examine the capability of this platform to generate conformal plans, as well as to assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in the total histories transported compared to a theoretical unweighted beamlet calculation with subsequent fluence optimization, and observe a roughly fixed optimization time overhead of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach which could significantly improve the development of next-generation radiation therapy solutions while incurring minimal additional computational overhead. PMID:27277051
Towards sub-optimal stochastic control of partially observable stochastic systems
NASA Technical Reports Server (NTRS)
Ruzicka, G. J.
1980-01-01
A class of multidimensional stochastic control problems with noisy data and bounded controls encountered in aerospace design is examined. The emphasis is on suboptimal design, with optimality taken in the quadratic mean sense. To that end, the problem is viewed as a stochastic version of the Lurie problem known from nonlinear control theory. The main result is a separation theorem (involving a nonlinear Kalman-like filter) suitable for Lurie-type approximations. The theorem allows for discontinuous characteristics. As a byproduct, the existence of strong solutions to a class of non-Lipschitzian stochastic differential equations is proven.
Application of Simulated Annealing and Related Algorithms to TWTA Design
NASA Technical Reports Server (NTRS)
Radke, Eric M.
2004-01-01
Simulated Annealing (SA) is a stochastic optimization algorithm used to search for global minima in complex design surfaces where exhaustive searches are not computationally feasible. The algorithm is derived by simulating the annealing process, whereby a solid is heated to a liquid state and then cooled slowly to reach thermodynamic equilibrium at each temperature. The idea is that atoms in the solid continually bond and re-bond at various quantum energy levels, and with sufficient cooling time they will rearrange into the minimum energy state to form a perfect crystal. The distribution of energy levels is given by the Boltzmann distribution: as temperature drops, the probability of the presence of high-energy bonds decreases. In searching for an optimal design, local minima and discontinuities are often present in a design surface. SA has a distinct advantage over other optimization algorithms in its ability to escape from these local minima. Just as high-energy atomic configurations are visited in the actual annealing process in order to eventually reach the minimum energy state, in SA highly non-optimal configurations are visited in order to find otherwise inaccessible global minima. The SA algorithm produces a Markov chain of points in the design space at each temperature, with a monotonically decreasing temperature. The search starts from a random point, where the objective function is evaluated. A stochastic perturbation of the point's parameters then yields a proposed new point in the design space, at which the objective function is evaluated as well. If the change in objective function value ΔE is negative, the proposed new point is accepted. If ΔE is positive, the proposed new point is accepted according to the Metropolis criterion: p(ΔE) = exp(−ΔE/T), where T is the temperature for the current Markov chain. The process repeats for the remainder of the Markov chain, after which the temperature is decremented and the process repeats. Eventually (and hopefully), a near-globally optimal solution is attained as T approaches zero. Several exciting variants of SA have recently emerged, including Discrete-State Simulated Annealing (DSSA) and Simulated Tempering (ST). The DSSA algorithm takes the thermodynamic analogy one step further by categorizing objective function evaluations into discrete states. In doing so, many of the case-specific problems associated with fine-tuning the SA algorithm can be avoided; for example, theoretical approximations for the initial and final temperatures can be derived independently of the case. In this manner, DSSA provides a scheme that is more robust with respect to widely differing design surfaces. ST differs from SA in that the temperature T becomes an additional random variable in the optimization. The system is also kept in equilibrium as the temperature changes, as opposed to being driven out of equilibrium as the temperature changes in SA. ST is designed to overcome obstacles in design surfaces where numerous local minima are separated by high barriers. These algorithms are incorporated into the optimal design of the traveling-wave tube amplifier (TWTA). The area under scrutiny is the collector, in which it would be ideal to use negative potential to decelerate the spent electron beam to zero kinetic energy just as it reaches the collector surface.
In reality this is not possible due to a number of physical limitations, including repulsion and differing levels of kinetic energy among individual electrons. Instead, the collector is designed with multiple stages depressed below ground potential. The design of this multiple-stage collector is the optimization problem of interest. One remaining problem in SA and DSSA is the difficulty of determining when equilibrium has been reached so that the current Markov chain can be terminated. It has been suggested in recent literature that simulating the thermodynamic properties of specific heat, entropy, and internal energy from the Boltzmann distribution can provide good indicators of having reached equilibrium at a given temperature. These properties are tested for their efficacy and implemented in SA and DSSA code with respect to TWTA collector optimization.
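The Markov-chain-plus-Metropolis structure described in this entry can be written down compactly; the sketch below is a generic SA loop with an illustrative geometric cooling schedule and a toy objective, not the TWTA collector setup.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, alpha=0.95,
                        chain_len=100, t_min=1e-3, seed=0):
    """Plain SA with the Metropolis criterion: a worse point is accepted
    with probability exp(-dE / T). f, x0, and neighbor are problem-specific;
    the geometric cooling schedule here is one common, illustrative choice."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    while t > t_min:
        for _ in range(chain_len):           # Markov chain at temperature t
            y = neighbor(x, rng)
            dE = f(y) - fx
            if dE < 0 or rng.random() < math.exp(-dE / t):
                x, fx = y, fx + dE           # accept the proposed point
                if fx < fbest:
                    best, fbest = x, fx
        t *= alpha                           # decrement the temperature
    return best, fbest

# Toy usage: minimize a multimodal 1-D function.
f = lambda x: math.sin(5.0 * x) + 0.1 * x * x
step = lambda x, rng: x + rng.gauss(0.0, 0.5)
print(simulated_annealing(f, 3.0, step))
```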
NASA Astrophysics Data System (ADS)
Thimmisetty, C.; Talbot, C.; Tong, C. H.; Chen, X.
2016-12-01
The representativeness of available data poses a significant fundamental challenge to the quantification of uncertainty in geophysical systems. Furthermore, the successful application of machine learning methods to geophysical problems involving data assimilation is inherently constrained by the extent to which obtainable data represent the problem considered. We show how the adjoint method, coupled with optimization based on machine learning methods, can facilitate the minimization of an objective function defined on a space of significantly reduced dimension. By treating the uncertain parameters as a stochastic process, the Karhunen-Loeve expansion and its nonlinear extensions furnish an optimal basis with respect to which optimization using L-BFGS can be carried out. In particular, we demonstrate that kernel PCA can be coupled with adjoint-based optimal control methods to successfully determine the distribution of material parameter values for problems in the context of channelized deformable media governed by the equations of linear elasticity. Since certain subsets of the original data are characterized by different features, the convergence rate of the method in part depends on, and may be limited by, the observations used to furnish the kernel principal component basis. By determining appropriate weights for realizations of the stochastic random field, one may therefore accelerate the convergence of the method. To this end, we present a formulation of weighted PCA combined with a gradient-based method, using automatic differentiation, to iteratively re-weight observations concurrently with the determination of an optimal reduced set of control variables in the feature space. We demonstrate how improvements in the accuracy and computational efficiency of the weighted linear method can be achieved over existing unweighted kernel methods, and discuss nonlinear extensions of the algorithm.
Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation
NASA Astrophysics Data System (ADS)
Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.
2017-06-01
Existing approaches to resource allocation for today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages advances in online learning to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol combines the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
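The central role of Lagrange multipliers mentioned above can be illustrated with the baseline stochastic dual iteration that learn-and-adapt schemes build on; the toy problem and all names below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def dual_allocation_demo(n_slots=5000, budget=1.0, step=0.01, seed=0):
    """Stochastic dual (Lagrange multiplier) iteration for a toy problem:
    maximize E[u_t * log(1 + x_t)] subject to E[x_t] <= budget. Each slot,
    the primal allocation is optimal for the current multiplier, and the
    multiplier is nudged along the observed constraint violation."""
    rng = np.random.default_rng(seed)
    lam, xs = 1.0, []
    for _ in range(n_slots):
        u = rng.uniform(0.5, 2.0)                  # random utility/channel state
        x = max(0.0, u / max(lam, 1e-9) - 1.0)     # argmax_x u*log(1+x) - lam*x
        xs.append(x)
        lam = max(0.0, lam + step * (x - budget))  # projected dual ascent
    return np.mean(xs), lam

avg_alloc, lam = dual_allocation_demo()
print(f"average allocation ~ {avg_alloc:.3f} (budget 1.0), multiplier ~ {lam:.3f}")
```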
SimBA: simulation algorithm to fit extant-population distributions.
Parida, Laxmi; Haiminen, Niina
2015-03-14
Simulation of populations with specified characteristics, such as allele frequencies and linkage disequilibrium, is an integral component of many studies, including in-silico breeding optimization. Since the accuracy and sensitivity of population simulation are critical to the quality of the output of the applications that use it, accurate algorithms are required to provide a strong foundation for the methods in these studies. In this paper we present SimBA (Simulation using Best-fit Algorithm), a non-generative approach based on a combination of stochastic techniques and discrete methods. We optimize a hill climbing algorithm and extend the framework to include multiple subpopulation structures. Additionally, we show that SimBA is very sensitive to the input specifications: very similar but distinct input characteristics result in distinct outputs with high fidelity to the specified distributions. This property of the simulation is not explicitly modeled or studied by previous methods. We show that SimBA outperforms existing population simulation methods, both in terms of accuracy and time-efficiency. Not only does it construct populations that meet the input specifications more stringently than other published methods, SimBA is also easy to use. It does not require explicit parameter adaptations or calibrations, and it can work with input specified as distributions, without an exemplar matrix or population as required by some methods. SimBA is available at http://researcher.ibm.com/project/5669 .
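To illustrate the best-fit, hill-climbing idea in its simplest form (SimBA itself also fits linkage disequilibrium and subpopulation structure, which this sketch omits), the following toy climber flips single alleles in a 0/1 genotype matrix and keeps flips that move the realized allele frequencies toward the targets.

```python
import numpy as np

def hill_climb_population(n_ind, target_freqs, n_iters=20000, seed=0):
    """Toy best-fit hill climb: start from a random population and accept
    single-allele flips that do not increase the total deviation between
    realized and target allele frequencies."""
    rng = np.random.default_rng(seed)
    target = np.asarray(target_freqs)
    pop = rng.integers(0, 2, size=(n_ind, target.size))
    err = np.abs(pop.mean(axis=0) - target).sum()
    for _ in range(n_iters):
        i, j = rng.integers(n_ind), rng.integers(target.size)
        pop[i, j] ^= 1                     # propose flipping one allele
        new_err = np.abs(pop.mean(axis=0) - target).sum()
        if new_err <= err:
            err = new_err                  # keep the improving flip
        else:
            pop[i, j] ^= 1                 # revert the flip
    return pop, err

pop, err = hill_climb_population(200, [0.1, 0.25, 0.5, 0.9])
```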
Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henrickson, Kristian C.; Xu, Maozeng; Wang, Yinhai
2016-01-01
This paper proposes a two-stage algorithm to simultaneously estimate origin-destination (OD) matrix, link choice proportion, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows is used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers' route choice behavior. PMID:26761209
Water supply pipe dimensioning using hydraulic power dissipation
NASA Astrophysics Data System (ADS)
Sreemathy, J. R.; Rashmi, G.; Suribabu, C. R.
2017-07-01
Proper sizing of the pipe components of water distribution networks plays an important role in the overall design of any water supply system. Several approaches have been applied for the design of networks from an economical point of view. Traditional optimization techniques and population-based stochastic algorithms are widely used to optimize networks, but the use of these approaches is mostly limited to the research level due to difficulties in understanding by practicing engineers, design engineers and consulting firms. Moreover, the non-availability of commercial software for the optimal design of water distribution systems forces practicing engineers to adopt either trial-and-error or experience-based design. This paper presents a simple approach based on the power dissipation in each pipeline as a parameter to design the network economically, though not to the level of the global minimum cost.
Parameter optimization for transitions between memory states in small arrays of Josephson junctions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rezac, Jacob D.; Imam, Neena; Braiman, Yehuda
Coupled arrays of Josephson junctions possess multiple stable zero-voltage states. Such states can store information and consequently can be utilized for cryogenic memory applications. Basic memory operations can be implemented by sending a pulse to one of the junctions and studying transitions between the states. In order to be suitable for memory operations, such transitions between the states have to be fast and energy efficient. In this article we employ simulated annealing, a stochastic optimization algorithm, to optimize the array parameters so as to minimize the times and energies of transitions between specifically chosen states that can be utilized for memory operations (Read, Write, and Reset). Simulation results show that such transitions occur with access times on the order of 10-100 ps and access energies on the order of 10^-19 to 5×10^-18 J. Numerical simulations are validated with approximate analytical results.
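A generic simulated-annealing loop of the kind employed here is easy to sketch; this is not the authors' code, and the cost function below merely stands in for a simulated transition time/energy figure of merit.

```python
# Generic simulated-annealing sketch; the objective is an illustrative
# stand-in for a transition time/energy cost of the junction array.
import math, random

random.seed(0)

def cost(p):
    # Toy multimodal objective standing in for the simulated figure of merit.
    return sum((x - 0.3) ** 2 for x in p) + 0.1 * math.sin(10 * p[0])

p = [random.uniform(-1, 1) for _ in range(4)]   # candidate array parameters
best, best_cost = p[:], cost(p)
T = 1.0
for step in range(20000):
    q = [x + random.gauss(0, 0.1) for x in p]   # random perturbation
    dE = cost(q) - cost(p)
    # Accept downhill moves always, uphill moves with Boltzmann probability.
    if dE < 0 or random.random() < math.exp(-dE / T):
        p = q
        if cost(p) < best_cost:
            best, best_cost = p[:], cost(p)
    T *= 0.9997                                  # geometric cooling schedule
print(best_cost)
```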
Topology optimization under stochastic stiffness
NASA Astrophysics Data System (ADS)
Asadpoure, Alireza
Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty into an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that the proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization, which involves multiple formations and inversions of the global stiffness matrix, and that results obtained from the proposed method are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.
Solving multiconstraint assignment problems using learning automata.
Horn, Geir; Oommen, B John
2010-02-01
This paper considers the NP-hard problem of object assignment with respect to multiple constraints: assigning a set of elements (or objects) into mutually exclusive classes (or groups), where the elements which are "similar" to each other are hopefully located in the same class. The literature reports solutions in which the similarity constraint consists of a single index that is inappropriate for the type of multiconstraint problems considered here and where the constraints could simultaneously be contradictory. This feature, where we permit possibly contradictory constraints, distinguishes this paper from the state of the art. Indeed, we are aware of no learning automata (or other heuristic) solutions which solve this problem in its most general setting. Such a scenario is illustrated with the static mapping problem, which consists of distributing the processes of a parallel application onto a set of computing nodes. This is a classical and yet very important problem within the areas of parallel computing, grid computing, and cloud computing. We have developed four learning-automata (LA)-based algorithms to solve this problem: First, a fixed-structure stochastic automata algorithm is presented, where the processes try to form pairs to go onto the same node. This algorithm solves the problem, although it requires some centralized coordination. As it is desirable to avoid centralized control, we subsequently present three different variable-structure stochastic automata (VSSA) algorithms, which have superior partitioning properties in certain settings, although they forfeit some of the scalability features of the fixed-structure algorithm. All three VSSA algorithms model the processes as automata having first the hosting nodes as possible actions; second, the processes as possible actions; and, third, attempting to estimate the process communication digraph prior to probabilistically mapping the processes. This paper, which, we believe, comprehensively reports the pioneering LA solutions to this problem, unequivocally demonstrates that LA can play an important role in solving complex combinatorial and integer optimization problems.
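For concreteness, a minimal variable-structure stochastic automaton with the classical linear reward-inaction (L_RI) update might look as follows; the environment stub and reward probabilities are invented for illustration and are not the paper's mapping problem.

```python
# Sketch of a variable-structure stochastic automaton with the classical
# linear reward-inaction (L_RI) update; the environment is a toy stub.
import random

random.seed(0)
n_actions = 4
p = [1.0 / n_actions] * n_actions   # action probability vector
a = 0.05                            # learning rate

def environment(action):
    # Toy stub: action 2 is rewarded most often (probabilities are made up).
    reward_prob = [0.2, 0.4, 0.9, 0.3][action]
    return random.random() < reward_prob

for t in range(5000):
    action = random.choices(range(n_actions), weights=p)[0]
    if environment(action):
        # Reward: move probability mass toward the chosen action.
        p = [pi + a * (1 - pi) if i == action else pi * (1 - a)
             for i, pi in enumerate(p)]
    # On penalty, L_RI leaves the probabilities unchanged.

print([round(x, 3) for x in p])   # mass should concentrate on action 2
```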
Two new algorithms to combine kriging with stochastic modelling
NASA Astrophysics Data System (ADS)
Venema, Victor; Lindau, Ralf; Varnai, Tamas; Simmer, Clemens
2010-05-01
Two main groups of statistical methods used in the Earth sciences are geostatistics and stochastic modelling. Geostatistical methods, such as various kriging algorithms, aim at estimating the mean value for every point as well as possible. In case of sparse measurements, such fields have less variability at small scales and a narrower distribution than the true field. This can lead to biases if a nonlinear process is simulated driven by such a kriged field. Stochastic modelling aims at reproducing the statistical structure of the data in space and time. One of the stochastic modelling methods, the so-called surrogate data approach, replicates the value distribution and power spectrum of a certain data set. While stochastic methods reproduce the statistical properties of the data, the location of the measurement is not considered; accounting for it requires so-called constrained stochastic models. Because radiative transfer through clouds is a highly nonlinear process, it is essential to model the distribution (e.g. of optical depth, extinction, liquid water content or liquid water path) accurately. In addition, the correlations within the cloud field are important, especially because of horizontal photon transport. This explains the success of surrogate cloud fields for use in 3D radiative transfer studies. Up to now, however, we could only achieve good results for the radiative properties averaged over the field, but not for a radiation measurement located at a certain position. Therefore we have developed a new algorithm that combines the accuracy of stochastic (surrogate) modelling with the positioning capabilities of kriging. In this way, we can automatically profit from the large geostatistical literature and software. This algorithm is similar to the standard iterative amplitude adjusted Fourier transform (IAAFT) algorithm, but has an additional iterative step in which the surrogate field is nudged towards the kriged field. The nudging strength is gradually reduced to zero during successive iterations. A second algorithm, which we call step-wise kriging, pursues the same aim. Each time the kriging algorithm estimates a value, noise is added to it, after which this new point is accounted for in the estimation of all later points. In this way, the autocorrelation of the step-wise kriged field is close to that found in the pseudo-measurements. The amount of noise is determined by the kriging uncertainty. The algorithms are tested on cloud fields from large eddy simulations (LES). On these clouds, a measurement is simulated. From these pseudo-measurements, we estimate the power spectrum for the surrogates, the semi-variogram for the (step-wise) kriging, and the distribution. Furthermore, the pseudo-measurement is kriged. Because we work with LES clouds and the truth is known, we can validate the algorithm by performing 3D radiative transfer calculations on the original LES clouds and on the two new types of stochastic clouds. For comparison, the radiative properties of the kriged fields and standard surrogate fields are also computed. Preliminary results show that both algorithms reproduce the structure of the original clouds well, and that the minima and maxima are located where the pseudo-measurements see them. The main limitation on the quality of the structure and the root mean square error is the amount of data, which is especially limited in the case of just one zenith-pointing measurement.
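A compact sketch of the nudged IAAFT iteration described above is given below; the pseudo-measurement series, the stand-in "kriged" field (here just a moving average), and the nudging schedule are illustrative assumptions rather than the authors' settings.

```python
# Sketch of an IAAFT-style surrogate loop with an extra nudging step toward
# a kriged field; `kriged` is a smoothed stand-in, not a real kriging result.
import numpy as np

rng = np.random.default_rng(2)
data = rng.gamma(2.0, 1.0, 512)            # pseudo-measurements
target_amp = np.abs(np.fft.rfft(data))     # target power spectrum
sorted_vals = np.sort(data)                # target value distribution
kriged = np.convolve(data, np.ones(25) / 25, mode="same")  # stand-in field

x = rng.permutation(data)                  # random initial surrogate
for it in range(100):
    # 1) impose the target amplitude spectrum, keeping the current phases
    spec = np.fft.rfft(x)
    x = np.fft.irfft(target_amp * np.exp(1j * np.angle(spec)), n=len(x))
    # 2) nudge toward the kriged field, with strength decaying to zero
    w = max(0.0, 0.5 * (1 - it / 50))
    x = (1 - w) * x + w * kriged
    # 3) impose the target value distribution by rank ordering
    x = sorted_vals[np.argsort(np.argsort(x))]

print(x.mean(), data.mean())               # distributions match exactly
```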
Miklós, István; Darling, Aaron E
2009-06-22
Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal-length series of inversions that transforms one genome arrangement into another. However, the minimum-length series of inversions (the optimal sorting path) is often not unique, as many such optimal sorting paths may exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths or to sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We draw a comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids the bias inherent in the IS technique.
Simulation of anaerobic digestion processes using stochastic algorithm.
Palanichamy, Jegathambal; Palani, Sundarambal
2014-01-01
Anaerobic Digestion (AD) processes involve numerous complex biological and chemical reactions occurring simultaneously, and appropriate, efficient models are needed for the simulation of anaerobic digestion systems. Although several models have been developed, most suffer from a lack of knowledge of constants, from complexity, and from weak generalization. The basis of the deterministic approach for modelling the physicochemical and biochemical reactions occurring in the AD system is the law of mass action, which gives a simple relationship between reaction rates and species concentrations. The assumptions made in deterministic models do not hold for reactions involving chemical species at low concentrations. The stochastic behaviour of the physicochemical processes can be modeled at the mesoscopic level by applying stochastic algorithms. In this paper a stochastic algorithm (the Gillespie tau-leap method) developed in MATLAB was applied to predict the concentrations of glucose, acids and methane at different time intervals, so that the performance of the digester system can be controlled. The processes given by ADM1 (Anaerobic Digestion Model 1) were taken for verification of the model. The proposed model was verified by comparing the results of Gillespie's algorithm with the deterministic solution for the conversion of glucose into methane through degraders. At higher values of the time step τ, the computational time required to reach the steady state is greater, since the number of chosen reactions is smaller. When the simulation time step is reduced, the results are similar to those of an ODE solver. It was concluded that the stochastic algorithm is a suitable approach for the simulation of complex anaerobic digestion processes, with the accuracy of the results depending on the optimal selection of the τ value.
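As a hedged illustration (in Python rather than the paper's MATLAB), a tau-leap update for a toy glucose → acids → methane chain with first-order rates can be written in a few lines; the rate constants, step size, and initial counts are invented.

```python
# Minimal Gillespie tau-leaping sketch for a toy chain
# glucose -> acids -> methane with first-order rates (illustrative values).
import numpy as np

rng = np.random.default_rng(3)
X = np.array([1000, 0, 0])        # molecules of glucose, acids, methane
k = np.array([0.03, 0.01])        # rate constants for the two conversions
tau, t = 0.5, 0.0

while t < 300.0:
    a = np.array([k[0] * X[0], k[1] * X[1]])     # propensities
    fires = rng.poisson(a * tau)                 # reaction counts in [t, t+tau)
    X = X + np.array([-fires[0], fires[0] - fires[1], fires[1]])
    X = np.maximum(X, 0)                         # guard against negative counts
    t += tau

print(X)  # most mass should have converted to methane
```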
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y. M., E-mail: ymingy@gmail.com; Bednarz, B.; Svatos, M.
Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and "4π" delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC-based treatment planning optimization leverages the accuracy of the MC dose calculation and the efficiency of well-developed optimization methods by precalculating the fluence-to-dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of the computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of "concurrent" Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces the wasted computation time spent on beamlets that contribute weakly to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent, were used to address obstacles unique to performing gradient-descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry, to examine the capability of this platform to generate conformal plans as well as to assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7-8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach which could significantly improve the development of next-generation radiation therapy solutions while incurring minimal additional computational overhead.
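The descent rule described above can be illustrated, under loose assumptions, by momentum SGD with per-step gradient renormalization applied to very noisy gradient estimates; the toy objective and noise level below are stand-ins for few-history MC estimates, not the authors' platform.

```python
# Hedged sketch of momentum SGD with per-step gradient rescaling, the kind of
# update the abstract describes for noisy, few-history MC gradient estimates.
import numpy as np

rng = np.random.default_rng(4)
w = np.ones(36)                 # fluence weights for 36 beamlets (illustrative)
v = np.zeros_like(w)
lr, beta = 0.05, 0.9

def noisy_grad(w):
    # Stand-in for a highly stochastic gradient from very few MC histories.
    true_grad = 2 * (w - 0.2)
    return true_grad + rng.standard_normal(w.size) * 5.0

for step in range(2000):
    g = noisy_grad(w)
    g = g / (np.linalg.norm(g) + 1e-12)   # rescale/renormalize the raw gradient
    v = beta * v + (1 - beta) * g         # momentum averages out the noise
    w = np.maximum(w - lr * v, 0.0)       # fluence weights stay nonnegative

print(w[:5])                              # should approach the optimum ~0.2
```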
Two-stage fuzzy-stochastic robust programming: a hybrid model for regional air quality management.
Li, Yongping; Huang, Guo H; Veawab, Amornvadee; Nie, Xianghui; Liu, Lei
2006-08-01
In this study, a hybrid two-stage fuzzy-stochastic robust programming (TFSRP) model is developed and applied to the planning of an air-quality management system. As an extension of existing fuzzy-robust programming and two-stage stochastic programming methods, the TFSRP can explicitly address complexities and uncertainties of the study system without unrealistic simplifications. Uncertain parameters can be expressed as probability density and/or fuzzy membership functions, such that robustness of the optimization efforts can be enhanced. Moreover, economic penalties as corrective measures against any infeasibilities arising from the uncertainties are taken into account. This method can, thus, provide a linkage to predefined policies determined by authorities that have to be respected when a modeling effort is undertaken. In its solution algorithm, the fuzzy decision space can be delimited through specification of the uncertainties using dimensional enlargement of the original fuzzy constraints. The developed model is applied to a case study of regional air quality management. The results indicate that reasonable solutions have been obtained. The solutions can be used for further generating pollution-mitigation alternatives with minimized system costs and for providing a more solid support for sound environmental decisions.
Automatic motor task selection via a bandit algorithm for a brain-controlled button
NASA Astrophysics Data System (ADS)
Fruitet, Joan; Carpentier, Alexandra; Munos, Rémi; Clerc, Maureen
2013-02-01
Objective. Brain-computer interfaces (BCIs) based on sensorimotor rhythms use a variety of motor tasks, such as imagining moving the right or left hand, the feet or the tongue. Finding the tasks that yield best performance, specifically to each user, is a time-consuming preliminary phase to a BCI experiment. This study presents a new adaptive procedure to automatically select (online) the most promising motor task for an asynchronous brain-controlled button. Approach. We develop for this purpose an adaptive algorithm UCB-classif based on the stochastic bandit theory and design an EEG experiment to test our method. We compare (offline) the adaptive algorithm to a naïve selection strategy which uses uniformly distributed samples from each task. We also run the adaptive algorithm online to fully validate the approach. Main results. By not wasting time on inefficient tasks, and focusing on the most promising ones, this algorithm results in a faster task selection and a more efficient use of the BCI training session. More precisely, the offline analysis reveals that the use of this algorithm can reduce the time needed to select the most appropriate task by almost half without loss in precision, or alternatively, allow us to investigate twice the number of tasks within a similar time span. Online tests confirm that the method leads to an optimal task selection. Significance. This study is the first one to optimize the task selection phase by an adaptive procedure. By increasing the number of tasks that can be tested in a given time span, the proposed method could contribute to reducing ‘BCI illiteracy’.
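A plain UCB1 selector conveys the core idea (the paper's UCB-classif adapts it to classification performance rather than this generic reward model); the per-task reward probabilities below are hypothetical.

```python
# Sketch of a UCB-style motor-task selector; reward probabilities are made up.
import math, random

random.seed(0)
task_quality = [0.55, 0.60, 0.70, 0.58]   # hypothetical per-task accuracies
counts = [0] * 4
sums = [0.0] * 4

for t in range(1, 3001):
    if 0 in counts:
        i = counts.index(0)               # play every task once first
    else:
        # Pick the task with the largest upper confidence bound.
        ucb = [sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
               for i in range(4)]
        i = ucb.index(max(ucb))
    reward = 1.0 if random.random() < task_quality[i] else 0.0
    counts[i] += 1
    sums[i] += reward

print(counts)  # the best task (index 2) should dominate the trials
```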
NASA Astrophysics Data System (ADS)
Gaillot, P.; Bardaine, T.; Lyon-Caen, H.
2004-12-01
In recent years, various automatic phase pickers based on the wavelet transform have been developed. The main motivation for using wavelet transforms is that they are excellent at finding the characteristics of transient signals, they have good time resolution at all periods, and they are easy to program for fast execution. The time-scale properties and flexibility of wavelets thus allow detection of P and S phases in a broad frequency range, making their utilization possible in various contexts. However, the direct application of an automatic picking program in a different context or network than the one for which it was initially developed quickly becomes tedious. In fact, independently of the strategy involved in automatic picking algorithms (window average, autoregressive, beamforming, optimization filtering, neural network), all developed algorithms use different parameters that depend on the objective of the seismological study, the region and the seismological network. Classically, these parameters are defined manually by trial and error or by a calibrated learning stage. In order to facilitate this laborious process, we have developed an automated method that provides optimal parameters for the picking programs. The parameter set can be explored using simulated annealing, a generic name for a family of optimization algorithms based on the principle of stochastic relaxation. The optimization process amounts to systematically modifying an initial realization so as to decrease the value of the objective function, bringing the realization acceptably close to the target statistics. Different formulations of the optimization problem (objective function) are discussed using (1) world seismicity data recorded by the French national seismic monitoring network (ReNass), (2) regional seismicity data recorded in the framework of the Corinth Rift Laboratory (CRL) experiment, (3) induced seismicity data from the gas field of Lacq (Western Pyrenees), and (4) micro-seismicity data from glacier monitoring. The developed method is discussed and tested using our wavelet version of the standard STA-LTA algorithm.
Li, Tiejun; Min, Bin; Wang, Zhiming
2013-03-14
The stochastic integral ensuring the Newton-Leibnitz chain rule is essential in stochastic energetics. The Marcus canonical integral has this property and can be understood as the Wong-Zakai type smoothing limit when the driving process is non-Gaussian. However, this important concept seems not to be well known to physicists. In this paper, we discuss the Marcus integral for non-Gaussian processes and its computation in the context of stochastic energetics. We give a comprehensive introduction to the Marcus integral and compare three equivalent definitions in the literature. We introduce the exact pathwise simulation algorithm and give an error analysis. We show how to compute thermodynamic quantities based on the pathwise simulation algorithm, and we highlight the information hidden in the Marcus mapping, which plays the key role in determining thermodynamic quantities. We further propose a tau-leaping algorithm, which advances the process with deterministic time steps when the tau-leaping condition is satisfied. The numerical experiments and efficiency analysis show that the method is very promising.
Efficient simulation of intrinsic, extrinsic and external noise in biochemical systems.
Pischel, Dennis; Sundmacher, Kai; Flassig, Robert J
2017-07-15
Biological cells operate in a noisy regime influenced by intrinsic, extrinsic and external noise, which leads to large differences between individual cell states. Stochastic effects must be taken into account to characterize biochemical kinetics accurately. Since the exact solution of the chemical master equation, which governs the underlying stochastic process, cannot be derived for most biochemical systems, approximate methods are used to obtain a solution. In this study, a method to efficiently simulate the various sources of noise simultaneously is proposed and benchmarked on several examples. The method relies on the combination of the sigma point approach, to describe extrinsic and external variability, and the τ-leaping algorithm, to account for the stochasticity due to probabilistic reactions. The comparison of our method to extensive Monte Carlo calculations demonstrates an immense computational advantage at the cost of an acceptable loss in accuracy. Additionally, the application to parameter optimization problems in stochastic biochemical reaction networks is shown, an application rarely attempted due to its huge computational burden. To give further insight, a MATLAB script is provided that applies the proposed method to a simple toy example of gene expression. MATLAB code and supplementary data are available at Bioinformatics online.
Algorithmic detectability threshold of the stochastic block model
NASA Astrophysics Data System (ADS)
Kawamoto, Tatsuro
2018-03-01
The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.
Algebraic methods for the solution of some linear matrix equations
NASA Technical Reports Server (NTRS)
Djaferis, T. E.; Mitter, S. K.
1979-01-01
The characterization of polynomials whose zeros lie in certain algebraic domains (and the unification of the ideas of Hermite and Lyapunov) is the basis for developing finite algorithms for the solution of linear matrix equations. Particular attention is given to the equations PA + A'P = Q (the Lyapunov equation) and P - A'PA = Q (the discrete Lyapunov equation). The Lyapunov equation appears in several areas of control theory such as stability theory, optimal control (evaluation of quadratic integrals), stochastic control (evaluation of covariance matrices) and in the solution of the algebraic Riccati equation using Newton's method.
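In modern numerical practice both equations can be solved directly; for instance, SciPy exposes solvers whose conventions must be matched to the forms above. A minimal sketch, with an arbitrary stable A chosen for illustration:

```python
# Sketch: solving the continuous and discrete Lyapunov equations with SciPy.
# Note SciPy's conventions: solve_continuous_lyapunov(a, q) solves a x + x a^T = q,
# and solve_discrete_lyapunov(a, q) solves x - a x a^T = q, so we pass A^T
# to match the forms A'P + PA = Q and P - A'PA = Q used in the text.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_discrete_lyapunov

A = np.array([[-1.0, 0.5], [0.0, -2.0]])   # arbitrary stable matrix
Q = -np.eye(2)                              # negative definite right-hand side

P = solve_continuous_lyapunov(A.T, Q)       # solves A' P + P A = Q
Pd = solve_discrete_lyapunov(A.T, np.eye(2))  # solves P - A' P A = I
print(P, Pd, sep="\n")
```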
DAKOTA Design Analysis Kit for Optimization and Terascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.
2010-02-24
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.
Heuristic rules embedded genetic algorithm for in-core fuel management optimization
NASA Astrophysics Data System (ADS)
Alim, Fatih
The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement in order to achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GA) have already been used to solve this problem for LP optimization for both the PWR and the Boiling Water Reactor (BWR). The GA, a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population by using evolutionary operators. To solve this optimization problem, a LP optimization package, the GARCO (Genetic Algorithm Reactor Code Optimization) code, is developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA is developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation but also the algorithm is changed, to make use of in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis and preliminary results are shown for the VVER-1000 reactor hexagonal-geometry core and the TMI-1 PWR. The core physics code used for the VVER in this research is Moby-Dick, which was developed to analyze the VVER by SKODA Inc. The SIMULATE-3 code, an advanced two-group nodal code, is used to analyze the TMI-1.
Energy management and cooperation in microgrids
NASA Astrophysics Data System (ADS)
Rahbar, Katayoun
Microgrids are key components of future smart power grids, which integrate distributed renewable energy generators to efficiently serve the load demand locally. However, random and intermittent characteristics of renewable energy generations may hinder the reliable operation of microgrids. This thesis is thus devoted to investigating new strategies for microgrids to optimally manage their energy consumption, energy storage system (ESS) and cooperation in real time to achieve reliable and cost-effective operation. This thesis starts with a single microgrid system. The optimal energy scheduling and ESS management policy is derived to minimize the energy cost of the microgrid resulting from drawing conventional energy from the main grid under both the off-line and online setups, where the renewable energy generation/load demand are assumed to be non-causally known and causally known at the microgrid, respectively. The proposed online algorithm is designed based on the optimal off-line solution and works under arbitrary (even unknown) realizations of future renewable energy generation/load demand. Therefore, it is more practically applicable as compared to solutions based on conventional techniques such as dynamic programming and stochastic programming that require prior knowledge of renewable energy generation and load demand realizations/distributions. Next, for a group of microgrids that cooperate in energy management, we study efficient methods for sharing energy among them for both fully and partially cooperative scenarios, where microgrids are of common interest and self-interested, respectively. For the fully cooperative energy management, the off-line optimization problem is first formulated and optimally solved, where a distributed algorithm is proposed to minimize the total (sum) energy cost of the microgrids. Inspired by the results obtained from the off-line optimization, efficient online algorithms are proposed for the real-time energy management, which are of low complexity and work given arbitrary realizations of renewable energy generation/load demand. On the other hand, for self-interested microgrids, the partially cooperative energy management is formulated and a distributed algorithm is proposed to optimize the energy cooperation such that the energy costs of individual microgrids reduce simultaneously over the case without energy cooperation, while limited information is shared among the microgrids and the central controller.
Efficient protocols for Stirling heat engines at the micro-scale
NASA Astrophysics Data System (ADS)
Muratore-Ginanneschi, Paolo; Schwieger, Kay
2015-10-01
We investigate the thermodynamic efficiency of sub-micro-scale Stirling heat engines operating under the conditions described by overdamped stochastic thermodynamics. We show how to construct optimal protocols such that at maximum power the efficiency attains, for constant isotropic mobility, the universal law η = 2η_C/(4 - η_C), where η_C is the efficiency of an ideal Carnot cycle. We show that these protocols are specified by the solution of an optimal mass transport problem. Such a solution can be determined explicitly using well-known Monge-Ampère-Kantorovich reconstruction algorithms. Furthermore, we show that the same law describes the efficiency of heat engines operating at maximum work over short time periods. Finally, we illustrate the straightforward extension of these results to cases where the mobility is anisotropic and temperature dependent.
Optimal design of earth-moving machine elements with cusp catastrophe theory application
NASA Astrophysics Data System (ADS)
Pitukhin, A. V.; Skobtsov, I. G.
2017-10-01
This paper deals with the solution of the optimal design problem for the operator of an earth-moving machine with a roll-over protective structure (ROPS) in terms of catastrophe theory. A brief description of catastrophe theory is presented, the cusp catastrophe is considered, and the control parameters are treated as Gaussian stochastic quantities in the first part of the paper. The statement of the optimal design problem is given in the second part of the paper. It includes the choice of the objective function and independent design variables, and the establishment of system limits. The objective function is defined as the mean total cost, which includes the initial cost and the cost of failure according to the cusp catastrophe probability. An algorithm for a random search method with interval reduction, subject to side and functional constraints, is given in the last part of the paper. The proposed approach to the optimal design problem can be applied to choose rational ROPS parameters, which will increase safety and reduce production and exploitation expenses.
NASA Astrophysics Data System (ADS)
Friedel, Michael; Buscema, Massimo
2016-04-01
Aquatic ecosystem models can potentially be used to understand the influence of stresses on catchment resource quality. Given that catchment responses are functions of natural and anthropogenic stresses reflected in sparse and spatiotemporal biological, physical, and chemical measurements, an ecosystem is difficult to model using statistical or numerical methods. We propose an artificial adaptive systems approach to model ecosystems. First, an unsupervised machine-learning (ML) network is trained using the set of available sparse and disparate data variables. Second, an evolutionary algorithm with genetic doping is applied to reduce the number of ecosystem variables to an optimal set. Third, the optimal set of ecosystem variables is used to retrain the ML network. Fourth, a stochastic cross-validation approach is applied to quantify and compare the nonlinear uncertainty in selected predictions of the original and reduced models. Results are presented for aquatic ecosystems (tens of thousands of square kilometers) undergoing landscape change in the USA: Upper Illinois River Basin and Central Colorado Assessment Project Area, and Southland region, NZ.
Automatic Design of Digital Synthetic Gene Circuits
Marchisio, Mario A.; Stelling, Jörg
2011-01-01
De novo computational design of synthetic gene circuits that achieve well-defined target functions is a hard task. Existing, brute-force approaches run optimization algorithms on the structure and on the kinetic parameter values of the network. However, more direct rational methods for automatic circuit design are lacking. Focusing on digital synthetic gene circuits, we developed a methodology and a corresponding tool for in silico automatic design. For a given truth table that specifies a circuit's input–output relations, our algorithm generates and ranks several possible circuit schemes without the need for any optimization. Logic behavior is reproduced by the action of regulatory factors and chemicals on the promoters and on the ribosome binding sites of biological Boolean gates. Simulations of circuits with up to four inputs show a faithful and unequivocal truth table representation, even under parametric perturbations and stochastic noise. A comparison with already implemented circuits, in addition, reveals the potential for simpler designs with the same function. Therefore, we expect the method to help both in devising new circuits and in simplifying existing solutions. PMID:21399700
Planning for robust reserve networks using uncertainty analysis
Moilanen, A.; Runge, M.C.; Elith, Jane; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.
2006-01-01
Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence–absence in sites, or on species-specific distributions of model-predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence–absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there were no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer programming and stochastic global search.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Qifang; Wang, Fei; Hodge, Bri-Mathias
A real-time price (RTP)-based automatic demand response (ADR) strategy for a PV-assisted electric vehicle (EV) charging station (PVCS) without vehicle-to-grid is proposed. The charging process is modeled as a dynamic linear program instead of the usual day-ahead and real-time regulation strategy, to capture the advantages of both global and real-time optimization. Different from conventional price forecasting algorithms, a dynamic price vector formation model is proposed based on a clustering algorithm to form an RTP vector for a particular day. A dynamic feasible energy demand region (DFEDR) model considering grid voltage profiles is designed to calculate the lower and upper bounds. A deduction method is proposed to deal with the unknown information of future intervals, such as the actual stochastic arrival and departure times of EVs, which makes the DFEDR model suitable for global optimization. Finally, comparative cases articulate the advantages of the developed methods, and the validity of the proposed strategy in reducing electricity costs, mitigating peak charging demand, and improving PV self-consumption is verified through simulation scenarios.
Robust THP Transceiver Designs for Multiuser MIMO Downlink with Imperfect CSIT
NASA Astrophysics Data System (ADS)
Ubaidulla, P.; Chockalingam, A.
2009-12-01
We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.
Optimal estimation of parameters and states in stochastic time-varying systems with time delay
NASA Astrophysics Data System (ADS)
Torkamani, Shahab; Butcher, Eric A.
2013-08-01
In this study estimation of parameters and states in stochastic linear and nonlinear delay differential systems with time-varying coefficients and constant delay is explored. The approach consists of first employing a continuous time approximation to approximate the stochastic delay differential equation with a set of stochastic ordinary differential equations. Then the problem of parameter estimation in the resulting stochastic differential system is represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the resulting system, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states.
Blocked inverted indices for exact clustering of large chemical spaces.
Thiel, Philipp; Sach-Peltason, Lisa; Ottmann, Christian; Kohlbacher, Oliver
2014-09-22
The calculation of pairwise compound similarities based on fingerprints is one of the fundamental tasks in chemoinformatics. Methods for efficient calculation of compound similarities are of the utmost importance for various applications like similarity searching or library clustering. With the increasing size of public compound databases, exact clustering of these databases is desirable, but often computationally prohibitively expensive. We present an optimized inverted index algorithm for the calculation of all pairwise similarities on 2D fingerprints of a given data set. In contrast to other algorithms, it neither requires GPU computing nor yields a stochastic approximation of the clustering. The algorithm has been designed to work well with multicore architectures and shows excellent parallel speedup. As an application example of this algorithm, we implemented a deterministic clustering application, which has been designed to decompose virtual libraries comprising tens of millions of compounds in a short time on current hardware. Our results show that our implementation achieves more than 400 million Tanimoto similarity calculations per second on a common desktop CPU. Deterministic clustering of the available chemical space thus can be done on modern multicore machines within a few days.
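The core kernel being accelerated here is the Tanimoto coefficient on bit fingerprints; a naive popcount-based version (without the paper's inverted-index pruning or parallelism) can be sketched as follows, with random fingerprints standing in for real 2D fingerprints.

```python
# Minimal sketch of pairwise Tanimoto similarity on bit fingerprints using
# integer popcounts; random fingerprints stand in for real 2D fingerprints.
import random

random.seed(0)
N_BITS = 1024

def random_fp():
    # Sparse random fingerprint stored as a Python int bit mask.
    fp = 0
    for _ in range(60):
        fp |= 1 << random.randrange(N_BITS)
    return fp

def tanimoto(a, b):
    inter = bin(a & b).count("1")
    union = bin(a).count("1") + bin(b).count("1") - inter
    return inter / union if union else 1.0

fps = [random_fp() for _ in range(100)]
sims = [tanimoto(fps[i], fps[j])
        for i in range(100) for j in range(i + 1, 100)]
print(max(sims))
```

An inverted index accelerates exactly this computation by iterating only over fingerprint pairs that share at least one set bit, which is where the paper's speedups come from.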
Optimizing Constrained Single Period Problem under Random Fuzzy Demand
NASA Astrophysics Data System (ADS)
Taleizadeh, Ata Allah; Shavandi, Hassan; Riazi, Afshin
2008-09-01
In this paper, we consider the multi-product multi-constraint newsboy problem with random fuzzy demands and total discount. The demand for the products is often stochastic in the real world, but the estimation of the parameters of the distribution function may be done in a fuzzy manner, so an appropriate option for modeling the demand of products is to use random fuzzy variables. The objective function of the proposed model is to maximize the expected profit of the newsboy. We consider constraints such as warehouse space and restrictions on the order quantity for products, and a restriction on budget. We also consider the batch size for product orders. Finally we introduce a random fuzzy multi-product multi-constraint newsboy problem (RFM-PM-CNP), which is changed to a multi-objective mixed integer nonlinear programming model. Furthermore, a hybrid intelligent algorithm based on a genetic algorithm, Pareto and TOPSIS is presented for the developed model. Finally an illustrative example is presented to show the performance of the developed model and algorithm.
Accurate modeling of switched reluctance machine based on hybrid trained WNN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Shoujun, E-mail: sunnyway@nwpu.edu.cn; Ge, Lefei; Ma, Shaojie
2014-04-15
According to the strong nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with a gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from initial weights obtained by the improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.
A Comparison of Techniques for Scheduling Earth-Observing Satellites
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2004-01-01
Scheduling observations by coordinated fleets of Earth Observing Satellites (EOS) involves large search spaces, complex constraints and poorly understood bottlenecks, conditions under which evolutionary and related algorithms are often effective. However, there are many such algorithms and the best one to use is not clear. Here we compare multiple variants of the genetic algorithm with stochastic hill climbing, simulated annealing, squeaky wheel optimization and iterated sampling on ten realistically sized EOS scheduling problems. Schedules are represented by a permutation (a non-temporal ordering) of the observation requests. A simple deterministic scheduler assigns times and resources to each observation request in the order indicated by the permutation, discarding those that violate the constraints created by previously scheduled observations. Simulated annealing performs best. Random mutation outperforms a more 'intelligent' mutator. Furthermore, the best mutator, by a small margin, was a novel approach we call temperature-dependent random sampling, which makes large changes in the early stages of evolution and smaller changes towards the end of the search.
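A minimal version of this permutation representation and greedy decoder, paired with a simple stochastic hill climber, might look as follows; the one-resource conflict model and the request data are invented for illustration and are far simpler than real EOS constraints.

```python
# Sketch of the permutation representation described above: a deterministic
# greedy decoder assigns requests in permutation order and discards conflicts.
import random

random.seed(0)
# Each request is a (start time, duration) pair on one shared sensor.
requests = [(random.uniform(0, 100), random.uniform(1, 5)) for _ in range(40)]

def decode(perm):
    scheduled = []  # list of (start, end) intervals already accepted
    for idx in perm:
        start, dur = requests[idx]
        end = start + dur
        if all(end <= s or start >= e for s, e in scheduled):
            scheduled.append((start, end))
    return len(scheduled)  # fitness: number of scheduled observations

perm = list(range(len(requests)))
best = decode(perm)
for _ in range(2000):                 # stochastic hill climbing on the permutation
    i, j = random.sample(range(len(perm)), 2)
    perm[i], perm[j] = perm[j], perm[i]
    score = decode(perm)
    if score >= best:
        best = score
    else:
        perm[i], perm[j] = perm[j], perm[i]  # revert a worsening swap

print(best)
```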
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Solikhin
2016-06-01
In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem in which the demand and purchasing cost parameters are random. For each time period, using the proposed model, we decide the optimal supplier and calculate the optimal product volume purchased from that supplier so that the inventory level is located as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. From the results, for each time period, the proposed model generated the optimal supplier and the inventory level tracked the reference point well.
Stochastic Routing and Scheduling Policies for Energy Harvesting Communication Networks
NASA Astrophysics Data System (ADS)
Calvo-Fullana, Miguel; Anton-Haro, Carles; Matamoros, Javier; Ribeiro, Alejandro
2018-07-01
In this paper, we study the joint routing-scheduling problem in energy harvesting communication networks. Our policies, which are based on stochastic subgradient methods in the dual domain, act as an energy harvesting variant of the stochastic family of backpressure algorithms. Specifically, we propose two policies: (i) the Stochastic Backpressure with Energy Harvesting (SBP-EH), in which a node's routing-scheduling decisions are determined by the difference between the Lagrange multipliers associated with its queue stability constraints and its neighbors'; and (ii) the Stochastic Soft Backpressure with Energy Harvesting (SSBP-EH), an improved algorithm in which the routing-scheduling decision is of a probabilistic nature. For both policies, we show that given sustainable data and energy arrival rates, the stability of the data queues over all network nodes is guaranteed. Numerical results corroborate the stability guarantees and illustrate the minimal gap in performance that our policies offer with respect to classical ones that work with an unlimited energy supply.
Stochastic Multiscale Analysis and Design of Engine Disks
2010-07-28
... shown recently to fail when used with data-driven non-linear stochastic input models (KPCA, IsoMap, etc.); need for scalable exascale computing algorithms. Materials Process Design and Control Laboratory, Cornell University.
Parallel, stochastic measurement of molecular surface area.
Juba, Derek; Varshney, Amitabh
2008-08-01
Biochemists often wish to compute surface areas of proteins. A variety of algorithms have been developed for this task, but they are designed for traditional single-processor architectures. The current trend in computer hardware is towards increasingly parallel architectures for which these algorithms are not well suited. We describe a parallel, stochastic algorithm for molecular surface area computation that maps well to the emerging multi-core architectures. Our algorithm is also progressive, providing a rough estimate of surface area immediately and refining this estimate as time goes on. Furthermore, the algorithm generates points on the molecular surface which can be used for point-based rendering. We demonstrate a GPU implementation of our algorithm and show that it compares favorably with several existing molecular surface computation programs, giving fast estimates of the molecular surface area with good accuracy.
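A stochastic surface-area estimate in this spirit (sampling points on each atom's expanded sphere and rejecting buried ones) can be sketched in a few lines; the coordinates, radii, and probe radius below are arbitrary stand-ins, and the parallel/GPU aspects of the paper are omitted.

```python
# Stochastic sketch of solvent-accessible surface area: sample points on each
# atom's expanded sphere and keep those not buried in any other atom.
# Coordinates and radii are random stand-ins, not a real molecule.
import numpy as np

rng = np.random.default_rng(5)
centers = rng.uniform(0, 10, (20, 3))          # 20 fake "atoms"
radii = rng.uniform(1.2, 2.0, 20) + 1.4        # van der Waals + probe radius

area = 0.0
n_samples = 2000
for c, r in zip(centers, radii):
    v = rng.standard_normal((n_samples, 3))
    pts = c + r * v / np.linalg.norm(v, axis=1, keepdims=True)  # on the sphere
    # A point is exposed if it lies outside every expanded sphere.
    d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
    exposed = (d >= radii[None, :] - 1e-9).sum(axis=1) >= len(centers)
    area += 4 * np.pi * r**2 * exposed.mean()   # exposed fraction of sphere

print(area)
```

The estimate sharpens as n_samples grows, which is the progressive behavior the abstract describes: a rough answer immediately, refined over time.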
General Results in Optimal Control of Discrete-Time Nonlinear Stochastic Systems
1988-01-01
P. J. McLane, "Optimal Stochastic Control of Linear System. with State- and Control-Dependent Distur- bances," ZEEE Trans. 4uto. Contr., Vol. 16, No...Vol. 45, No. 1, pp. 359-362, 1987 (9] R. R. Mohler and W. J. Kolodziej, "An Overview of Stochastic Bilinear Control Processes," ZEEE Trans. Syst...34 J. of Math. anal. App.:, Vol. 47, pp. 156-161, 1974 [14) E. Yaz, "A Control Scheme for a Class of Discrete Nonlinear Stochastic Systems," ZEEE Trans
Stochastic Local Search for Core Membership Checking in Hedonic Games
NASA Astrophysics Data System (ADS)
Keinänen, Helena
Hedonic games have emerged as an important tool in economics and show promise as a useful formalism to model multi-agent coalition formation in AI as well as group formation in social networks. We consider a coNP-complete problem of core membership checking in hedonic coalition formation games. No previous algorithms to tackle the problem have been presented. In this work, we overcome this by developing two stochastic local search algorithms for core membership checking in hedonic games. We demonstrate the usefulness of the algorithms by showing experimentally that they find solutions efficiently, particularly for large agent societies.
Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2014-03-01
The hybrid Monte Carlo algorithm (HMCA) is applied to Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. We thus conclude that the HMCA with the 2MNI is an efficient algorithm for parameter estimation of the RSV model.
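For reference, a minimal HMC update with the conventional leapfrog integrator (the baseline the 2MNI is compared against) can be sketched as follows. Here logp and grad_logp stand for a generic log-posterior and its gradient, not the RSV model specifically; step size and trajectory length are placeholder values.

```python
import numpy as np

def hmc_step(theta, logp, grad_logp, eps=0.05, n_leap=20, rng=None):
    """One Hybrid Monte Carlo update with the standard leapfrog integrator
    (the 2MNI integrator of the paper would replace leapfrog in this loop)."""
    rng = np.random.default_rng() if rng is None else rng
    p = rng.normal(size=theta.shape)                  # fresh Gaussian momenta
    theta_new, p_new = theta.copy(), p.copy()
    p_new += 0.5 * eps * grad_logp(theta_new)         # half step in momentum
    for _ in range(n_leap - 1):
        theta_new += eps * p_new                      # full step in position
        p_new += eps * grad_logp(theta_new)
    theta_new += eps * p_new
    p_new += 0.5 * eps * grad_logp(theta_new)         # final half step
    # Metropolis accept/reject on the change in the Hamiltonian
    dH = (logp(theta_new) - 0.5 * p_new @ p_new) - (logp(theta) - 0.5 * p @ p)
    return theta_new if np.log(rng.uniform()) < dH else theta
```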
Low Frequency Predictive Skill Despite Structural Instability and Model Error
2014-09-30
Majda, based on earlier theoretical work. 1. Dynamic Stochastic Superresolution of sparsely observed turbulent systems, M. Branicki (postdoc) ... of numerical models. Here, we introduce and study a suite of general Dynamic Stochastic Superresolution (DSS) algorithms and show that, by ... resolving subgrid-scale turbulence through Dynamic Stochastic Superresolution utilizing aliased grids is a potential breakthrough for practical online ...
Empirical method to measure stochasticity and multifractality in nonlinear time series
NASA Astrophysics Data System (ADS)
Lin, Chih-Hao; Chang, Chia-Seng; Li, Sai-Ping
2013-12-01
An empirical algorithm is used here to study the stochastic and multifractal nature of nonlinear time series. A parameter can be defined to quantitatively measure the deviation of a time series from a Wiener process, so that the stochasticity of different time series can be compared. The local volatility of the time series under study can be constructed using this algorithm, and the multifractal structure of the time series can be analyzed by means of this local volatility. As an example, we employ this method to analyze financial time series from different stock markets. The result shows that while developed markets evolve very much like an Itô process, emergent markets are far from efficient. Differences in the multifractal structures and leverage effects between developed and emergent markets are discussed. The algorithm used here can be applied in a similar fashion to study time series of other complex systems.
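The paper's exact deviation parameter is not reproduced here, but a crude analogue in the same spirit checks how the variance of increments scales with lag, since for a Wiener process it grows linearly. The sketch below, with the hypothetical name scaling_exponent, illustrates that style of test; it is not the authors' definition.

```python
import numpy as np

def scaling_exponent(x, lags=(1, 2, 4, 8, 16, 32)):
    """For a Wiener process, Var[x(t+tau) - x(t)] grows linearly in tau,
    so the fitted exponent gamma is ~1; |gamma - 1| serves as a rough
    deviation-from-Wiener measure (an assumed stand-in, not the paper's)."""
    x = np.asarray(x, float)
    var = [np.var(x[l:] - x[:-l]) for l in lags]
    gamma, _ = np.polyfit(np.log(lags), np.log(var), 1)  # log-log slope
    return gamma

# Sanity check: a plain random walk should give gamma close to 1.
walk = np.cumsum(np.random.default_rng(0).normal(size=100_000))
print(scaling_exponent(walk))
```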
Reserve selection with land market feedbacks.
Butsic, Van; Lewis, David J; Radeloff, Volker C
2013-01-15
How best to site reserves is a leading question for conservation biologists. Recently, reserve selection has emphasized efficient conservation: maximizing conservation goals given the reality of limited conservation budgets. This line of work indicates that land markets can potentially undermine the conservation benefits of reserves by increasing property values and development probabilities near reserves. Here we propose a reserve selection methodology that optimizes conservation given both a budget constraint and land market feedbacks, using a combination of econometric models along with stochastic dynamic programming. We show that amenity-based feedbacks can be accounted for in optimal reserve selection by choosing property price and land development models which exogenously estimate the effects of reserve establishment. In our empirical example, we use previously estimated models of land development and property prices to select parcels to maximize coarse woody debris along 16 lakes in Vilas County, WI, USA. Using each lake as an independent experiment, we find that including land market feedbacks in the reserve selection algorithm has only small effects on conservation efficacy. Likewise, we find that in our setting heuristic (minloss and maxgain) algorithms perform nearly as well as the optimal selection strategy. We emphasize that land market feedbacks can be included in optimal reserve selection; the extent to which this improves reserve placement will likely vary across landscapes. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ramos, José A.; Mercère, Guillaume
2016-12-01
In this paper, we present an algorithm for identifying two-dimensional (2D) causal, recursive and separable-in-denominator (CRSD) state-space models in the Roesser form with deterministic-stochastic inputs. The algorithm implements the N4SID, PO-MOESP and CCA methods, which are well known in the literature on 1D system identification, but here we do so for the 2D CRSD Roesser model. The algorithm solves the 2D system identification problem by maintaining the constraint structure imposed by the problem (i.e. Toeplitz and Hankel) and computes the horizontal and vertical system orders, system parameter matrices and covariance matrices of a 2D CRSD Roesser model. From a computational point of view, the algorithm has been presented in a unified framework, where the user can select which of the three methods to use. Furthermore, the identification task is divided into three main parts: (1) computing the deterministic horizontal model parameters, (2) computing the deterministic vertical model parameters and (3) computing the stochastic components. Specific attention has been paid to the computation of a stabilised Kalman gain matrix and a positive real solution when required. The efficiency and robustness of the unified algorithm have been demonstrated via a thorough simulation example.
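As a small illustration of the data objects involved, the sketch below builds the 1D block-Hankel matrix from which N4SID, PO-MOESP, and CCA all extract the state sequence; the 2D CRSD Roesser case additionally requires maintaining such Hankel/Toeplitz structure in both the horizontal and vertical directions. The helper name is hypothetical.

```python
import numpy as np

def block_hankel(y, rows):
    """Block-Hankel matrix of a signal y (samples along axis 0) -- the
    basic building block of subspace identification methods."""
    y = np.asarray(y, float)
    if y.ndim == 1:
        y = y[:, None]                       # treat a 1-D signal as one channel
    cols = y.shape[0] - rows + 1
    return np.vstack([y[i:i + cols].T for i in range(rows)])

# Example: two block-rows of a scalar signal.
print(block_hankel([1, 2, 3, 4, 5], rows=2))
# [[1. 2. 3. 4.]
#  [2. 3. 4. 5.]]
```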
Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks.
Vestergaard, Christian L; Génois, Mathieu
2015-10-01
Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling.
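A heavily simplified Python rendering of the temporal Gillespie idea for SIS dynamics is sketched below (the paper itself provides pseudocode and a C++ implementation; this piecewise-constant-rate version and its names are illustrative assumptions). Each snapshot is an edge list active for a window dt, and a unit-rate exponential deadline is consumed across windows until an event fires.

```python
import numpy as np

def temporal_gillespie_sis(snapshots, dt, beta, mu, infected, rng=None):
    """Minimal temporal Gillespie sketch for SIS on a temporal network.
    `snapshots` is a list of edge lists, each active for a window dt;
    rates are beta per S-I contact and mu per infected node (recovery).
    The set `infected` is updated in place and returned."""
    rng = np.random.default_rng() if rng is None else rng
    tau = rng.exponential()                  # unit-rate "statistical time" to consume
    for edges in snapshots:
        remaining = dt
        while True:
            si = [(u, v) for u, v in edges
                  if (u in infected) != (v in infected)]
            rate = beta * len(si) + mu * len(infected)
            if rate * remaining < tau:       # no event in what is left of this window
                tau -= rate * remaining
                break
            remaining -= tau / rate          # advance to the event time
            if rng.uniform() < beta * len(si) / rate:        # infection event
                u, v = si[rng.integers(len(si))]
                infected.add(v if u in infected else u)
            else:                                            # recovery event
                infected.discard(list(infected)[rng.integers(len(infected))])
            tau = rng.exponential()          # draw the next waiting time
    return infected
```

The key trick, as in the abstract, is that no rejection sampling is needed: the cumulative rate is integrated across network snapshots until the exponential deadline is reached.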
Rabotyagov, Sergey; Campbell, Todd; Valcu, Adriana; Gassman, Philip; Jha, Manoj; Schilling, Keith; Wolter, Calvin; Kling, Catherine
2012-12-09
Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economic methods for finding the lowest-cost solution in the watershed context (e.g.,(5,12,20)) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework in which the model becomes an integral part of the optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding body of research attempts to use similar methods, integrating water quality models with broadly defined evolutionary optimization methods(3,4,9,10,13-15,17-19,22,23,25). In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates the modern and commonly used SWAT water quality model(7) with the multiobjective evolutionary algorithm SPEA2(26) and a user-specified set of conservation practices and their costs to search for the complete tradeoff frontiers between costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for the selection of watershed configurations achieving specified water quality improvement goals and the production of maps of optimized placement of conservation practices.
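To make the evolutionary loop concrete, here is a toy sketch of the search over practice placements with two minimization objectives. The callables cost and pollution are stand-ins for the per-candidate SWAT evaluation, and the simple nondominated-parent selection is a deliberate simplification of SPEA2, not its actual fitness assignment.

```python
import random

def dominates(f, g):
    """Pareto dominance for minimization of (cost, pollution)."""
    return all(a <= b for a, b in zip(f, g)) and f != g

def tradeoff_frontier(n_fields, cost, pollution, pop_size=60, gens=200, p_mut=0.02):
    """Toy multiobjective EA: bit i of a candidate means a conservation
    practice is installed on field i.  `cost` and `pollution` replace the
    SWAT model run used in the real simulation-optimization framework."""
    pop = [[random.random() < 0.5 for _ in range(n_fields)]
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = [(ind, (cost(ind), pollution(ind))) for ind in pop]
        parents = [ind for ind, f in scored
                   if not any(dominates(g, f) for _, g in scored)]
        pop = []
        while len(pop) < pop_size:
            a, b = random.choice(parents), random.choice(parents)
            cut = random.randrange(1, n_fields)
            child = a[:cut] + b[cut:]                      # one-point crossover
            pop.append([not g if random.random() < p_mut else g
                        for g in child])                   # bit-flip mutation
    scored = [(ind, (cost(ind), pollution(ind))) for ind in pop]
    return [ind for ind, f in scored
            if not any(dominates(g, f) for _, g in scored)]  # final nondominated set
```

The returned nondominated set approximates the cost-versus-water-quality tradeoff frontier described in the abstract.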
Wallace, Meredith L; Anderson, Stewart J; Mazumdar, Sati
2010-12-20
Missing covariate data present a challenge to tree-structured methodology because a single tree model, as opposed to an estimated parameter value, may be desired for use in a clinical setting. To address this problem, we suggest a multiple imputation algorithm that adds draws of stochastic error to a tree-based single imputation method presented by Conversano and Siciliano (Technical Report, University of Naples, 2003). Unlike previously proposed techniques for accommodating missing covariate data in tree-structured analyses, our methodology allows the modeling of complex and nonlinear covariate structures while still resulting in a single tree model. We perform a simulation study to evaluate our stochastic multiple imputation algorithm when covariate data are missing at random, and compare it to other currently used methods. Our algorithm is advantageous for identifying the true underlying covariate structure when complex data and larger percentages of missing covariate observations are present. It is competitive with other current methods with respect to prediction accuracy. To illustrate our algorithm, we create a tree-structured survival model for predicting time to treatment response in older, depressed adults. Copyright © 2010 John Wiley & Sons, Ltd.
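A simplified rendering of the idea, adding resampled-residual noise to a regression tree's single-imputation predictions, might look like the sketch below. It assumes the remaining covariates are complete and uses scikit-learn's DecisionTreeRegressor as a stand-in for the Conversano-Siciliano imputation tree; names and tuning values are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def tree_multiple_impute(X, y_col, m=5, rng=None):
    """Multiple imputation sketch: fit a regression tree for the incomplete
    covariate, then add draws of stochastic error (resampled residuals) to
    the tree's single-imputation predictions, yielding m completed copies.
    Assumes columns other than `y_col` contain no missing values."""
    rng = np.random.default_rng() if rng is None else rng
    miss = np.isnan(X[:, y_col])
    Z = np.delete(X, y_col, axis=1)                   # predictor covariates
    tree = DecisionTreeRegressor(min_samples_leaf=20)
    tree.fit(Z[~miss], X[~miss, y_col])
    resid = X[~miss, y_col] - tree.predict(Z[~miss])  # empirical error distribution
    copies = []
    for _ in range(m):
        Xi = X.copy()
        Xi[miss, y_col] = tree.predict(Z[miss]) + rng.choice(resid, size=miss.sum())
        copies.append(Xi)
    return copies   # analyze each completed copy, then pool (e.g., Rubin's rules)
```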