A chance-constrained stochastic approach to intermodal container routing problems.
Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony
2018-01-01
We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost.
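The on-time-delivery chance constraint described in this abstract has a standard deterministic equivalent when the stochastic times are assumed Gaussian. A minimal sketch with hypothetical leg data (not the paper's network, model, or parameters):

```python
import math
from statistics import NormalDist

# Hypothetical route data (hours): mean and standard deviation of each leg's
# travel or transfer time, assumed independent and Gaussian for illustration.
legs = [(30.0, 4.0), (6.0, 2.0), (18.0, 3.0)]  # (mean, sigma)

deadline = 65.0  # contractual delivery deadline (hours)
alpha = 0.95     # required on-time probability

mu = sum(m for m, _ in legs)
sigma = math.sqrt(sum(s * s for _, s in legs))

# P(total time <= deadline) >= alpha  <=>  mu + z_alpha * sigma <= deadline
z = NormalDist().inv_cdf(alpha)
feasible = mu + z * sigma <= deadline
```

Under normality, each candidate route can therefore be screened with a single closed-form test rather than by simulation.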

Interactive two-stage stochastic fuzzy programming for water resources management.
Wang, S; Huang, G H
2011-08-01
In this study, an interactive two-stage stochastic fuzzy programming (ITSFP) approach has been developed through incorporating an interactive fuzzy resolution (IFR) method within an inexact two-stage stochastic programming (ITSP) framework. ITSFP can not only tackle dual uncertainties presented as fuzzy boundary intervals that exist in the objective function and the left- and right-hand sides of constraints, but also permit in-depth analyses of various policy scenarios that are associated with different levels of economic penalties when the promised policy targets are violated. A management problem in terms of water resources allocation has been studied to illustrate applicability of the proposed approach. The results indicate that a set of solutions under different feasibility degrees has been generated for planning the water resources allocation. They can help the decision makers (DMs) to conduct in-depth analyses of tradeoffs between economic efficiency and constraint-violation risk, as well as enable them to identify, in an interactive way, a desired compromise between satisfaction degree of the goal and feasibility of the constraints (i.e., risk of constraint violation). Copyright © 2011 Elsevier Ltd. All rights reserved.
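The two-stage structure underlying ITSP-type models can be illustrated with a toy recourse problem: a first-stage allocation target is fixed before water availability is known, and a second-stage penalty is paid when the promised target cannot be met. All numbers below are hypothetical, and the fuzzy/interval machinery of ITSFP is not reproduced:

```python
# Hypothetical scenarios: seasonal water availability (units) and probabilities.
scenarios = [(40.0, 0.2), (70.0, 0.5), (100.0, 0.3)]
benefit = 5.0   # net benefit per unit of water actually delivered
penalty = 12.0  # recourse penalty per unit of promised-but-undelivered water

def expected_profit(target):
    """First stage: promise `target`; second stage: recourse once availability is known."""
    total = 0.0
    for avail, prob in scenarios:
        delivered = min(target, avail)
        shortage = target - delivered
        total += prob * (benefit * delivered - penalty * shortage)
    return total

# Enumerate candidate targets; the best one balances benefit against penalty risk.
best_target = max(range(0, 101, 5), key=expected_profit)
```

The economic-penalty tradeoff the abstract describes is visible directly: raising the target beyond the best value increases expected penalties faster than expected benefits.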
Investment portfolio of a pension fund: Stochastic model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bosch-Princep, M.; Fontanals-Albiol, H.
1994-12-31
This paper presents a stochastic programming model that aims to determine the optimal investment portfolio of a pension fund. The model has been designed bearing in mind the liabilities of the fund to its members. The essential characteristic of the objective function and the constraints is the randomness of the coefficients and the right-hand side of the constraints, so it is necessary to use techniques of stochastic mathematical programming to obtain information about the amount of money that should be assigned to each sort of investment. It is also important to know the attitude towards risk of the person who has to take the decisions. The model incorporates the relation between the different coefficients of the objective function and constraints of each period of the temporal horizon through linear and discrete random processes. Likewise, it includes hypotheses related to Spanish law concerning pension funds.
Finding optimal vaccination strategies under parameter uncertainty using stochastic programming.
Tanner, Matthew W; Sattenspiel, Lisa; Ntaimo, Lewis
2008-10-01
We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum cost vaccination policy under a chance-constraint. The chance-constraint requires that the probability that R(*)
Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Xiao; Dong, Jin; Djouadi, Seddik M
2015-01-01
The key goal in energy efficient buildings is to reduce the energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ constrained Stochastic Linear Quadratic Control (cSLQC), minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results demonstrate the effectiveness and power efficiency of the proposed control approach.
Strategic planning for disaster recovery with stochastic last mile distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Russell Whitford; Van Hentenryck, Pascal; Coffrin, Carleton
2010-01-01
This paper considers the single commodity allocation problem (SCAP) for disaster recovery, a fundamental problem faced by all populated areas. SCAPs are complex stochastic optimization problems that combine resource allocation, warehouse routing, and parallel fleet routing. Moreover, these problems must be solved under tight runtime constraints to be practical in real-world disaster situations. This paper formalizes the specification of SCAPs and introduces a novel multi-stage hybrid-optimization algorithm that utilizes the strengths of mixed integer programming, constraint programming, and large neighborhood search. The algorithm was validated on hurricane disaster scenarios generated by Los Alamos National Laboratory using state-of-the-art disaster simulation tools and is deployed to aid federal organizations in the US.
FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.
Li, Pu; Chen, Bing
2011-04-01
Although many studies on municipal solid waste (MSW) management have been conducted under uncertain conditions involving coexisting fuzzy, stochastic, and interval information, conventional linear programming approaches that integrate the fuzzy method with the other two have been inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming to support municipal solid waste management. Nguyen's method was used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as the superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method improves upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, requiring fewer constraints and significantly less computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions were generated. The solution can help quantify the relationship between changes in system cost and the uncertainties, which could support further analysis of tradeoffs between waste management cost and system-failure risk. Copyright © 2010 Elsevier Ltd. All rights reserved.
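Nguyen's attainment-value computation is not reproduced here; as a simpler stand-in, a triangular fuzzy quantity can be reduced to a crisp value by its centroid and the resulting crisp numbers compared directly. The cost figures are hypothetical:

```python
def centroid(a, m, b):
    """Crisp (defuzzified) value of a triangular fuzzy number (a, m, b)."""
    return (a + m + b) / 3.0

# Hypothetical fuzzy treatment costs ($/tonne): lower, modal, upper bounds.
landfill = (120.0, 150.0, 210.0)
incineration = (140.0, 160.0, 175.0)

# A wide upper tail raises the centroid even when the modal cost is lower.
cheaper = min((landfill, incineration), key=lambda t: centroid(*t))
```

Any such reduction discards shape information, which is precisely why methods like Nguyen's track superiority and inferiority between the fuzzy numbers instead of a single crisp score.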
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.
1971-01-01
An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show that this method yields better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained with a 600 MeV cyclotron are given.
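A creeping random search of the kind described above is straightforward to sketch: perturb the current parameter vector and accept only feasible improvements, which is how parameter constraints rule out infeasible solutions by construction. The quadratic objective below is a toy stand-in for the beam-transport merit function:

```python
import random

random.seed(42)

def creeping_random_search(f, x0, lo, hi, step=0.5, iters=2000):
    """Sequential random perturbation: accept a perturbed point only if it
    stays inside the box constraints and improves the objective."""
    x = list(x0)
    fx = f(x)
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]
        if all(l <= c <= h for c, l, h in zip(cand, lo, hi)):
            fc = f(cand)
            if fc < fx:
                x, fx = cand, fc
    return x, fx

# Toy merit function standing in for the beam resolution objective.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_opt, f_opt = creeping_random_search(f, [3.0, 3.0], [-5.0, -5.0], [5.0, 5.0])
```

Because candidates outside the box are rejected before evaluation, the incumbent is feasible at every iteration, mirroring the paper's point about eliminating infeasible solutions.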
A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem.
Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming
2015-01-01
Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereby reduce airspace congestion and flight delays. Conventional ATFM models assume that the capacities of airports or airspace sectors are all predetermined. However, capacity uncertainties due to the dynamics of convective weather may make deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computational experiments using real-world air traffic network data combined with simulated weather data show that the presented model has far fewer constraints than a stochastic model with nonanticipative constraints, reducing the computational complexity.
Multi-hazard evacuation route and shelter planning for buildings.
DOT National Transportation Integrated Search
2014-06-01
A bi-level, two-stage, binary stochastic program with equilibrium constraints, and three variants, are presented that support the planning and design of shelters and exits, along with hallway fortification strategies and associated evacuation pat...
Using genetic algorithm to solve a new multi-period stochastic optimization model
NASA Astrophysics Data System (ADS)
Zhang, Xin-Li; Zhang, Ke-Cun
2009-09-01
This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi (Ed.), The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], which was called a hybrid model. However, transaction costs were not considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth-loss risk while maximizing the expected utility; (2) typical market imperfections such as short-sale constraints and proportional transaction costs are considered simultaneously; (3) applying a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
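CVaR itself is easy to estimate from simulated paths: it is the average loss in the worst (1 − β) tail of the loss distribution. A sketch with hypothetical Gaussian losses (the paper's scenario tree and utility terms are not reproduced):

```python
import random

random.seed(0)

def cvar(losses, beta=0.95):
    """Conditional value-at-risk: mean of the worst (1 - beta) fraction of losses."""
    ordered = sorted(losses)
    tail = ordered[int(len(ordered) * beta):]  # worst 5% when beta = 0.95
    return sum(tail) / len(tail)

# Simulated one-period portfolio losses (negative values are gains).
losses = [random.gauss(-0.01, 0.05) for _ in range(100_000)]
risk = cvar(losses, beta=0.95)
```

Because this sample estimate is a simple tail average, it slots naturally into the genetic algorithm's fitness evaluation: each candidate asset mix is simulated, and its CVaR is penalized or constrained.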
Water resources planning and management : A stochastic dual dynamic programming approach
NASA Astrophysics Data System (ADS)
Goor, Q.; Pinte, D.; Tilmant, A.
2008-12-01
Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be made taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale non-linear optimization problems (NLP), seeking to maximize net benefits from the system operation while meeting operational and/or institutional constraints and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent to the availability of water, essentially because of the computational difficulties associated with stochastic formulations. The purpose of this presentation is to describe a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while considering hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem has to be a linear program. Recent developments improve the representation of the non-linear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function.
This model is illustrated on a cascade of 14 reservoirs on the Nile river basin.
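The full SDDP machinery is far too large to reproduce, but its central trick, outer approximation of a concave benefit-to-go function by linear cuts so that each one-stage problem stays a linear program, can be sketched in a few lines. The value function below is hypothetical:

```python
import math

def true_value(s):
    """Hypothetical concave benefit-to-go of stored water s (not the paper's)."""
    return 10.0 * math.sqrt(s)

def tangent_cut(s0):
    """Linear cut supporting the concave function at s0: (slope, intercept)."""
    slope = 5.0 / math.sqrt(s0)               # derivative of 10*sqrt(s)
    return slope, true_value(s0) - slope * s0

# Cuts collected at sampled storage levels, as SDDP's backward pass would.
cuts = [tangent_cut(s0) for s0 in (1.0, 4.0, 9.0, 16.0)]

def approx_value(s):
    """Outer approximation: a concave function lies below every tangent,
    so the pointwise minimum of the cuts bounds it from above."""
    return min(a * s + b for a, b in cuts)
```

Each cut is linear in s, so embedding `approx_value` in the one-stage problem preserves linearity, which is exactly the requirement stated above.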
Yu Wei; Michael Bevers; Erin Belval; Benjamin Bird
2015-01-01
This research developed a chance-constrained two-stage stochastic programming model to support wildfire initial attack resource acquisition and location on a planning unit for a fire season. Fire growth constraints account for the interaction between fire perimeter growth and construction to prevent overestimation of resource requirements. We used this model to examine...
Diffusion Processes Satisfying a Conservation Law Constraint
Bakosi, J.; Ristorcelli, J. R.
2014-03-04
We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, to be realizable, any stochastic differential equation model must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner's generalized Dirichlet processes.
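A one-dimensional instance of such a process is a Wright-Fisher-type diffusion, whose diffusion coefficient sqrt(X(1-X)) vanishes at the boundaries, illustrating the coupled, nonlinear diffusion term the developed constraints require. A minimal Euler-Maruyama sketch (parameters are illustrative, and the hard clipping is a discretization guard, not part of the continuous model):

```python
import math
import random

random.seed(1)

def wright_fisher_path(x0=0.3, a=1.0, b=1.0, dt=1e-4, steps=20000):
    """Euler-Maruyama for dX = (a*(1 - X) - b*X) dt + sqrt(X*(1 - X)) dW.
    The diffusion term vanishes at 0 and 1, so X and 1 - X stay non-negative
    and their sum is conserved at one."""
    x = x0
    sqdt = math.sqrt(dt)
    for _ in range(steps):
        drift = a * (1.0 - x) - b * x
        diff = math.sqrt(max(x * (1.0 - x), 0.0))
        x += drift * dt + diff * sqdt * random.gauss(0.0, 1.0)
        x = min(max(x, 0.0), 1.0)  # guard against discretization overshoot
    return x

x_final = wright_fisher_path()
```

In the continuous-time process the boundary behaviour is enforced by the vanishing diffusion coefficient itself; the clipping line only compensates for the finite time step of the numerical scheme.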
Duan, Qianqian; Yang, Genke; Xu, Guanglin; Pan, Changchun
2014-01-01
This paper develops an approximation method for scheduling refinery crude oil operations under demand uncertainty. In the stochastic model the demand uncertainty is modeled as random variables that follow a joint multivariate distribution with a specific correlation structure. Compared to the deterministic models in existing works, the stochastic model is more practical for optimizing crude oil operations. Using joint chance constraints, the demand uncertainty is treated by specifying a proximity level on the satisfaction of product demands. However, the joint chance constraints are usually strongly nonlinear and consequently hard to handle directly. In this paper, an approximation method combining a relax-and-tight technique is used to transform the joint chance constraints into a series of parameterized linear constraints so that the complicated problem can be attacked iteratively. The basic idea behind this approach is to approximate, as far as possible, the nonlinear constraints by many easily handled linear constraints, leading to a good balance between problem complexity and tractability. Case studies are conducted to demonstrate the proposed methods. Results show that the operation cost can be reduced effectively compared with the case that ignores the demand correlation. PMID:24757433
Enhancements and Algorithms for Avionic Information Processing System Design Methodology.
1982-06-16
... programming algorithm is enhanced by incorporating task precedence constraints and hardware failures. Stochastic network methods are used to analyze ... allocations in the presence of random fluctuations. Graph theoretic methods are used to analyze hardware designs, and new designs are constructed with ... There, spatial dynamic programming (SDP) was used to solve a static, deterministic software allocation problem. Under the current contract the SDP ...
Hybrid Differential Dynamic Programming with Stochastic Search
NASA Technical Reports Server (NTRS)
Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob
2016-01-01
Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, namely with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static/Dynamic Optimal Control algorithm used in the Mystic software. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP), is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient-based method and will converge to a solution near an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation by augmenting the HDDP algorithm for a wider search of the solution space.
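Monotonic basin hopping is simple to sketch independently of HDDP: re-run a local optimizer from random perturbations of the incumbent and accept only improvements. The local optimizer and the multimodal objective below are toy stand-ins, not the trajectory-optimization machinery of the paper:

```python
import math
import random

random.seed(7)

def local_descent(f, x, lr=0.01, iters=500, h=1e-6):
    """Crude local optimizer: gradient descent with a central-difference slope."""
    for _ in range(iters):
        g = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= lr * g
    return x

def basin_hop(f, x0, hops=80, hop_size=1.5):
    """Monotonic basin hopping: perturb the incumbent, re-run the local
    optimizer, and keep the candidate only if it improves on the best."""
    best = local_descent(f, x0)
    for _ in range(hops):
        cand = local_descent(f, best + random.uniform(-hop_size, hop_size))
        if f(cand) < f(best):
            best = cand
    return best

# Toy multimodal objective standing in for a trajectory cost landscape.
f = lambda x: x * x + 5.0 * math.sin(3.0 * x)
x_best = basin_hop(f, x0=4.0)
```

The monotonic acceptance rule is what distinguishes MBH from plain basin hopping: the incumbent can only improve, so each hop either escapes to a deeper basin or is discarded.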
A stochastic equilibrium model for the North American natural gas market
NASA Astrophysics Data System (ADS)
Zhuang, Jifang
This dissertation is an endeavor in the field of energy modeling for the North American natural gas market using a mixed complementarity formulation combined with stochastic programming. The genesis of the stochastic equilibrium model presented in this dissertation is the deterministic market equilibrium model developed in [Gabriel, Kiet and Zhuang, 2005]. Based on some improvements that we made to this model, including proving new existence and uniqueness results, we present a multistage stochastic equilibrium model with uncertain demand for the deregulated North American natural gas market using the recourse method of stochastic programming. The market participants considered by the model are pipeline operators, producers, storage operators, peak gas operators, marketers and consumers. Pipeline operators are described with regulated tariffs but also involve "congestion pricing" as a mechanism to allocate scarce pipeline capacity. Marketers are modeled as Nash-Cournot players in sales to the residential and commercial sectors but price-takers in all other aspects. Consumers are represented by demand functions in the marketers' problem. Producers, storage operators and peak gas operators are price-takers consistent with perfect competition. Also, two types of natural gas markets are included: the long-term and spot markets. Market participants make both high-level planning decisions (first-stage decisions) in the long-term market and daily operational decisions (recourse decisions) in the spot market, subject to their engineering, resource and political constraints, as well as market constraints on both the demand and the supply side, so as to simultaneously maximize their expected profits given others' decisions. The model is shown to be an instance of a mixed complementarity problem (MiCP) under minor conditions.
The MiCP formulation is derived from applying the Karush-Kuhn-Tucker optimality conditions of the optimization problems faced by the market participants. Some theoretical results regarding the market prices in both markets are shown. We also illustrate the model on a representative, sample network of two production nodes, two consumption nodes with discretely distributed end-user demand and three seasons using four cases.
Multi-Objective Programming for Lot-Sizing with Quantity Discount
NASA Astrophysics Data System (ADS)
Kang, He-Yau; Lee, Amy H. I.; Lai, Chun-Mei; Kang, Mei-Sung
2011-11-01
Multi-objective programming (MOP) is one of the popular methods for decision making in a complex environment. In a MOP, decision makers try to optimize two or more objectives simultaneously under various constraints. A complete optimal solution seldom exists, and a Pareto-optimal solution is usually used. Some methods, such as the weighting method, which assigns priorities to the objectives and sets aspiration levels for them, are used to derive a compromise solution. The ε-constraint method is a modified weighting method: one of the objective functions is optimized while the other objective functions are treated as constraints and incorporated in the constraint part of the model. This research considers a stochastic lot-sizing problem with multiple suppliers and quantity discounts. The model is then transformed into a mixed integer programming (MIP) model based on the ε-constraint method. An illustrative example demonstrates the practicality of the proposed model. The results show that the model is an effective and accurate tool for determining the replenishment of a manufacturer from multiple suppliers over multiple periods.
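The ε-constraint idea can be illustrated on a tiny two-supplier lot-sizing toy, with all figures hypothetical: minimise the first objective (cost) while the second (a lateness risk proxy) is capped at ε, here by enumeration rather than a MIP solver:

```python
# Feasible order quantities from two hypothetical suppliers (units).
candidates = [(q1, q2) for q1 in range(0, 101, 10) for q2 in range(0, 101, 10)
              if q1 + q2 >= 100]                     # demand must be covered

cost = lambda q: 4.0 * q[0] + 3.0 * q[1]             # objective 1: purchase cost
late_risk = lambda q: 0.01 * q[1]                    # objective 2: supplier 2 is slower

def eps_constraint(eps):
    """Minimise cost while the second objective is held below eps."""
    feasible = [q for q in candidates if late_risk(q) <= eps]
    return min(feasible, key=cost)

plan = eps_constraint(eps=0.5)
```

Sweeping ε and re-solving traces out the Pareto front: a tight cap forces orders toward the expensive but reliable supplier, while a loose cap lets the cheap supplier dominate.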
NASA Astrophysics Data System (ADS)
Cardoso, T.; Oliveira, M. D.; Barbosa-Póvoa, A.; Nickel, S.
2015-05-01
Although the maximization of health is a key objective in health care systems, location-allocation literature has not yet considered this dimension. This study proposes a multi-objective stochastic mathematical programming approach to support the planning of a multi-service network of long-term care (LTC), both in terms of services location and capacity planning. This approach is based on a mixed integer linear programming model with two objectives - the maximization of expected health gains and the minimization of expected costs - with satisficing levels in several dimensions of equity - namely, equity of access, equity of utilization, socioeconomic equity and geographical equity - being imposed as constraints. The augmented ε-constraint method is used to explore the trade-off between these conflicting objectives, with uncertainty in the demand and delivery of care being accounted for. The model is applied to analyze the (re)organization of the LTC network currently operating in the Great Lisbon region in Portugal for the 2014-2016 period. Results show that extending the network of LTC is a cost-effective investment.
SLFP: a stochastic linear fractional programming approach for sustainable waste management.
Zhu, H; Huang, G H
2011-12-01
A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk. Copyright © 2011 Elsevier Ltd. All rights reserved.
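The two ingredients of SLFP, a ratio objective and a chance constraint, can each be sketched in a few lines: the chance constraint is reduced to a deterministic bound under an assumed normal distribution, and the efficiency ratio is then maximised by enumeration rather than by the fractional-programming transformation. All coefficients are hypothetical:

```python
from statistics import NormalDist

# Hypothetical waste-flow split: fraction x of waste routed to recycling.
benefit = lambda x: 30.0 * x + 8.0 * (1.0 - x)        # system benefit
cost = lambda x: 12.0 * x + 5.0 * (1.0 - x) + 2.0     # system cost (+ fixed)

# Chance constraint: recyclable supply ~ N(0.6, 0.1); require
# P(x <= supply) >= 0.9, i.e. x <= mu - z_{0.9} * sigma (deterministic form).
x_max = 0.6 - NormalDist().inv_cdf(0.9) * 0.1

# Maximise the benefit/cost ratio over a fine grid of feasible splits.
grid = [i / 1000.0 for i in range(1001) if i / 1000.0 <= x_max]
best_x = max(grid, key=lambda x: benefit(x) / cost(x))
```

Tightening the reliability level shrinks `x_max` and lowers the attainable efficiency ratio, which is the cost-versus-risk tradeoff the abstract describes.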
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
Structural designs generated by the traditional method, the optimization method, and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and a solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest, though some variation was noticed in the designs, which may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, while weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.
Pricing of swing options: A Monte Carlo simulation approach
NASA Astrophysics Data System (ADS)
Leow, Kai-Siong
We study the problem of pricing swing options, a class of multiple-early-exercise options traded in energy markets, particularly the electricity and natural gas markets. These contracts permit the option holder to periodically exercise the right to trade a variable amount of energy with a counterparty, subject to local volumetric constraints. In addition, the total amount of energy traded from settlement to expiration with the counterparty is restricted by a global volumetric constraint. Violation of this global volumetric constraint is allowed but leads to a penalty settled at expiration. The pricing problem is formulated as a stochastic optimal control problem in discrete time and state space. We present a stochastic dynamic programming algorithm based on piecewise linear concave approximation of value functions. This algorithm yields the value of the swing option under the assumption that the optimal exercise policy is applied by the option holder. We present a proof of almost sure convergence: the algorithm generates the optimal exercise strategy as the number of iterations approaches infinity. Finally, we provide a numerical example for pricing a natural gas swing call option.
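The backward-induction idea in the abstract above can be sketched on a toy recombining binomial price lattice. This is a minimal illustration, not the paper's algorithm: it uses a hard global volumetric constraint (no penalty for violation), at most one unit per exercise date, and invented parameter values.

```python
# Minimal sketch: a swing call option valued by backward induction on a
# recombining binomial price lattice. All parameters are illustrative.
K, S0 = 10.0, 10.0        # strike and initial price
u, d, p = 1.1, 0.9, 0.5   # binomial up/down factors and up-probability
T = 4                     # number of exercise dates

def swing_price(Q):
    """Value of a swing call with at most Q unit exercises (global constraint)."""
    # V[i][q] = value at the next-date node with i up-moves, q rights used
    V = [[0.0] * (Q + 1) for _ in range(T + 1)]
    for t in range(T - 1, -1, -1):
        newV = []
        for i in range(t + 1):
            S = S0 * u ** i * d ** (t - i)
            row = []
            for q in range(Q + 1):
                hold = p * V[i + 1][q] + (1 - p) * V[i][q]
                best = hold
                if q < Q:  # local constraint: exercise at most one unit per date
                    best = max(best, max(S - K, 0.0)
                               + p * V[i + 1][q + 1] + (1 - p) * V[i][q + 1])
                row.append(best)
            newV.append(row)
        V = newV
    return V[0][0]

print(swing_price(2))
```

As expected for a sketch of this kind, the value is non-decreasing in the number of exercise rights Q.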
Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki
2013-01-01
A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
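The risk-allocation and dualization idea sketched above can be illustrated on a toy multistage problem. This is not the CEMAT implementation: the stage data are invented, the union bound stands in for the decomposition of the joint chance constraint, and a bisection on the Lagrange multiplier replaces the full DP.

```python
# Toy risk allocation: a joint chance constraint P(failure at any stage) <= DELTA
# is bounded (union bound) by the sum of per-stage failure probabilities, and the
# constrained problem is solved through its Lagrangian: minimize cost + lam * risk.
stages = [
    [(1.0, 0.05), (3.0, 0.01)],   # stage 0: (cost, failure prob) per action
    [(2.0, 0.04), (5.0, 0.005)],  # stage 1
    [(1.5, 0.03), (4.0, 0.002)],  # stage 2
]
DELTA = 0.02  # joint risk bound

def solve(lam):
    """Unconstrained minimization of the Lagrangian: each stage decouples."""
    cost = risk = 0.0
    for actions in stages:
        c, pf = min(actions, key=lambda a: a[0] + lam * a[1])
        cost += c
        risk += pf
    return cost, risk

def chance_constrained(delta, lo=0.0, hi=1e4, iters=60):
    """Bisection on the multiplier until the allocated risk meets delta."""
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        _, risk = solve(lam)
        if risk > delta:
            lo = lam   # too risky: raise the price of risk
        else:
            hi = lam
    return solve(hi)

cost, risk = chance_constrained(DELTA)
print(cost, risk)
```

Raising the multiplier prices risk more heavily, steering every stage toward its safer action until the total allocated risk fits within the bound.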
Two-stage fuzzy-stochastic robust programming: a hybrid model for regional air quality management.
Li, Yongping; Huang, Guo H; Veawab, Amornvadee; Nie, Xianghui; Liu, Lei
2006-08-01
In this study, a hybrid two-stage fuzzy-stochastic robust programming (TFSRP) model is developed and applied to the planning of an air-quality management system. As an extension of existing fuzzy-robust programming and two-stage stochastic programming methods, the TFSRP can explicitly address complexities and uncertainties of the study system without unrealistic simplifications. Uncertain parameters can be expressed as probability density and/or fuzzy membership functions, such that robustness of the optimization efforts can be enhanced. Moreover, economic penalties as corrective measures against any infeasibilities arising from the uncertainties are taken into account. This method can, thus, provide a linkage to predefined policies determined by authorities that have to be respected when a modeling effort is undertaken. In its solution algorithm, the fuzzy decision space can be delimited through specification of the uncertainties using dimensional enlargement of the original fuzzy constraints. The developed model is applied to a case study of regional air quality management. The results indicate that reasonable solutions have been obtained. The solutions can be used for further generating pollution-mitigation alternatives with minimized system costs and for providing a more solid support for sound environmental decisions.
NASA Astrophysics Data System (ADS)
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2018-05-01
In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Yiduo; Zheng, Qipeng P.; Wang, Jianhui
Power generation expansion planning needs to deal with future uncertainties carefully, given that the invested generation assets will be in operation for a long time. Many stochastic programming models have been proposed to tackle this challenge. However, most previous works assume predetermined future uncertainties (i.e., fixed random outcomes with given probabilities). In several recent studies of generation assets' planning (e.g., thermal versus renewable), new findings show that the investment decisions could affect the future uncertainties as well. To this end, this paper proposes a multistage decision-dependent stochastic optimization model for long-term large-scale generation expansion planning, where large amounts of wind power are involved. In the decision-dependent model, the future uncertainties are not only affecting but also affected by the current decisions. In particular, the probability distribution function is determined by not only input parameters but also decision variables. To deal with the nonlinear constraints in our model, a quasi-exact solution approach is then introduced to reformulate the multistage stochastic investment model to a mixed-integer linear programming model. The wind penetration, investment decisions, and the optimality of the decision-dependent model are evaluated in a series of multistage case studies. The results show that the proposed decision-dependent model provides effective optimization solutions for long-term generation expansion planning.
Solving multistage stochastic programming models of portfolio selection with outstanding liabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edirisinghe, C.
1994-12-31
Models for portfolio selection in the presence of an outstanding liability have received significant attention, for example, models for pricing options. The problem may be described briefly as follows: given a set of risky securities (and a riskless security such as a bond), and given a set of cash flows, i.e., an outstanding liability, to be met at some future date, determine an initial portfolio and a dynamic trading strategy for the underlying securities such that the initial cost of the portfolio is within a prescribed wealth level and the expected cash surplus arising from trading is maximized. While the trading strategy should be self-financing, there may also be other restrictions such as leverage and short-sale constraints. Usually the treatment is limited to binomial evolution of uncertainty (of stock price), with possible extensions for developing computational bounds for multinomial generalizations. Posing these as stochastic programming models of decision making, we investigate alternative efficient solution procedures under continuous evolution of uncertainty, for discrete-time economies. We point out an important moment problem arising in the portfolio selection problem, the solution of (or bounds on) which provides the basis for developing efficient computational algorithms. While the underlying stochastic program may be computationally tedious even for a modest number of trading opportunities (i.e., time periods), the derived algorithms may be used to solve problems whose sizes are beyond those considered within stochastic optimization.
Optimizing Multi-Product Multi-Constraint Inventory Control Systems with Stochastic Replenishments
NASA Astrophysics Data System (ADS)
Allah Taleizadeh, Ata; Aryanezhad, Mir-Bahador; Niaki, Seyed Taghi Akhavan
Multi-periodic inventory control problems are mainly studied under two assumptions. The first is continuous review, where, depending on the inventory level, orders can be placed at any time; the other is periodic review, where orders can only be placed at the beginning of each period. In this study, we relax these assumptions and assume that the periodic replenishments are stochastic in nature. Furthermore, we assume that the periods between two replenishments are independent and identically distributed random variables. For the problem at hand, the decision variables are integer-valued and there are two kinds of constraints, on space and service level, for each product. We develop a model of the problem in which a combination of back-orders and lost sales is considered for the shortages. We then show that the model is of the integer-nonlinear-programming type, so that a search algorithm can be utilized to solve it. We employ a simulated annealing approach and provide a numerical example to demonstrate the applicability of the proposed methodology.
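The search step described above can be sketched with simulated annealing over integer order quantities. This is a stand-in, not the paper's model: the cost function (holding plus shortage penalty), the single shared space constraint, and all numbers are invented for illustration.

```python
import math
import random

# Simulated annealing over integer order quantities for two products under a
# shared space constraint. Cost function and parameters are illustrative.
random.seed(0)
DEMAND = (4, 7)      # mean demand per product
SPACE = (2, 3)       # space used per unit of each product
CAPACITY = 40        # shared space constraint

def cost(q):
    if sum(s * x for s, x in zip(SPACE, q)) > CAPACITY:
        return float("inf")      # infeasible: violates the space constraint
    # holding cost for overshoot plus shortage penalty for undershoot
    return sum(1.0 * max(x - dem, 0) + 5.0 * max(dem - x, 0)
               for dem, x in zip(DEMAND, q))

def anneal(steps=5000, temp=10.0, cooling=0.999):
    q = [0, 0]
    c = cost(q)
    best, best_c = list(q), c
    for _ in range(steps):
        i = random.randrange(2)
        cand = list(q)
        cand[i] = max(0, cand[i] + random.choice((-1, 1)))  # integer +/-1 move
        cc = cost(cand)
        # accept improvements always, worsenings with Boltzmann probability
        if cc < c or random.random() < math.exp(-(cc - c) / temp):
            q, c = cand, cc
            if c < best_c:
                best, best_c = list(q), c
        temp *= cooling
    return best, best_c

print(anneal())
```

For this toy cost, the optimum is to order exactly the mean demand, which is feasible under the space constraint.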
Optimization for Service Routes of Pallet Service Center Based on the Pallet Pool Mode
He, Shiwei; Song, Rui
2016-01-01
Service route optimization (SRO) for a pallet service center should first meet customers' demand and then, through reasonable route organization, minimize the total vehicle travel distance. The route optimization of a pallet service center is similar to the distribution problems of the vehicle routing problem (VRP) and the Chinese postman problem (CPP), but it has its own characteristics. Based on the relevant research results, the conditions determining the number of vehicles, the one-way nature of routes, loading constraints, and time windows are fully considered, and a chance-constrained programming model with stochastic constraints is constructed, taking the shortest path of all vehicles for a delivery (recycling) operation as the objective. Given the characteristics of the model, a hybrid intelligent algorithm including stochastic simulation, a neural network, and an immune clonal algorithm is designed to solve the model. Finally, the validity and rationality of the optimization model and algorithm are verified by a case study. PMID:27528865
Alvarado, Michelle; Ntaimo, Lewis
2018-03-01
Oncology clinics are often burdened with scheduling large volumes of cancer patients for chemotherapy treatments under limited resources such as the number of nurses and chairs. These cancer patients require a series of appointments over several weeks or months and the timing of these appointments is critical to the treatment's effectiveness. Additionally, the appointment duration, the acuity levels of each appointment, and the availability of clinic nurses are uncertain. The timing constraints, stochastic parameters, rising treatment costs, and increased demand of outpatient oncology clinic services motivate the need for efficient appointment schedules and clinic operations. In this paper, we develop three mean-risk stochastic integer programming (SIP) models, referred to as SIP-CHEMO, for the problem of scheduling individual chemotherapy patient appointments and resources. These mean-risk models are presented and an algorithm is devised to improve computational speed. Computational results were conducted using a simulation model and results indicate that the risk-averse SIP-CHEMO model with the expected excess mean-risk measure can decrease patient waiting times and nurse overtime when compared to deterministic scheduling algorithms by 42% and 27%, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novikov, V.
1991-05-01
The U.S. Army's detailed equipment decontamination process is a stochastic flow shop which has N independent non-identical jobs (vehicles) with overlapping processing times. This flow shop consists of up to six non-identical machines (stations). With the exception of one station, the processing times of the jobs are random variables. Based on an analysis of the processing times, the jobs for the 56 Army heavy division companies were scheduled according to the best shortest expected processing time - longest expected processing time (SEPT-LEPT) sequence. To assist in this scheduling, the Gap Comparison Heuristic was developed to select the best SEPT-LEPT schedule. This schedule was then used in balancing the detailed equipment decon line in order to find the best possible site configuration subject to several constraints. The detailed troop decon line, in which all jobs are independent and identically distributed, was then balanced. Lastly, an NBC decon optimization computer program was developed using the scheduling and line balancing results. This program serves as a prototype module for the ANBACIS automated NBC decision support system. Keywords: decontamination; stochastic flow shop; scheduling; stochastic scheduling; minimization of the makespan; SEPT-LEPT sequences; flow shop line balancing; ANBACIS.
Treatment of constraints in the stochastic quantization method and covariantized Langevin equation
NASA Astrophysics Data System (ADS)
Ikegami, Kenji; Kimura, Tadahiko; Mochizuki, Riuji
1993-04-01
We study the treatment of constraints in the stochastic quantization method. We improve the treatment of the stochastic consistency condition proposed by Namiki et al. by suitably taking into account the Ito calculus. We then obtain an improved Langevin equation and a Fokker-Planck equation which naturally lead to the correct path integral quantization of the constrained system as the stochastic equilibrium state. This treatment is applied to an O(N) non-linear sigma model, and it is shown that singular terms appearing in the improved Langevin equation cancel out the δ^n(0) divergences at one-loop order. We also ascertain that the above Langevin equation, rewritten in terms of independent variables, is actually equivalent to the one in the general-coordinate-transformation-covariant and vielbein-rotation-invariant formalism.
Chen, Xiujuan; Huang, Guohe; Zhao, Shan; Cheng, Guanhui; Wu, Yinghui; Zhu, Hua
2017-11-01
In this study, a stochastic fractional inventory-theory-based waste management planning (SFIWP) model was developed and applied for supporting long-term planning of municipal solid waste (MSW) management in Xiamen City, the special economic zone of Fujian Province, China. In the SFIWP model, the techniques of inventory modelling, stochastic linear fractional programming, and mixed-integer linear programming were integrated in one framework. Issues of waste inventory in the MSW management system were solved, and system efficiency was maximized through considering maximum net-diverted wastes under various constraint-violation risks. Decision alternatives for waste allocation and capacity expansion were also provided for MSW management planning in Xiamen. The obtained results showed that about 4.24 × 10^6 t of waste would be diverted from landfills when p_i is 0.01, accounting for 93% of waste in Xiamen City, and the waste diversion per unit of cost would be 26.327 × 10^3 t per $10^6. The capacities of MSW management facilities, including incinerators, composting facilities, and landfills, would be expanded due to the increasing waste generation rate.
Chance-Constrained Guidance With Non-Convex Constraints
NASA Technical Reports Server (NTRS)
Ono, Masahiro
2011-01-01
Missions to small bodies, such as comets or asteroids, require autonomous guidance for descent to these small bodies. Such guidance is made challenging by uncertainty in the position and velocity of the spacecraft, as well as by uncertainty in the gravitational field around the small body. In addition, the requirement to avoid collision with the asteroid represents a non-convex constraint, which means that finding the optimal guidance trajectory is, in general, intractable. In this innovation, a new approach is proposed for chance-constrained optimal guidance with non-convex constraints. Chance-constrained guidance takes uncertainty into account so that the probability of collision is below a specified threshold. In this approach, a new bounding method has been developed to obtain a set of decomposed chance constraints that is a sufficient condition of the original chance constraint. The decomposition of the chance constraint enables its efficient evaluation, as well as the application of the branch-and-bound method. Branch and bound enables non-convex problems to be solved efficiently to global optimality. Considering the problem of finite-horizon robust optimal control of dynamic systems under Gaussian-distributed stochastic uncertainty, with state and control constraints, a discrete-time, continuous-state linear dynamics model is assumed. Gaussian-distributed stochastic uncertainty is a more natural model for exogenous disturbances such as wind gusts and turbulence than the previously studied set-bounded models. However, with stochastic uncertainty, it is often impossible to guarantee that state constraints are satisfied, because there is typically a non-zero probability of a disturbance large enough to push the state out of the feasible region. An effective framework to address robustness with stochastic uncertainty is optimization with chance constraints.
These require that the probability of violating the state constraints (i.e., the probability of failure) be below a user-specified bound known as the risk bound. An example problem is to drive a car to a destination as fast as possible while limiting the probability of an accident to 10^-7. This framework allows users to trade conservatism against performance by choosing the risk bound. The more risk the user accepts, the better the performance they can expect.
NASA Astrophysics Data System (ADS)
Wang, Sai; Wang, Yi-Fan; Huang, Qing-Guo; Li, Tjonnie G. F.
2018-05-01
Advanced LIGO's discovery of gravitational-wave events is stimulating extensive studies on the origin of binary black holes. Assuming that the gravitational-wave events can be explained by binary primordial black hole mergers, we utilize the upper limits on the stochastic gravitational-wave background given by Advanced LIGO as a new observational window to independently constrain the abundance of primordial black holes in dark matter. We show that Advanced LIGO's first observation run gives the best constraint on the primordial black hole abundance in the mass range 1 M⊙ ≲ M_PBH ≲ 100 M⊙, pushing the previous microlensing and dwarf galaxy dynamics constraints tighter by 1 order of magnitude. Moreover, we discuss the possibility to detect the stochastic gravitational-wave background from primordial black holes, in particular from subsolar mass primordial black holes, by Advanced LIGO in the near future.
Maximum principle for a stochastic delayed system involving terminal state constraints.
Wen, Jiaqiang; Shi, Yufeng
2017-01-01
We investigate a stochastic optimal control problem where the controlled system is depicted as a stochastic differential delayed equation; however, at the terminal time, the state is constrained in a convex set. We firstly introduce an equivalent backward delayed system depicted as a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main obtained result.
NASA Technical Reports Server (NTRS)
Englander, Arnold C.; Englander, Jacob A.
2017-01-01
Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables and equality and inequality constraints, as well as many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the globally optimal solution.
Stochastic Growth Theory of Type 3 Solar Radio Emission
NASA Technical Reports Server (NTRS)
Robinson, P. A.; Cairns, I. H.
1993-01-01
The recently developed stochastic growth theory of type 3 radio sources is extended to predict their electromagnetic volume emissivities and brightness temperatures. Predicted emissivities are consistent with spacecraft observations and independent theoretical constraints.
Patel, Nitin R; Ankolekar, Suresh; Antonijevic, Zoran; Rajicic, Natasa
2013-05-10
We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur. Copyright © 2013 John Wiley & Sons, Ltd.
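The budget-constrained selection at the core of the IP model can be illustrated with a toy instance. All program names, costs, and eNPV figures below are hypothetical, and a brute-force subset search stands in for an integer programming solver.

```python
import itertools

# Toy portfolio selection: pick the subset of candidate phase 3 programs that
# maximizes total expected net present value (eNPV) under a budget constraint.
programs = {          # name: (cost in $M, eNPV in $M) -- hypothetical numbers
    "drug_A": (40, 90),
    "drug_B": (60, 120),
    "drug_C": (30, 50),
    "drug_D": (70, 110),
}
BUDGET = 130

def optimize(budget):
    best_value, best_set = 0.0, ()
    names = list(programs)
    for r in range(len(names) + 1):
        for subset in itertools.combinations(names, r):
            cost = sum(programs[n][0] for n in subset)
            value = sum(programs[n][1] for n in subset)
            if cost <= budget and value > best_value:   # budget constraint
                best_value, best_set = value, subset
    return best_value, best_set

print(optimize(BUDGET))
```

With the binary selection made explicit like this, 'what-if' questions (e.g., a budget change) amount to re-running the optimization with altered inputs, which is the flexibility the abstract highlights.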
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach to computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can similarly be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and conditional value-at-risk (CVaR).
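The parametric approach can be sketched as follows, assuming a one-parameter exponential family of liquidation schedules, a random-walk price with linear temporary impact, and a mean-plus-CVaR objective; none of these modelling choices come from the paper, they only illustrate the simulate-then-statically-optimize pattern.

```python
import math
import random

# Parametric execution-strategy sketch: each parameter value defines a static
# liquidation schedule, which is scored by Monte Carlo simulation of an
# illustrative price-impact model; the parameter is then chosen by grid search.
random.seed(1)
X, N = 100.0, 10          # shares to sell, trading periods
SIGMA, ETA = 0.5, 0.05    # price volatility per period, temporary impact

def schedule(theta):
    # exponential trading schedule; theta = 0 gives uniform selling
    w = [math.exp(-theta * k) for k in range(N)]
    s = sum(w)
    return [X * wk / s for wk in w]

def exec_cost(theta, n_paths=2000):
    trades = schedule(theta)
    costs = []
    for _ in range(n_paths):
        p, c = 0.0, 0.0
        for q in trades:
            p += SIGMA * random.gauss(0.0, 1.0)   # random-walk price move
            c += q * (ETA * q - p)                # impact cost minus price gain
        costs.append(c)
    costs.sort()
    mean = sum(costs) / len(costs)
    tail = costs[int(0.95 * len(costs)):]         # worst 5% of outcomes
    cvar = sum(tail) / len(tail)                  # CVaR at the 95% level
    return mean + 0.5 * cvar                      # risk-penalized objective

best_theta = min((t / 10.0 for t in range(0, 11)), key=exec_cost)
print(best_theta)
```

The grid search over theta is the "static optimization" step: the stochastic dynamic strategy is never optimized path by path, only through its parametric representation.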
Representing and computing regular languages on massively parallel networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, M.I.; O'Sullivan, J.A.; Boysam, B.
1991-01-01
This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum entropy probabilistic view leads to Gibbs representations with potentials whose number of minima grows at precisely the exponential rate at which the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the all-important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor, consisting of 1024 mesh-connected bit-serial processing elements, for performing automated segmentation of electron-micrograph images.
Option pricing, stochastic volatility, singular dynamics and constrained path integrals
NASA Astrophysics Data System (ADS)
Contreras, Mauricio; Hojman, Sergio A.
2014-01-01
Stochastic volatility models have been widely studied and used in the financial world. The Heston model (Heston, 1993) [7] is one of the best known models to deal with this issue. These stochastic volatility models are characterized by the fact that they explicitly depend on a correlation parameter ρ which relates the two Brownian motions that drive the stochastic dynamics associated to the volatility and the underlying asset. Solutions to the Heston model in the context of option pricing, using a path integral approach, are found in Lemmens et al. (2008) [21] while in Baaquie (2007,1997) [12,13] propagators for different stochastic volatility models are constructed. In all previous cases, the propagator is not defined for extreme cases ρ=±1. It is therefore necessary to obtain a solution for these extreme cases and also to understand the origin of the divergence of the propagator. In this paper we study in detail a general class of stochastic volatility models for extreme values ρ=±1 and show that in these two cases, the associated classical dynamics corresponds to a system with second class constraints, which must be dealt with using Dirac’s method for constrained systems (Dirac, 1958,1967) [22,23] in order to properly obtain the propagator in the form of a Euclidean Hamiltonian path integral (Henneaux and Teitelboim, 1992) [25]. After integrating over momenta, one gets an Euclidean Lagrangian path integral without constraints, which in the case of the Heston model corresponds to a path integral of a repulsive radial harmonic oscillator. In all the cases studied, the price of the underlying asset is completely determined by one of the second class constraints in terms of volatility and plays no active role in the path integral.
Analysis of stability for stochastic delay integro-differential equations.
Zhang, Yu; Li, Longsuo
2018-01-01
In this paper, we are concerned with the stability of numerical methods applied to stochastic delay integro-differential equations. For linear stochastic delay integro-differential equations, it is shown that the split-step backward Euler method preserves mean-square stability without any restriction on the step size, while the Euler-Maruyama method reproduces mean-square stability only under a step-size constraint. We also confirm the mean-square stability of the split-step backward Euler method for nonlinear stochastic delay integro-differential equations. Numerical experiments further verify the theoretical results.
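The stability contrast described above can be checked numerically on the scalar geometric Brownian test equation, a simplification without the delay and integral terms: dX = aX dt + bX dW with a = -3, b = 1 is mean-square stable, Euler-Maruyama needs roughly h < 5/9 here, while split-step backward Euler remains stable at h = 1.

```python
import math
import random

# Compare mean-square behaviour of Euler-Maruyama and split-step backward
# Euler on the (no-delay) test equation dX = a*X dt + b*X dW at a large step.
random.seed(2)
a, b, h = -3.0, 1.0, 1.0
steps, paths = 30, 2000

def mean_square(method):
    total = 0.0
    for _ in range(paths):
        x = 1.0
        for _ in range(steps):
            dw = math.sqrt(h) * random.gauss(0.0, 1.0)
            if method == "em":
                x = x + a * x * h + b * x * dw   # explicit Euler-Maruyama
            else:
                xs = x / (1.0 - a * h)           # implicit (backward) drift sub-step
                x = xs + b * xs * dw             # explicit stochastic sub-step
        total += x * x
    return total / paths

print(mean_square("em"), mean_square("ssbe"))
```

At h = 1 the Euler-Maruyama second-moment multiplier is (1 + ah)^2 + b^2 h = 5 per step (explosive), while for split-step backward Euler it is (1 + b^2 h)/(1 - ah)^2 = 2/16 (contractive), so the sample mean-squares diverge and vanish respectively.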
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yuping; Zheng, Qipeng P.; Wang, Jianhui
2014-11-01
This paper presents a two-stage stochastic unit commitment (UC) model, which integrates non-generation resources such as demand response (DR) and energy storage (ES) while including risk constraints to balance between cost and system reliability due to the fluctuation of variable generation such as wind and solar power. This paper uses conditional value-at-risk (CVaR) measures to model risks associated with the decisions in a stochastic environment. In contrast to chance-constrained models requiring extra binary variables, risk constraints based on CVaR only involve linear constraints and continuous variables, making them more computationally attractive. The proposed models with risk constraints are able to avoid over-conservative solutions but still ensure system reliability represented by loss of loads. Numerical experiments are then conducted to study the effects of non-generation resources on generator schedules and the difference in total expected generation costs with risk consideration. Sensitivity analysis based on reliability parameters is also performed to test the effects of confidence levels and load-shedding loss allowances on generation cost reduction.
A robust optimisation approach to the problem of supplier selection and allocation in outsourcing
NASA Astrophysics Data System (ADS)
Fu, Yelin; Keung Lai, Kin; Liang, Liang
2016-03-01
We formulate the supplier selection and allocation problem in outsourcing under an uncertain environment as a stochastic programming problem. Both the decision-maker's attitude towards risk and the penalty parameters for demand deviation are considered in the objective function. A service level agreement, upper bound for each selected supplier's allocation and the number of selected suppliers are considered as constraints. A novel robust optimisation approach is employed to solve this problem under different economic situations. Illustrative examples are presented with managerial implications highlighted to support decision-making.
Continuous-time mean-variance portfolio selection with value-at-risk and no-shorting constraints
NASA Astrophysics Data System (ADS)
Yan, Wei
2012-01-01
An investment problem is considered with a dynamic mean-variance (M-V) portfolio criterion under discontinuous prices which follow jump-diffusion processes, in line with the actual prices of stocks and the normality and stability of the financial market. Short-selling of stocks is prohibited in this mathematical model. The corresponding stochastic Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented, and its solution is obtained based on the theory of stochastic LQ control and viscosity solutions. The efficient frontier and optimal strategies of the original dynamic M-V portfolio selection problem are also provided. The effects on the efficient frontier under the value-at-risk constraint are then illustrated. Finally, an example illustrating the discontinuous prices based on M-V portfolio selection is presented.
Modelling biochemical reaction systems by stochastic differential equations with reflection.
Niu, Yuanling; Burrage, Kevin; Chen, Luonan
2016-05-07
In this paper, we give a new framework for modelling and simulating biochemical reaction systems by stochastic differential equations with reflection, not in a heuristic way but in a mathematical way. The model is computationally efficient compared with the discrete-state Markov chain approach, and it ensures that both analytic and numerical solutions remain in a biologically plausible region. Specifically, our model mathematically ensures that species numbers lie in the domain D, which is a physical constraint for biochemical reactions, in contrast to previous models. The domain D is obtained from the structure of the corresponding chemical Langevin equations, i.e., the boundary is inherent in the biochemical reaction system. A variant of the projection method is employed to solve the reflected stochastic differential equation model. It comprises three simple steps: the Euler-Maruyama method is first applied to the equations; it is then checked whether the point lies within the domain D; and if not, an orthogonal projection is performed. It is found that the projection onto the closure D¯ is the solution to a convex quadratic programming problem, so existing methods for convex quadratic programming can be employed for the orthogonal projection map. Numerical tests on several important problems in biological systems confirmed the efficiency and accuracy of this approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
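The three-step scheme this abstract describes (Euler-Maruyama step, domain check, orthogonal projection) can be sketched as below; the one-dimensional birth-death drift and diffusion and the domain D = [0, inf) are illustrative stand-ins, not the paper's models:

```python
import numpy as np

def projected_em(x0, drift, diffusion, dt, n_steps, seed=0):
    """Euler-Maruyama with orthogonal projection onto D = [0, inf)."""
    rng = np.random.default_rng(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))           # Brownian increment
        x = x + drift(x) * dt + diffusion(x) * dw   # unconstrained EM step
        x = max(x, 0.0)  # projection: in 1-D the convex QP reduces to a clamp
        path.append(x)
    return np.array(path)

# Toy birth-death species: birth rate 2, death rate 0.5*x (invented numbers).
path = projected_em(1.0, lambda x: 2.0 - 0.5 * x,
                    lambda x: np.sqrt(2.0 + 0.5 * x),
                    dt=0.01, n_steps=5000)
```

Every iterate stays in the plausible region by construction, mirroring the paper's guarantee that species numbers remain in D; in higher dimensions the clamp is replaced by solving the convex quadratic projection problem.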
Pavement maintenance optimization model using Markov Decision Processes
NASA Astrophysics Data System (ADS)
Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.
2017-09-01
This paper presents an optimization model for selecting pavement maintenance interventions using the theory of Markov Decision Processes (MDP). Some particular characteristics of the MDP developed in this paper distinguish it from other similar studies or optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average-cost MDP method, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters for stochastic optimization models in road network management motivates this study. This paper uses a data set acquired from road authorities of the state of Victoria, Australia, to test the model and recommends steps in the computation of the MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
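The dual-LP route to an average-cost MDP policy that this paper builds on can be illustrated on a toy three-state pavement model (the states, transition probabilities, and costs below are invented): minimise expected cost over stationary state-action frequencies y(s, a), then read the policy off the optimal y.

```python
import numpy as np
from scipy.optimize import linprog

P = {  # P[a][s] = row of next-state probabilities; states 0=good, 1=fair, 2=poor
    0: np.array([[0.7, 0.3, 0.0],   # action 0, do nothing: pavement degrades
                 [0.0, 0.6, 0.4],
                 [0.0, 0.0, 1.0]]),
    1: np.array([[1.0, 0.0, 0.0],   # action 1, maintain: pavement restored
                 [0.8, 0.2, 0.0],
                 [0.6, 0.3, 0.1]]),
}
cost = {0: np.array([0.0, 2.0, 10.0]),   # user cost of worsening condition
        1: np.array([1.0, 4.0, 9.0])}    # maintenance cost + user cost

nS, nA = 3, 2
# Decision variables y[s, a]: long-run state-action frequencies.
c = np.array([cost[a][s] for s in range(nS) for a in range(nA)])
A_eq, b_eq = [], []
for s2 in range(nS):  # flow balance: sum_a y(s2,a) = sum_{s,a} P(s2|s,a) y(s,a)
    row = np.zeros(nS * nA)
    for s in range(nS):
        for a in range(nA):
            row[s * nA + a] += (s == s2) - P[a][s, s2]
    A_eq.append(row); b_eq.append(0.0)
A_eq.append(np.ones(nS * nA)); b_eq.append(1.0)   # frequencies sum to 1

res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, None)] * (nS * nA))
y = res.x.reshape(nS, nA)
policy = y.argmax(axis=1)  # deterministic action per state visited with y > 0
```

The optimal objective `res.fun` is the minimum long-run average cost per period; the dual variables of the flow-balance rows play the role of the differential cost values in the average-cost MDP theory.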
Optimizing Constrained Single Period Problem under Random Fuzzy Demand
NASA Astrophysics Data System (ADS)
Taleizadeh, Ata Allah; Shavandi, Hassan; Riazi, Afshin
2008-09-01
In this paper, we consider the multi-product multi-constraint newsboy problem with random fuzzy demands and total discount. The demand for the products is often stochastic in the real world, but estimation of the parameters of the distribution function may be done in a fuzzy manner, so an appropriate option for modelling product demand is the random fuzzy variable. The objective of the proposed model is to maximize the expected profit of the newsboy. We consider constraints such as warehouse space, restrictions on order quantities for products, and a restriction on budget. We also consider the batch size for product orders. We introduce a random fuzzy multi-product multi-constraint newsboy problem (RFM-PM-CNP), which is transformed into a multi-objective mixed integer nonlinear programming model. Furthermore, a hybrid intelligent algorithm based on a genetic algorithm, Pareto ranking, and TOPSIS is presented for the developed model. Finally, an illustrative example is presented to show the performance of the developed model and algorithm.
Dynamics of non-holonomic systems with stochastic transport
NASA Astrophysics Data System (ADS)
Holm, D. D.; Putkaradze, V.
2018-01-01
This paper formulates a variational approach for treating observational uncertainty and/or computational model errors as stochastic transport in dynamical systems governed by action principles under non-holonomic constraints. For this purpose, we derive, analyse and numerically study the example of an unbalanced spherical ball rolling under gravity along a stochastic path. Our approach uses the Hamilton-Pontryagin variational principle, constrained by a stochastic rolling condition, which we show is equivalent to the corresponding stochastic Lagrange-d'Alembert principle. In the example of the rolling ball, the stochasticity represents uncertainty in the observation and/or error in the computational simulation of the angular velocity of rolling. The influence of the stochasticity on the deterministically conserved quantities is investigated both analytically and numerically. Our approach applies to a wide variety of stochastic, non-holonomically constrained systems, because it preserves the mathematical properties inherited from the variational principle.
An Anatomically Constrained, Stochastic Model of Eye Movement Control in Reading
ERIC Educational Resources Information Center
McDonald, Scott A.; Carpenter, R. H. S.; Shillcock, Richard C.
2005-01-01
This article presents SERIF, a new model of eye movement control in reading that integrates an established stochastic model of saccade latencies (LATER; R. H. S. Carpenter, 1981) with a fundamental anatomical constraint on reading: the vertically split fovea and the initial projection of information in either visual field to the contralateral…
Zolfaghari, Mohammad R; Peyghaleh, Elnaz
2015-03-01
This article presents a new methodology to implement the concept of equity in regional earthquake risk mitigation programs using an optimization framework. It presents a framework that could be used by decisionmakers (government and authorities) to structure budget allocation strategy toward different seismic risk mitigation measures, i.e., structural retrofitting for different building structural types in different locations and planning horizons. A two-stage stochastic model is developed here to seek optimal mitigation measures based on minimizing mitigation expenditures, reconstruction expenditures, and especially large losses in highly seismically active countries. To consider fairness in the distribution of financial resources among different groups of people, the equity concept is incorporated using constraints in model formulation. These constraints limit inequity to the user-defined level to achieve the equity-efficiency tradeoff in the decision-making process. To present practical application of the proposed model, it is applied to a pilot area in Tehran, the capital city of Iran. Building stocks, structural vulnerability functions, and regional seismic hazard characteristics are incorporated to compile a probabilistic seismic risk model for the pilot area. Results illustrate the variation of mitigation expenditures by location and structural type for buildings. These expenditures are sensitive to the amount of available budget and equity consideration for the constant risk aversion. Most significantly, equity is more easily achieved if the budget is unlimited. Conversely, increasing equity where the budget is limited decreases the efficiency. The risk-return tradeoff, equity-reconstruction expenditures tradeoff, and variation of per-capita expected earthquake loss in different income classes are also presented. © 2015 Society for Risk Analysis.
Effective stochastic generator with site-dependent interactions
NASA Astrophysics Data System (ADS)
Khamehchi, Masoumeh; Jafarpour, Farhad H.
2017-11-01
It is known that the stochastic generators of effective processes associated with the unconditioned dynamics of rare events might consist of non-local interactions; however, it can be shown that there are special cases for which these generators can include local interactions. In this paper, we investigate this possibility by considering systems of classical particles moving on a one-dimensional lattice with open boundaries. The particles might have hard-core interactions similar to the particles in an exclusion process, or there can be many arbitrary particles at a single site in a zero-range process. Assuming that the interactions in the original process are local and site-independent, we will show that under certain constraints on the microscopic reaction rules, the stochastic generator of an unconditioned process can be local but site-dependent. As two examples, the asymmetric zero-temperature Glauber model and the A-model with diffusion are presented and studied under the above-mentioned constraints.
Online learning in optical tomography: a stochastic approach
NASA Astrophysics Data System (ADS)
Chen, Ke; Li, Qin; Liu, Jian-Guo
2018-07-01
We study the inverse problem of the radiative transfer equation (RTE) using the stochastic gradient descent (SGD) method. Mathematically, optical tomography amounts to recovering the optical parameters in the RTE using incoming-outgoing pairs of light intensity. We formulate it as a PDE-constrained optimization problem, where the mismatch between computed and measured outgoing data is minimized with the same initial data under the RTE constraint. The memory and computation cost this requires, however, is typically prohibitive, especially in high dimensional spaces; smart iterative solvers that use only partial information in each step are therefore called for. Stochastic gradient descent is an online learning algorithm that randomly selects data for minimizing the mismatch. It requires minimal memory and computation and advances fast, so it serves the purpose well. In this paper we formulate the problem, in both the nonlinear and the linearized setting, apply the SGD algorithm, and analyze its convergence performance.
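A stripped-down sketch of the online SGD loop this abstract describes, with a toy linear forward model standing in for the RTE parameter-to-measurement map (the matrix, true parameters, and step size are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 5))   # each row: one incoming-outgoing data pair
x_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
b = A @ x_true                  # noiseless "measured" outgoing data

x = np.zeros(5)                 # unknown optical parameters, initial guess
lr = 0.01
for _ in range(20_000):
    i = rng.integers(len(b))            # online: pick one random measurement
    residual = A[i] @ x - b[i]          # mismatch for that single pair
    x -= lr * residual * A[i]           # gradient step on 0.5 * residual**2
```

Each iteration touches a single row of A, which is the memory and computation advantage the abstract points to; for a consistent noiseless system this recovers `x_true` even with a constant step size.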
Robust Path Planning and Feedback Design Under Stochastic Uncertainty
NASA Technical Reports Server (NTRS)
Blackmore, Lars
2008-01-01
Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty, with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.
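The chance constraints mentioned here have a standard deterministic reformulation when the state is Gaussian: P(aᵀx ≤ b) ≥ 1 − δ holds iff aᵀμ + Φ⁻¹(1 − δ)·sqrt(aᵀΣa) ≤ b. A sketch of that mechanism (the half-plane, mean, and covariance are invented, not from the paper):

```python
import numpy as np
from statistics import NormalDist

def chance_constraint_ok(a, b, mu, Sigma, delta=0.05):
    """Check the deterministic equivalent of P(a^T x <= b) >= 1 - delta
    for Gaussian x ~ N(mu, Sigma): tighten b by a quantile-scaled margin."""
    k = NormalDist().inv_cdf(1.0 - delta)
    return a @ mu + k * np.sqrt(a @ Sigma @ a) <= b

mu = np.array([1.0, 2.0])          # mean vehicle position
Sigma = np.diag([0.04, 0.09])      # position covariance from disturbances
a = np.array([1.0, 1.0])           # half-plane a^T x <= b: one obstacle edge
# a^T mu = 3; required margin is about 1.645 * sqrt(0.13) ~ 0.59.
ok_loose = chance_constraint_ok(a, 3.7, mu, Sigma)
ok_tight = chance_constraint_ok(a, 3.2, mu, Sigma)
```

Obstacle avoidance needs a disjunction of such half-planes (stay outside at least one edge), which is the nonconvexity the paper's alternating solver addresses.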
Economic-Oriented Stochastic Optimization in Advanced Process Control of Chemical Processes
Dobos, László; Király, András; Abonyi, János
2012-01-01
Finding the optimal operating region of chemical processes is an inevitable step toward improving economic performance. Usually the optimal operating region is situated close to process constraints related to product quality or process safety requirements. Higher profit can be realized only by assuring a relatively low frequency of violation of these constraints. A multilevel stochastic optimization framework is proposed to determine the optimal setpoint values of control loops with respect to predetermined risk levels, uncertainties, and costs of violation of process constraints. The proposed framework is realized as direct search-type optimization of Monte-Carlo simulation of the controlled process. The concept is illustrated throughout by a well-known benchmark problem related to the control of a linear dynamical system and the model predictive control of a more complex nonlinear polymerization process. PMID:23213298
SPIKE: AI scheduling techniques for Hubble Space Telescope
NASA Astrophysics Data System (ADS)
Johnston, Mark D.
1991-09-01
AI (Artificial Intelligence) scheduling techniques for HST are presented in the form of viewgraphs. The following subject areas are covered: domain; HST constraint timescales; HST scheduling; SPIKE overview; SPIKE architecture; constraint representation and reasoning; use of suitability functions by the scheduling agent; SPIKE screen example; advantages of the suitability function framework; limiting search and constraint propagation; scheduling search; stochastic search; repair methods; implementation; and status.
Portable parallel portfolio optimization in the Aurora Financial Management System
NASA Astrophysics Data System (ADS)
Laure, Erwin; Moritsch, Hans
2001-07-01
Financial planning problems are formulated as large scale, stochastic, multiperiod, tree structured optimization problems. An efficient technique for solving this kind of problems is the nested Benders decomposition method. In this paper we present a parallel, portable, asynchronous implementation of this technique. To achieve our portability goals we elected the programming language Java for our implementation and used a high level Java based framework, called OpusJava, for expressing the parallelism potential as well as synchronization constraints. Our implementation is embedded within a modular decision support tool for portfolio and asset liability management, the Aurora Financial Management System.
De Lara, M; Martinet, V
2009-02-01
Managing natural resources in a sustainable way is a hard task, due to uncertainties, dynamics and conflicting objectives (ecological, social, and economic). We propose a stochastic viability approach to address such problems. We consider a discrete-time control dynamical model with uncertainties, representing a bioeconomic system. The sustainability of this system is described by a set of constraints, defined in practice by indicators - namely, state, control and uncertainty functions - together with thresholds. This approach aims at identifying decision rules such that a set of constraints, representing various objectives, is respected with maximal probability. Under appropriate monotonicity properties of dynamics and constraints, having economic and biological content, we characterize an optimal feedback. The connection is made between this approach and the so-called Management Strategy Evaluation for fisheries. A numerical application to sustainable management of the Bay of Biscay nephrops-hake mixed fishery is given.
Fleet Assignment Using Collective Intelligence
NASA Technical Reports Server (NTRS)
Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.
2004-01-01
Product distribution theory is a new collective intelligence-based framework for analyzing and controlling distributed systems. Its usefulness in distributed stochastic optimization is illustrated here through an airline fleet assignment problem. This problem involves the allocation of aircraft to a set of flight legs in order to meet passenger demand, while satisfying a variety of linear and non-linear constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of this new stochastic optimization algorithm to a non-linear objective cold start fleet assignment problem. Results show that the optimizer can successfully solve such highly-constrained problems (130 variables, 184 constraints).
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Li, Jun
2002-09-01
In this paper a class of stochastic multiple-objective programming problems with one quadratic, several linear objective functions and linear constraints has been introduced. The former model is transformed into a deterministic multiple-objective nonlinear programming model by means of the introduction of random variables' expectation. The reference direction approach is used to deal with linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using the weighted sums. The quadratic problem is transformed into a linear (parametric) complementary problem, the basic formula for the proposed approach. The sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on reference direction and weighted sums. Varying the parameter vector on the right-hand side of the model, the DM can freely search the efficient frontier with the model. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.
Using neutral models to identify constraints on low-severity fire regimes.
Donald McKenzie; Amy E. Hessl; Lara-Karena B. Kellogg
2006-01-01
Climate, topography, fuel loadings, and human activities all affect spatial and temporal patterns of fire occurrence. Because fire is modeled as a stochastic process, for which each fire history is only one realization, a simulation approach is necessary to understand baseline variability, thereby identifying constraints, or forcing functions, that affect fire regimes...
NASA Astrophysics Data System (ADS)
Olivares, M. A.; Gonzalez Cabrera, J. M., Sr.; Moreno, R.
2016-12-01
Operation of hydropower reservoirs in Chile is prescribed by an Independent Power System Operator. This study proposes a methodology that integrates power grid operations planning with basin-scale multi-use reservoir operations planning. The aim is to efficiently manage a multi-purpose reservoir in which hydroelectric generation competes with other water uses, most notably irrigation. Hydropower and irrigation are competing water uses due to a seasonality mismatch. Currently, the operation of multi-purpose reservoirs with substantial power capacity is prescribed as the result of a grid-wide cost-minimization model which takes irrigation requirements as constraints. We propose advancing the economic co-optimization of reservoir water use for irrigation and hydropower at the basin level by explicitly introducing the economic value of water for irrigation, represented by a demand function for irrigation water. The proposed methodology uses the solution of a long-term grid-wide operations planning model, a stochastic dual dynamic program (SDDP), to obtain the marginal benefit function for water use in hydropower. This marginal benefit corresponds to the energy price in the power grid as a function of water availability in the reservoir and the hydrologic scenarios. This function captures technical and economic aspects of operating the hydropower reservoir in the power grid and is generated with the dual variable of the power-balance constraint, the optimal reservoir operation, and the hydrologic scenarios used in the SDDP. The economic values of water for irrigation and hydropower are then integrated into a basin-scale stochastic dynamic program, from which stored-water value functions are derived. These value functions are then used to re-optimize reservoir operations under several inflow scenarios.
Stochastic population dynamics under resource constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavane, Ajinkya S., E-mail: ajinkyagavane@gmail.com; Nigam, Rahul, E-mail: rahul.nigam@hyderabad.bits-pilani.ac.in
This paper investigates the population growth of a certain species in which every generation reproduces thrice over a period of predefined time, under certain constraints on the resources needed for survival of the population. We study the survival period of a species by randomizing the reproduction probabilities within a window at the same predefined ages, while the resources are produced by the working force of the population at a variable rate. This randomness in the reproduction rate makes the population growth stochastic in nature, and one cannot predict the exact form of evolution. Hence we study the growth by running simulations for such a population and taking an ensemble average over 500 to 5000 such simulations as needed. While the population reproduces in a stochastic manner, we have implemented a constraint on the amount of resources available for the population, which makes the simulations more realistic. The rate of resource production is then tuned to find the rate which suits the survival of the species. We also compute the mean lifetime of the species corresponding to different resource production rates. A study of these outcomes in the parameter space defined by the reproduction probabilities and the rate of resource production is carried out.
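A bare-bones version of the ensemble experiment described here can be sketched as below; this is an aggregate rather than per-agent model, and every rate and threshold is invented for illustration:

```python
import numpy as np

def survival_time(rng, production_rate=0.9, max_steps=200):
    """Steps until resources or population hit zero (aggregate sketch)."""
    pop, resources = 10, 100.0
    for t in range(max_steps):
        resources += production_rate * pop   # working force produces
        resources -= 1.0 * pop               # each individual consumes 1 unit
        if resources <= 0.0:
            return t                         # resource constraint kills species
        p = rng.uniform(0.2, 0.5)            # randomized reproduction prob.
        pop += rng.binomial(pop, p) - rng.binomial(pop, 0.35)  # births - deaths
        if pop <= 0:
            return t                         # population dies out
    return max_steps

rng = np.random.default_rng(2)
# Ensemble average over 500 runs, at the low end of the paper's 500-5000 range.
mean_life = np.mean([survival_time(rng) for _ in range(500)])
```

Sweeping `production_rate` and re-running the ensemble reproduces the kind of parameter-space study of mean lifetime the abstract describes.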
libSRES: a C library for stochastic ranking evolution strategy for parameter estimation.
Ji, Xinglai; Xu, Ying
2006-01-01
Estimation of kinetic parameters in a biochemical pathway or network represents a common problem in systems studies of biological processes. We have implemented a C library, named libSRES, to facilitate a fast implementation of computer software for study of non-linear biochemical pathways. This library implements a (mu, lambda)-ES evolutionary optimization algorithm that uses stochastic ranking as the constraint handling technique. Considering the amount of computing time it might require to solve a parameter-estimation problem, an MPI version of libSRES is provided for parallel implementation, as well as a simple user interface. libSRES is freely available and could be used directly in any C program as a library function. We have extensively tested the performance of libSRES on various pathway parameter-estimation problems and found its performance to be satisfactory. The source code (in C) is free for academic users at http://csbl.bmb.uga.edu/~jix/science/libSRES/
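The constraint-handling technique libSRES implements, stochastic ranking (Runarsson and Yao's bubble-sort-like sweep), can be sketched in a few lines. Individuals are (objective, constraint-violation) pairs and `p_f` is the probability of comparing by objective even when a violation is present; this is an illustrative re-implementation, not the library's C code:

```python
import random

def stochastic_rank(pop, p_f=0.45, rng=None):
    """Rank (objective, violation) pairs by stochastic ranking."""
    rng = rng or random.Random(0)
    pop, n = list(pop), len(pop)
    for _ in range(n):                     # bubble-sort-style sweeps
        swapped = False
        for i in range(n - 1):
            (f1, g1), (f2, g2) = pop[i], pop[i + 1]
            if (g1 == 0 and g2 == 0) or rng.random() < p_f:
                swap = f1 > f2             # compare by objective value
            else:
                swap = g1 > g2             # compare by constraint violation
            if swap:
                pop[i], pop[i + 1] = pop[i + 1], pop[i]
                swapped = True
        if not swapped:
            break
    return pop

# With every individual feasible (violation 0), ranking is a plain objective sort.
ranked = stochastic_rank([(3.0, 0.0), (1.0, 0.0), (2.0, 0.0)])
```

The occasional objective-based comparison of infeasible individuals is what lets the (mu, lambda)-ES approach the constraint boundary from both sides instead of discarding infeasible candidates outright.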
Online Appointment Scheduling for a Nuclear Medicine Department in a Chinese Hospital
Feng, Ya-bing
2018-01-01
Nuclear medicine, a subspecialty of radiology, plays an important role in proper diagnosis and timely treatment. Multiple resources, especially short-lived radiopharmaceuticals involved in the process of nuclear medical examination, constitute a unique problem in appointment scheduling. Aiming at achieving scientific and reasonable appointment scheduling in the West China Hospital (WCH), a typical class A tertiary hospital in China, we developed an online appointment scheduling algorithm based on an offline nonlinear integer programming model which considers multiresources allocation, the time window constraints imposed by short-lived radiopharmaceuticals, and the stochastic nature of the patient requests when scheduling patients. A series of experiments are conducted to show the effectiveness of the proposed strategy based on data provided by the WCH. The results show that the examination amount increases by 29.76% compared with the current one with a significant increase in the resource utilization and timely rate. Besides, it also has a high stability for stochastic factors and bears the advantage of convenient and economic operation. PMID:29849748
On the decentralized control of large-scale systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chong, C.
1973-01-01
The decentralized control of stochastic large scale systems was considered. Particular emphasis was given to control strategies which utilize decentralized information and can be computed in a decentralized manner. The deterministic constrained optimization problem is generalized to the stochastic case when each decision variable depends on different information and the constraint is only required to be satisfied on the average. For problems with a particular structure, a hierarchical decomposition is obtained. For the stochastic control of dynamic systems with different information sets, a new kind of optimality is proposed which exploits the coupled nature of the dynamic system. The subsystems are assumed to be uncoupled and then certain constraints are required to be satisfied, either in an off-line or on-line fashion. For off-line coordination, a hierarchical approach to solving the problem is obtained, in which the lower level problems are all uncoupled. For on-line coordination, a distinction is made between open loop feedback optimal coordination and closed loop optimal coordination.
Plasma Equilibrium in a Magnetic Field with Stochastic Field-Line Trajectories
NASA Astrophysics Data System (ADS)
Krommes, J. A.; Reiman, A. H.
2008-11-01
The nature of plasma equilibrium in a magnetic field with stochastic field lines is examined, expanding upon the ideas first described by Reiman et al. The magnetic partial differential equation (PDE) that determines the equilibrium Pfirsch-Schlüter currents is treated as a passive stochastic PDE for μj/B. Renormalization leads to a stochastic Langevin equation for μ in which the resonances at the rational surfaces are broadened by the stochastic diffusion of the field lines; even weak radial diffusion can significantly affect the equilibrium, which need not be flattened in the stochastic region. Particular attention is paid to satisfying the periodicity constraints in toroidal configurations with sheared magnetic fields. A numerical scheme that couples the renormalized Langevin equation to Ampere's law is described. A. Reiman et al, Nucl. Fusion 47, 572--8 (2007). J. A. Krommes, Phys. Reports 360, 1--351.
Lei, Xiaohui; Wang, Chao; Yue, Dong; Xie, Xiangpeng
2017-01-01
Since wind power is integrated into the thermal power operation system, dynamic economic emission dispatch (DEED) has become a new challenge due to its uncertain characteristics. This paper proposes an adaptive grid based multi-objective Cauchy differential evolution (AGB-MOCDE) for solving stochastic DEED with wind power uncertainty. To properly deal with wind power uncertainty, some scenarios are generated to simulate those possible situations by dividing the uncertainty domain into different intervals, the probability of each interval can be calculated using the cumulative distribution function, and a stochastic DEED model can be formulated under different scenarios. For enhancing the optimization efficiency, Cauchy mutation operation is utilized to improve differential evolution by adjusting the population diversity during the population evolution process, and an adaptive grid is constructed for retaining diversity distribution of Pareto front. With consideration of large number of generated scenarios, the reduction mechanism is carried out to decrease the scenarios number with covariance relationships, which can greatly decrease the computational complexity. Moreover, the constraint-handling technique is also utilized to deal with the system load balance while considering transmission loss among thermal units and wind farms, all the constraint limits can be satisfied under the permitted accuracy. After the proposed method is simulated on three test systems, the obtained results reveal that in comparison with other alternatives, the proposed AGB-MOCDE can optimize the DEED problem while handling all constraint limits, and the optimal scheme of stochastic DEED can decrease the conservation of interval optimization, which can provide a more valuable optimal scheme for real-world applications. PMID:28961262
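A single-objective toy sketch of the Cauchy-mutation differential evolution ingredient used above (the actual AGB-MOCDE is multi-objective with an adaptive grid; the sphere test function, population size, and scale factors here are invented):

```python
import numpy as np

def de_cauchy(f, dim=5, n_pop=30, n_gen=200, F=0.5, cr=0.9, seed=3):
    """DE/rand/1/bin with a Cauchy-distributed mutation scale factor."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(n_pop, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(n_pop):
            a, b, c = pop[rng.choice(n_pop, 3, replace=False)]
            # Heavy-tailed Cauchy scale maintains population diversity;
            # clipped so rare huge draws do not fling trials out of range.
            scale = min(F * abs(rng.standard_cauchy()), 2.0)
            mutant = a + scale * (b - c)
            cross = rng.random(dim) < cr           # binomial crossover mask
            trial = np.where(cross, mutant, pop[i])
            if f(trial) < fit[i]:                  # greedy one-to-one selection
                pop[i], fit[i] = trial, f(trial)
    return pop[fit.argmin()], fit.min()

x_best, f_best = de_cauchy(lambda x: float(np.sum(x * x)))
```

The paper layers Pareto dominance, an adaptive grid archive, scenario reduction, and constraint handling on top of this basic mutate-cross-select loop.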
Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S.; ...
2017-02-23
Here, a newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit.
Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S; Windus, Theresa L; Dick-Perez, Marilu
2017-03-27
A newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit. ParFit is an open source program available for free on GitHub ( https://github.com/fzahari/ParFit ).
Optimization of Stochastic Response Surfaces Subject to Constraints with Linear Programming
1992-03-01
Partial ASL extensions for stochastic programming.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gay, David
2010-03-31
Partially completed extensions for stochastic programming to the AMPL/solver interface library (ASL), intended for modeling and experimenting with stochastic recourse problems. This software is not primarily for military applications.
NASA Astrophysics Data System (ADS)
Arfawi Kurdhi, Nughthoh; Adi Diwiryo, Toray; Sutanto
2016-02-01
This paper presents an integrated single-vendor two-buyer production-inventory model with stochastic demand and service level constraints. Shortages are permitted in the model and are partially backordered and partially lost. The lead time demand is assumed to follow a normal distribution, and the lead time can be reduced by adding crashing cost. The lead time and ordering cost reductions are interdependent, with a logarithmic function relationship. A service level constraint corresponding to each buyer is considered in the model in order to limit the level of inventory shortages. The purpose of this research is to minimize the joint total cost of the inventory model by finding the optimal order quantity, safety stock, lead time, and number of lots delivered in one production run. The optimal production-inventory policy obtained by the Lagrange method is shaped to account for the service level restrictions. Finally, a numerical example is given and the effects of the key parameters are examined to illustrate the results of the proposed model.
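The normal lead-time-demand assumption fixes how safety stock responds to lead time crashing. A minimal sketch of the standard reorder-point calculation, with all parameter values and names illustrative rather than taken from the paper:

```python
from statistics import NormalDist

def safety_stock(mu_d, sigma_d, lead_time, service_level):
    """Reorder point and safety stock for normally distributed lead-time
    demand (mean mu_d, std sigma_d per period) at a target service level.
    Textbook form; parameter names are hypothetical, not from the paper."""
    z = NormalDist().inv_cdf(service_level)   # safety factor
    mu_L = mu_d * lead_time                   # mean demand over the lead time
    sigma_L = sigma_d * lead_time ** 0.5      # std of lead-time demand
    ss = z * sigma_L                          # safety stock
    return mu_L + ss, ss                      # reorder point, safety stock

# Crashing the lead time from 4 periods to 1 shrinks the safety stock:
r4, ss4 = safety_stock(100.0, 20.0, 4, 0.95)
r1, ss1 = safety_stock(100.0, 20.0, 1, 0.95)
```

Since the lead-time-demand standard deviation scales with the square root of the lead time, halving the safety stock here requires crashing the lead time by a factor of four.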
Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang
2018-03-01
This paper is concerned with the distributed filtering problem for a class of discrete time-varying stochastic parameter systems with error variance constraints over a sensor network where the sensor outputs are subject to successive missing measurements. The phenomenon of the successive missing measurements for each sensor is modeled via a sequence of mutually independent random variables obeying the Bernoulli binary distribution law. To reduce the frequency of unnecessary data transmission and alleviate the communication burden, an event-triggered mechanism is introduced for the sensor node such that only some vitally important data is transmitted to its neighboring sensors when specific events occur. The objective of the problem addressed is to design a time-varying filter such that both the performance requirements and the variance constraints are guaranteed over a given finite-horizon against the random parameter matrices, successive missing measurements, and stochastic noises. By recurring to stochastic analysis techniques, sufficient conditions are established to ensure the existence of the time-varying filters whose gain matrices are then explicitly characterized in terms of the solutions to a series of recursive matrix inequalities. A numerical simulation example is provided to illustrate the effectiveness of the developed event-triggered distributed filter design strategy.
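The Bernoulli missing-measurement model can be illustrated with a minimal scalar Kalman-style filter that simply skips the measurement update whenever a sample is lost. This is only a loosely related sketch with hypothetical parameters, not the paper's event-triggered, variance-constrained, distributed time-varying design:

```python
import random

# Scalar filter with Bernoulli measurement dropouts: gamma[k] == 0 means
# the k-th measurement never arrived, so only the time update runs.
random.seed(1)
a, q, r = 0.9, 0.04, 0.25            # dynamics, process/measurement noise
truth, ys, gamma = 1.0, [], []
for _ in range(50):
    truth = a * truth + random.gauss(0.0, q ** 0.5)
    gamma.append(1 if random.random() < 0.7 else 0)   # ~30% dropout rate
    ys.append(truth + random.gauss(0.0, r ** 0.5))

x, p, est = 0.0, 1.0, []
for y, g in zip(ys, gamma):
    x, p = a * x, a * a * p + q                       # time update
    if g:                                             # update only if received
        k = p / (p + r)
        x, p = x + k * (y - x), (1 - k) * p
    est.append((x, p))
```

During a run of dropouts the error covariance p relaxes toward its open-loop value q / (1 - a^2); each received measurement pulls it back down.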
Exact lower and upper bounds on stationary moments in stochastic biochemical systems
NASA Astrophysics Data System (ADS)
Ghusinga, Khem Raj; Vargas-Garcia, Cesar A.; Lamperski, Andrew; Singh, Abhyudai
2017-08-01
In the stochastic description of biochemical reaction systems, the time evolution of statistical moments for species population counts is described by a linear dynamical system. However, except for some ideal cases (such as zero- and first-order reaction kinetics), the moment dynamics is underdetermined as lower-order moments depend upon higher-order moments. Here, we propose a novel method to find exact lower and upper bounds on stationary moments for a given arbitrary system of biochemical reactions. The method exploits the fact that statistical moments of any positive-valued random variable must satisfy some constraints that are compactly represented through the positive semidefiniteness of moment matrices. Our analysis shows that solving moment equations at steady state in conjunction with constraints on moment matrices provides exact lower and upper bounds on the moments. These results are illustrated by three different examples—the commonly used logistic growth model, stochastic gene expression with auto-regulation and an activator-repressor gene network motif. Interestingly, in all cases the accuracy of the bounds is shown to improve as moment equations are expanded to include higher-order moments. Our results provide avenues for development of approximation methods that provide explicit bounds on moments for nonlinear stochastic systems that are otherwise analytically intractable.
Yohan Lee; Jeremy S. Fried; Heidi J. Albers; Robert G. Haight
2013-01-01
We combine a scenario-based, standard-response optimization model with stochastic simulation to improve the efficiency of resource deployment for initial attack on wildland fires in three planning units in California. The optimization model minimizes the expected number of fires that do not receive a standard response--defined as the number of resources by type that...
Nicholson, Bethany; Siirola, John D.; Watson, Jean-Paul; ...
2017-12-20
We describe pyomo.dae, an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations. The pyomo.dae framework is integrated with the Pyomo open source algebraic modeling language, and is available at http://www.pyomo.org. One key feature of pyomo.dae is that it does not restrict users to standard, predefined forms of differential equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks. Other key features of pyomo.dae are the ability to specify optimization problems with high-order differential equations and partial differential equations, defined on restricted domain types, and the ability to automatically transform high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf solvers. Moreover, pyomo.dae users can leverage existing capabilities of Pyomo to embed differential equation models within stochastic and integer programming models and mathematical programs with equilibrium constraint formulations. Collectively, these features enable the exploration of new modeling concepts, discretization schemes, and the benchmarking of state-of-the-art optimization solvers.
A Stochastic Diffusion Process for the Dirichlet Distribution
Bakosi, J.; Ristorcelli, J. R.
2013-03-01
The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
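The univariate (beta-invariant) case can be sketched with a simple Euler-Maruyama simulation. The drift and diffusion coefficients below are a standard Wright-Fisher-type parameterization, not necessarily the paper's; the clipping step is a numerical safeguard because the discretized scheme, unlike the exact process, can overshoot the boundaries:

```python
import random

# dX = a*(theta - X) dt + sqrt(c * X * (1 - X)) dW has a beta-distributed
# invariant with mean theta. All parameter values are illustrative.
random.seed(7)
a, theta, c, dt = 2.0, 0.3, 0.5, 0.01
x, xs = 0.5, []
for _ in range(20000):
    dw = random.gauss(0.0, dt ** 0.5)
    x += a * (theta - x) * dt + (c * x * (1 - x)) ** 0.5 * dw
    x = min(max(x, 1e-9), 1.0 - 1e-9)   # keep the Euler step inside (0, 1)
    xs.append(x)
mean = sum(xs) / len(xs)
```

The multiplicative noise vanishes at 0 and 1, which is what keeps the exact process, and (up to discretization error) its samples, inside the bounded sample space.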
NASA Astrophysics Data System (ADS)
Carpentier, Pierre-Luc
In this thesis, we consider the midterm production planning problem (MTPP) of hydroelectricity generation under uncertainty. The aim of this problem is to manage a set of interconnected hydroelectric reservoirs over several months. We are particularly interested in high dimensional reservoir systems that are operated by large hydroelectricity producers such as Hydro-Quebec. The aim of this thesis is to develop and evaluate different decomposition methods for solving the MTPP under uncertainty. This thesis is divided into three articles. The first article demonstrates the applicability of the progressive hedging algorithm (PHA), a scenario decomposition method, for managing hydroelectric reservoirs with multiannual storage capacity under highly variable operating conditions in Canada. The PHA is a classical stochastic optimization method designed to solve general multistage stochastic programs defined on a scenario tree. This method works by applying an augmented Lagrangian relaxation to the non-anticipativity constraints (NACs) of the stochastic program. At each iteration of the PHA, a sequence of subproblems must be solved. Each subproblem corresponds to a deterministic version of the original stochastic program for a particular scenario in the scenario tree. Linear and quadratic terms must be included in the subproblems' objective functions to penalize any violation of the NACs. An important limitation of the PHA is that the number of subproblems to be solved and the number of penalty terms increase exponentially with the branching level in the tree. This phenomenon can make the application of the PHA particularly difficult when the scenario tree covers several tens of time periods. Another important limitation of the PHA is that the difficulty of the NACs generally increases as the variability of the scenarios increases.
Consequently, applying the PHA becomes particularly challenging in hydroclimatic regions that are characterized by a high level of seasonal and interannual variability. These two types of limitations can slow down the algorithm's convergence rate and increase the running time per iteration. In this study, we apply the PHA to Hydro-Quebec's power system over a 92-week planning horizon. Hydrologic uncertainty is represented by a scenario tree containing 6 branching stages and 1,635 nodes. The PHA is especially well-suited for this particular application given that the company already possesses a deterministic optimization model to solve the MTPP. The second article presents a new approach which enhances the performance of the PHA for solving general multistage stochastic programs. The proposed method works by applying a multiscenario decomposition scheme to the stochastic program. Our heuristic method aims at constructing an optimal partition of the scenario set by minimizing the number of NACs on which an augmented Lagrangian relaxation must be applied. Each subproblem is a stochastic program defined on a group of scenarios. NACs linking scenarios sharing a common group are represented implicitly in the subproblems by using a group-node index system instead of the traditional scenario-time index system. Only the NACs that link the different scenario groups are represented explicitly and relaxed. The proposed method is evaluated numerically on a hydroelectric reservoir management problem in Quebec. The results of this experiment show that our method has several advantages. First, it reduces the running time per iteration of the PHA by reducing the number of penalty terms included in the objective function and the amount of duplicated constraints and variables.
Second, it increases the algorithm's convergence rate by reducing the variability of intermediary solutions at duplicated tree nodes. Third, our approach reduces the amount of random-access memory (RAM) required for storing the Lagrange multipliers associated with relaxed NACs. The third article presents an extension of the L-Shaped method designed specifically for managing hydroelectric reservoir systems with a high storage capacity. The method proposed in this paper makes it possible to consider a higher branching level than conventional decomposition methods allow. To achieve this, we assume that the stochastic process driving the random parameters has a memory loss at time period t = tau. Because of this assumption, the scenario tree possesses a special symmetrical structure at the second stage (t > tau). We exploit this feature using a two-stage Benders decomposition method. Each decomposition stage covers several consecutive time periods. The proposed method works by constructing a convex and piecewise linear recourse function that represents the expected cost of the second stage in the master problem. The subproblem and the master problem are stochastic programs defined on scenario subtrees and can be solved using a conventional decomposition method or directly. We test the proposed method on a hydroelectric power system in Quebec over a 104-week planning horizon. (Abstract shortened by UMI.)
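The PHA iteration described in the first article can be sketched on a toy two-scenario problem with closed-form subproblems. Scenario data and the penalty parameter are hypothetical; real MTPP subproblems are large LPs/QPs handled by a deterministic solver:

```python
# Progressive hedging for: min over x of E_s[(x - a_s)^2 / 2], where x is
# a here-and-now decision that must not anticipate the scenario s.
a, prob, rho = [2.0, 6.0], [0.5, 0.5], 1.0
w = [0.0, 0.0]      # multipliers on the non-anticipativity constraints
xbar = 0.0
for _ in range(100):
    # scenario subproblems: min_x (x - a_s)^2/2 + w_s*x + (rho/2)*(x - xbar)^2
    xs = [(a[s] - w[s] + rho * xbar) / (1.0 + rho) for s in range(2)]
    xbar = sum(p * x for p, x in zip(prob, xs))          # implementable value
    w = [w[s] + rho * (xs[s] - xbar) for s in range(2)]  # multiplier update
```

At convergence the scenario solutions agree (xs[0] == xs[1] == xbar), i.e., the non-anticipativity constraints hold, and xbar solves the original stochastic program (here, the scenario mean 4.0).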
Planning with Continuous Resources in Stochastic Domains
NASA Technical Reports Server (NTRS)
Mausam, Mausau; Benazera, Emmanuel; Brafman, Roneu; Hansen, Eric
2005-01-01
We consider the problem of optimal planning in stochastic domains with metric resource constraints. Our goal is to generate a policy whose expected sum of rewards is maximized for a given initial state. We consider a general formulation motivated by our application domain--planetary exploration--in which the choice of an action at each step may depend on the current resource levels. We adapt the forward search algorithm AO* to handle our continuous state space efficiently.
NASA Technical Reports Server (NTRS)
Johnson, E. H.
1975-01-01
The optimal design of simple structures subjected to dynamic loads, with constraints on the structures' responses, was investigated. Optimal designs were examined for one-dimensional structures excited by harmonically oscillating loads, similar structures excited by white noise, and a wing in the presence of continuous atmospheric turbulence. The first has constraints on the maximum allowable stress, while the last two place bounds on the probability of failure of the structure. Approximations were made to replace the time parameter with a frequency parameter. For the first problem, this involved the steady state response, and in the remaining cases, power spectral techniques were employed to find the root mean square values of the responses. Optimal solutions were found by using computer algorithms which combined finite element methods with optimization techniques based on mathematical programming. It was found that the inertial loads for these dynamic problems result in optimal structures that are radically different from those obtained for structures loaded statically by forces of comparable magnitude.
Towards Quantum Cybernetics:. Optimal Feedback Control in Quantum Bio Informatics
NASA Astrophysics Data System (ADS)
Belavkin, V. P.
2009-02-01
A brief account of the quantum information dynamics and dynamical programming methods for the purpose of optimal control in quantum cybernetics with convex constraints and concave cost and bequest functions of the quantum state is given. Consideration is given to both open loop and feedback control schemes corresponding respectively to deterministic and stochastic semi-Markov dynamics of stable or unstable systems. For the quantum feedback control scheme with continuous observations we exploit the separation theorem of filtering and control aspects for quantum stochastic micro-dynamics of the total system. This allows us to start with the Belavkin quantum filtering equation and derive the generalized Hamilton-Jacobi-Bellman equation using standard arguments of classical control theory. This is equivalent to a Hamilton-Jacobi equation with an extra linear dissipative term if the control is restricted to only Hamiltonian terms in the filtering equation. A controlled qubit is considered as an example throughout the development of the formalism. Finally, we discuss optimum observation strategies to obtain a pure quantum qubit state from a mixed one.
Dung Tuan Nguyen
2012-01-01
Forest harvest scheduling has been modeled using deterministic and stochastic programming models. Past models seldom address explicit spatial forest management concerns under the influence of natural disturbances. In this research study, we employ multistage full recourse stochastic programming models to explore the challenges and advantages of building spatial...
NASA Astrophysics Data System (ADS)
McDonough, Kevin K.
The dissertation presents contributions to fuel-efficient control of vehicle speed and constrained control with applications to aircraft. In the first part of this dissertation a stochastic approach to fuel-efficient vehicle speed control is developed. This approach encompasses stochastic modeling of road grade and traffic speed, modeling of fuel consumption through the use of a neural network, and the application of stochastic dynamic programming to generate vehicle speed control policies that are optimized for the trade-off between fuel consumption and travel time. The fuel economy improvements with the proposed policies are quantified through simulations and vehicle experiments. It is shown that the policies lead to the emergence of time-varying vehicle speed patterns that are referred to as time-varying cruise. Through simulations and experiments it is confirmed that these time-varying vehicle speed profiles are more fuel-efficient than driving at a comparable constant speed. Motivated by these results, a simpler implementation strategy that is more appealing for practical implementation is also developed. This strategy relies on a finite state machine and state transition threshold optimization, and its benefits are quantified through model-based simulations and vehicle experiments. Several additional contributions are made to approaches for stochastic modeling of road grade and vehicle speed that include the use of Kullback-Leibler divergence and divergence rate and a stochastic jump-like model for the behavior of the road grade. In the second part of the dissertation, contributions to constrained control with applications to aircraft are described. Recoverable sets and integral safe sets of initial states of constrained closed-loop systems are introduced first and computational procedures of such sets based on linear discrete-time models are given. The use of linear discrete-time models is emphasized as they lead to fast computational procedures.
Examples of these sets for aircraft longitudinal and lateral dynamics are reported, and it is shown that these sets can be larger in size compared to the more commonly used safe sets. An approach to constrained maneuver planning based on chaining recoverable sets or integral safe sets is described and illustrated with a simulation example. To facilitate the application of this maneuver planning approach in aircraft loss of control (LOC) situations when the model is only identified at the current trim condition but when these sets need to be predicted at other flight conditions, the dependence trends of the safe and recoverable sets on aircraft flight conditions are characterized. The scaling procedure to estimate subsets of safe and recoverable sets at one trim condition based on their knowledge at another trim condition is defined. Finally, two control schemes that exploit integral safe sets are proposed. The first scheme, referred to as the controller state governor (CSG), resets the controller state (typically an integrator) to enforce the constraints and enlarge the set of plant states that can be recovered without constraint violation. The second scheme, referred to as the controller state and reference governor (CSRG), combines the controller state governor with the reference governor control architecture and provides the capability of simultaneously modifying the reference command and the controller state to enforce the constraints. Theoretical results that characterize the response properties of both schemes are presented. Examples are reported that illustrate the operation of these schemes on aircraft flight dynamics models and gas turbine engine dynamic models.
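The stochastic dynamic programming idea in the first part, trading fuel cost against travel time over a randomly evolving road grade, can be sketched with a toy value iteration. All grades, speeds, costs, and transition probabilities are hypothetical; since the grade evolves exogenously here, the continuation value is action-independent and the resulting policy is myopic in this simplified instance:

```python
# Toy stochastic DP for speed selection over a two-state road grade.
P = {"flat": {"flat": 0.8, "up": 0.2},      # grade transition probabilities
     "up":   {"flat": 0.6, "up": 0.4}}
speeds = {"slow": 60.0, "fast": 100.0}      # km/h

def stage_cost(grade, speed_key):
    v = speeds[speed_key]
    fuel = (0.05 if grade == "flat" else 0.12) * v ** 1.5 / 100.0
    return fuel + 60.0 / v                  # fuel cost + travel-time penalty

gamma = 0.95
V = {g: 0.0 for g in P}
for _ in range(500):                        # value iteration to convergence
    V = {g: min(stage_cost(g, u) + gamma * sum(P[g][h] * V[h] for h in P)
                for u in speeds) for g in P}
policy = {g: min(speeds, key=lambda u, g=g: stage_cost(g, u)
                 + gamma * sum(P[g][h] * V[h] for h in P)) for g in P}
```

With these numbers the policy drives fast on flat road and slows on the climb, a crude analogue of the time-varying cruise patterns described above.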
Stationary properties of maximum-entropy random walks.
Dixit, Purushottam D
2015-10-01
Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
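For the unconstrained case, the standard maximum-entropy random walk construction makes the competition between path multiplicity and topology concrete: transitions come from the dominant eigenpair of the adjacency matrix, and the stationary distribution is the squared eigenvector rather than the degree-proportional one. A sketch on a 4-node path graph (this is the classical MERW construction, not the paper's path-dependent-constraint generalization):

```python
# Maximum-entropy random walk: P[i][j] = A[i][j] * psi[j] / (lam * psi[i]),
# with (lam, psi) the dominant eigenpair of the adjacency matrix A.
A = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]  # path graph
n = len(A)
psi = [1.0] * n
for _ in range(200):                         # power iteration
    psi = [sum(A[i][j] * psi[j] for j in range(n)) for i in range(n)]
    norm = max(psi)
    psi = [x / norm for x in psi]
lam = sum(A[0][j] * psi[j] for j in range(n)) / psi[0]
P = [[A[i][j] * psi[j] / (lam * psi[i]) for j in range(n)] for i in range(n)]
Z = sum(x * x for x in psi)
pi = [x * x / Z for x in psi]                # stationary distribution ~ psi^2
```

The stationary weights psi_i^2 concentrate probability on nodes embedded in many long paths, which is exactly the departure from the Boltzmann-like distribution the abstract highlights.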
A benders decomposition approach to multiarea stochastic distributed utility planning
NASA Astrophysics Data System (ADS)
McCusker, Susan Ann
Until recently, small, modular generation and storage options---distributed resources (DRs)---have been installed principally in areas too remote for economic power grid connection and sensitive applications requiring backup capacity. Recent regulatory changes and DR advances, however, have led utilities to reconsider the role of DRs. To a utility facing distribution capacity bottlenecks or uncertain load growth, DRs can be particularly valuable since they can be dispersed throughout the system and constructed relatively quickly. DR value is determined by comparing its costs to avoided central generation expenses (i.e., marginal costs) and distribution investments. This requires a comprehensive central and local planning and production model, since central system marginal costs result from system interactions over space and time. This dissertation develops and applies an iterative generalized Benders decomposition approach to coordinate models for optimal DR evaluation. Three coordinated models exchange investment, net power demand, and avoided cost information to minimize overall expansion costs. Local investment and production decisions are made by a local mixed integer linear program. Central system investment decisions are made by a LP, and production costs are estimated by a stochastic multi-area production costing model with Kirchhoff's Voltage and Current Law constraints. The nested decomposition is a new and unique method for distributed utility planning that partitions the variables twice to separate local and central investment and production variables, and provides upper and lower bounds on expected expansion costs. Kirchhoff's Voltage Law imposes nonlinear, nonconvex constraints that preclude use of LP if transmission capacity is available in a looped transmission system.
This dissertation develops KVL constraint approximations that permit the nested decomposition to consider new transmission resources, while maintaining linearity in the three individual models. These constraints are presented as a heuristic for the given examples; future research will investigate conditions for convergence. A ten-year multi-area example demonstrates the decomposition approach and suggests the ability of DRs and new transmission to modify capacity additions and production costs by changing demand and power flows. Results demonstrate that DR and new transmission options may lead to greater capacity additions, but resulting production cost savings more than offset extra capacity costs.
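The Benders idea of building a piecewise-linear lower approximation of expected second-stage cost can be sketched on a toy one-dimensional capacity problem. All data are hypothetical, and the master problem (an LP with network constraints in the dissertation) is replaced by a grid search for brevity:

```python
# Two-stage toy: buy capacity y at unit cost; each scenario s then pays
# pen * max(0, d_s - y) for unmet demand. Benders cuts approximate the
# expected recourse cost from below.
d, prob, pen = [2.0, 4.0], [0.5, 0.5], 3.0

def recourse(y):
    """Expected second-stage cost and one subgradient (data for a cut)."""
    q = sum(p * pen * max(0.0, ds - y) for p, ds in zip(prob, d))
    g = sum(p * (-pen if y < ds else 0.0) for p, ds in zip(prob, d))
    return q, g

grid = [i * 0.01 for i in range(601)]
cuts, y = [], 0.0
for _ in range(20):
    q, g = recourse(y)
    cuts.append((q, g, y))              # cut: theta >= q + g * (y' - y)
    y = min(grid, key=lambda yy: yy + max(q0 + g0 * (yy - y0)
                                          for q0, g0, y0 in cuts))
total = y + recourse(y)[0]
```

Each iteration adds one cut; after a few cuts the master's piecewise-linear model matches the true expected cost near the optimum (here capacity 4, total cost 4).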
Real-Time Optimal Flood Control Decision Making and Risk Propagation Under Multiple Uncertainties
NASA Astrophysics Data System (ADS)
Zhu, Feilin; Zhong, Ping-An; Sun, Yimeng; Yeh, William W.-G.
2017-12-01
Multiple uncertainties exist in the optimal flood control decision-making process, presenting risks involving flood control decisions. This paper defines the main steps in optimal flood control decision making that constitute the Forecast-Optimization-Decision Making (FODM) chain. We propose a framework for supporting optimal flood control decision making under multiple uncertainties and evaluate risk propagation along the FODM chain from a holistic perspective. To deal with uncertainties, we employ stochastic models at each link of the FODM chain. We generate synthetic ensemble flood forecasts via the martingale model of forecast evolution. We then establish a multiobjective stochastic programming with recourse model for optimal flood control operation. The Pareto front under uncertainty is derived via the constraint method coupled with a two-step process. We propose a novel SMAA-TOPSIS model for stochastic multicriteria decision making. We then propose a risk assessment model that uses the risk of decision-making errors and the rank uncertainty degree to quantify the risk propagation process along the FODM chain. We conduct numerical experiments to investigate the effects of flood forecast uncertainty on optimal flood control decision making and risk propagation. We apply the proposed methodology to a flood control system in the Daduhe River basin in China. The results indicate that the proposed method can provide valuable risk information in each link of the FODM chain and enable risk-informed decisions with higher reliability.
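The deterministic TOPSIS core underlying SMAA-TOPSIS ranks alternatives by their relative closeness to an ideal point; the SMAA layer then adds stochastic criteria weights on top of this. A sketch with purely hypothetical decision data (three candidate flood-control decisions, three benefit-type criteria):

```python
# TOPSIS: vector-normalize, weight, then score each alternative by its
# closeness to the ideal point relative to the anti-ideal point.
X = [[0.7, 0.3, 0.8],
     [0.5, 0.9, 0.4],
     [0.9, 0.5, 0.6]]
w = [0.5, 0.3, 0.2]                     # criteria weights (illustrative)

norms = [sum(X[i][j] ** 2 for i in range(len(X))) ** 0.5 for j in range(3)]
V = [[w[j] * X[i][j] / norms[j] for j in range(3)] for i in range(len(X))]
ideal = [max(col) for col in zip(*V)]
anti = [min(col) for col in zip(*V)]

def dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

closeness = [dist(v, anti) / (dist(v, ideal) + dist(v, anti)) for v in V]
ranking = sorted(range(len(X)), key=lambda i: -closeness[i])
```

SMAA-style stochastic multicriteria analysis would rerun this scoring over sampled weight vectors and report how often each alternative attains each rank, rather than committing to one fixed w.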
Stochasticity in materials structure, properties, and processing—A review
NASA Astrophysics Data System (ADS)
Hull, Robert; Keblinski, Pawel; Lewis, Dan; Maniatty, Antoinette; Meunier, Vincent; Oberai, Assad A.; Picu, Catalin R.; Samuel, Johnson; Shephard, Mark S.; Tomozawa, Minoru; Vashishth, Deepak; Zhang, Shengbai
2018-03-01
We review the concept of stochasticity—i.e., unpredictable or uncontrolled fluctuations in structure, chemistry, or kinetic processes—in materials. We first define six broad classes of stochasticity: equilibrium (thermodynamic) fluctuations; structural/compositional fluctuations; kinetic fluctuations; frustration and degeneracy; imprecision in measurements; and stochasticity in modeling and simulation. In this review, we focus on the first four classes that are inherent to materials phenomena. We next develop a mathematical framework for describing materials stochasticity and then show how it can be broadly applied to these four materials-related stochastic classes. In subsequent sections, we describe structural and compositional fluctuations at small length scales that modify material properties and behavior at larger length scales; systems with engineered fluctuations, concentrating primarily on composite materials; systems in which stochasticity is developed through nucleation and kinetic phenomena; and configurations in which constraints in a given system prevent it from attaining its ground state and cause it to attain several, equally likely (degenerate) states. We next describe how stochasticity in these processes results in variations in physical properties and how these variations are then accentuated by—or amplify—stochasticity in processing and manufacturing procedures. In summary, the origins of materials stochasticity, the degree to which it can be predicted and/or controlled, and the possibility of using stochastic descriptions of materials structure, properties, and processing as a new degree of freedom in materials design are described.
Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions
Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.
2012-01-01
In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661
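A plain Kaplan-Meier estimate with a naive pointwise-ordering repair illustrates the shape of the constraint. The paper's pointwise constrained NPMLE imposes the ordering within the likelihood at each time t, which is more subtle than the max/min adjustment shown here; the data are hypothetical:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survivor estimates at the distinct failure times.
    events[i] is 1 for an observed failure, 0 for right-censoring."""
    s, out = 1.0, []
    for t in sorted({t for t, e in zip(times, events) if e == 1}):
        d = sum(1 for tt, e in zip(times, events) if tt == t and e == 1)
        r = sum(1 for tt in times if tt >= t)      # number still at risk
        s *= 1.0 - d / r
        out.append((t, s))
    return out

def step_eval(km, t):
    """Evaluate the right-continuous survivor step function at time t."""
    s = 1.0
    for tt, ss in km:
        if tt <= t:
            s = ss
    return s

# hypothetical data; group 1 is constrained to be stochastically larger
km1 = kaplan_meier([3, 5, 7, 9, 11], [1, 0, 1, 1, 0])
km2 = kaplan_meier([1, 2, 4, 6, 8], [1, 1, 0, 1, 1])
grid = sorted({t for km in (km1, km2) for t, _ in km})
s1 = [step_eval(km1, t) for t in grid]
s2 = [step_eval(km2, t) for t in grid]
# naive pointwise repair of any ordering violations (illustrative only)
s1c = [max(a, b) for a, b in zip(s1, s2)]
s2c = [min(a, b) for a, b in zip(s1, s2)]
```

With censored data this kind of post-hoc repair can behave poorly, which is precisely the gap the paper's pointwise constrained estimator is designed to close.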
NASA Astrophysics Data System (ADS)
Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.
2016-07-01
This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design the robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square, and the prescribed H∞ performance constraint is met. By utilizing the difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on it, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.
Plasma Equilibria With Stochastic Magnetic Fields
NASA Astrophysics Data System (ADS)
Krommes, J. A.; Reiman, A. H.
2009-05-01
Plasma equilibria that include regions of stochastic magnetic fields are of interest in a variety of applications, including tokamaks with ergodic limiters and high-pressure stellarators. Such equilibria are examined theoretically, and a numerical algorithm for their construction is described [2,3]. The balance between stochastic diffusion of magnetic lines and small effects [2] omitted from the simplest MHD description can support pressure and current profiles that need not be flattened in stochastic regions. The diffusion can be described analytically by renormalizing stochastic Langevin equations for pressure and parallel current j, with particular attention being paid to the satisfaction of the periodicity constraints in toroidal configurations with sheared magnetic fields. The equilibrium field configuration can then be constructed by coupling the prediction for j to Ampère's law, which is solved numerically. [1] A. Reiman et al., Pressure-induced breaking of equilibrium flux surfaces in the W7AS stellarator, Nucl. Fusion 47, 572--8 (2007). [2] J. A. Krommes and A. H. Reiman, Plasma equilibrium in a magnetic field with stochastic regions, submitted to Phys. Plasmas. [3] J. A. Krommes, Fundamental statistical theories of plasma turbulence in magnetic fields, Phys. Reports 360, 1--351.
Constraints on Fluctuations in Sparsely Characterized Biological Systems.
Hilfinger, Andreas; Norman, Thomas M; Vinnicombe, Glenn; Paulsson, Johan
2016-02-05
Biochemical processes are inherently stochastic, creating molecular fluctuations in otherwise identical cells. Such "noise" is widespread but has proven difficult to analyze because most systems are sparsely characterized at the single cell level and because nonlinear stochastic models are analytically intractable. Here, we exactly relate average abundances, lifetimes, step sizes, and covariances for any pair of components in complex stochastic reaction systems even when the dynamics of other components are left unspecified. Using basic mathematical inequalities, we then establish bounds for whole classes of systems. These bounds highlight fundamental trade-offs that show how efficient assembly processes must invariably exhibit large fluctuations in subunit levels and how eliminating fluctuations in one cellular component requires creating heterogeneity in another.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malikopoulos, Andreas; Djouadi, Seddik M; Kuruganti, Teja
We consider the optimal stochastic control problem for home energy systems with solar and energy storage devices when the demand is met from the grid. The demand is subject to Brownian motions with both drift and variance parameters modulated by a continuous-time Markov chain that represents the regime of electricity price. We model the system as a pure stochastic differential equation model and then apply the completing-the-square technique to solve the stochastic home energy management problem. The effectiveness of the proposed approach is validated through a simulation example. For practical situations with constraints consistent with those studied here, our results imply the proposed framework could reduce the electricity cost of short-term purchases in the peak-hour market.
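The demand process described here is easy to simulate. The sketch below draws a Brownian-motion demand whose drift and volatility switch with a two-state Markov chain standing in for the electricity-price regime; the function name and every parameter value are illustrative, not taken from the paper.

```python
import random

def simulate_demand(T=24, dt=1.0, seed=0):
    """Simulate cumulative demand driven by a Brownian motion whose drift and
    volatility are modulated by a two-state Markov chain (off-peak/peak price
    regimes). All parameter values are illustrative."""
    rng = random.Random(seed)
    drift = {0: 1.0, 1: 2.5}   # kWh/h in off-peak (0) and peak (1) regimes
    sigma = {0: 0.2, 1: 0.6}   # regime-dependent volatility
    switch = {0: 0.1, 1: 0.3}  # per-step probability of leaving each regime
    state, demand, path = 0, 0.0, []
    for _ in range(T):
        if rng.random() < switch[state]:      # Markov regime transition
            state = 1 - state
        # Euler step of the regime-modulated Brownian motion
        demand += drift[state] * dt + sigma[state] * dt ** 0.5 * rng.gauss(0, 1)
        path.append((state, demand))
    return path
```

Sample paths like these are the natural test input for any home energy management policy built on this demand model.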
A quantum-classical theory with nonlinear and stochastic dynamics
NASA Astrophysics Data System (ADS)
Burić, N.; Popović, D. B.; Radonjić, M.; Prvanović, S.
2014-12-01
The method of constrained dynamical systems on the quantum-classical phase space is utilized to develop a theory of quantum-classical hybrid systems. Effects of the classical degrees of freedom on the quantum part are modeled using an appropriate constraint, and the interaction also includes the effects of neglected degrees of freedom. Dynamical law of the theory is given in terms of nonlinear stochastic differential equations with Hamiltonian and gradient terms. The theory provides a successful dynamical description of the collapse during quantum measurement.
A theoretical stochastic control framework for adapting radiotherapy to hypoxia
NASA Astrophysics Data System (ADS)
Saberian, Fatemeh; Ghate, Archis; Kim, Minsun
2016-10-01
Hypoxia, that is, insufficient oxygen partial pressure, is a known cause of reduced radiosensitivity in solid tumors, and especially in head-and-neck tumors. It is thus believed to adversely affect the outcome of fractionated radiotherapy. Oxygen partial pressure varies spatially and temporally over the treatment course and exhibits inter-patient and intra-tumor variation. Emerging advances in non-invasive functional imaging offer the future possibility of adapting radiotherapy plans to this uncertain spatiotemporal evolution of hypoxia over the treatment course. We study the potential benefits of such adaptive planning via a theoretical stochastic control framework using computer-simulated evolution of hypoxia on computer-generated test cases in head-and-neck cancer. The exact solution of the resulting control problem is computationally intractable. We develop an approximation algorithm, called certainty equivalent control, that calls for the solution of a sequence of convex programs over the treatment course; dose-volume constraints are handled using a simple constraint generation method. These convex programs are solved using an interior point algorithm with a logarithmic barrier via Newton’s method and backtracking line search. Convexity of various formulations in this paper is guaranteed by a sufficient condition on radiobiological tumor-response parameters. This condition is expected to hold for head-and-neck tumors and for other similarly responding tumors where the linear dose-response parameter is larger than the quadratic dose-response parameter. We perform numerical experiments on four test cases by using a first-order vector autoregressive process with exponential and rational-quadratic covariance functions from the spatiotemporal statistics literature to simulate the evolution of hypoxia. Our results suggest that dynamic planning could lead to a considerable improvement in the number of tumor cells remaining at the end of the treatment course. 
Through these simulations, we also gain insights into when and why dynamic planning is likely to yield the largest benefits.
Stochastic Semidefinite Programming: Applications and Algorithms
2012-03-03
Number of papers published in non-peer-reviewed journals: 1. Baha M. Alzalg and K. A. Ariyawansa, Stochastic...symmetric programming over integers. International Conference on Scientific Computing, Las Vegas, Nevada, July 18--21, 2011. Baha M. Alzalg, On recent... Proceeding publications (other than abstracts): Baha M. Alzalg, K. A. Ariyawansa, Stochastic mixed integer second-order cone programming
Stochastic Routing and Scheduling Policies for Energy Harvesting Communication Networks
NASA Astrophysics Data System (ADS)
Calvo-Fullana, Miguel; Anton-Haro, Carles; Matamoros, Javier; Ribeiro, Alejandro
2018-07-01
In this paper, we study the joint routing-scheduling problem in energy harvesting communication networks. Our policies, which are based on stochastic subgradient methods in the dual domain, act as an energy harvesting variant of the stochastic family of backpressure algorithms. Specifically, we propose two policies: (i) the Stochastic Backpressure with Energy Harvesting (SBP-EH), in which a node's routing-scheduling decisions are determined by the difference between the Lagrange multipliers associated with its queue stability constraints and its neighbors'; and (ii) the Stochastic Soft Backpressure with Energy Harvesting (SSBP-EH), an improved algorithm in which the routing-scheduling decision is of a probabilistic nature. For both policies, we show that given sustainable data and energy arrival rates, the stability of the data queues over all network nodes is guaranteed. Numerical results corroborate the stability guarantees and illustrate the minimal performance gap between our policies and classical ones that operate with an unlimited energy supply.
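A toy version of a backpressure policy with an energy-availability check can be written directly from the description. This sketch uses raw queue differentials on a three-node line network; the SBP-EH policy instead works with Lagrange multipliers, and the arrival and harvesting rates here are invented for illustration.

```python
import random

def backpressure_sim(T=200, seed=1):
    """Minimal single-commodity backpressure sketch on a line network
    a -> b -> c, where c is the sink. Each slot, a link transmits one packet
    when its queue differential is positive and the sender has energy; energy
    arrives as a Bernoulli process, standing in for harvesting. Parameters
    are illustrative, not from the paper."""
    rng = random.Random(seed)
    q = {"a": 0, "b": 0, "c": 0}          # data queues
    e = {"a": 0, "b": 0}                  # battery levels of transmitters
    links = [("a", "b"), ("b", "c")]
    for _ in range(T):
        if rng.random() < 0.4:            # exogenous packet arrivals at source
            q["a"] += 1
        for u, v in links:
            if q[u] - q[v] > 0 and e[u] > 0:
                q[u] -= 1                  # transmit one packet over (u, v)
                q[v] += 1
                e[u] -= 1                  # transmission costs one energy unit
        q["c"] = 0                         # sink delivers (drains) its packets
        for u in e:                        # Bernoulli energy harvesting
            if rng.random() < 0.5:
                e[u] += 1
    return q
```

With arrival rate below the energy-limited service rate, the queues stay bounded, which is the qualitative stability property the paper proves for its policies.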
NASA Astrophysics Data System (ADS)
Verma, Arun; Smith, Terry; Punjabi, Alkesh; Boozer, Allen
1996-11-01
In this work, we investigate the effects of low MN perturbations in a single-null divertor tokamak with a stochastic scrape-off layer. The unperturbed magnetic topology of a single-null divertor tokamak is represented by the Simple Map (Punjabi A, Verma A and Boozer A, Phys Rev Lett 69, 3322 (1992); J Plasma Phys 52, 91 (1994)). We choose combinations of the map parameter k and the strength of the low MN perturbation such that the width of the stochastic layer remains unchanged. We give detailed results on the effects of the low MN perturbation on the magnetic topology of the stochastic layer and on the footprint of field lines on the divertor plate, given the constraint of constant stochastic-layer width. Low MN perturbations occur naturally, and their effects are therefore of considerable importance in tokamak divertor physics. This work is supported by US DOE OFES. Use of the CRAY at HU and at NERSC is gratefully acknowledged.
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
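To make the enumeration idea concrete, here is a pure-Python sketch that fits AR(p) models by conditional least squares and picks the order with the lowest AIC. It is a deliberate simplification of the paper's approach: no MA terms, no Kalman-filter likelihood, and brute-force enumeration instead of MADS or genetic solvers. All function names are illustrative.

```python
import math, random

def fit_ar_aic(x, p):
    """Conditional least-squares fit of an AR(p) model (with intercept) and
    its AIC under Gaussian errors; a stand-in for the full ARMA likelihood."""
    n = len(x) - p
    # design rows [1, x[t-1], ..., x[t-p]] with target x[t]
    X = [[1.0] + [x[t - j] for j in range(1, p + 1)] for t in range(p, len(x))]
    y = [x[t] for t in range(p, len(x))]
    k = p + 1
    # normal equations A b = c, solved by Gauss-Jordan elimination
    A = [[sum(X[i][r] * X[i][s] for i in range(n)) for s in range(k)] for r in range(k)]
    c = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    for r in range(k):
        piv = A[r][r]
        for s in range(r, k):
            A[r][s] /= piv
        c[r] /= piv
        for rr in range(k):
            if rr != r:
                f = A[rr][r]
                for s in range(r, k):
                    A[rr][s] -= f * A[r][s]
                c[rr] -= f * c[r]
    resid = [y[i] - sum(c[j] * X[i][j] for j in range(k)) for i in range(n)]
    sigma2 = sum(e * e for e in resid) / n
    loglik = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)
    return c, 2 * (k + 1) - 2 * loglik  # AIC, counting the noise variance

def best_ar_order(x, pmax=3):
    """Brute-force model selection by AIC over candidate AR orders."""
    return min(range(1, pmax + 1), key=lambda p: fit_ar_aic(x, p)[1])
```

The paper's MINLP view treats the integer orders and real coefficients jointly; this sketch only shows why enumeration over the integer part with a per-order continuous fit is a natural baseline.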
NASA Astrophysics Data System (ADS)
Dai, C.; Qin, X. S.; Chen, Y.; Guo, H. C.
2018-06-01
A Gini-coefficient-based stochastic optimization (GBSO) model was developed by integrating a hydrological model, a water balance model, the Gini coefficient and chance-constrained programming (CCP) into a general multi-objective optimization modeling framework for supporting water resources allocation at the watershed scale. The framework is advantageous in reflecting the conflicting equity and benefit objectives for water allocation, maintaining the water balance of the watershed, and dealing with system uncertainties. GBSO was solved by the non-dominated sorting genetic algorithm II (NSGA-II), after the parameter uncertainties of the hydrological model were quantified into the probability distribution of runoff as the input of the CCP model and the chance constraints were converted to their corresponding deterministic versions. The proposed model was applied to identify the Pareto optimal water allocation schemes in the Lake Dianchi watershed, China. The Pareto-optimal results reflected the tradeoff between system benefit (αSB) and Gini coefficient (αG) under different significance levels (i.e. q) and different drought scenarios, which reveals the conflicting nature of equity and efficiency in water allocation problems. A lower q generally implies a lower risk of violating the system constraints, and a worse drought-intensity scenario corresponds to less available water; both lead to a decreased system benefit and a less equitable water allocation scheme. Thus, the proposed modeling framework could help obtain Pareto optimal schemes under complexity and ensure that the proposed water allocation solutions are effective for coping with drought conditions, with a proper tradeoff between system benefit and water allocation equity.
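For a single normally distributed runoff, the chance-constraint conversion used in CCP has a closed form that Python's standard library can evaluate: Pr(allocation ≤ runoff) ≥ 1 − q becomes allocation ≤ μ + Φ⁻¹(q)σ. The numbers below are illustrative, not from the Lake Dianchi case.

```python
from statistics import NormalDist

def max_allocation(mu, sigma, q):
    """Deterministic equivalent of the chance constraint
    Pr(allocation <= runoff) >= 1 - q for runoff ~ Normal(mu, sigma):
    allocation <= mu + Phi^{-1}(q) * sigma. Values are illustrative."""
    return mu + NormalDist().inv_cdf(q) * sigma

# tighter reliability (smaller q) => less water may be promised
print(round(max_allocation(100.0, 20.0, 0.05), 2))  # 67.1
print(round(max_allocation(100.0, 20.0, 0.50), 2))  # 100.0
```

This is why a lower significance level q shrinks the feasible region and hence the attainable system benefit, as the abstract reports.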
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Fei; Huang, Yongxi
2018-02-04
Here, we develop a multistage, stochastic mixed-integer model to support biofuel supply chain expansion under evolving uncertainties. By utilizing the block-separable recourse property, we reformulate the multistage program as an equivalent two-stage program and solve it using an enhanced nested decomposition method with maximal non-dominated cuts. We conduct extensive numerical experiments and demonstrate the application of the model and algorithm in a case study based on South Carolina settings. The value of the multistage stochastic programming method is also explored by comparing the model solution with the counterparts of an expected-value-based deterministic model and a two-stage stochastic model.
Stochastic Dynamic Mixed-Integer Programming (SD-MIP)
2015-05-05
stochastic linear programming (SLP) problems. By using a combination of ideas from cutting plane theory of deterministic MIP (especially disjunctive...developed to date. b) As part of this project, we have also developed tools for very large scale stochastic linear programming (SLP). There are...several reasons for this. First, SLP models continue to challenge many of the fastest computers to date, and many applications within the DoD (e.g
Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L
2015-02-01
Uncertainties exist in water resources systems, yet traditional two-stage stochastic programming is risk-neutral and compares random variables (e.g., total benefit) to identify the best decisions. To deal with risk issues, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid methodology of interval-parameter programming, the conditional value-at-risk measure, and a general two-stage stochastic programming framework. The method extends the traditional two-stage stochastic programming method by enabling uncertainties presented as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It can not only provide information on the benefits of the allocation plan to the decision makers but also measure the extreme expected loss in the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that the model could help managers generate feasible and balanced risk-aversion allocation plans and analyze the trade-offs between system stability and economy.
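The conditional value-at-risk measure used in the model has a simple discrete-scenario form: the mean of the worst (1 − α) fraction of scenario losses. A minimal sketch follows; the function name is ours, and this omits the surrounding two-stage optimization entirely.

```python
def cvar(losses, alpha=0.95):
    """Conditional value-at-risk of a list of equally likely scenario losses:
    the mean of the worst (1 - alpha) fraction. A discrete-scenario sketch of
    the risk measure, not the full risk-aversion optimization model."""
    worst = sorted(losses, reverse=True)          # largest losses first
    k = max(1, int(round(len(losses) * (1 - alpha))))
    return sum(worst[:k]) / k
```

Embedding this tail average in the objective is what shifts the two-stage model from risk-neutral expected cost toward penalizing extreme second-stage outcomes.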
NASA Technical Reports Server (NTRS)
Dahl, Roy W.; Keating, Karen; Salamone, Daryl J.; Levy, Laurence; Nag, Barindra; Sanborn, Joan A.
1987-01-01
This paper presents an algorithm (WHAMII) designed to solve the Artificial Intelligence Design Challenge at the 1987 AIAA Guidance, Navigation and Control Conference. The problem under consideration is a stochastic generalization of the traveling salesman problem in which travel costs can incur a penalty with a given probability. The variability in travel costs leads to a probability constraint with respect to violating the budget allocation. Given the small size of the problem (eleven cities), an approach is considered that combines partial tour enumeration with a heuristic city insertion procedure. For computational efficiency during both the enumeration and insertion procedures, precalculated binomial probabilities are used to determine an upper bound on the actual probability of violating the budget constraint for each tour. The actual probability is calculated for the final best tour, and additional insertions are attempted until the actual probability exceeds the bound.
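The binomial screening idea can be sketched in a few lines: if each of a tour's legs independently incurs a penalty with a known probability and the budget can absorb at most a given number of penalties, the violation probability is a binomial tail. Assuming equal per-leg penalty probabilities (a simplification of WHAMII's precalculated tables):

```python
from math import comb

def prob_budget_violation(n_legs, p_penalty, max_penalties):
    """P(X > max_penalties) for X ~ Binomial(n_legs, p_penalty): the chance a
    tour exceeds its budget when each leg independently incurs a penalty with
    probability p_penalty and the budget slack absorbs at most max_penalties.
    Equal per-leg penalties are an assumption of this sketch."""
    return sum(comb(n_legs, j) * p_penalty**j * (1 - p_penalty)**(n_legs - j)
               for j in range(max_penalties + 1, n_legs + 1))

# an 11-city tour whose budget absorbs up to 4 penalty events
print(round(prob_budget_violation(11, 0.2, 4), 4))
```

A tour is kept during enumeration only while this probability stays below the chance-constraint threshold, which is exactly the pruning role the precalculated binomial probabilities play in the algorithm.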
On the physical realizability of quantum stochastic walks
NASA Astrophysics Data System (ADS)
Taketani, Bruno; Govia, Luke; Schuhmacher, Peter; Wilhelm, Frank
Quantum walks are a promising framework that can be used both to understand and to implement quantum information processing tasks. The recently developed quantum stochastic walk combines the concepts of a quantum walk and a classical random walk through open-system evolution of a quantum system, and has been shown to have applications in fields as far-reaching as artificial intelligence. However, nature puts significant constraints on the kind of open-system evolutions that can be realized in a physical experiment. In this work, we discuss the restrictions on the allowed open-system evolution and the physical assumptions underpinning them. We then introduce a way to circumvent some of these restrictions and simulate a more general quantum stochastic walk on a quantum computer, using a technique we call quantum trajectories on a quantum computer. We finally describe a circuit QED approach to implementing discrete-time quantum stochastic walks.
Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems
Fonseca Guerra, Gabriel A.; Furber, Steve B.
2017-01-01
Constraint satisfaction problems (CSP) are at the core of numerous scientific and technological applications. However, CSPs belong to the NP-complete complexity class, for which the existence (or not) of efficient algorithms remains a major unsolved question in computational complexity theory. In the face of this fundamental difficulty, heuristics and approximation methods are used to approach instances of NP (e.g., decision and hard optimization problems). The human brain efficiently handles CSPs both in perception and behavior using spiking neural networks (SNNs), and recent studies have demonstrated that the noise embedded within an SNN can be used as a computational resource to solve CSPs. Here, we provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware, further demonstrating their potential to implement a stochastic search that solves instances of P and NP problems expressed as CSPs. This facilitates the exploration of new optimization strategies and the understanding of the computational abilities of SNNs. We demonstrate the basic principles of the framework by solving difficult instances of the Sudoku puzzle and of the map color problem, and explore its application to spin glasses. The solver works as a stochastic dynamical system, which is attracted by the configuration that solves the CSP. The noise allows an optimal exploration of the space of configurations, looking for the satisfiability of all the constraints; if applied discontinuously, it can also force the system to leap to a new random configuration, effectively causing a restart. PMID:29311791
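As a conventional-software analogue of the noisy neural solver, a plain min-conflicts search lets randomness play the role of the network noise. This is a sketch of the map color problem as a CSP, not the SpiNNaker implementation; the graph and parameters are illustrative.

```python
import random

def color_map(neighbors, n_colors=3, seed=0, max_steps=10000):
    """Noise-driven stochastic search for graph coloring (the CSP form of the
    map color problem). Here plain min-conflicts repair stands in for the
    computational role the paper assigns to spiking-network noise."""
    rng = random.Random(seed)
    nodes = list(neighbors)
    col = {v: rng.randrange(n_colors) for v in nodes}
    for _ in range(max_steps):
        conflicted = [v for v in nodes
                      if any(col[v] == col[u] for u in neighbors[v])]
        if not conflicted:
            return col                        # all constraints satisfied
        v = rng.choice(conflicted)            # noise: pick a conflicted node
        col[v] = min(range(n_colors),         # move to least-conflicting color
                     key=lambda c: sum(col[u] == c for u in neighbors[v]))
    return None

# a 4-region "map" forming a cycle; 3 colors suffice
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
```

Like the spiking solver, the dynamics is attracted to satisfying configurations, and injecting a random restart when progress stalls corresponds to the discontinuous noise the abstract describes.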
The importance of environmental variability and management control error to optimal harvest policies
Hunter, C.M.; Runge, M.C.
2004-01-01
State-dependent strategies (SDSs) are the most general form of harvest policy because they allow the harvest rate to depend, without constraint, on the state of the system. State-dependent strategies that provide an optimal harvest rate for any system state can be calculated, and stochasticity can be appropriately accommodated in this optimization. Stochasticity poses 2 challenges to harvest policies: (1) the population will never be at the equilibrium state; and (2) stochasticity induces uncertainty about future states. We investigated the effects of 2 types of stochasticity, environmental variability and management control error, on SDS harvest policies for a white-tailed deer (Odocoileus virginianus) model, and contrasted these with a harvest policy based on maximum sustainable yield (MSY). Increasing stochasticity resulted in more conservative SDSs; that is, higher population densities were required to support the same harvest rate, but these effects were generally small. As stochastic effects increased, SDSs performed much better than MSY. Both deterministic and stochastic SDSs maintained maximum mean annual harvest yield (AHY) and optimal equilibrium population size (Neq) in a stochastic environment, whereas an MSY policy could not. We suggest 3 rules of thumb for harvest management of long-lived vertebrates in stochastic systems: (1) an SDS is advantageous over an MSY policy, (2) using an SDS rather than an MSY is more important than whether a deterministic or stochastic SDS is used, and (3) for SDSs, rankings of the variability in management outcomes (e.g., harvest yield) resulting from parameter stochasticity can be predicted by rankings of the deterministic elasticities.
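A state-dependent strategy of this kind can be computed by stochastic dynamic programming. The sketch below runs value iteration on a discretized stochastic logistic model, maximizing expected discounted yield; all parameter values are illustrative and not taken from the white-tailed deer model.

```python
def sds_policy(K=100, r=0.6, n_states=21, n_actions=11,
               noise=(0.8, 1.0, 1.2), gamma=0.95, sweeps=200):
    """State-dependent harvest strategy via value iteration on a discretized
    stochastic logistic population model. Environmental variability enters as
    equally likely growth multipliers. Parameters are illustrative."""
    states = [K * i / (n_states - 1) for i in range(n_states)]
    actions = [i / (n_actions - 1) for i in range(n_actions)]  # harvest rates
    def nearest(x):
        return min(range(n_states), key=lambda i: abs(states[i] - x))
    V = [0.0] * n_states
    policy = [0.0] * n_states
    for _ in range(sweeps):
        for i, N in enumerate(states):
            best = None
            for h in actions:
                harvest = h * N
                esc = N - harvest                      # post-harvest escapement
                exp_v = 0.0
                for z in noise:                        # stochastic growth
                    nxt = esc + z * r * esc * (1 - esc / K)
                    exp_v += V[nearest(max(0.0, min(K, nxt)))] / len(noise)
                val = harvest + gamma * exp_v
                if best is None or val > best:
                    best, policy[i] = val, h
            V[i] = best
    return states, policy
```

The resulting policy maps every population state to an optimal harvest rate, which is exactly the sense in which an SDS generalizes a fixed MSY rule.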
A spatial stochastic programming model for timber and core area management under risk of fires
Yu Wei; Michael Bevers; Dung Nguyen; Erin Belval
2014-01-01
Previous stochastic models in harvest scheduling seldom address explicit spatial management concerns under the influence of natural disturbances. We employ multistage stochastic programming models to explore the challenges and advantages of building spatial optimization models that account for the influences of random stand-replacing fires. Our exploratory test models...
Yu, Huapeng; Zhu, Hai; Gao, Dayuan; Yu, Meng; Wu, Wenqi
2015-01-01
The Kalman filter (KF) has long been used to improve north-finding performance under practical conditions. By analyzing the characteristics of the azimuth rotational inertial measurement unit (ARIMU) on a stationary base, a linear state equality constraint for the conventional KF used in the fine north-finding filtering phase is derived. Then, a constrained KF using the state equality constraint is proposed and studied in depth. Estimation behaviors of the navigation errors of concern when implementing the conventional KF scheme and the constrained KF scheme during stationary north-finding are investigated analytically by the stochastic observability approach, which provides explicit formulations of the navigation errors in terms of their influencing variables. Finally, multiple practical experimental tests at a fixed position were performed on a prototype system to compare the stationary north-finding performance of the two filtering schemes. In conclusion, this study has successfully extended the utilization of the stochastic observability approach to analytic descriptions of the estimation behaviors of the navigation errors of concern, and the constrained KF scheme has demonstrated its superiority over the conventional KF scheme for ARIMU stationary north-finding both theoretically and practically. PMID:25688588
Learning in stochastic neural networks for constraint satisfaction problems
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Adorf, Hans-Martin
1989-01-01
Researchers describe a newly-developed artificial neural network algorithm for solving constraint satisfaction problems (CSPs) which includes a learning component that can significantly improve the performance of the network from run to run. The network, referred to as the Guarded Discrete Stochastic (GDS) network, is based on the discrete Hopfield network but differs from it primarily in that auxiliary networks (guards) are asymmetrically coupled to the main network to enforce certain types of constraints. Although the presence of asymmetric connections implies that the network may not converge, it was found that, for certain classes of problems, the network often quickly converges to find satisfactory solutions when they exist. The network can run efficiently on serial machines and can find solutions to very large problems (e.g., N-queens for N as large as 1024). One advantage of the network architecture is that network connection strengths need not be instantiated when the network is established: they are needed only when a participating neural element transitions from off to on. They have exploited this feature to devise a learning algorithm, based on consistency techniques for discrete CSPs, that updates the network biases and connection strengths and thus improves the network performance.
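The N-queens benchmark mentioned above can be attacked with a few lines of stochastic repair search. This min-conflicts loop mimics the discrete stochastic updates of the GDS network, not its guard mechanism or learning component; a small n is used for speed, whereas the paper reports N up to 1024.

```python
import random

def n_queens(n=20, seed=0, max_steps=5000):
    """Stochastic repair (min-conflicts) search for N-queens: one queen per
    column, rows[c] giving its row. A software analogue of the GDS network's
    discrete stochastic updates, without its guard networks."""
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]
    def conflicts(c, r):
        return sum(1 for c2 in range(n) if c2 != c and
                   (rows[c2] == r or abs(rows[c2] - r) == abs(c2 - c)))
    for _ in range(max_steps):
        bad = [c for c in range(n) if conflicts(c, rows[c]) > 0]
        if not bad:
            return rows                        # all constraints satisfied
        c = rng.choice(bad)                    # noise: pick a conflicted column
        rows[c] = min(range(n),                # least-conflicting row,
                      key=lambda r: (conflicts(c, r), rng.random()))  # random tie-break
    return None
```

As with the GDS network, convergence is not guaranteed in general, but for N-queens this kind of stochastic repair typically finds a solution in far fewer steps than systematic search.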
NASA Astrophysics Data System (ADS)
Llopis-Albert, Carlos; Palacios-Marqués, Daniel; Merigó, José M.
2014-04-01
In this paper a methodology for the stochastic management of groundwater quality problems is presented, which can be used to provide agricultural advisory services. A stochastic algorithm to solve the coupled flow and mass transport inverse problem is combined with a stochastic management approach to develop methods for integrating uncertainty; thus obtaining more reliable policies on groundwater nitrate pollution control from agriculture. The stochastic inverse model allows identifying non-Gaussian parameters and reducing uncertainty in heterogeneous aquifers by constraining stochastic simulations to data. The management model determines the spatial and temporal distribution of fertilizer application rates that maximizes net benefits in agriculture constrained by quality requirements in groundwater at various control sites. The quality constraints can be taken, for instance, by those given by water laws such as the EU Water Framework Directive (WFD). Furthermore, the methodology allows providing the trade-off between higher economic returns and reliability in meeting the environmental standards. Therefore, this new technology can help stakeholders in the decision-making process under an uncertainty environment. The methodology has been successfully applied to a 2D synthetic aquifer, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques.
NASA Astrophysics Data System (ADS)
Lauterbach, S.; Fina, M.; Wagner, W.
2018-04-01
Since structural engineering requires highly developed and optimized structures, thickness dependency is one of the most controversially debated topics. This paper deals with stability analysis of lightweight thin structures combined with arbitrary geometrical imperfections. Generally known design guidelines only consider imperfections for simple shapes and loading, whereas for complex structures the lower-bound design philosophy still holds. Herein, uncertainties are considered with an empirical knockdown factor representing a lower bound of existing measurements. To fully understand and predict expected bearable loads, numerical investigations that include geometrical imperfections are essential. These are implemented into a stand-alone program code with a stochastic approach that computes random fields as geometric imperfections, which are applied to the nodes of the finite element mesh of selected structural examples. The stochastic approach uses the Karhunen-Loève expansion for the random field discretization. In this approach, the so-called correlation length l_c controls the random field in a powerful way. This parameter has a major influence on the buckling shape and on the stability load. First, the impact of the correlation length is studied for simple structures. Second, since most structures in engineering devices are more complex and combined, these are discussed intensively with a focus on constrained random fields for, e.g., flange-web intersections. Specific constraints for those random fields are pointed out with regard to the finite element model. Further, geometrical imperfections vanish where the structure is supported.
Self-Organization by Stochastic Reconnection: The Mechanism Underlying CMEs/Flares
NASA Astrophysics Data System (ADS)
Antiochos, S. K.; Knizhnik, K. J.; DeVore, C. R.
2017-12-01
The largest explosions in the solar system are the giant CMEs/flares that produce the most dangerous space weather at Earth, yet may also have been essential for the origin of life. The root cause of CMEs/flares is that the lowest-lying magnetic field lines in the Sun's corona undergo the continual buildup of stress and free energy that can be released only through explosive ejection. We perform the first MHD simulations of a coronal-photospheric magnetic system that is driven by random photospheric convective flows and has a realistic geometry for the coronal field. Furthermore, our simulations accurately preserve the key constraint of magnetic helicity. We find that even though small-scale stress is injected randomly throughout the corona, the net result of "stochastic" coronal reconnection is a coherent stretching of the lowest-lying field lines. This highly counter-intuitive demonstration of self-organization - magnetic stress builds up locally rather than spreading out to a minimum energy state - is the fundamental mechanism responsible for the Sun's magnetic explosions and is likely to be a mechanism that is ubiquitous throughout space and laboratory plasmas. This work was supported in part by the NASA LWS and SR Programs.
NASA Astrophysics Data System (ADS)
Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.
2016-06-01
The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this intense worldwide competition has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model that considers technicians' reliability and complements the factory information obtained. The information used emerged from technicians' productivity and earned values using a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we treat these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practising maintenance engineers can apply in making more informed decisions on technicians' management.
Solving Constraint Satisfaction Problems with Networks of Spiking Neurons
Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang
2016-01-01
Networks of neurons in the brain apply—unlike processors in our current generation of computer hardware—an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network out of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, networks of spiking neurons carry out, for the Traveling Salesman Problem, a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785
Zhang, Xiaodong; Huang, Guo H; Nie, Xianghui
2009-12-20
Nonpoint source (NPS) water pollution is one of the most serious environmental issues, especially within agricultural systems. This study proposes a robust chance-constrained fuzzy possibilistic programming (RCFPP) model for water quality management within an agricultural system, where solutions for farming area, manure/fertilizer application amount, and livestock husbandry size under different scenarios are obtained and interpreted. By improving upon the existing fuzzy possibilistic programming, fuzzy robust programming and chance-constrained programming approaches, the RCFPP can effectively reflect the complex system features under uncertainty, where the implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original fuzzy constraints, the RCFPP enhances the robustness of the optimization processes and resulting solutions. The results of the case study indicate that useful information can be obtained through the proposed RCFPP model for providing feasible decision schemes for different agricultural activities under different scenarios (combinations of different p-necessity and p(i) levels). A p-necessity level represents the certainty or necessity degree of the imprecise objective function, while a p(i) level means the probability at which the constraints will be violated. A desire to acquire high agricultural income would decrease the certainty degree of the event that the objective is maximized, and potentially violate water management standards; willingness to accept low agricultural income runs the risk of potential system failure. The decision variables under combined p-necessity and p(i) levels are useful for the decision makers to justify and/or adjust the decision schemes for the agricultural activities through incorporation of their implicit knowledge.
The results also suggest that this developed approach is applicable to many practical problems where fuzzy and probabilistic distribution information simultaneously exist.
Global Optimization of Interplanetary Trajectories in the Presence of Realistic Mission Constraints
NASA Technical Reports Server (NTRS)
Hinckley, David, Jr.; Englander, Jacob; Hitt, Darren
2015-01-01
Interplanetary missions are often subject to difficult constraints, such as the solar phase angle upon arrival at the destination, the arrival velocity, and flyby altitudes. Preliminary design of such missions is often conducted by solving the unconstrained problem and then filtering away solutions that do not naturally satisfy the constraints. However, this can bias the search into non-advantageous regions of the solution space, so it can be better to conduct preliminary design with the full set of constraints imposed. In this work, two stochastic global search methods are developed that are well suited to the constrained global interplanetary trajectory optimization problem.
Wang, S; Huang, G H
2013-03-15
Flood disasters have been extremely severe in recent decades, and they account for about one third of all natural catastrophes throughout the world. In this study, a two-stage mixed-integer fuzzy programming with interval-valued membership functions (TMFP-IMF) approach is developed for flood-diversion planning under uncertainty. TMFP-IMF integrates fuzzy flexible programming, two-stage stochastic programming, and integer programming within a general framework. A concept of interval-valued fuzzy membership function is introduced to address complexities of system uncertainties. TMFP-IMF can not only deal with uncertainties expressed as fuzzy sets and probability distributions, but also incorporate pre-regulated water-diversion policies directly into its optimization process. TMFP-IMF is applied to a hypothetical case study of flood-diversion planning for demonstrating its applicability. Results indicate that reasonable solutions can be generated for binary and continuous variables. A variety of flood-diversion and capacity-expansion schemes can be obtained under four scenarios, which enable decision makers (DMs) to identify the most desired one based on their perceptions and attitudes towards the objective-function value and constraints. Copyright © 2013 Elsevier Ltd. All rights reserved.
Robust THP Transceiver Designs for Multiuser MIMO Downlink with Imperfect CSIT
NASA Astrophysics Data System (ADS)
Ubaidulla, P.; Chockalingam, A.
2009-12-01
We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.
Automated Flight Routing Using Stochastic Dynamic Programming
NASA Technical Reports Server (NTRS)
Ng, Hok K.; Morando, Alex; Grabbe, Shon
2010-01-01
Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm that reroutes flights in the presence of winds, enroute convective weather, and congested airspace based on stochastic dynamic programming. A stochastic disturbance model incorporates capacity uncertainty into the reroute design process. A trajectory-based airspace demand model is employed for calculating current and future airspace demand. The optimal routes minimize the total expected traveling time, weather incursion, and induced congestion costs. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have a smaller deviation probability than their deterministic counterparts when both reroutes have similar total flight distance. The stochastic rerouting algorithm takes into account all convective weather fields at all severity levels, while the deterministic algorithm only accounts for convective weather systems exceeding a specified level of severity. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight time, and both have about 1% of travel time crossing congested enroute sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.
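The expected-cost dynamic programming at the heart of such rerouting can be sketched in miniature (the grid, moves, and cost values below are assumed purely for illustration): each cell's random traversal cost, e.g. from weather, is replaced by its expectation, and a backward recursion computes the minimum expected cost to the destination.

```python
# Backward DP over a toy grid: move right or down from the top-left to
# the bottom-right cell, minimizing the expected sum of cell costs.
EXPECTED_COST = [  # E[cost] per cell; higher values ~ likely bad weather
    [1.0, 2.5, 1.0],
    [1.0, 4.0, 1.0],
    [1.0, 1.0, 1.0],
]

def min_expected_cost(grid):
    n, m = len(grid), len(grid[0])
    v = [[0.0] * m for _ in range(n)]   # value function (cost-to-go)
    for i in reversed(range(n)):
        for j in reversed(range(m)):
            future = []
            if i + 1 < n:
                future.append(v[i + 1][j])   # move down
            if j + 1 < m:
                future.append(v[i][j + 1])   # move right
            v[i][j] = grid[i][j] + (min(future) if future else 0.0)
    return v[0][0]

print(min_expected_cost(EXPECTED_COST))  # 5.0: down the left column, then right
```

The optimal policy routes around the high-expected-cost (weather-affected) interior cells, which is the essence of the expected-total-cost formulation.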
Physical realizability of continuous-time quantum stochastic walks
NASA Astrophysics Data System (ADS)
Taketani, Bruno G.; Govia, Luke C. G.; Wilhelm, Frank K.
2018-05-01
Quantum walks are a promising methodology that can be used to both understand and implement quantum information processing tasks. The quantum stochastic walk is a recently developed framework that combines the concept of a quantum walk with that of a classical random walk, through open system evolution of a quantum system. Quantum stochastic walks have been shown to have applications in fields as far-reaching as artificial intelligence. However, there are significant constraints on the kind of open system evolutions that can be realized in a physical experiment. In this work, we discuss the restrictions on the allowed open system evolution and the physical assumptions underpinning them. We show that general direct implementations would require the complete solution of the underlying unitary dynamics and sophisticated reservoir engineering, thus weakening the benefits of experimental implementation.
Adaptiveness in monotone pseudo-Boolean optimization and stochastic neural computation.
Grossi, Giuliano
2009-08-01
Hopfield neural network (HNN) is a nonlinear computational model successfully applied in finding near-optimal solutions of several difficult combinatorial problems. In many cases, the network energy function is obtained through a learning procedure so that its minima are states falling into a proper subspace (feasible region) of the search space. However, because of the network nonlinearity, a number of undesirable local energy minima emerge from the learning procedure, significantly affecting the network performance. In the neural model analyzed here, we combine both a penalty and a stochastic process in order to enhance the performance of a binary HNN. The penalty strategy allows us to gradually lead the search towards states representing feasible solutions, so avoiding oscillatory behaviors or asymptotically unstable convergence. The presence of stochastic dynamics potentially prevents the network from falling into shallow local minima of the energy function, i.e., minima quite far from the global optimum. Hence, for a given fixed network topology, the desired final distribution on the states can be reached by carefully modulating such a process. The model uses pseudo-Boolean functions to express both the problem constraints and the cost function; a combination of these two functions is then interpreted as the energy of the neural network. A wide variety of NP-hard problems fall in the class of problems that can be solved by the model at hand, particularly those having a monotonic quadratic pseudo-Boolean function as constraint function, that is, functions easily derived from closed algebraic expressions representing the constraint structure and easy (polynomial time) to maximize.
We show the asymptotic convergence properties of this model characterizing its state space distribution at thermal equilibrium in terms of Markov chain and give evidence of its ability to find high quality solutions on benchmarks and randomly generated instances of two specific problems taken from the computational graph theory.
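The penalty-plus-noise idea can be sketched on an assumed toy problem (item values, cardinality target, and penalty weight below are all illustrative, not from the paper): a binary state vector whose energy combines a cost term and a quadratic penalty enforcing feasibility, updated by stochastic single-unit flips with a cooling schedule.

```python
import math
import random

# Toy sketch: select items to maximize value subject to "exactly K items"
# enforced by a quadratic penalty, using noisy single-bit updates.
VALUES = [3.0, 1.0, 2.0, 0.5]   # assumed item values
K, LAM = 2, 10.0                # cardinality target and penalty weight

def net_energy(x):
    value = sum(v for v, xi in zip(VALUES, x) if xi)
    return -value + LAM * (sum(x) - K) ** 2   # penalty keeps x feasible

def anneal(steps=4000, seed=0):
    rng = random.Random(seed)
    x = [rng.randrange(2) for _ in VALUES]
    for t in range(steps):
        temp = max(0.05, 1.0 * (1 - t / steps))   # cooling schedule
        i = rng.randrange(len(x))
        flip = x[:]
        flip[i] ^= 1                              # stochastic unit update
        de = net_energy(flip) - net_energy(x)
        if de <= 0 or rng.random() < math.exp(-de / temp):
            x = flip
    return x

best = anneal()
print(best)  # a feasible selection of exactly K = 2 items
```

As the temperature drops, leaving the feasible region costs roughly LAM in energy and is suppressed, so the final state satisfies the cardinality constraint; the noise earlier in the run helps avoid poor feasible selections.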
NASA Astrophysics Data System (ADS)
Nourifar, Raheleh; Mahdavi, Iraj; Mahdavi-Amiri, Nezam; Paydar, Mohammad Mahdi
2017-09-01
Decentralized supply chain management is found to be significantly relevant in today's competitive markets. Production and distribution planning is posed as an important optimization problem in supply chain networks. Here, we propose a multi-period decentralized supply chain network model with uncertainty. The imprecision related to uncertain parameters such as demand and the price of the final product is captured with stochastic and fuzzy numbers. We provide a mathematical formulation of the problem as a bi-level mixed integer linear programming model. Due to the problem's complexity, a solution structure is developed that incorporates a novel heuristic algorithm based on the Kth-best algorithm, a fuzzy approach and a chance constraint approach. Ultimately, a numerical example is constructed and worked through to demonstrate applicability of the optimization model. A sensitivity analysis is also made.
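The chance-constraint ingredient used in such models has a standard deterministic equivalent when the uncertain parameter is Gaussian: requiring P(demand ≤ x) ≥ α is equivalent to x ≥ μ + σ·z_α. A hedged sketch with assumed numbers:

```python
from statistics import NormalDist

# Assumed illustrative data: Gaussian demand with mean mu, std sigma,
# and a required service probability alpha.
mu, sigma, alpha = 100.0, 15.0, 0.95

z_alpha = NormalDist().inv_cdf(alpha)   # standard normal quantile
x_min = mu + sigma * z_alpha            # deterministic-equivalent bound

# check: the constraint probability at the bound equals alpha
prob = NormalDist(mu, sigma).cdf(x_min)
print(round(x_min, 2), round(prob, 2))  # 124.67 0.95
```

This conversion is what lets a chance-constrained model be handed to an ordinary deterministic solver.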
Scenario Decomposition for 0-1 Stochastic Programs: Improvements and Asynchronous Implementation
Ryan, Kevin; Rajan, Deepak; Ahmed, Shabbir
2016-05-01
Our recently proposed scenario decomposition algorithm for stochastic 0-1 programs finds an optimal solution by evaluating and removing individual solutions that are discovered by solving scenario subproblems. In this work, we develop an asynchronous, distributed implementation of the algorithm which has computational advantages over existing synchronous implementations. Improvements to both the synchronous and asynchronous algorithms are proposed. We test the results on well-known stochastic 0-1 programs from the SIPLIB test library and are able to solve one previously unsolved instance from the test set.
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems, and we describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic one, to speed up the algorithm by making use of the information obtained from its solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
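The cheap-approximation idea can be illustrated in miniature (the functions and distribution below are assumed stand-ins, not the dissertation's recourse problems): approximate an "expensive" function f by a piecewise-linear g whose mean is known in closed form, then use g as a control variate when estimating E[f(X)] by Monte Carlo.

```python
import math
import random

f = math.exp                      # stand-in for the expensive recourse function
KNOTS = [0.0, 0.5, 1.0]           # breakpoints of the cheap approximation

def g(x):
    """Piecewise-linear interpolant of f on the knot intervals."""
    lo, hi = (KNOTS[0], KNOTS[1]) if x <= KNOTS[1] else (KNOTS[1], KNOTS[2])
    w = (x - lo) / (hi - lo)
    return (1 - w) * f(lo) + w * f(hi)

# E[g(X)] for X ~ Uniform(0,1) in closed form: sum of trapezoid areas
Eg = sum((b - a) * (f(a) + f(b)) / 2 for a, b in zip(KNOTS, KNOTS[1:]))

rng = random.Random(42)
xs = [rng.random() for _ in range(20000)]
plain = sum(f(x) for x in xs) / len(xs)            # naive Monte Carlo
cv = sum(f(x) - g(x) for x in xs) / len(xs) + Eg   # control-variate estimator

true = math.e - 1
print(f"naive error={abs(plain - true):.4f}  cv error={abs(cv - true):.4f}")
```

Because f - g has far smaller variance than f itself, the control-variate estimator is typically much closer to the true mean for the same sample size, mirroring the variance reductions reported in the abstract.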
Chang, Wen-Jer; Huang, Bo-Jyun
2014-11-01
The multi-constrained robust fuzzy control problem is investigated in this paper for perturbed continuous-time nonlinear stochastic systems. The nonlinear system considered in this paper is represented by a Takagi-Sugeno fuzzy model with perturbations and state multiplicative noises. The multiple performance constraints considered in this paper include stability, passivity and individual state variance constraints. The Lyapunov stability theory is employed to derive sufficient conditions to achieve the above performance constraints. By solving these sufficient conditions, the contribution of this paper is to develop a parallel distributed compensation based robust fuzzy control approach to satisfy multiple performance constraints for perturbed nonlinear systems with multiplicative noises. At last, a numerical example for the control of perturbed inverted pendulum system is provided to illustrate the applicability and effectiveness of the proposed multi-constrained robust fuzzy control method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Weak Galilean invariance as a selection principle for coarse-grained diffusive models.
Cairoli, Andrea; Klages, Rainer; Baule, Adrian
2018-05-29
How does the mathematical description of a system change in different reference frames? Galilei first addressed this fundamental question by formulating the famous principle of Galilean invariance. It prescribes that the equations of motion of closed systems remain the same in different inertial frames related by Galilean transformations, thus imposing strong constraints on the dynamical rules. However, real world systems are often described by coarse-grained models integrating complex internal and external interactions indistinguishably as friction and stochastic forces. Since Galilean invariance is then violated, there is seemingly no alternative principle to assess a priori the physical consistency of a given stochastic model in different inertial frames. Here, starting from the Kac-Zwanzig Hamiltonian model generating Brownian motion, we show how Galilean invariance is broken during the coarse-graining procedure when deriving stochastic equations. Our analysis leads to a set of rules characterizing systems in different inertial frames that have to be satisfied by general stochastic models, which we call "weak Galilean invariance." Several well-known stochastic processes are invariant in these terms, except the continuous-time random walk for which we derive the correct invariant description. Our results are particularly relevant for the modeling of biological systems, as they provide a theoretical principle to select physically consistent stochastic models before a validation against experimental data.
Portfolio Optimization with Stochastic Dividends and Stochastic Volatility
ERIC Educational Resources Information Center
Varga, Katherine Yvonne
2015-01-01
We consider an optimal investment-consumption portfolio optimization model in which an investor receives stochastic dividends. As a first problem, we allow the drift of stock price to be a bounded function. Next, we consider a stochastic volatility model. In each problem, we use the dynamic programming method to derive the Hamilton-Jacobi-Bellman…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munoz, F. D.; Hobbs, B. F.; Watson, J. -P.
2016-02-01
A novel two-phase bounding and decomposition approach to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems is proposed; it considers a large number of operating subproblems, each of which is a convex optimization. Our motivating application is the planning of power transmission and generation in which policy constraints are designed to incentivize high amounts of intermittent generation in electric power systems. The bounding phase exploits Jensen's inequality to define a lower bound, which we extend to stochastic programs that use expected-value constraints to enforce policy objectives. The decomposition phase, in which the bounds are tightened, improves upon the standard Benders' algorithm by accelerating the convergence of the bounds. The lower bound is tightened by using a Jensen's-inequality-based approach to introduce an auxiliary lower bound into the Benders master problem. Upper bounds for both phases are computed using a sub-sampling approach executed on a parallel computer system. Numerical results show that only the bounding phase is necessary if loose optimality gaps are acceptable, but the decomposition phase is required to attain tight optimality gaps. Moreover, using both phases performs better, in terms of convergence speed, than attempting to solve the problem using just the bounding phase or regular Benders decomposition separately.
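The Jensen-inequality bound used in the bounding phase has a simple numeric illustration (the recourse function, decision, and demand distribution below are assumed toy data): for a cost Q(x, w) convex in the random parameter w, Q(x, E[w]) ≤ E[Q(x, w)], so one cheap expected-value evaluation lower-bounds the full stochastic cost.

```python
import random

def recourse(x, w):
    """Toy second-stage cost: penalty for unmet demand w (convex in w)."""
    return 5.0 * max(0.0, w - x)

x = 8.0                               # a fixed first-stage decision
rng = random.Random(7)
scenarios = [rng.gauss(10.0, 3.0) for _ in range(50000)]

# Full stochastic cost: average recourse over all scenarios.
exact = sum(recourse(x, w) for w in scenarios) / len(scenarios)

# Jensen bound: one evaluation at the mean scenario.
jensen = recourse(x, sum(scenarios) / len(scenarios))

print(jensen <= exact)  # True: the bound holds (exactly, for this sample)
```

For the empirical distribution the inequality holds exactly, which is why the expected-value problem gives a valid, cheaply computed lower bound before any decomposition begins.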
NASA Astrophysics Data System (ADS)
Lu, Shasha; Guan, Xingliang; Zhou, Min; Wang, Yang
2014-05-01
A large number of mathematical models have been developed to support land resource allocation decisions and land management needs; however, few of them can address various uncertainties that exist in relation to many factors presented in such decisions (e.g., land resource availabilities, land demands, land-use patterns, and social demands, as well as ecological requirements). In this study, a multi-objective interval-stochastic land resource allocation model (MOISLAM) was developed for tackling uncertainty that presents as discrete intervals and/or probability distributions. The developed model improves upon the existing multi-objective programming and inexact optimization approaches. The MOISLAM not only considers economic factors, but also involves food security and eco-environmental constraints; it can, therefore, effectively reflect various interrelations among different aspects in a land resource management system. Moreover, the model can also help examine the reliability of satisfying (or the risk of violating) system constraints under uncertainty. In this study, the MOISLAM was applied to a real case of long-term urban land resource allocation planning in Suzhou, in the Yangtze River Delta of China. Interval solutions associated with different risk levels of constraint violation were obtained. The results are considered useful for generating a range of decision alternatives under various system conditions, and thus helping decision makers to identify a desirable land resource allocation strategy under uncertainty.
On the interpretations of Langevin stochastic equation in different coordinate systems
NASA Astrophysics Data System (ADS)
Martínez, E.; López-Díaz, L.; Torres, L.; Alejos, O.
2004-01-01
The stochastic Langevin Landau-Lifshitz equation is usually utilized in the micromagnetics formalism to account for thermal effects. Commonly, two different interpretations of the stochastic integrals can be made: Ito and Stratonovich. In this work, the Langevin Landau-Lifshitz (LLL) equation is written in both Cartesian and spherical coordinates. If spherical coordinates are employed, the noise is additive, and therefore the Ito and Stratonovich solutions are equal. This is not the case when the LLL equation is written in Cartesian coordinates. In this case, the Langevin equation must be interpreted in the Stratonovich sense in order to reproduce correct statistical results. Nevertheless, the statistics of the numerical results obtained from Euler-Ito and Euler-Stratonovich schemes are equivalent due to the additional numerical constraint imposed in the Cartesian system after each time step, which ensures that the magnitude of the magnetization is preserved.
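The Ito/Stratonovich discrepancy for multiplicative noise can be demonstrated on a scalar toy SDE (not the LLL equation itself; parameters are assumed): for dX = aX dt + bX dW, the Euler-Maruyama (Ito) and Heun (Stratonovich) schemes converge to different processes, the Stratonovich drift being effectively a + b²/2.

```python
import math
import random

# Toy SDE dX = A*X dt + B*X dW with X(0) = 1; theory:
#   Ito:          E[X(T)] = exp(A*T)            = 1.0
#   Stratonovich: E[X(T)] = exp((A + B*B/2)*T)  ~ 1.13
A, B, T, N, PATHS = 0.0, 0.5, 1.0, 200, 4000
dt = T / N

def simulate(stratonovich, seed):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(PATHS):
        x = 1.0
        for _ in range(N):
            dw = rng.gauss(0.0, math.sqrt(dt))
            if stratonovich:                    # Heun predictor-corrector
                xp = x + A * x * dt + B * x * dw
                x += A * x * dt + 0.5 * B * (x + xp) * dw
            else:                               # Euler-Maruyama (Ito)
                x += A * x * dt + B * x * dw
        total += x
    return total / PATHS

ito_mean = simulate(False, seed=3)
strat_mean = simulate(True, seed=3)
print(f"Ito mean ~ {ito_mean:.3f}, Stratonovich mean ~ {strat_mean:.3f}")
```

With additive noise (B constant rather than proportional to X) the two schemes would agree, which is the spherical-coordinates situation described in the abstract.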
A stochastic diffusion process for Lochner's generalized Dirichlet distribution
Bakosi, J.; Ristorcelli, J. R.
2013-10-01
The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner's generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled with a more general covariance matrix.
Modeling global macroclimatic constraints on ectotherm energy budgets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grant, B.W.; Porter, W.P.
1992-12-31
The authors describe a mechanistic individual-based model of how global macroclimatic constraints affect the energy budgets of ectothermic animals. The model uses macroclimatic and biophysical characters of the habitat and organism and tenets of heat transfer theory to calculate hourly temperature availabilities over a year. Data on the temperature dependence of activity rate, metabolism, food consumption and food processing capacity are used to estimate the net rate of resource assimilation, which is then integrated over time. They present a new test of this model in which they show that the predicted energy budget sizes for 11 populations of the lizard Sceloporus undulatus are in close agreement with observed results from previous field studies. This demonstrates that model tests are feasible and the results are reasonable. Further, since the model represents an upper bound to the size of the energy budget, observed residual deviations form explicit predictions about the effects of environmental constraints on the bioenergetics of the study lizards within each site that may be tested by future field and laboratory studies. Three major new improvements to the modeling are discussed. First, they present a means to estimate microclimate thermal heterogeneity more realistically and include its effects on field rates of individual activity and food consumption. Second, they describe an improved model of digestive function involving batch processing of consumed food. Third, they show how optimality methods (specifically the methods of stochastic dynamic programming) may be included to model the fitness consequences of energy allocation decisions subject to food consumption and processing constraints, which are predicted from the microclimate and physiological modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, F.P.; Dai, J.; Kerans, C.
1998-11-01
In part 1 of this paper, the authors discussed the rock-fabric/petrophysical classes for dolomitized carbonate-ramp rocks, the effects of rock fabric and pore type on petrophysical properties, petrophysical models for analyzing wireline logs, the critical scales for defining geologic framework, and 3-D geologic modeling. Part 2 focuses on geophysical and engineering characterizations, including seismic modeling, reservoir geostatistics, stochastic modeling, and reservoir simulation. Synthetic seismograms of 30 to 200 Hz were generated to study the level of seismic resolution required to capture the high-frequency geologic features in dolomitized carbonate-ramp reservoirs. Outcrop data were collected to investigate effects of sampling interval and scale-up of block size on geostatistical parameters. Semivariogram analysis of outcrop data showed that the sill of log permeability decreases and the correlation length increases with an increase of horizontal block size. Permeability models were generated using conventional linear interpolation, stochastic realizations without stratigraphic constraints, and stochastic realizations with stratigraphic constraints. Simulations of a fine-scale Lawyer Canyon outcrop model were used to study the factors affecting waterflooding performance. Simulation results show that waterflooding performance depends strongly on the geometry and stacking pattern of the rock-fabric units and on the location of production and injection wells.
Stochastic online appointment scheduling of multi-step sequential procedures in nuclear medicine.
Pérez, Eduardo; Ntaimo, Lewis; Malavé, César O; Bailey, Carla; McCormack, Peter
2013-12-01
The increased demand for medical diagnosis procedures has been recognized as one of the contributors to the rise of health care costs in the U.S. in the last few years. Nuclear medicine is a subspecialty of radiology that uses advanced technology and radiopharmaceuticals for the diagnosis and treatment of medical conditions. Procedures in nuclear medicine require the use of radiopharmaceuticals, are multi-step, and have to be performed under strict time window constraints. These characteristics make the scheduling of patients and resources in nuclear medicine challenging. In this work, we derive a stochastic online scheduling algorithm for patient and resource scheduling in nuclear medicine departments that takes into account the time constraints imposed by the decay of the radiopharmaceuticals and the stochastic nature of the system when scheduling patients. We report on a computational study of the new methodology applied to a real clinic, using both patient and clinic performance measures. The results show that the new method schedules about 600 more patients per year on average than a scheduling policy that was used in practice, by improving the way limited resources are managed at the clinic. The new methodology finds the best start time and resources to be used for each appointment. Furthermore, the new method decreases patient waiting time for an appointment by about two days on average.
Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs
Gade, Dinakar; Hackebeil, Gabriel; Ryan, Sarah M.; ...
2016-04-02
We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm contemporaneously. The lower bounds can be computed in any iteration of the algorithm by using dual prices that are calculated during execution of the standard PHA. Finally, we report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.
NASA Astrophysics Data System (ADS)
Arzoumanian, Z.; Baker, P. T.; Brazier, A.; Burke-Spolaor, S.; Chamberlin, S. J.; Chatterjee, S.; Christy, B.; Cordes, J. M.; Cornish, N. J.; Crawford, F.; Thankful Cromartie, H.; Crowter, K.; DeCesar, M.; Demorest, P. B.; Dolch, T.; Ellis, J. A.; Ferdman, R. D.; Ferrara, E.; Folkner, W. M.; Fonseca, E.; Garver-Daniels, N.; Gentile, P. A.; Haas, R.; Hazboun, J. S.; Huerta, E. A.; Islo, K.; Jones, G.; Jones, M. L.; Kaplan, D. L.; Kaspi, V. M.; Lam, M. T.; Lazio, T. J. W.; Levin, L.; Lommen, A. N.; Lorimer, D. R.; Luo, J.; Lynch, R. S.; Madison, D. R.; McLaughlin, M. A.; McWilliams, S. T.; Mingarelli, C. M. F.; Ng, C.; Nice, D. J.; Park, R. S.; Pennucci, T. T.; Pol, N. S.; Ransom, S. M.; Ray, P. S.; Rasskazov, A.; Siemens, X.; Simon, J.; Spiewak, R.; Stairs, I. H.; Stinebring, D. R.; Stovall, K.; Swiggum, J.; Taylor, S. R.; Vallisneri, M.; van Haasteren, R.; Vigeland, S.; Zhu, W. W.; The NANOGrav Collaboration
2018-05-01
We search for an isotropic stochastic gravitational-wave background (GWB) in the newly released 11-year data set from the North American Nanohertz Observatory for Gravitational Waves (NANOGrav). While we find no evidence for a GWB, we place constraints on a population of inspiraling supermassive black hole (SMBH) binaries, a network of decaying cosmic strings, and a primordial GWB. For the first time, we find that the GWB constraints are sensitive to the solar system ephemeris (SSE) model used and that SSE errors can mimic a GWB signal. We developed an approach that bridges systematic SSE differences, producing the first pulsar-timing array (PTA) constraints that are robust against SSE errors. We thus place a 95% upper limit on the GW-strain amplitude of A_GWB < 1.45 × 10^-15 at a frequency of f = 1 yr^-1 for a fiducial f^(-2/3) power-law spectrum and with interpulsar correlations modeled. This is a factor of ~2 improvement over the NANOGrav nine-year limit calculated using the same procedure. Previous PTA upper limits on the GWB (as well as their astrophysical and cosmological interpretations) will need revision in light of SSE systematic errors. We use our constraints to characterize the combined influence on the GWB of the stellar mass density in galactic cores, the eccentricity of SMBH binaries, and SMBH-galactic-bulge scaling relationships. We constrain the cosmic-string tension using recent simulations, yielding an SSE-marginalized 95% upper limit of Gμ < 5.3 × 10^-11, a factor of ~2 better than the published NANOGrav nine-year constraints. Our SSE-marginalized 95% upper limit on the energy density of a primordial GWB (for a radiation-dominated post-inflation universe) is Ω_GWB(f) h^2 < 3.4 × 10^-10.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Toomey, Bridget
Evolving power systems with increasing levels of stochasticity create a need to solve optimal power flow problems with large quantities of random variables. Weather forecasts, electricity prices, and shifting load patterns introduce higher levels of uncertainty and can yield optimization problems that are difficult to solve in an efficient manner. Solution methods for single chance constraints in optimal power flow problems have been considered in the literature, ensuring single constraints are satisfied with a prescribed probability; however, joint chance constraints, ensuring multiple constraints are simultaneously satisfied, have predominantly been solved via scenario-based approaches or by utilizing Boole's inequality as an upper bound. In this paper, joint chance constraints are used to solve an AC optimal power flow problem while preventing overvoltages in distribution grids under high penetrations of photovoltaic systems. A tighter version of Boole's inequality is derived and used to provide a new upper bound on the joint chance constraint, and simulation results are shown demonstrating the benefit of the proposed upper bound. The new framework allows for a less conservative and more computationally efficient solution to considering joint chance constraints, specifically regarding preventing overvoltages.
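The union-bound treatment of joint chance constraints described above can be illustrated with a toy numerical sketch. This is not the paper's method or its tighter bound: all numbers (right-hand sides, noise levels, risk budget) are made up, the noise is assumed Gaussian, and Boole's inequality is applied in its basic form by splitting the risk budget equally across the constraints.

```python
# Hypothetical sketch: enforcing a joint chance constraint
# P(x + w_i <= b_i for all i) >= 1 - eps via Boole's (union) bound,
# splitting the risk budget eps equally across m constraints.
import random
from statistics import NormalDist

random.seed(0)
b = [1.0, 1.2, 0.9]          # constraint right-hand sides (made up)
sigma = [0.1, 0.2, 0.15]     # std. dev. of Gaussian noise on each constraint
eps = 0.05                   # allowed joint violation probability
m = len(b)

# Per-constraint risk eps/m -> deterministic tightening b_i - z_{1-eps/m}*sigma_i
z = NormalDist().inv_cdf(1 - eps / m)
x_max = min(bi - z * si for bi, si in zip(b, sigma))

# Monte Carlo check: joint violation should stay below eps
trials = 20000
viol = sum(
    any(x_max + random.gauss(0, si) > bi for bi, si in zip(b, sigma))
    for _ in range(trials)
)
print(x_max, viol / trials)
```

Because the union bound is conservative, the empirical joint violation rate comes out well below the budgeted eps; a tighter bound of the kind derived in the paper would admit a larger (less conservative) x_max.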
NASA Astrophysics Data System (ADS)
Allah Taleizadeh, Ata; Niaki, Seyed Taghi Akhavan; Aryanezhad, Mir-Bahador
2010-10-01
While the usual assumptions in multi-periodic inventory control problems are that orders are placed at the beginning of each period (periodic review) or, depending on the inventory level, can happen at any time (continuous review), in this article we relax these assumptions and assume that the periods between two replenishments of the products are independent and identically distributed random variables. Furthermore, assuming that the purchasing prices are triangular fuzzy variables, that the order quantities are of integer type and that there are space and service-level constraints, total discounts are considered for purchasing products and a combination of back-orders and lost sales is taken into account for the shortages. We show that the model of this problem is of the fuzzy mixed-integer nonlinear programming type and, in order to solve it, a hybrid meta-heuristic intelligent algorithm is proposed. At the end, a numerical example is given to demonstrate the applicability of the proposed methodology and to compare its performance with one of the existing algorithms in real-world inventory control problems.
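Triangular fuzzy purchasing prices of the kind assumed above are usually compared through a ranking (defuzzification) function. The sketch below uses the graded-mean value R(a, b, c) = (a + 4b + c)/6, one common choice that is not necessarily the one used in this record; the supplier names and price triples are made up.

```python
# Hypothetical sketch: ranking triangular fuzzy purchasing prices
# (a, b, c) = (pessimistic, most likely, optimistic) by the graded-mean
# defuzzification R = (a + 4b + c) / 6 -- one standard ranking function.

def graded_mean(tfn):
    a, b, c = tfn
    return (a + 4 * b + c) / 6

prices = {"supplier_1": (8.0, 10.0, 13.0),   # made-up fuzzy unit prices
          "supplier_2": (9.0, 9.5, 10.5)}

# Order suppliers by the crisp ranking value (lower is cheaper)
ranked = sorted(prices, key=lambda k: graded_mean(prices[k]))
print(ranked[0], graded_mean(prices[ranked[0]]))
```

A crisp ranking like this is what lets the fuzzy price term enter a mixed-integer objective that a meta-heuristic can then optimize.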
Constrained model predictive control, state estimation and coordination
NASA Astrophysics Data System (ADS)
Yan, Jun
In this dissertation, we study the interaction between the control performance and the quality of the state estimation in a constrained Model Predictive Control (MPC) framework for systems with stochastic disturbances. This consists of three parts: (i) the development of a constrained MPC formulation that adapts to the quality of the state estimation via constraints; (ii) the application of such a control law in a multi-vehicle formation coordinated control problem in which each vehicle operates subject to a no-collision constraint posed by others' imperfect prediction computed from finite bit-rate, communicated data; (iii) the design of the predictors and the communication resource assignment problem that satisfy the performance requirement from Part (ii). Model Predictive Control (MPC) is of interest because it is one of the few control design methods which preserves standard design variables and yet handles constraints. MPC is normally posed as a full-state feedback control and is implemented in a certainty-equivalence fashion with best estimates of the states being used in place of the exact state. However, if the state constraints were handled in the same certainty-equivalence fashion, the resulting control law could drive the real state to violate the constraints frequently. Part (i) focuses on exploring the inclusion of state estimates into the constraints. It does this by applying constrained MPC to a system with stochastic disturbances. The stochastic nature of the problem requires re-posing the constraints in a probabilistic form. In Part (ii), we consider applying constrained MPC as a local control law in a coordinated control problem of a group of distributed autonomous systems. Interactions between the systems are captured via constraints. First, we inspect the application of constrained MPC to a completely deterministic case. 
Formation stability theorems are derived for the subsystems and conditions on the local constraint set are derived in order to guarantee local stability or convergence to a target state. If these conditions are met for all subsystems, then this stability is inherited by the overall system. For the case when each subsystem suffers from disturbances in the dynamics, its own measurement noise, and quantization errors on neighbors' information due to the finite-bit-rate channels, the constrained MPC strategy developed in Part (i) is appropriate to apply. In Part (iii), we discuss the local predictor design and bandwidth assignment problem in a coordinated vehicle formation context. The MPC controller used in Part (ii) relates the formation control performance to the information quality: a large standoff implies conservative performance. We first develop an LMI (Linear Matrix Inequality) formulation for cross-estimator design in a simple two-vehicle scenario with non-standard information: one vehicle does not have access to the other's exact control value applied at each sampling time, but to its known, pre-computed, coupling linear feedback control law. Then a similar LMI problem is formulated for the bandwidth assignment problem that minimizes the total number of bits by adjusting the prediction gain matrices and the number of bits assigned to each variable. (Abstract shortened by UMI.)
Engineered Resilient Systems: Knowledge Capture and Transfer
2014-08-29
development, but the work has not progressed significantly. Peter Kall and Stein W. Wallace, Stochastic Programming, John Wiley & Sons, Chichester, 1994...John Wiley and Sons: Hoboken, 2008. Rhodes, D.H., Lamb
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are: the fuzzy transformation via a ranking function, and the stochastic transformation, in which the α-cut technique and linguistic hedges are used on the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
Stochastic Feedforward Control Technique
NASA Technical Reports Server (NTRS)
Halyo, Nesim
1990-01-01
Class of commanded trajectories modeled as stochastic process. Advanced Transport Operating Systems (ATOPS) research and development program conducted by NASA Langley Research Center aimed at developing capabilities for increases in capacities of airports, safe and accurate flight in adverse weather conditions including wind shear, avoidance of wake vortexes, and reduced consumption of fuel. Advances in techniques for design of modern controls and increased capabilities of digital flight computers coupled with accurate guidance information from Microwave Landing System (MLS). Stochastic feedforward control technique developed within context of ATOPS program.
The isolation limits of stochastic vibration
NASA Technical Reports Server (NTRS)
Knopse, C. R.; Allaire, P. E.
1993-01-01
The vibration isolation problem is formulated as a 1D kinematic problem. The geometry of the stochastic wall trajectories arising from the stroke constraint is defined in terms of their significant extrema. An optimal control solution for the minimum acceleration return path determines a lower bound on platform mean square acceleration. This bound is expressed in terms of the probability density function on the significant maxima and the conditional fourth moment of the first passage time inverse. The first of these is found analytically while the second is found using a Monte Carlo simulation. The rms acceleration lower bound as a function of available space is then determined through numerical quadrature.
Limits on Anisotropy in the Nanohertz Stochastic Gravitational Wave Background.
Taylor, S R; Mingarelli, C M F; Gair, J R; Sesana, A; Theureau, G; Babak, S; Bassa, C G; Brem, P; Burgay, M; Caballero, R N; Champion, D J; Cognard, I; Desvignes, G; Guillemot, L; Hessels, J W T; Janssen, G H; Karuppusamy, R; Kramer, M; Lassus, A; Lazarus, P; Lentati, L; Liu, K; Osłowski, S; Perrodin, D; Petiteau, A; Possenti, A; Purver, M B; Rosado, P A; Sanidas, S A; Smits, R; Stappers, B; Tiburzi, C; van Haasteren, R; Vecchio, A; Verbiest, J P W
2015-07-24
The paucity of observed supermassive black hole binaries (SMBHBs) may imply that the gravitational wave background (GWB) from this population is anisotropic, rendering existing analyses suboptimal. We present the first constraints on the angular distribution of a nanohertz stochastic GWB from circular, inspiral-driven SMBHBs using the 2015 European Pulsar Timing Array data. Our analysis of the GWB in the ~2-90 nHz band shows consistency with isotropy, with the strain amplitude in l>0 spherical harmonic multipoles ≲40% of the monopole value. We expect that these more general techniques will become standard tools to probe the angular distribution of source populations.
Limits on Anisotropy in the Nanohertz Stochastic Gravitational Wave Background
NASA Astrophysics Data System (ADS)
Taylor, S. R.; Mingarelli, C. M. F.; Gair, J. R.; Sesana, A.; Theureau, G.; Babak, S.; Bassa, C. G.; Brem, P.; Burgay, M.; Caballero, R. N.; Champion, D. J.; Cognard, I.; Desvignes, G.; Guillemot, L.; Hessels, J. W. T.; Janssen, G. H.; Karuppusamy, R.; Kramer, M.; Lassus, A.; Lazarus, P.; Lentati, L.; Liu, K.; Osłowski, S.; Perrodin, D.; Petiteau, A.; Possenti, A.; Purver, M. B.; Rosado, P. A.; Sanidas, S. A.; Smits, R.; Stappers, B.; Tiburzi, C.; van Haasteren, R.; Vecchio, A.; Verbiest, J. P. W.; EPTA Collaboration
2015-07-01
The paucity of observed supermassive black hole binaries (SMBHBs) may imply that the gravitational wave background (GWB) from this population is anisotropic, rendering existing analyses suboptimal. We present the first constraints on the angular distribution of a nanohertz stochastic GWB from circular, inspiral-driven SMBHBs using the 2015 European Pulsar Timing Array data. Our analysis of the GWB in the ~2-90 nHz band shows consistency with isotropy, with the strain amplitude in l > 0 spherical harmonic multipoles ≲40% of the monopole value. We expect that these more general techniques will become standard tools to probe the angular distribution of source populations.
Detecting the Stochastic Gravitational-Wave Background
NASA Astrophysics Data System (ADS)
Colacino, Carlo Nicola
2017-12-01
The stochastic gravitational-wave background (SGWB) is by far the most difficult source of gravitational radiation to detect. At the same time, it is the most interesting and intriguing one. This book describes the initial detection of the SGWB and the underlying mathematics behind one of the most amazing discoveries of the 21st century. On the experimental side, such a detection would mean that interferometric gravitational wave detectors work even better than expected. On the observational side, it could give us information about the very early Universe, information that could not be obtained otherwise. Even negative results and improved upper bounds could put constraints on many cosmological and particle-physics models.
Gravitational-wave stochastic background from cosmic strings.
Siemens, Xavier; Mandic, Vuk; Creighton, Jolien
2007-03-16
We consider the stochastic background of gravitational waves produced by a network of cosmic strings and assess their accessibility to current and planned gravitational wave detectors, as well as to big bang nucleosynthesis (BBN), cosmic microwave background (CMB), and pulsar timing constraints. We find that current data from interferometric gravitational wave detectors, such as Laser Interferometer Gravitational Wave Observatory (LIGO), are sensitive to areas of parameter space of cosmic string models complementary to those accessible to pulsar, BBN, and CMB bounds. Future more sensitive LIGO runs and interferometers such as Advanced LIGO and Laser Interferometer Space Antenna (LISA) will be able to explore substantial parts of the parameter space.
MEANS: python package for Moment Expansion Approximation, iNference and Simulation
Fan, Sisi; Geissmann, Quentin; Lakatos, Eszter; Lukauskas, Saulius; Ale, Angelique; Babtie, Ann C.; Kirk, Paul D. W.; Stumpf, Michael P. H.
2016-01-01
Motivation: Many biochemical systems require stochastic descriptions. Unfortunately these can only be solved for the simplest cases and their direct simulation can become prohibitively expensive, precluding thorough analysis. As an alternative, moment closure approximation methods generate equations for the time-evolution of the system’s moments and apply a closure ansatz to obtain a closed set of differential equations that can become the basis for the deterministic analysis of the moments of the outputs of stochastic systems. Results: We present a free, user-friendly tool implementing an efficient moment expansion approximation with parametric closures that integrates well with the IPython interactive environment. Our package enables the analysis of complex stochastic systems without any constraints on the number of species and moments studied and the type of rate laws in the system. In addition to the approximation method our package provides numerous tools to help non-expert users in stochastic analysis. Availability and implementation: https://github.com/theosysbio/means Contacts: m.stumpf@imperial.ac.uk or e.lakatos13@imperial.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153663
MEANS: python package for Moment Expansion Approximation, iNference and Simulation.
Fan, Sisi; Geissmann, Quentin; Lakatos, Eszter; Lukauskas, Saulius; Ale, Angelique; Babtie, Ann C; Kirk, Paul D W; Stumpf, Michael P H
2016-09-15
Many biochemical systems require stochastic descriptions. Unfortunately these can only be solved for the simplest cases and their direct simulation can become prohibitively expensive, precluding thorough analysis. As an alternative, moment closure approximation methods generate equations for the time-evolution of the system's moments and apply a closure ansatz to obtain a closed set of differential equations that can become the basis for the deterministic analysis of the moments of the outputs of stochastic systems. We present a free, user-friendly tool implementing an efficient moment expansion approximation with parametric closures that integrates well with the IPython interactive environment. Our package enables the analysis of complex stochastic systems without any constraints on the number of species and moments studied and the type of rate laws in the system. In addition to the approximation method our package provides numerous tools to help non-expert users in stochastic analysis. https://github.com/theosysbio/means m.stumpf@imperial.ac.uk or e.lakatos13@imperial.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
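The moment-equation idea behind MEANS can be seen in miniature on a birth-death process, 0 → X at rate k1 and X → 0 at rate k2·n. This sketch is not the MEANS package itself: for these linear rates the moment hierarchy closes exactly (no closure ansatz is needed), which is precisely why it makes a small, checkable example; MEANS automates the general nonlinear case.

```python
# Illustrative sketch (not MEANS): deterministic integration of the
# mean and variance equations of a birth-death process,
#   d<n>/dt  = k1 - k2*<n>
#   dVar/dt  = k1 + k2*<n> - 2*k2*Var
# Both approach k1/k2 at steady state (the Poisson limit).

k1, k2 = 10.0, 0.5
dt, steps = 0.001, 20000      # integrate to t = 20, well past relaxation

mean, var = 0.0, 0.0          # start from an empty system
for _ in range(steps):
    dmean = k1 - k2 * mean
    dvar = k1 + k2 * mean - 2 * k2 * var
    mean += dmean * dt
    var += dvar * dt

print(mean, var)
```

Replacing the linear death rate with, say, k2·n² would leave d&lt;n&gt;/dt depending on the second moment and so on upward; that is where a closure ansatz, and a tool like MEANS, becomes necessary.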
Approximate Dynamic Programming and Aerial Refueling
2007-06-01
by two Army Air Corps de Havilland DH-4Bs (9). While crude by modern standards, the passing of hoses between planes is effectively the same approach...To create meaningful results when testing stochastic data, the data sets are averaged so that conclusions are not
Approximation of Quantum Stochastic Differential Equations for Input-Output Model Reduction
2016-02-25
We have completed a short program of theoretical research on dimensional reduction and approximation of models based on quantum stochastic differential equations. Our primary results lie in the area of quantum probability and quantum stochastic differential equations.
A neutral model of low-severity fire regimes
Don McKenzie; Amy E. Hessl
2008-01-01
Climate, topography, fuel loadings, and human activities all affect spatial and temporal patterns of fire occurrence. Because fire occurrence is a stochastic process, an understanding of baseline variability is necessary in order to identify constraints on surface fire regimes. With a suitable null, or neutral, model, characteristics of natural fire regimes estimated...
Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III
2004-01-01
A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
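The first-order statistical moment method described above can be sketched with a toy function standing in for the CFD code. This is a hedged illustration, not the authors' implementation: the function, means, and standard deviations are made up, and sensitivity derivatives are taken by central finite differences rather than the code's analytic derivatives.

```python
# Sketch of first-order moment propagation: for independent normal
# inputs x_i ~ N(mu_i, sigma_i), approximate
#   E[f] ~ f(mu),   Var[f] ~ sum_i (df/dx_i * sigma_i)^2
import math
import random

def f(x):                      # made-up stand-in for a CFD output
    return x[0] ** 2 + math.sin(x[1])

mu = [1.0, 0.5]
sigma = [0.05, 0.1]

h = 1e-6                       # central finite differences for df/dx_i
grad = []
for i in range(len(mu)):
    xp, xm = list(mu), list(mu)
    xp[i] += h
    xm[i] -= h
    grad.append((f(xp) - f(xm)) / (2 * h))

mean_f = f(mu)
var_f = sum((g * s) ** 2 for g, s in zip(grad, sigma))

# Monte Carlo cross-check of the first-order approximation
random.seed(1)
samples = [f([random.gauss(m, s) for m, s in zip(mu, sigma)])
           for _ in range(50000)]
mc_mean = sum(samples) / len(samples)
print(mean_f, var_f, mc_mean)
```

The propagated mean and variance then feed the probabilistic constraints: a constraint g ≤ 0 is replaced by a margin on E[g] measured in multiples of the propagated standard deviation.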
Stochastic modelling of turbulent combustion for design optimization of gas turbine combustors
NASA Astrophysics Data System (ADS)
Mehanna Ismail, Mohammed Ali
The present work covers the development and the implementation of an efficient algorithm for the design optimization of gas turbine combustors. The purpose is to explore the possibilities of optimization techniques as alternative methods for designing gas turbine combustors and to offer constructive suggestions. The algorithm is general to the extent that no constraints are imposed on the combustion phenomena or on the combustor configuration. The optimization problem is broken down into two elementary problems: the first is the optimum search algorithm, and the second is the turbulent combustion model used to determine the combustor performance parameters. These performance parameters constitute the objective and physical constraints in the optimization problem formulation. The examination of both turbulent combustion phenomena and the gas turbine design process suggests that the turbulent combustion model represents a crucial part of the optimization algorithm. The basic requirements needed for a turbulent combustion model to be successfully used in a practical optimization algorithm are discussed. In principle, the combustion model should comply with the conflicting requirements of high fidelity, robustness and computational efficiency. To that end, the problem of turbulent combustion is discussed and the current state of the art of turbulent combustion modelling is reviewed. According to this review, turbulent combustion models based on the composition PDF transport equation are found to be good candidates for application in the present context. However, these models are computationally expensive. To overcome this difficulty, two different models based on the composition PDF transport equation were developed: an improved Lagrangian Monte Carlo composition PDF algorithm and the generalized stochastic reactor model. 
Improvements in the Lagrangian Monte Carlo composition PDF model performance and its computational efficiency were achieved through the implementation of time splitting, variable stochastic fluid particle mass control, and a second-order time-accurate (predictor-corrector) scheme used for solving the stochastic differential equations governing the particles' evolution. The model compared well against experimental data found in the literature for two different configurations: bluff-body and swirl-stabilized combustors. The generalized stochastic reactor is a newly developed model. This model relies on the generalization of the concept of the classical stochastic reactor theory in the sense that it accounts for both finite micro- and macro-mixing processes. (Abstract shortened by UMI.)
Accelerating numerical solution of stochastic differential equations with CUDA
NASA Astrophysics Data System (ADS)
Januszewski, M.; Kostur, M.
2010-01-01
Numerical integration of stochastic differential equations is commonly used in many branches of science. In this paper we present how to accelerate this kind of numerical calculation with popular NVIDIA Graphics Processing Units using the CUDA programming environment. We address general aspects of numerical programming on stream processors and illustrate them by two examples: the noisy phase dynamics in a Josephson junction and the noisy Kuramoto model. In the presented cases the measured speedup can be as high as 675× compared to a typical CPU, which corresponds to several billion integration steps per second. This means that calculations which took weeks can now be completed in less than one hour. This brings stochastic simulation to a completely new level, opening for research a whole new range of problems which can now be solved interactively. Program summary: Program title: SDE. Catalogue identifier: AEFG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFG_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU GPL v3. No. of lines in distributed program, including test data, etc.: 978. No. of bytes in distributed program, including test data, etc.: 5905. Distribution format: tar.gz. Programming language: CUDA C. Computer: any system with a CUDA-compatible GPU. Operating system: Linux. RAM: 64 MB of GPU memory. Classification: 4.3. External routines: The program requires the NVIDIA CUDA Toolkit Version 2.0 or newer and the GNU Scientific Library v1.0 or newer. Optionally gnuplot is recommended for quick visualization of the results. Nature of problem: Direct numerical integration of stochastic differential equations is a computationally intensive problem, due to the necessity of calculating multiple independent realizations of the system. We exploit the inherent parallelism of this problem and perform the calculations on GPUs using the CUDA programming environment. 
The GPU's ability to execute hundreds of threads simultaneously makes it possible to speed up the computation by over two orders of magnitude, compared to a typical modern CPU. Solution method: The stochastic Runge-Kutta method of the second order is applied to integrate the equation of motion. Ensemble-averaged quantities of interest are obtained through averaging over multiple independent realizations of the system. Unusual features: The numerical solution of the stochastic differential equations in question is performed on a GPU using the CUDA environment. Running time: < 1 minute
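The independent-realization parallelism exploited above can be sketched on a CPU. This is not the paper's CUDA kernel and uses plain Euler-Maruyama instead of the second-order stochastic Runge-Kutta; the test system, an Ornstein-Uhlenbeck process dx = -x dt + sqrt(2D) dW with stationary variance D, is chosen only because its ensemble statistics are known in closed form.

```python
# CPU sketch of ensemble SDE integration: one independent realization
# per "thread"; on a GPU each path would run in its own CUDA thread.
import math
import random

random.seed(42)
D, dt, steps, n_paths = 0.5, 0.01, 1000, 2000
noise_amp = math.sqrt(2 * D * dt)

x = [0.0] * n_paths            # all realizations start at the origin
for _ in range(steps):
    for i in range(n_paths):   # Euler-Maruyama step per realization
        x[i] += -x[i] * dt + noise_amp * random.gauss(0, 1)

# Ensemble-averaged quantities, as in the paper's averaging step
mean = sum(x) / n_paths
var = sum((xi - mean) ** 2 for xi in x) / n_paths
print(mean, var)
```

Because every path is independent, the work is embarrassingly parallel, which is why the GPU version reaches speedups of two orders of magnitude.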
Random search optimization based on genetic algorithm and discriminant function
NASA Technical Reports Server (NTRS)
Kiciman, M. O.; Akgul, M.; Erarslanoglu, G.
1990-01-01
The general problem of optimization with arbitrary merit and constraint functions, which could be convex, concave, monotonic, or non-monotonic, is treated using stochastic methods. To improve the efficiency of the random search methods, a genetic algorithm for the search phase and a discriminant function for the constraint-control phase were utilized. The validity of the technique is demonstrated by comparing the results to published test problem results. Numerical experimentation indicated that for cases where a quick near optimum solution is desired, a general, user-friendly optimization code can be developed without serious penalties in both total computer time and accuracy.
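A toy genetic-algorithm search of the kind described above is sketched below. It is not the authors' code: the merit function, bounds, and constraint are made up, and a simple infeasibility penalty stands in for the discriminant function of the constraint-control phase.

```python
# Minimal illustrative GA: maximize f(x) = -(x - 3)^2 on [0, 10]
# subject to x <= 4, with a penalty replacing the discriminant function.
import random

random.seed(7)

def merit(x):
    penalty = 1000.0 * max(0.0, x - 4.0)   # constraint x <= 4
    return -(x - 3.0) ** 2 - penalty

pop = [random.uniform(0, 10) for _ in range(30)]
for _ in range(60):                         # generations
    # tournament selection of size 3
    parents = [max(random.sample(pop, 3), key=merit) for _ in range(30)]
    # blend crossover plus Gaussian mutation, clipped to the bounds
    pop = []
    for _ in range(30):
        a, b = random.sample(parents, 2)
        w = random.random()
        child = w * a + (1 - w) * b + random.gauss(0, 0.1)
        pop.append(min(10.0, max(0.0, child)))

best = max(pop, key=merit)
print(best)
```

With the seed fixed, the population concentrates near the unconstrained optimum x = 3, which here is also feasible; neither the merit nor the constraint needs to be convex or monotonic for this search to apply.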
Distribution-dependent robust linear optimization with applications to inventory control
Kang, Seong-Cheol; Brisimi, Theodora S.
2014-01-01
This paper tackles linear programming problems with data uncertainty and applies it to an important inventory control problem. Each element of the constraint matrix is subject to uncertainty and is modeled as a random variable with a bounded support. The classical robust optimization approach to this problem yields a solution with guaranteed feasibility. As this approach tends to be too conservative when applications can tolerate a small chance of infeasibility, one would be interested in obtaining a less conservative solution with a certain probabilistic guarantee of feasibility. A robust formulation in the literature produces such a solution, but it does not use any distributional information on the uncertain data. In this work, we show that the use of distributional information leads to an equally robust solution (i.e., under the same probabilistic guarantee of feasibility) but with a better objective value. In particular, by exploiting distributional information, we establish stronger upper bounds on the constraint violation probability of a solution. These bounds enable us to “inject” less conservatism into the formulation, which in turn yields a more cost-effective solution (by 50% or more in some numerical instances). To illustrate the effectiveness of our methodology, we consider a discrete-time stochastic inventory control problem with certain quality of service constraints. Numerical tests demonstrate that the use of distributional information in the robust optimization of the inventory control problem results in 36%–54% cost savings, compared to the case where such information is not used. PMID:26347579
Using Probabilistic Information in Solving Resource Allocation Problems for a Decentralized Firm
1978-09-01
deterministic equivalent form of HIQ’s problem (5) by an approach similar to the one used in stochastic programming with simple recourse. See Ziemba [38) or, in...1964). 38. Ziemba , W.T., "Stochastic Programs with Simple Recourse," Technical Report 72-15, Stanford University, Department of Operations Research
Stochastic computing with biomolecular automata
Adar, Rivka; Benenson, Yaakov; Linshiz, Gregory; Rosner, Amit; Tishby, Naftali; Shapiro, Ehud
2004-01-01
Stochastic computing has a broad range of applications, yet electronic computers realize its basic step, stochastic choice between alternative computation paths, in a cumbersome way. Biomolecular computers use a different computational paradigm and hence afford novel designs. We constructed a stochastic molecular automaton in which stochastic choice is realized by means of competition between alternative biochemical pathways, and choice probabilities are programmed by the relative molar concentrations of the software molecules coding for the alternatives. Programmable and autonomous stochastic molecular automata have been shown to perform direct analysis of disease-related molecular indicators in vitro and may have the potential to provide in situ medical diagnosis and cure. PMID:15215499
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1975-01-01
Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.
A Framework for the Optimization of Discrete-Event Simulation Models
NASA Technical Reports Server (NTRS)
Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.
1996-01-01
With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed when optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation together with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle through a simulation model.
Constraining Modified Theories of Gravity with Gravitational-Wave Stochastic Backgrounds
NASA Astrophysics Data System (ADS)
Maselli, Andrea; Marassi, Stefania; Ferrari, Valeria; Kokkotas, Kostas; Schneider, Raffaella
2016-08-01
The direct discovery of gravitational waves has finally opened a new observational window on our Universe, suggesting that the population of coalescing binary black holes is larger than previously expected. These sources produce an unresolved background of gravitational waves, potentially observable by ground-based interferometers. In this Letter we investigate how modified theories of gravity, modeled using the parametrized post-Einsteinian formalism, affect the expected signal, and analyze the detectability of the resulting stochastic background by current and future ground-based interferometers. We find the constraints that Advanced LIGO would be able to set on modified theories, showing that they may significantly improve the current bounds obtained from astrophysical observations of binary pulsars.
Distribution-Agnostic Stochastic Optimal Power Flow for Distribution Grids: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler
2016-09-01
This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance constraints; particularly, the mean and covariance matrix of the forecast errors are updated online, and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
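The Chebyshev-based tightening mentioned above can be sketched with the one-sided Chebyshev (Cantelli) inequality, P(v ≥ μ + kσ) ≤ 1/(1 + k²), which needs only the mean and variance of the forecast error. This is a generic illustration, not the paper's reformulation, and the voltage numbers are made up.

```python
# Sketch: enforce P(v > v_max) <= eps for a voltage v whose forecast
# error has mean mu and std sigma, using Cantelli's inequality.
# Distribution-agnostic, hence more conservative than a Gaussian quantile.
import math
from statistics import NormalDist

eps = 0.05
mu, sigma, v_max = 1.00, 0.01, 1.05       # per-unit voltage, made up

k_cheb = math.sqrt((1 - eps) / eps)       # solves 1/(1 + k^2) = eps
k_gauss = NormalDist().inv_cdf(1 - eps)   # if errors were exactly Gaussian

margin_cheb = k_cheb * sigma              # require mu + margin <= v_max
margin_gauss = k_gauss * sigma
print(margin_cheb, margin_gauss)
```

The gap between the two margins (k ≈ 4.36 versus k ≈ 1.64 at eps = 0.05) is the price of making no distributional assumption, and it shrinks as the online mean/covariance estimates justify tighter, data-driven bounds.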
On the Computational Complexity of Stochastic Scheduling Problems,
1981-09-01
Survey": 1979, Ann. Discrete Math. 5, pp. 287-326. (4) Karp, R.M., "Reducibility Among Combinatorial Problems": 1972, R.E. Miller and J.W... Weighted Completion Time Subject to Precedence Constraints": 1978, Ann. Discrete Math. 2, pp. 75-90. (8) Lawler, E.L. and J.W. Moore, "A Functional
Trading strategies for distribution company with stochastic distributed energy resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chunyu; Wang, Qi; Wang, Jianhui
2016-09-01
This paper proposes a methodology to address the trading strategies of a proactive distribution company (PDISCO) engaged in the transmission-level (TL) markets. A one-leader multi-follower bilevel model is presented to formulate the gaming framework between the PDISCO and markets. The lower-level (LL) problems include the TL day-ahead market and scenario-based real-time markets, respectively with the objectives of maximizing social welfare and minimizing operation cost. The upper-level (UL) problem is to maximize the PDISCO's profit across these markets. The PDISCO's strategic offers/bids interactively influence the outcomes of each market. Since the LL problems are linear and convex, while the UL problem is non-linear and non-convex, an equivalent primal–dual approach is used to reformulate this bilevel model to a solvable mathematical program with equilibrium constraints (MPEC). The effectiveness of the proposed model is verified by case studies.
Aziz, Sonia N; Boyle, Kevin J; Crocker, Tom
2015-03-01
Arsenic contamination of groundwater in Bangladesh is a widespread public health hazard. Water sources without high arsenic levels are scarce, affecting people's availability for work and other activities when they have to seek safe water to drink. While children are particularly susceptible to chronic arsenic exposure, limited information and heavy constraints on resources may preclude people in developing countries from taking protective actions. Since parents are primary decision-makers for children, a model of stochastic decision-making analytically linking parent health and child health is used to frame the valuation of avoiding arsenic exposure using an averting behavior model. The results show that safe drinking water programs do work and that people do take protective actions. The results can help guide public health mitigation policies, and examine whether factors such as child health and time required for remediation have an effect on mitigation measures.
Computer software tool REALM for sustainable water allocation and management.
Perera, B J C; James, B; Kularathna, M D U
2005-12-01
REALM (REsource ALlocation Model) is a generalised computer simulation package that models harvesting and bulk distribution of water resources within a water supply system. It is a modelling tool that can be applied to develop specific water allocation models. Like other water resource simulation software tools, REALM uses mass-balance accounting at nodes, while the movement of water within carriers is subject to capacity constraints. It uses a fast network linear programming algorithm to optimise the water allocation within the network during each simulation time step, in accordance with user-defined operating rules. This paper describes the main features of REALM and provides potential users with an appreciation of its capabilities. In particular, it describes two case studies covering major urban and rural water supply systems. These case studies illustrate REALM's capabilities in the use of stochastically generated data in water supply planning and management, the modelling of environmental flows, and the assessment of security-of-supply issues.
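The per-time-step network optimization that REALM performs can be illustrated with a toy allocation problem; the node layout, capacities, and priority weights below are hypothetical, and `scipy.optimize.linprog` stands in for REALM's fast network linear programming algorithm.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-node system: a reservoir supplies demands D1 and D2 through
# two carriers with capacities 60 and 40; reservoir yield is 80 this step.
# Decision variables: x = [flow to D1, flow to D2].
demand = np.array([50.0, 40.0])
capacity = np.array([60.0, 40.0])
yield_ = 80.0

# User-defined operating rules enter as priority weights (D1 outranks D2);
# maximizing weighted delivery is written as minimizing its negative.
weights = np.array([2.0, 1.0])
c = -weights
A_ub = np.ones((1, 2))            # mass balance at the reservoir node
b_ub = np.array([yield_])
bounds = [(0.0, min(capacity[i], demand[i])) for i in range(2)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
flows = res.x                     # optimal allocation for this time step
```

With a yield of 80 against 90 units of constrained demand, the higher-priority demand is met in full (50) and the remainder (30) goes to D2.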
Multivariable optimization of an auto-thermal ammonia synthesis reactor using genetic algorithm
NASA Astrophysics Data System (ADS)
Anh-Nga, Nguyen T.; Tuan-Anh, Nguyen; Tien-Dung, Vu; Kim-Trung, Nguyen
2017-09-01
The ammonia synthesis system is an important chemical process used in the manufacture of fertilizers, chemicals, explosives, fibers, plastics, and refrigeration. In the literature, many works addressing the modeling, simulation and optimization of an auto-thermal ammonia synthesis reactor can be found. However, they focus only on the optimization of the reactor length while keeping the other parameters constant. In this study, other parameters are also included in the optimization problem, such as the temperature of the feed gas entering the catalyst zone. The optimization problem requires the maximization of a multivariable objective function subject to a number of equality constraints involving the solution of coupled differential equations, as well as inequality constraints. The solution of an optimization problem can be found through, among others, deterministic or stochastic approaches. Stochastic methods, such as evolutionary algorithms (EAs), which are inspired by natural phenomena, can overcome drawbacks such as requiring derivatives of the objective function and/or constraints, or being inefficient on non-differentiable or discontinuous problems. The genetic algorithm (GA), a class of EA, is exceptionally simple, robust at numerical optimization, and more likely to find a true global optimum. In this study, the genetic algorithm is employed to find the optimum profit of the process. The inequality constraints were treated using the penalty method. The coupled differential equation system was solved using the 4th-order Runge-Kutta method. The results showed that the presented numerical method can be applied to model the ammonia synthesis reactor. The optimum economic profit obtained in this study is also compared to results from the literature, suggesting that the process should be operated at a higher feed gas temperature in the catalyst zone and with a slightly longer reactor.
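A minimal real-coded genetic algorithm with a penalty method, in the spirit of the approach described; the one-dimensional profit surface below is a hypothetical stand-in for the reactor model, which would require solving the coupled ODEs with RK4 for each candidate design.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(pop):
    # Stand-in profit surface with maximum at x = 2 (the real model would
    # integrate the reactor equations with RK4 to score each candidate).
    return -(pop[:, 0] - 2.0) ** 2 + 4.0

def penalty(pop):
    # Penalty method: inequality g(x) = x - 3 <= 0 is charged when violated.
    g = pop[:, 0] - 3.0
    return 1e3 * np.maximum(g, 0.0) ** 2

def ga_maximize(pop_size=40, n_gen=60, bounds=(0.0, 5.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 1))
    for _ in range(n_gen):
        fit = objective(pop) - penalty(pop)
        elite = pop[np.argmax(fit)].copy()      # elitism: keep the best design
        # Tournament selection
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
        # Arithmetic crossover followed by Gaussian mutation
        mates = parents[rng.permutation(pop_size)]
        alpha = rng.uniform(size=(pop_size, 1))
        children = alpha * parents + (1 - alpha) * mates
        children += rng.normal(0.0, 0.05, children.shape)
        pop = np.clip(children, lo, hi)
        pop[0] = elite
    fit = objective(pop) - penalty(pop)
    return pop[np.argmax(fit), 0]

best = ga_maximize()   # converges near the penalized optimum x = 2
```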
Fitting of full Cobb-Douglas and full VRTS cost frontiers by solving goal programming problem
NASA Astrophysics Data System (ADS)
Venkateswarlu, B.; Mahaboob, B.; Subbarami Reddy, C.; Madhusudhana Rao, B.
2017-11-01
The present research article first defines two popular production functions, viz. the Cobb-Douglas and VRTS production frontiers and their dual cost functions, and then derives their cost-limited maximal outputs. It is shown that the cost-limited maximal output is cost efficient. A one-sided goal programming problem is proposed by which the full Cobb-Douglas cost frontier and the full VRTS frontier can be fitted. The paper also frames the goal programming problems by which stochastic cost frontiers and stochastic VRTS frontiers are fitted. Hasan et al. [1] used a parametric Stochastic Frontier Approach (SFA) to examine the technical efficiency of the Malaysian domestic banks listed on the Kuala Lumpur Stock Exchange (KLSE) over the period 2005-2010. Ashkan Hassani [2] exposed applications of Cobb-Douglas production functions in construction schedule crashing and project risk analysis related to the duration of construction projects. Nan Jiang [3] applied stochastic frontier analysis to a panel of New Zealand dairy farms in 1998/99-2006/07.
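A one-sided goal program for a full (envelope) Cobb-Douglas cost-style frontier can be sketched as a linear program: minimize the total one-sided deviation subject to the fitted frontier lying on or above every observation. The synthetic data and `scipy` solver below are illustrative assumptions, not the article's formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Synthetic data: log-linear technology ln y = 0.5 + 0.7 ln x minus a
# one-sided inefficiency term, so the true frontier lies above every point.
rng = np.random.default_rng(1)
x = rng.uniform(1.0, 10.0, 30)
u = rng.exponential(0.2, 30)          # inefficiency >= 0
lx = np.log(x)
ly = 0.5 + 0.7 * lx - u

# One-sided goal program: min sum_i (a + b*lx_i - ly_i)
# subject to a + b*lx_i >= ly_i (the full frontier envelops the data).
n = len(lx)
c = np.array([n, lx.sum()])
A_ub = np.column_stack([-np.ones(n), -lx])
b_ub = -ly
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (0.0, None)], method="highs")
a_hat, b_hat = res.x                  # fitted full-frontier coefficients
```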
Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak
2016-05-01
Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive formulation mixed-integer programs, problem instances can exceed available memory on a single workstation. In order to overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating Branch and Bound (B&B) methods with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.
Fiore, Andrew M; Swan, James W
2018-01-28
Brownian Dynamics simulations are an important tool for modeling the dynamics of soft matter. However, accurate and rapid computations of the hydrodynamic interactions between suspended, microscopic components in a soft material are a significant computational challenge. Here, we present a new method for Brownian dynamics simulations of suspended colloidal scale particles such as colloids, polymers, surfactants, and proteins subject to a particular and important class of hydrodynamic constraints. The total computational cost of the algorithm is practically linear with the number of particles modeled and can be further optimized when the characteristic mass fractal dimension of the suspended particles is known. Specifically, we consider the so-called "stresslet" constraint for which suspended particles resist local deformation. This acts to produce a symmetric force dipole in the fluid and imparts rigidity to the particles. The presented method is an extension of the recently reported positively split formulation for Ewald summation of the Rotne-Prager-Yamakawa mobility tensor to higher order terms in the hydrodynamic scattering series accounting for force dipoles [A. M. Fiore et al., J. Chem. Phys. 146(12), 124116 (2017)]. The hydrodynamic mobility tensor, which is proportional to the covariance of particle Brownian displacements, is constructed as an Ewald sum in a novel way which guarantees that the real-space and wave-space contributions to the sum are independently symmetric and positive-definite for all possible particle configurations. This property of the Ewald sum is leveraged to rapidly sample the Brownian displacements from a superposition of statistically independent processes with the wave-space and real-space contributions as respective covariances. The cost of computing the Brownian displacements in this way is comparable to the cost of computing the deterministic displacements. 
The addition of a stresslet constraint to the over-damped particle equations of motion leads to a stochastic differential algebraic equation (SDAE) of index 1, which is integrated forward in time using a mid-point integration scheme that implicitly produces stochastic displacements consistent with the fluctuation-dissipation theorem for the constrained system. Calculations for hard sphere dispersions are illustrated and used to explore the performance of the algorithm. An open source, high-performance implementation on graphics processing units capable of dynamic simulations of millions of particles and integrated with the software package HOOMD-blue is used for benchmarking and made freely available in the supplementary material.
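The key sampling idea above, drawing displacements as a superposition of two independent Gaussians whose covariances are the independently positive-definite real-space and wave-space contributions, can be sketched with small dense matrices; the toy mobility splitting below is hypothetical, and Cholesky factorization stands in for the fast samplers used in the actual method.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_brownian(M_real, M_wave, n_samples, dt=1.0, kT=1.0):
    """Draw displacements with covariance 2*kT*dt*(M_real + M_wave) by
    superposing two independent Gaussians, one per SPD contribution --
    no factorization of the full mobility sum is ever formed."""
    Lr = np.linalg.cholesky(M_real)
    Lw = np.linalg.cholesky(M_wave)
    n = M_real.shape[0]
    xi1 = rng.standard_normal((n, n_samples))
    xi2 = rng.standard_normal((n, n_samples))
    return np.sqrt(2.0 * kT * dt) * (Lr @ xi1 + Lw @ xi2)

# Toy SPD splitting of a 3x3 "mobility" (illustrative only)
A = rng.standard_normal((3, 3))
M_real = A @ A.T + 3.0 * np.eye(3)
B = rng.standard_normal((3, 3))
M_wave = B @ B.T + 3.0 * np.eye(3)

dx = sample_brownian(M_real, M_wave, 200000)
cov_emp = dx @ dx.T / dx.shape[1]   # should approach 2*(M_real + M_wave)
```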
Exact and approximate stochastic simulation of intracellular calcium dynamics.
Wieder, Nicolas; Fink, Rainer H A; Wegner, Frederic von
2011-01-01
In simulations of chemical systems, the main task is to find an exact or approximate solution of the chemical master equation (CME) that satisfies certain constraints with respect to computation time and accuracy. While Brownian motion simulations of single molecules are often too time consuming to represent the mesoscopic level, the classical Gillespie algorithm is a stochastically exact algorithm that provides satisfying results in the representation of calcium microdomains. Gillespie's algorithm can be approximated via the tau-leap method and the chemical Langevin equation (CLE). Both methods lead to a substantial acceleration in computation time and a relatively small decrease in accuracy. Elimination of the noise terms leads to the classical, deterministic reaction rate equations (RRE). For complex multiscale systems, hybrid simulations are increasingly proposed to combine the advantages of stochastic and deterministic algorithms. An often-used exemplary cell type in this context is the striated muscle cell (e.g., cardiac and skeletal muscle cells), whose properties are well described and which expresses many common calcium-dependent signaling pathways. The purpose of the present paper is to provide an overview of the aforementioned simulation approaches and their mutual relationships in the spectrum ranging from stochastic to deterministic algorithms.
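Gillespie's direct method, the stochastically exact reference algorithm mentioned above, reduces to a few lines for a single decay reaction; the toy system below is a hypothetical example, not a calcium microdomain model.

```python
import numpy as np

rng = np.random.default_rng(3)

def gillespie_decay(n0, k, t_end):
    """Direct-method SSA for the single reaction A -> 0 with rate k per
    molecule: an exact stochastic trajectory of the molecule count."""
    t, n = 0.0, n0
    times, counts = [0.0], [n0]
    while n > 0:
        a0 = k * n                        # total propensity
        t += rng.exponential(1.0 / a0)    # exponential waiting time
        if t > t_end:
            break
        n -= 1                            # the (only) reaction fires
        times.append(t)
        counts.append(n)
    return times, counts

# The ensemble mean should follow the deterministic RRE n0 * exp(-k*t)
finals = [gillespie_decay(200, 1.0, 1.0)[1][-1] for _ in range(500)]
mean_final = np.mean(finals)
```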
Accelerating deep neural network training with inconsistent stochastic gradient descent.
Wang, Linnan; Yang, Yi; Min, Renqiang; Chakradhar, Srimat
2017-09-01
Stochastic Gradient Descent (SGD) updates a Convolutional Neural Network (CNN) with a noisy gradient computed from a random batch, and each batch evenly updates the network once in an epoch. This model applies the same training effort to each batch, but it overlooks the fact that the gradient variance, induced by Sampling Bias and Intrinsic Image Difference, renders different training dynamics on batches. In this paper, we develop a new training strategy for SGD, referred to as Inconsistent Stochastic Gradient Descent (ISGD), to address this problem. The core concept of ISGD is inconsistent training, which dynamically adjusts the training effort with respect to the loss. ISGD models training as a stochastic process that gradually reduces the mean batch loss, and it utilizes a dynamic upper control limit to identify large-loss batches on the fly. ISGD stays on an identified batch to accelerate training with additional gradient updates, and it also has a constraint to penalize drastic parameter changes. ISGD is straightforward, computationally efficient, and requires no auxiliary memory. A series of empirical evaluations on real-world datasets and networks demonstrate the promising performance of inconsistent training. Copyright © 2017 Elsevier Ltd. All rights reserved.
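A minimal sketch of the control-limit idea, assuming a scalar quadratic "batch loss" in place of a CNN: track running loss statistics and spend extra gradient updates only on batches whose loss exceeds the upper control limit. All names and constants are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

def isgd_like(w0, batches, lr=0.1, z=3.0, extra_steps=3):
    """Inconsistent-training sketch: a running mean/std of batch losses
    defines an upper control limit mean + z*std; an out-of-control batch
    receives extra gradient updates before training moves on."""
    w = w0
    mu, var, t = 0.0, 0.0, 0
    for x in batches:
        loss = 0.5 * (w - x) ** 2
        # Welford update of the running loss statistics
        t += 1
        d = loss - mu
        mu += d / t
        var += d * (loss - mu)
        limit = mu + z * np.sqrt(var / max(t - 1, 1))
        # Inconsistent training: extra effort only on out-of-control batches
        n_steps = 1 + extra_steps if loss > limit else 1
        for _ in range(n_steps):
            w -= lr * (w - x)      # gradient step on this batch
    return w

w_final = isgd_like(0.0, rng.normal(1.0, 0.1, 200))
```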
AESS: Accelerated Exact Stochastic Simulation
NASA Astrophysics Data System (ADS)
Jenkins, David D.; Peterson, Gregory D.
2011-12-01
The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results.
Program summary
Program title: AESS
Catalogue identifier: AEJW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: University of Tennessee copyright agreement
No. of lines in distributed program, including test data, etc.: 10 861
No. of bytes in distributed program, including test data, etc.: 394 631
Distribution format: tar.gz
Programming language: C for processors, CUDA for NVIDIA GPUs
Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs. The system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators.
Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS
Classification: 3, 16.12
Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME.
Solution method: The Accelerated Exact Stochastic Simulation (AESS) tool provides implementations of a wide variety of popular variations on the Gillespie method. Users can select the specific algorithm considered most appropriate. Comparisons between the methods and with other available implementations indicate that AESS provides the fastest known implementation of Gillespie's method for a variety of test models. Users may wish to execute ensembles of simulations to sweep parameters or to obtain better statistical results, so AESS supports acceleration of ensembles of simulation using parallel processing with MPI, SSE vector units on x86 processors, and/or using NVIDIA GPUs with CUDA.
Accelerated probabilistic inference of RNA structure evolution
Holmes, Ian
2005-01-01
Background Pairwise stochastic context-free grammars (Pair SCFGs) are powerful tools for evolutionary analysis of RNA, including simultaneous RNA sequence alignment and secondary structure prediction, but the associated algorithms are intensive in both CPU and memory usage. The same problem is faced by other RNA alignment-and-folding algorithms based on Sankoff's 1985 algorithm. It is therefore desirable to constrain such algorithms, by pre-processing the sequences and using this first pass to limit the range of structures and/or alignments that can be considered. Results We demonstrate how flexible classes of constraint can be imposed, greatly reducing the computational costs while maintaining a high quality of structural homology prediction. Any score-attributed context-free grammar (e.g. energy-based scoring schemes, or conditionally normalized Pair SCFGs) is amenable to this treatment. It is now possible to combine independent structural and alignment constraints of unprecedented general flexibility in Pair SCFG alignment algorithms. We outline several applications to the bioinformatics of RNA sequence and structure, including Waterman-Eggert N-best alignments and progressive multiple alignment. We evaluate the performance of the algorithm on test examples from the RFAM database. Conclusion A program, Stemloc, that implements these algorithms for efficient RNA sequence alignment and structure prediction is available under the GNU General Public License. PMID:15790387
Sheng, Li; Wang, Zidong; Zou, Lei; Alsaadi, Fuad E
2017-10-01
In this paper, the event-based finite-horizon H∞ state estimation problem is investigated for a class of discrete time-varying stochastic dynamical networks with state- and disturbance-dependent noises [also called (x,v)-dependent noises]. An event-triggered scheme is proposed to decrease the frequency of the data transmission between the sensors and the estimator, where the signal is transmitted only when certain conditions are satisfied. The purpose of the problem addressed is to design a time-varying state estimator in order to estimate the network states through available output measurements. By employing the completing-the-square technique and the stochastic analysis approach, sufficient conditions are established to ensure that the error dynamics of the state estimation satisfies a prescribed H∞ performance constraint over a finite horizon. The desired estimator parameters can be designed via solving coupled backward recursive Riccati difference equations. Finally, a numerical example is exploited to demonstrate the effectiveness of the developed state estimation scheme.
Hidden symmetries and equilibrium properties of multiplicative white-noise stochastic processes
NASA Astrophysics Data System (ADS)
González Arenas, Zochil; Barci, Daniel G.
2012-12-01
Multiplicative white-noise stochastic processes continue to attract attention in a wide area of scientific research. The variety of prescriptions available for defining them makes the development of general tools for their characterization difficult. In this work, we study equilibrium properties of Markovian multiplicative white-noise processes. For this, we define the time reversal transformation for such processes, taking into account that the asymptotic stationary probability distribution depends on the prescription. Representing the stochastic process in a functional Grassmann formalism, we avoid the necessity of fixing a particular prescription. In this framework, we analyze equilibrium properties and study hidden symmetries of the process. We show that, using a careful definition of the equilibrium distribution and taking into account the appropriate time reversal transformation, usual equilibrium properties are satisfied for any prescription. Finally, we present a detailed deduction of a covariant supersymmetric formulation of a multiplicative Markovian white-noise process and study some of the constraints that it imposes on correlation functions using Ward-Takahashi identities.
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo
2018-03-01
In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenges of high-dimensional random variables inherent in the conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods containing 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
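The classical spectral representation that the unified formulation builds on can be sketched as a superposition of cosines with random phases; the one-sided PSD below is a hypothetical stand-in for a wind spectrum, and no dimension-reduction constraint is applied.

```python
import numpy as np

rng = np.random.default_rng(5)

def srm_simulate(S, w_max, N, t):
    """Classical spectral representation of a scalar stationary process:
    superpose N cosines on a frequency grid with i.i.d. uniform random
    phases; the target one-sided PSD S(w) sets the amplitudes."""
    dw = w_max / N
    w = (np.arange(N) + 0.5) * dw
    amp = np.sqrt(2.0 * S(w) * dw)             # amplitude per frequency
    phi = rng.uniform(0.0, 2.0 * np.pi, N)     # random phases
    return (amp[:, None] * np.cos(np.outer(w, t) + phi[:, None])).sum(axis=0)

S = lambda w: 1.0 / (1.0 + w ** 2)   # hypothetical one-sided PSD
t = np.linspace(0.0, 200.0, 4000)
x = srm_simulate(S, w_max=20.0, N=512, t=t)
```

The process variance should match the integrated PSD, here arctan(20); a production implementation would evaluate the cosine sum with the FFT, as the paper does.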
An invariance property of generalized Pearson random walks in bounded geometries
NASA Astrophysics Data System (ADS)
Mazzolo, Alain
2009-03-01
Invariance properties of random walks in bounded domains are a topic of growing interest since they contribute to improving our understanding of diffusion in confined geometries. Recently, limited to Pearson random walks with exponentially distributed straight paths, it has been shown that under isotropic uniform incidence, the average length of the trajectories through the domain is independent of the random walk characteristic and depends only on the ratio of the volume's domain over its surface. In this paper, thanks to arguments of integral geometry, we generalize this property to any isotropic bounded stochastic process and we give the conditions of its validity for isotropic unbounded stochastic processes. The analytical form for the traveled distance from the boundary to the first scattering event that ensures the validity of the Cauchy formula is also derived. The generalization of the Cauchy formula is an analytical constraint that thus concerns a very wide range of stochastic processes, from the original Pearson random walk to a Rayleigh distribution of the displacements, covering many situations of physical importance.
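The Cauchy mean-chord formula at the heart of this invariance property is easy to check by Monte Carlo; in 2D it reads ⟨c⟩ = πA/P, giving π/2 for the unit disk. Uniform random lines through a disk can be sampled by a uniform impact parameter:

```python
import numpy as np

rng = np.random.default_rng(6)

# Cauchy's formula in 2D: for uniform isotropic random lines through a
# convex body, the mean chord length is pi*A/P, independent of the details
# of the walk. For the unit disk: pi*(pi*1^2)/(2*pi*1) = pi/2.
b = rng.uniform(-1.0, 1.0, 1_000_000)   # impact parameter of the line
chords = 2.0 * np.sqrt(1.0 - b ** 2)    # chord length at offset b
mean_chord = chords.mean()              # should approach pi/2
```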
Removing Barriers for Effective Deployment of Intermittent Renewable Generation
NASA Astrophysics Data System (ADS)
Arabali, Amirsaman
The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with a hybrid renewable generation (including wind and PV) and energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system including the cost of renewable generation and storage subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint which shows to what degree the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated through different scenarios. Using a compromise-solution method, the best compromise between the reliability and cost will be realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine the storage application and its optimal placement for the social cost and transmission congestion relief of wind integration.
Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. In order to mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment and reliability of the system as the objective functions. A Non-dominated Sorting Genetic Algorithm (NSGA II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Tests System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.
Semiclassical Wheeler-DeWitt equation: Solutions for long-wavelength fields
NASA Astrophysics Data System (ADS)
Salopek, D. S.; Stewart, J. M.; Parry, J.
1993-07-01
In the long-wavelength approximation, a general set of semiclassical wave functionals is given for gravity and matter interacting in 3+1 dimensions. In the long-wavelength theory, one neglects second-order spatial gradients in the energy constraint. These solutions satisfy the Hamilton-Jacobi equation, the momentum constraint, and the equation of continuity. It is essential to introduce inhomogeneities to discuss the role of time. The time hypersurface is chosen to be a homogeneous field in the wave functional. It is shown how to introduce tracer particles through a dust field χ into the dynamical system. The formalism can be used to describe stochastic inflation.
Yu Wei; Michael Bevers; Erin J. Belval
2015-01-01
Initial attack dispatch rules can help shorten fire suppression response times by providing easy-to-follow recommendations based on fire weather, discovery time, location, and other factors that may influence fire behavior and the appropriate response. A new procedure is combined with a stochastic programming model and tested in this study for designing initial attack...
Fu, Zhenghui; Wang, Han; Lu, Wentao; Guo, Huaicheng; Li, Wei
2017-12-01
Electric power systems involve different fields and disciplines, addressing economic, energy, and environmental systems; the inherent uncertainty of this compound system is an inevitable problem. Therefore, an inexact multistage fuzzy-stochastic programming (IMFSP) approach was developed for regional electric power system management constrained by environmental quality. A model combining interval-parameter programming, multistage stochastic programming, and fuzzy probability distributions was built to reflect the uncertain information and dynamic variation in the case study, and scenarios under different credibility degrees were considered. For all scenarios under consideration, corrective actions were allowed to be taken dynamically in accordance with the pre-regulated policies and the uncertainties in reality. The results suggest that the methodology is applicable to handling the uncertainty of regional electric power management systems and can help decision makers establish an effective development plan.
Multicriteria approaches for a private equity fund
NASA Astrophysics Data System (ADS)
Tammer, Christiane; Tannert, Johannes
2012-09-01
We develop a new model for a Private Equity Fund based on stochastic differential equations. In order to find efficient strategies for the fund manager we formulate a multicriteria optimization problem for a Private Equity Fund. Using the e-constraint method we solve this multicriteria optimization problem. Furthermore, a genetic algorithm is applied in order to get an approximation of the efficient frontier.
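The e-constraint method reduces a multicriteria problem to a family of single-objective problems: minimize one objective while bounding the others, then sweep the bound to trace the efficient frontier. A sketch on a hypothetical bi-objective toy (not the Private Equity Fund model), using `scipy`:

```python
import numpy as np
from scipy.optimize import minimize

# Toy bi-objective stand-in: f1(x) = x^2, f2(x) = (x - 2)^2 on the reals.
f1 = lambda x: x[0] ** 2
f2 = lambda x: (x[0] - 2.0) ** 2

def eps_constraint(eps):
    """e-constraint scalarization: minimize f1 subject to f2 <= eps."""
    res = minimize(f1, x0=[1.0],
                   constraints=[{"type": "ineq", "fun": lambda x: eps - f2(x)}])
    return res.x[0], f1(res.x), f2(res.x)

# Sweeping eps traces an approximation of the efficient frontier x = 2 - sqrt(eps)
front = [eps_constraint(eps) for eps in (0.25, 1.0, 2.25)]
```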
Using Multi-Objective Genetic Programming to Synthesize Stochastic Processes
NASA Astrophysics Data System (ADS)
Ross, Brian; Imada, Janine
Genetic programming is used to automatically construct stochastic processes written in the stochastic π-calculus. Grammar-guided genetic programming constrains search to useful process algebra structures. The time-series behaviour of a target process is denoted with a suitable selection of statistical feature tests. Feature tests can permit complex process behaviours to be effectively evaluated. However, they must be selected with care, in order to accurately characterize the desired process behaviour. Multi-objective evaluation is shown to be appropriate for this application, since it permits heterogeneous statistical feature tests to reside as independent objectives. Multiple undominated solutions can be saved and evaluated after a run, for determination of those that are most appropriate. Since there can be a vast number of candidate solutions, however, strategies for filtering and analyzing this set are required.
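The multi-objective bookkeeping described, saving the undominated solutions across heterogeneous statistical feature tests, reduces to a Pareto filter; the candidate scores below are hypothetical.

```python
import numpy as np

def undominated(points):
    """Keep the points that no other point dominates (minimization):
    p dominates q if p <= q in every objective and p < q in at least one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(tuple(p))
    return keep

# Candidate processes scored on two feature tests (lower is better)
scores = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (2.0, 2.0)]
front = undominated(scores)   # (3, 3) is dominated by (2, 2)
```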
Wang, Huanqing; Chen, Bing; Liu, Xiaoping; Liu, Kefu; Lin, Chong
2013-12-01
This paper is concerned with the problem of adaptive fuzzy tracking control for a class of pure-feedback stochastic nonlinear systems with input saturation. To overcome the design difficulty from nondifferential saturation nonlinearity, a smooth nonlinear function of the control input signal is first introduced to approximate the saturation function; then, an adaptive fuzzy tracking controller based on the mean-value theorem is constructed by using backstepping technique. The proposed adaptive fuzzy controller guarantees that all signals in the closed-loop system are bounded in probability and the system output eventually converges to a small neighborhood of the desired reference signal in the sense of mean quartic value. Simulation results further illustrate the effectiveness of the proposed control scheme.
Solution Methods for Stochastic Dynamic Linear Programs.
1980-12-01
16, No. 11, pp. 652-675, July 1970. [28] Glassey, C.R., "Dynamic linear programs for production scheduling", OR 19, pp. 45-56, 1971. [29] Glassey, C.R... Huang, C.C., I. Vertinsky, W.T. Ziemba, "Sharp bounds on the value of perfect information", OR 25, pp. 128-139, 1977. [37] Kall, P., "Computational... 1971. [70] Ziemba, W.T., "Computational algorithms for convex stochastic programs with simple recourse", OR 8, pp. 414-431, 1970.
Conditioning 3D object-based models to dense well data
NASA Astrophysics Data System (ADS)
Wang, Yimin C.; Pyrcz, Michael J.; Catuneanu, Octavian; Boisvert, Jeff B.
2018-06-01
Object-based stochastic simulation models are used to generate categorical variable models with a realistic representation of complicated reservoir heterogeneity. A limitation of object-based modeling is the difficulty of conditioning to dense data. One method to achieve data conditioning is to apply optimization techniques. Optimization algorithms can utilize an objective function measuring the conditioning level of each object while also considering the geological realism of the object. Here, an objective function is optimized with implicit filtering which considers constraints on object parameters. Thousands of objects conditioned to data are generated and stored in a database. A set of objects are selected with linear integer programming to generate the final realization and honor all well data, proportions and other desirable geological features. Although any parameterizable object can be considered, objects from fluvial reservoirs are used to illustrate the ability to simultaneously condition multiple types of geologic features. Channels, levees, crevasse splays and oxbow lakes are parameterized based on location, path, orientation and profile shapes. Functions mimicking natural river sinuosity are used for the centerline model. Channel stacking pattern constraints are also included to enhance the geological realism of object interactions. Spatial layout correlations between different types of objects are modeled. Three case studies demonstrate the flexibility of the proposed optimization-simulation method. These examples include multiple channels with high sinuosity, as well as fragmented channels affected by limited preservation. In all cases the proposed method reproduces input parameters for the object geometries and matches the dense well constraints. The proposed methodology expands the applicability of object-based simulation to complex and heterogeneous geological environments with dense sampling.
FERN - a Java framework for stochastic simulation and evaluation of reaction networks.
Erhard, Florian; Friedel, Caroline C; Zimmer, Ralf
2008-08-29
Stochastic simulation can be used to illustrate the development of biological systems over time and the stochastic nature of these processes. Currently available programs for stochastic simulation, however, are limited in that they either a) do not provide the most efficient simulation algorithms and are difficult to extend, b) cannot be easily integrated into other applications, or c) do not allow the user to monitor and intervene in the simulation process in an easy and intuitive way. Thus, in order to use stochastic simulation in innovative high-level modeling and analysis approaches, more flexible tools are necessary. In this article, we present FERN (Framework for Evaluation of Reaction Networks), a Java framework for the efficient simulation of chemical reaction networks. FERN is subdivided into three layers for network representation, simulation and visualization of the simulation results, each of which can be easily extended. It provides efficient and accurate state-of-the-art stochastic simulation algorithms for well-mixed chemical systems and a powerful observer system, which makes it possible to track and control the simulation progress on every level. To illustrate how FERN can be easily integrated into other systems biology applications, plugins to Cytoscape and CellDesigner are included. These plugins make it possible to run simulations and to observe the simulation progress in a reaction network in real time from within the Cytoscape or CellDesigner environment. FERN addresses shortcomings of currently available stochastic simulation programs in several ways. First, it provides a broad range of efficient and accurate algorithms both for exact and approximate stochastic simulation and a simple interface for extending to new algorithms. FERN's implementations are considerably faster than the C implementations of gillespie2 or the Java implementations of ISBJava.
Second, it can be used in a straightforward way both as a stand-alone program and within new systems biology applications. Finally, complex scenarios requiring intervention during the simulation progress can be modelled easily with FERN.
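The exact stochastic simulation algorithms FERN provides are variants of Gillespie's direct method. A self-contained sketch of that method (not FERN's Java API, which is not shown here), applied to a first-order decay reaction:

```python
import random

def gillespie(propensities, update, state, t_max, rng=random.Random(1)):
    """Minimal Gillespie direct-method SSA.
    propensities(state) -> list of reaction rates a_j
    update(state, j)    -> state after firing reaction j
    """
    t, trajectory = 0.0, [(0.0, state)]
    while t < t_max:
        a = propensities(state)
        a0 = sum(a)
        if a0 == 0:                 # no reaction can fire any more
            break
        t += rng.expovariate(a0)    # exponential waiting time to next event
        r, j, acc = rng.random() * a0, 0, a[0]
        while acc < r:              # pick reaction j proportional to a_j
            j += 1
            acc += a[j]
        state = update(state, j)
        trajectory.append((t, state))
    return trajectory

# Irreversible decay A -> 0 with per-molecule rate 0.5, starting from 20 copies
traj = gillespie(lambda n: [0.5 * n], lambda n, j: n - 1, 20, t_max=100.0)
```

An observer system like FERN's would hook into each iteration of this loop to monitor or intervene; here the whole trajectory is simply returned.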
IMPLICIT DUAL CONTROL BASED ON PARTICLE FILTERING AND FORWARD DYNAMIC PROGRAMMING.
Bayard, David S; Schumitzky, Alan
2010-03-01
This paper develops a sampling-based approach to implicit dual control. Implicit dual control methods synthesize stochastic control policies by systematically approximating the stochastic dynamic programming equations of Bellman, in contrast to explicit dual control methods that artificially induce probing into the control law by modifying the cost function to include a term that rewards learning. The proposed implicit dual control approach is novel in that it combines a particle filter with a policy-iteration method for forward dynamic programming. The integration of the two methods provides a complete sampling-based approach to the problem. Implementation of the approach is simplified by making use of a specific architecture denoted as an H-block. Practical suggestions are given for reducing computational loads within the H-block for real-time applications. As an example, the method is applied to the control of a stochastic pendulum model having unknown mass, length, initial position and velocity, and unknown sign of its dc gain. Simulation results indicate that active controllers based on the described method can systematically improve closed-loop performance with respect to other more common stochastic control approaches.
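The particle-filter half of the approach can be sketched independently of the dual-control machinery. Below is a minimal bootstrap particle filter for a scalar random-walk state with Gaussian observation noise; the model and all parameters are illustrative assumptions, not the paper's pendulum example:

```python
import math
import random

def particle_filter(ys, n=500, q=0.5, r=0.5, rng=random.Random(7)):
    """Bootstrap particle filter for x_t = x_{t-1} + N(0, q), y_t = x_t + N(0, r).
    Returns the weighted posterior-mean estimate of x_t after each observation."""
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in ys:
        # propagate each particle through the state transition
        particles = [x + rng.gauss(0.0, math.sqrt(q)) for x in particles]
        # weight by the observation likelihood
        w = [math.exp(-(y - x) ** 2 / (2 * r)) for x in particles]
        total = sum(w)
        w = [wi / total for wi in w]
        estimates.append(sum(wi * x for wi, x in zip(w, particles)))
        # multinomial resampling to avoid weight degeneracy
        particles = rng.choices(particles, weights=w, k=n)
    return estimates

est = particle_filter([1.0, 1.2, 0.9, 1.1])
```

In the implicit dual control scheme, the particle set plays the role of the information state fed into the forward dynamic programming recursion.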
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel; Tilmant, Amaury
2015-04-01
Stochastic programming methods are better suited to deal with the inherent uncertainty of inflow time series in water resource management. However, one of the most important hurdles in their use in practical implementations is the lack of generalized Decision Support System (DSS) shells, usually based on a deterministic approach. The purpose of this contribution is to present a general-purpose DSS shell, named Explicit Stochastic Programming Advanced Tool (ESPAT), able to build and solve stochastic programming problems for most water resource systems. It implements a hydro-economic approach, optimizing the total system benefits as the sum of the benefits obtained by each user. It has been coded using GAMS, and implements a Microsoft Excel interface with a GAMS-Excel link that allows the user to introduce the required data and recover the results. Therefore, no GAMS skills are required to run the program. The tool is divided into four modules according to its capabilities: 1) the ESPATR module, which performs stochastic optimization procedures in surface water systems using a Stochastic Dual Dynamic Programming (SDDP) approach; 2) the ESPAT_RA module, which optimizes coupled surface-groundwater systems using a modified SDDP approach; 3) the ESPAT_SDP module, capable of performing stochastic optimization procedures in small-size surface systems using a standard SDP approach; and 4) the ESPAT_DET module, which implements a deterministic programming procedure using non-linear programming, able to solve deterministic optimization problems in complex surface-groundwater river basins. The case study of the Mijares river basin (Spain) is used to illustrate the method. It consists of two reservoirs in series, one aquifer and four agricultural demand sites currently managed using historical (XIV century) rights, which give priority to the most traditional irrigation district over the XX century agricultural developments.
Its size makes it possible to use either the SDP or the SDDP method. The independent use of surface water and groundwater can be examined with and without the aquifer. The ESPAT_DET, ESPATR and ESPAT_SDP modules were executed for the surface system, while the ESPAT_RA and ESPAT_DET modules were run for the surface-groundwater system. The surface system's results show a similar performance between the ESPAT_SDP and ESPATR modules, which outperform the current policies while being outperformed by the ESPAT_DET results, which have the advantage of perfect foresight. The surface-groundwater system's results show a robust situation in which the differences between the modules' results and the current policies are smaller, due to the use of pumped groundwater for the XX century crops when surface water is scarce. The results are realistic, with the deterministic optimization outperforming the stochastic one, which in turn outperforms the current policies, showing that the tool is able to stochastically optimize river-aquifer water resource systems. We are currently working on the application of these tools to the analysis of changes in system operation under global change conditions. ACKNOWLEDGEMENT: This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) funds.
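The standard SDP recursion underlying a module like ESPAT_SDP can be sketched for a single discretised reservoir. Everything here (state grid, i.i.d. inflow scenarios, concave benefit function) is an illustrative assumption, not ESPAT's actual formulation:

```python
def reservoir_sdp(T, capacity, inflows, benefit):
    """Backward stochastic dynamic programming over stages t = 0..T-1.
    inflows: list of (probability, inflow) scenarios, i.i.d. per stage.
    benefit(release) -> immediate benefit of releasing that volume.
    Returns value[t][s] and policy[t][s] over storage levels 0..capacity."""
    value = [[0.0] * (capacity + 1) for _ in range(T + 1)]
    policy = [[0] * (capacity + 1) for _ in range(T)]
    for t in range(T - 1, -1, -1):
        for s in range(capacity + 1):
            best, best_r = float("-inf"), 0
            for r in range(s + 1):           # release at most current storage
                exp = 0.0
                for p, q in inflows:
                    s_next = min(s - r + q, capacity)   # spill above capacity
                    exp += p * (benefit(r) + value[t + 1][s_next])
                if exp > best:
                    best, best_r = exp, r
            value[t][s], policy[t][s] = best, best_r
    return value, policy

value, policy = reservoir_sdp(T=3, capacity=4,
                              inflows=[(0.5, 0), (0.5, 2)],
                              benefit=lambda r: r ** 0.5)
```

SDDP replaces this exhaustive state-space sweep with sampled forward passes and cutting-plane approximations of the value function, which is what makes larger systems tractable.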
Model-based control strategies for systems with constraints of the program type
NASA Astrophysics Data System (ADS)
Jarzębowska, Elżbieta
2006-08-01
The paper presents a model-based tracking control strategy for constrained mechanical systems. The constraints we consider can be material or non-material ones, the latter referred to as program constraints. The program constraint equations represent tasks put upon system motions; they can be differential equations of order higher than one or two, and may be non-integrable. The tracking control strategy relies upon two dynamic models: a reference model, which is a dynamic model of a system with arbitrary-order differential constraints, and a dynamic control model. The reference model serves as a motion planner, which generates inputs to the dynamic control model. It is based upon the generalized program motion equations (GPME) method. The method makes it possible to combine material and program constraints and merge them into the motion equations. Lagrange's equations with multipliers are a special case of the GPME, since they apply to systems with first-order constraints. Our tracking strategy, referred to as a model reference program motion tracking control strategy, enables tracking of any program motion predefined by the program constraints. It extends "trajectory tracking" to "program motion tracking". We also demonstrate that our tracking strategy can be extended to hybrid program motion/force tracking.
K-Minimax Stochastic Programming Problems
NASA Astrophysics Data System (ADS)
Nedeva, C.
2007-10-01
The purpose of this paper is to discuss a numerical procedure, based on the simplex method, for stochastic optimization problems with partially known distribution functions. The convergence of this procedure is proved under conditions on the dual problems.
NASA Technical Reports Server (NTRS)
Farhat, Nabil H.
1987-01-01
Self-organization and learning are distinctive features of neural nets and processors that set them apart from conventional approaches to signal processing. They lead to self-programmability, which alleviates the problem of programming complexity in artificial neural nets. In this paper, architectures for partitioning an optoelectronic analog of a neural net into distinct layers with a prescribed interconnectivity pattern, to enable stochastic learning by simulated annealing in the context of a Boltzmann machine, are presented. Stochastic learning is of interest because of its relevance to the role of noise in biological neural nets. Practical considerations and methodologies for appreciably accelerating stochastic learning in such a multilayered net are described. These include the use of parallel optical computing of the global energy of the net, the use of fast nonvolatile programmable spatial light modulators to realize fast plasticity, optical generation of random number arrays, and an adaptive noisy thresholding scheme that also makes stochastic learning more biologically plausible. The findings reported predict optoelectronic chips that can be used in the realization of optical learning machines.
2010-11-01
November 2010. Context: the computing power available today allows us to study problems for which no analytical solution exists... [The remainder of this record is OCR residue from the table of contents of DRDC CORA TM 2010-249; recoverable fragments mention a proof of a corollary, optimal capacities for links, and a figure on the probability of achieving the optimum.]
High-Frequency Sound Interaction in Ocean Sediments
2002-09-30
[OCR fragments] ...sediment attenuation (10-300 kHz) and sound speed (10-300 kHz) and determine constraints imposed on sediment acoustic models, such as poroelastic (Biot) models; "...by poroelastic seafloors: First-order theory," accepted for publication in J. Acoust. Soc. Am.; K. L. Williams, "An effective density fluid model..."; ...poroelastic sediment models, the appropriateness of stochastic descriptions of sediment heterogeneities, the importance of single versus multiple...
NASA Astrophysics Data System (ADS)
Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro
2017-08-01
This third part extends the theory of Generalized Poisson-Kac (GPK) processes to nonlinear stochastic models and to a continuum of states. Nonlinearity is treated in two ways: (i) as a dependence of the parameters (intensity of the stochastic velocity, transition rates) of the stochastic perturbation on the state variable, similarly to the case of nonlinear Langevin equations, and (ii) as the dependence of the stochastic microdynamic equations of motion on the statistical description of the process itself (nonlinear Fokker-Planck-Kac models). Several numerical and physical examples illustrate the theory. Gathering nonlinearity and a continuum of states, GPK theory provides a stochastic derivation of the nonlinear Boltzmann equation, furnishing a positive answer to the Kac’s program in kinetic theory. The transition from stochastic microdynamics to transport theory within the framework of the GPK paradigm is also addressed.
Adaptive Urban Stormwater Management Using a Two-stage Stochastic Optimization Model
NASA Astrophysics Data System (ADS)
Hung, F.; Hobbs, B. F.; McGarity, A. E.
2014-12-01
In many older cities, stormwater results in combined sewer overflows (CSOs) and consequent water quality impairments. Because of the expense of traditional approaches for controlling CSOs, cities are considering the use of green infrastructure (GI) to reduce runoff and pollutants. Examples of GI include tree trenches, rain gardens, green roofs, and rain barrels. However, the cost and effectiveness of GI are uncertain, especially at the watershed scale. We present a two-stage stochastic extension of the Stormwater Investment Strategy Evaluation (StormWISE) model (A. McGarity, JWRPM, 2012, 111-24) to explicitly model and optimize these uncertainties in an adaptive management framework. A two-stage model represents the immediate commitment of resources ("here & now") followed by later investment and adaptation decisions ("wait & see"). A case study is presented for Philadelphia, which intends to extensively deploy GI over the next two decades (PWD, "Green City, Clean Water - Implementation and Adaptive Management Plan," 2011). After first-stage decisions are made, the model updates the stochastic objective and constraints (learning). We model two types of "learning" about GI cost and performance. One assumes that learning occurs over time, is automatic, and does not depend on what has been done in stage one (basic model). The other considers learning resulting from active experimentation and learning-by-doing (advanced model). Both require expert probability elicitations, and learning from research and monitoring is modelled by Bayesian updating (as in S. Jacobi et al., JWRPM, 2013, 534-43). The model allocates limited financial resources to GI investments over time to achieve multiple objectives with a given reliability. Objectives include minimizing construction and O&M costs; achieving nutrient, sediment, and runoff volume targets; and community concerns, such as aesthetics, CO2 emissions, heat islands, and recreational values. 
CVaR (Conditional Value at Risk) and chance constraints are placed on the objectives to achieve desired confidence levels. By varying the budgets, reliability constraints, and priorities among other objectives, we generate a range of GI deployment strategies that represent tradeoffs among objectives as well as the confidence in achieving them.
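The two-stage structure (commit resources now, take recourse later) can be illustrated with a deliberately tiny scenario model: choose a first-stage GI capacity, then pay a penalty on the expected runoff shortfall across uncertain effectiveness scenarios. All numbers below are invented for illustration and are unrelated to the StormWISE model:

```python
def expected_cost(x, scenarios, unit_cost=1.0, penalty=5.0, target=10.0):
    """First-stage construction cost plus expected second-stage recourse.
    x: GI capacity built here-and-now; each scenario (p, effectiveness)
    captures x * effectiveness units of runoff, and any shortfall against
    the target is penalized in the wait-and-see stage."""
    stage1 = unit_cost * x
    stage2 = sum(p * penalty * max(0.0, target - eff * x)
                 for p, eff in scenarios)
    return stage1 + stage2

# Three equally plausible views of how well the GI will perform
scenarios = [(0.3, 0.5), (0.5, 0.8), (0.2, 1.1)]
best_x = min(range(0, 31), key=lambda x: expected_cost(x, scenarios))
```

Learning in the adaptive framework amounts to replacing the scenario probabilities with Bayesian-updated ones after first-stage monitoring, then re-solving the wait-and-see stage.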
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui
2018-06-01
This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured by just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach makes it possible to carry out dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structure have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
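The original spectral representation that the paper starts from generates a stationary process as a sum of cosines with random phases. A minimal sketch for one scalar process, using an assumed rational spectrum and fully independent random phases (i.e. without the paper's dimension-reducing random functions):

```python
import math
import random

def spectral_sample(S, omega_max, N, times, rng=random.Random(3)):
    """One sample of a zero-mean stationary process via the spectral
    representation X(t) = sum_k sqrt(2 S(w_k) dw) cos(w_k t + phi_k),
    with i.i.d. phases phi_k ~ Uniform(0, 2*pi)."""
    dw = omega_max / N
    terms = [(math.sqrt(2.0 * S((k + 0.5) * dw) * dw),   # amplitude
              (k + 0.5) * dw,                            # frequency
              rng.uniform(0.0, 2.0 * math.pi))           # random phase
             for k in range(N)]
    return [sum(A * math.cos(w * t + phi) for A, w, phi in terms)
            for t in times]

# Band-limited rational spectrum as a stand-in for a turbulence spectrum
xs = spectral_sample(lambda w: 1.0 / (1.0 + w * w), omega_max=10.0, N=256,
                     times=[0.1 * i for i in range(100)])
```

The paper's contribution is to replace the 2N independent random variables implicit in these phases with functions of just two elementary random variables, so that a small representative point set suffices for PDEM analysis.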
Planning and Scheduling for Fleets of Earth Observing Satellites
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Jonsson, Ari; Morris, Robert; Smith, David E.; Norvig, Peter (Technical Monitor)
2001-01-01
We address the problem of scheduling observations for a collection of earth observing satellites. This scheduling task is a difficult optimization problem, potentially involving many satellites, hundreds of requests, constraints on when and how to service each request, and resources such as instruments, recording devices, transmitters, and ground stations. High-fidelity models are required to ensure the validity of schedules; at the same time, the size and complexity of the problem makes it unlikely that systematic optimization search methods will be able to solve them in a reasonable time. This paper presents a constraint-based approach to solving the Earth Observing Satellites (EOS) scheduling problem, and proposes a stochastic heuristic search method for solving it.
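A stochastic heuristic search of the kind proposed can be sketched as randomized local search over request subsets under a single capacity resource. The instance and move rule below are illustrative assumptions, far simpler than the EOS constraint model:

```python
import random

def stochastic_schedule(requests, capacity, iters=2000, rng=random.Random(5)):
    """Stochastic local search: flip one request in/out per step and keep the
    change if it stays feasible and does not lower total priority.
    requests: list of (duration, priority)."""
    chosen = [False] * len(requests)

    def load(sel):
        return sum(d for (d, _), c in zip(requests, sel) if c)

    def value(sel):
        return sum(p for (_, p), c in zip(requests, sel) if c)

    for _ in range(iters):
        i = rng.randrange(len(requests))
        trial = chosen[:]
        trial[i] = not trial[i]
        if load(trial) <= capacity and value(trial) >= value(chosen):
            chosen = trial
    return chosen, value(chosen)

# Four observation requests: (duration, priority), one shared capacity of 6
reqs = [(3, 5), (2, 3), (4, 7), (1, 2)]
sel, best = stochastic_schedule(reqs, capacity=6)
```

Accepting equal-value moves lets the search drift across plateaus; a full EOS scheduler would additionally propagate time-window and resource constraints before accepting each move.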
Boosting Stochastic Problem Solvers Through Online Self-Analysis of Performance
2003-07-21
Boosting Stochastic Problem Solvers Through Online Self-Analysis of Performance, Vincent A. Cicirello, CMU-RI-TR-03-27, submitted in partial fulfillment. [Report-documentation-page OCR residue removed.] ...led to the development of a search control framework, called QD-BEACON, that uses online-generated statistical models of search performance to ...
New Results on a Stochastic Duel Game with Each Force Consisting of Heterogeneous Units
2013-02-01
Naval Postgraduate School, Monterey, California. [Report-documentation-page OCR residue removed.] Abstract: Two forces engage in a duel, with each force initially consisting of several ...
FINITE-STATE APPROXIMATIONS TO DENUMERABLE-STATE DYNAMIC PROGRAMS,
AIR FORCE OPERATIONS, LOGISTICS), (*INVENTORY CONTROL, DYNAMIC PROGRAMMING), (*DYNAMIC PROGRAMMING, APPROXIMATION(MATHEMATICS)), INVENTORY CONTROL, DECISION MAKING, STOCHASTIC PROCESSES, GAME THEORY, ALGORITHMS, CONVERGENCE
NASA Astrophysics Data System (ADS)
Daskalou, Olympia; Karanastasi, Maria; Markonis, Yannis; Dimitriadis, Panayiotis; Koukouvinos, Antonis; Efstratiadis, Andreas; Koutsoyiannis, Demetris
2016-04-01
Following the legislative EU targets and taking advantage of its high renewable energy potential, Greece can obtain significant benefits from developing its water, solar and wind energy resources. In this context we present a GIS-based methodology for the optimal sizing and siting of solar and wind energy systems at the regional scale, which is tested in the Prefecture of Thessaly. First, we assess the wind and solar potential, taking into account the stochastic nature of the associated meteorological processes (i.e. wind speed and solar radiation, respectively), which is an essential component for both planning (i.e., type selection and sizing of photovoltaic panels and wind turbines) and management purposes (i.e., real-time operation of the system). For the optimal siting, we assess the efficiency and economic performance of the energy system, also accounting for a number of constraints associated with topographic limitations (e.g., terrain slope, proximity to road and electricity grid network, etc.), the environmental legislation and other land use constraints. Based on this analysis, we investigate favorable alternatives using technical, environmental, and financial criteria. The final outcome is GIS maps that depict the available energy potential and the optimal layout for photovoltaic panels and wind turbines over the study area. We also consider a hypothetical scenario of future development of the study area, in which we assume the combined operation of the above renewables with major hydroelectric dams and pumped-storage facilities, thus providing a unique hybrid renewable system, extended at the regional scale.
A reliability-based cost effective fail-safe design procedure
NASA Technical Reports Server (NTRS)
Hanagud, S.; Uppaluri, B.
1976-01-01
The authors have developed a methodology for cost-effective fatigue design of structures subject to random fatigue loading. A stochastic model for fatigue crack propagation under random loading has been discussed. Fracture mechanics is then used to estimate the parameters of the model and the residual strength of structures with cracks. The stochastic model and residual strength variations have been used to develop procedures for estimating the probability of failure and its changes with inspection frequency. This information on reliability is then used to construct an objective function in terms of either a total weight function or cost function. A procedure for selecting the design variables, subject to constraints, by optimizing the objective function has been illustrated by examples. In particular, optimum design of stiffened panel has been discussed.
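The step from a stochastic crack-growth model to a probability of failure can be illustrated with a Monte Carlo sketch. The lognormal growth-rate model and all parameter values below are assumptions for illustration, not the authors' fracture-mechanics model:

```python
import math
import random

def failure_probability(crack0, growth_rate_mean, growth_rate_cv,
                        critical, cycles, n_sim=20000, rng=random.Random(9)):
    """Monte Carlo estimate of failure probability under a lognormal random
    crack-growth rate (a toy stand-in for a stochastic crack-propagation
    model). Failure: crack length exceeds the critical size within `cycles`."""
    # lognormal parameters matching the requested mean and coefficient of variation
    mu = math.log(growth_rate_mean) - 0.5 * math.log(1 + growth_rate_cv ** 2)
    sigma = math.sqrt(math.log(1 + growth_rate_cv ** 2))
    fails = 0
    for _ in range(n_sim):
        rate = rng.lognormvariate(mu, sigma)
        if crack0 + rate * cycles > critical:
            fails += 1
    return fails / n_sim

pf = failure_probability(crack0=1.0, growth_rate_mean=0.001,
                         growth_rate_cv=0.5, critical=3.0, cycles=1500)
```

In a reliability-based design loop, such a probability (as a function of inspection interval and design variables) would feed the weight- or cost-based objective function being optimized.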
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Yu, E-mail: yu.pan@anu.edu.au; Miao, Zibo, E-mail: zibo.miao@anu.edu.au; Amini, Hadis, E-mail: nhamini@stanford.edu
Quantum Markovian systems, modeled as unitary dilations in the quantum stochastic calculus of Hudson and Parthasarathy, have become standard in current quantum technological applications. This paper investigates the stability theory of such systems. Lyapunov-type conditions in the Heisenberg picture are derived in order to stabilize the evolution of system operators as well as the underlying dynamics of the quantum states. In particular, using the quantum Markov semigroup associated with this quantum stochastic differential equation, we derive sufficient conditions for the existence and stability of a unique and faithful invariant quantum state. Furthermore, this paper proves the quantum invariance principle, which extends the LaSalle invariance principle to quantum systems in the Heisenberg picture. These results are formulated in terms of algebraic constraints suitable for engineering quantum systems that are used in coherent feedback networks.
A stochastic agent-based model of pathogen propagation in dynamic multi-relational social networks
Khan, Bilal; Dombrowski, Kirk; Saad, Mohamed
2015-01-01
We describe a general framework for modeling and stochastic simulation of epidemics in realistic dynamic social networks, which incorporates heterogeneity in the types of individuals, types of interconnecting risk-bearing relationships, and types of pathogens transmitted across them. Dynamism is supported through arrival and departure processes, continuous restructuring of risk relationships, and changes to pathogen infectiousness, as mandated by natural history; dynamism is regulated through constraints on the local agency of individual nodes and their risk behaviors, while simulation trajectories are validated using system-wide metrics. To illustrate its utility, we present a case study that applies the proposed framework towards a simulation of HIV in artificial networks of intravenous drug users (IDUs) modeled using data collected in the Social Factors for HIV Risk survey. PMID:25859056
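A stripped-down version of such an agent-based propagation model: a discrete-time stochastic SIR process on a fixed contact network (without the arrivals, departures, and relational restructuring that the framework adds on top):

```python
import random

def simulate_sir(edges, n, seed_node=0, beta=0.6, gamma=0.3,
                 steps=50, rng=random.Random(11)):
    """Discrete-time stochastic SIR on an undirected contact network.
    Each step, every infected node transmits along each incident edge with
    probability beta, then recovers with probability gamma."""
    neighbors = {i: set() for i in range(n)}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    state = ["S"] * n
    state[seed_node] = "I"
    for _ in range(steps):
        infected = [i for i, s in enumerate(state) if s == "I"]
        if not infected:            # epidemic has died out
            break
        for i in infected:
            for j in neighbors[i]:
                if state[j] == "S" and rng.random() < beta:
                    state[j] = "I"
            if rng.random() < gamma:
                state[i] = "R"
    return state

# Ten individuals whose risk relationships form a ring
ring = [(i, (i + 1) % 10) for i in range(10)]
final = simulate_sir(ring, n=10)
```

Heterogeneous node and edge types, natural-history-driven infectiousness changes, and validation metrics would extend the state and the per-step update, but the simulation core stays this simple loop.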
Greek classicism in living structure? Some deductive pathways in animal morphology.
Zweers, G A
1985-01-01
Classical temples in ancient Greece show two deterministic illusionistic principles of architecture, which govern their functional design: geometric proportionalism and a set of illusion-strengthening rules in the proportionalism's "stochastic margin". Animal morphology, in its mechanistic-deductive revival, applies just one architectural principle, which is not always satisfactory. Whether a "Greek Classical" situation occurs in the architecture of living structure is to be investigated by extreme testing with deductive methods. Three deductive methods for the explanation of living structure in animal morphology are proposed: the parts, the compromise, and the transformation deduction. The methods are based upon the systems concept for an organism, the flow chart for a functionalistic picture, and the network chart for a structuralistic picture, whereas the "optimal design" serves as the architectural principle for living structure. These methods show clearly the high explanatory power of deductive methods in morphology, but they also make one open end most explicit: neutral issues do exist. Full explanation of living structure requires three entries: functional design within architectural and transformational constraints. The transformational constraint necessarily brings in a stochastic component: a random variation that acts as a sort of "free management space". This variation must be a variation from the deterministic principle of the optimal design, since any transformation requires space for plasticity in structure and action, and flexibility in role fulfilment. Nevertheless, the question finally arises whether a situation similar to that of Greek Classical temples exists for animal structure. This would mean that the random variation found when the optimal design is used to explain structure comprises, apart from a stochastic part, real deviations that form yet another deterministic part.
This deterministic part could be a set of rules that governs actualization in the "free management space".
Efficient experimental design of high-fidelity three-qubit quantum gates via genetic programming
NASA Astrophysics Data System (ADS)
Devra, Amit; Prabhu, Prithviraj; Singh, Harpreet; Arvind; Dorai, Kavita
2018-03-01
We have designed efficient quantum circuits for the three-qubit Toffoli (controlled-controlled-NOT) and the Fredkin (controlled-SWAP) gate, optimized via genetic programming methods. The gates thus obtained were experimentally implemented on a three-qubit NMR quantum information processor, with a high fidelity. Toffoli and Fredkin gates in conjunction with the single-qubit Hadamard gates form a universal gate set for quantum computing and are an essential component of several quantum algorithms. Genetic algorithms are stochastic search algorithms based on the logic of natural selection and biological genetics and have been widely used for quantum information processing applications. We devised a new selection mechanism within the genetic algorithm framework to select individuals from a population. We call this mechanism the "Luck-Choose" mechanism and were able to achieve faster convergence to a solution using this mechanism, as compared to existing selection mechanisms. The optimization was performed under the constraint that the experimentally implemented pulses are of short duration and can be implemented with high fidelity. We demonstrate the advantage of our pulse sequences by comparing our results with existing experimental schemes and other numerical optimization methods.
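The paper's "Luck-Choose" selection mechanism is not specified in this abstract; for contrast, here is the classic fitness-proportional (roulette-wheel) selection that genetic algorithms commonly use, in a minimal form:

```python
import random

def roulette_select(population, fitness, k, rng=random.Random(2)):
    """Fitness-proportional (roulette-wheel) selection: each individual is
    drawn, with replacement, with probability proportional to its fitness."""
    weights = [fitness(ind) for ind in population]
    return rng.choices(population, weights=weights, k=k)

# Toy population of candidate pulse parameters; fitness favors values near 0.5
pop = [i / 10 for i in range(11)]
parents = roulette_select(pop, fitness=lambda x: 1.0 - abs(x - 0.5), k=4)
```

Any alternative selection mechanism slots in at exactly this point of a GA loop; convergence speed comparisons like the paper's amount to swapping this function and counting generations to a target fidelity.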
NASA Astrophysics Data System (ADS)
Fiore, Andrew M.; Swan, James W.
2018-01-01
Brownian Dynamics simulations are an important tool for modeling the dynamics of soft matter. However, accurate and rapid computations of the hydrodynamic interactions between suspended, microscopic components in a soft material are a significant computational challenge. Here, we present a new method for Brownian dynamics simulations of suspended colloidal scale particles such as colloids, polymers, surfactants, and proteins subject to a particular and important class of hydrodynamic constraints. The total computational cost of the algorithm is practically linear with the number of particles modeled and can be further optimized when the characteristic mass fractal dimension of the suspended particles is known. Specifically, we consider the so-called "stresslet" constraint for which suspended particles resist local deformation. This acts to produce a symmetric force dipole in the fluid and imparts rigidity to the particles. The presented method is an extension of the recently reported positively split formulation for Ewald summation of the Rotne-Prager-Yamakawa mobility tensor to higher order terms in the hydrodynamic scattering series accounting for force dipoles [A. M. Fiore et al., J. Chem. Phys. 146(12), 124116 (2017)]. The hydrodynamic mobility tensor, which is proportional to the covariance of particle Brownian displacements, is constructed as an Ewald sum in a novel way which guarantees that the real-space and wave-space contributions to the sum are independently symmetric and positive-definite for all possible particle configurations. This property of the Ewald sum is leveraged to rapidly sample the Brownian displacements from a superposition of statistically independent processes with the wave-space and real-space contributions as respective covariances. The cost of computing the Brownian displacements in this way is comparable to the cost of computing the deterministic displacements. 
The addition of a stresslet constraint to the over-damped particle equations of motion leads to a stochastic differential algebraic equation (SDAE) of index 1, which is integrated forward in time using a mid-point integration scheme that implicitly produces stochastic displacements consistent with the fluctuation-dissipation theorem for the constrained system. Calculations for hard sphere dispersions are illustrated and used to explore the performance of the algorithm. An open source, high-performance implementation on graphics processing units capable of dynamic simulations of millions of particles and integrated with the software package HOOMD-blue is used for benchmarking and made freely available in the supplementary material (ftp://ftp.aip.org/epaps/journ_chem_phys/E-JCPSA6-148-012805).
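Setting the stresslet constraint aside, the underlying overdamped Brownian dynamics update is a stochastic differential equation that can be integrated with the simple Euler-Maruyama scheme (simpler than the paper's constraint-consistent midpoint scheme; the one-dimensional harmonic force is an illustrative assumption):

```python
import math
import random

def euler_maruyama_bd(x0, force, dt, n_steps, kT=1.0, gamma=1.0,
                      rng=random.Random(4)):
    """Overdamped Langevin (Brownian) dynamics by Euler-Maruyama:
    dx = (F(x)/gamma) dt + sqrt(2 kT dt / gamma) dW,
    with the noise amplitude fixed by the fluctuation-dissipation theorem."""
    x, traj = x0, [x0]
    sigma = math.sqrt(2.0 * kT * dt / gamma)
    for _ in range(n_steps):
        x += force(x) / gamma * dt + sigma * rng.gauss(0.0, 1.0)
        traj.append(x)
    return traj

# Particle in a harmonic well F(x) = -k x stays fluctuating near the origin
traj = euler_maruyama_bd(0.0, force=lambda x: -2.0 * x, dt=0.01, n_steps=1000)
```

In the paper's many-particle setting, the scalar mobility 1/gamma becomes the configuration-dependent hydrodynamic mobility tensor, and sampling the correlated noise is precisely the expensive step the split Ewald construction accelerates.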
A comparison of Heuristic method and Llewellyn’s rules for identification of redundant constraints
NASA Astrophysics Data System (ADS)
Estiningsih, Y.; Farikhin; Tjahjana, R. H.
2018-03-01
An important technique in linear programming is the modelling and solution of practical optimization problems. Redundant constraints are considered for their effects on general linear programming problems. Identifying and removing redundant constraints avoids the unnecessary calculations associated with solving the corresponding linear programming problem. Many methods have been proposed for the identification of redundant constraints. This paper presents a comparison of the Heuristic method and Llewellyn's rules for the identification of redundant constraints.
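As a minimal illustration of the kind of test such methods perform, the sketch below implements a simple bound-based sufficient condition (an assumption for illustration, not Llewellyn's full rule set): a `<=` constraint is redundant over box bounds on the variables whenever the maximum of its left-hand side over the box already respects the right-hand side.

```python
# Bound-based redundancy test for a linear constraint sum_j a[j]*x[j] <= b
# over box bounds lower[j] <= x[j] <= upper[j]. Illustrative sketch only;
# it is a sufficient (not necessary) condition for redundancy.

def max_over_box(a, lower, upper):
    # Maximise a linear expression over axis-aligned bounds: pick the upper
    # bound where the coefficient is non-negative, the lower bound otherwise.
    return sum(ai * (ui if ai >= 0 else li) for ai, li, ui in zip(a, lower, upper))

def is_redundant(a, b, lower, upper):
    return max_over_box(a, lower, upper) <= b

lower, upper = [0, 0], [2, 3]
print(is_redundant([1, 1], 10, lower, upper))  # True: max LHS is 5 <= 10
print(is_redundant([1, 1], 4, lower, upper))   # False: max LHS is 5 > 4
```

Real identification schemes go further (e.g. comparing constraint pairs, as Llewellyn's rules do), but the payoff is the same: constraints flagged redundant can be dropped before the solver runs.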
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei; Huang, Cheng-Fu; Yeh, Cheng-Ta
2016-04-01
In supply chain management, satisfying customer demand is the manager's foremost concern. However, goods may rot or be spoilt during delivery owing to natural disasters, inclement weather, traffic accidents, collisions, and so on, so that the intact goods may not meet market demand. This paper concentrates on a stochastic-flow distribution network (SFDN), in which a node denotes a supplier, a transfer station, or a market, while a route denotes a carrier providing the delivery service for a pair of nodes. The available capacity of each carrier is stochastic because the capacity may be partially reserved by other customers. The addressed problem is to evaluate the system reliability, the probability that the SFDN can satisfy the market demand, given the spoilage rate, under the budget constraint from multiple suppliers to the customer. An algorithm is developed in terms of minimal paths to evaluate the system reliability, along with a numerical example to illustrate the solution procedure. A practical case of fruit distribution is presented accordingly to emphasise the management implications of the system reliability.
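The reliability notion above can be illustrated with a toy Monte Carlo sketch (the paper's own algorithm is exact and based on minimal paths; the two-route network, discrete capacity levels, and spoilage rate below are invented for illustration):

```python
import random

# Monte Carlo estimate of system reliability for a toy stochastic-flow
# distribution network: two parallel carrier routes, each with a random
# available capacity, and goods spoiling at a fixed rate in transit.

def system_reliability(demand, spoilage_rate, capacity_dists, trials=20000, seed=1):
    rng = random.Random(seed)
    success = 0
    for _ in range(trials):
        # Sample the available capacity on each route and apply spoilage.
        delivered = sum((1 - spoilage_rate) * rng.choice(levels)
                        for levels in capacity_dists)
        if delivered >= demand:
            success += 1
    return success / trials

# Each route's capacity takes one of a few discrete levels, equally likely
# (partial reservation by other customers lowers the available level).
routes = [[0, 5, 10], [0, 5, 10]]
print(system_reliability(demand=8, spoilage_rate=0.1, capacity_dists=routes))
```

For this tiny example the exact value is 2/3 (6 of the 9 equally likely capacity combinations deliver at least 8 units after 10% spoilage), which the sampled estimate approaches.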
Stochastic Simulation of Biomolecular Networks in Dynamic Environments
Voliotis, Margaritis; Thomas, Philipp; Grima, Ramon; Bowsher, Clive G.
2016-01-01
Simulation of biomolecular networks is now indispensable for studying biological systems, from small reaction networks to large ensembles of cells. Here we present a novel approach for stochastic simulation of networks embedded in the dynamic environment of the cell and its surroundings. We thus sample trajectories of the stochastic process described by the chemical master equation with time-varying propensities. A comparative analysis shows that existing approaches can either fail dramatically, or else can impose impractical computational burdens due to numerical integration of reaction propensities, especially when cell ensembles are studied. Here we introduce the Extrande method which, given a simulated time course of dynamic network inputs, provides a conditionally exact and several orders-of-magnitude faster simulation solution. The new approach makes it feasible to demonstrate—using decision-making by a large population of quorum sensing bacteria—that robustness to fluctuations from upstream signaling places strong constraints on the design of networks determining cell fate. Our approach has the potential to significantly advance both understanding of molecular systems biology and design of synthetic circuits. PMID:27248512
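The core trick behind Extrande-style simulation with time-varying propensities is thinning: draw candidate events from a bounding constant rate B >= a(t) and accept each with probability a(t)/B, the rejected candidates playing the role of an "extra" null reaction. A minimal single-channel sketch (the sinusoidal propensity and its bound are assumptions for illustration, not the paper's model):

```python
import math
import random

# Thinning simulation of one reaction channel with time-varying propensity
# a(t): candidate firings come from a homogeneous process with rate B >= a(t)
# and are accepted with probability a(t)/B, so no numerical integration of
# the propensity is needed.

def simulate_firings(a, B, t_end, seed=0):
    rng = random.Random(seed)
    t, firings = 0.0, []
    while True:
        t += rng.expovariate(B)          # candidate event from the bounding rate B
        if t >= t_end:
            return firings
        if rng.random() < a(t) / B:      # accept as a real firing; otherwise "extra" reaction
            firings.append(t)

# Sinusoidally modulated propensity, bounded above by B = 2.0.
a = lambda t: 1.0 + math.sin(t)
events = simulate_firings(a, B=2.0, t_end=50.0)
print(len(events), "firings in [0, 50]")
```

The expected number of firings is the integral of a(t) over the horizon (about 50 here), and tighter bounds B make the rejection step cheaper, which is one source of Extrande's speed.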
Necessary conditions for the emergence of homochirality via autocatalytic self-replication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stich, Michael; Ribó, Josep M.; Blackmond, Donna G., E-mail: blackmond@scripps.edu
We analyze a recent proposal for spontaneous mirror symmetry breaking based on the coupling of first-order enantioselective autocatalysis and direct production of the enantiomers that invokes a critical role for intrinsic reaction noise. For isolated systems, the racemic state is the unique stable outcome for both stochastic and deterministic dynamics when the system is in compliance with the constraints dictated by the thermodynamics of chemical reaction processes. In open systems, the racemic outcome also results for both stochastic and deterministic dynamics when driving the autocatalysis unidirectionally by external reagents. Nonracemic states can result in the latter only if the reverse reactions are strictly zero: these are kinetically controlled outcomes for small populations and volumes, and can be simulated by stochastic dynamics. However, the stability of the thermodynamic limit proves that the racemic outcome is the unique stable state for strictly irreversible externally driven autocatalysis. These findings contradict the suggestion that the inhibition requirement of the Frank autocatalytic model for the emergence of homochirality may be relaxed in a noise-induced mechanism.
A new version of the CADNA library for estimating round-off error propagation in Fortran programs
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc
2010-11-01
The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore, CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors.
New version program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 28 488
No. of bytes in distributed program, including test data, etc.: 463 778
Distribution format: tar.gz
Programming language: Fortran (a C++ version of this program is available in the Library as AEGQ_v1_0)
Computer: PC running LINUX with an i686 or an ia64 processor; UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 6.5
Catalogue identifier of previous version: AEAT_v1_0
Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933
Does the new version supersede the previous version?: Yes
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program.
The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: On 64-bit processors, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, which the random rounding mode is based on. Therefore a particular definition of mathematical functions for stochastic arguments has been included in the CADNA library to enable its use with the GNU Fortran compiler on 64-bit processors. Summary of revisions: If CADNA is used on a 64-bit processor with the GNU Fortran compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the stochastic argument of a mathematical function is never lost. Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf which shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. 
The source code, which is located in the src directory, consists of one assembly language file (cadna_rounding.s) and eighteen Fortran language files. cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the Fortran compiler used. This assembly file contains routines which are frequently called in the CADNA Fortran files to change the rounding mode. The Fortran language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
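The idea of Discrete Stochastic Arithmetic can be sketched in a few lines (illustrative only: CADNA works at the hardware rounding-mode level in Fortran, whereas this sketch emulates random rounding with `math.nextafter`, available from Python 3.9): run a cancellation-prone computation several times with each intermediate result randomly rounded up or down, then estimate the number of common significant digits from the spread of the results.

```python
import math
import random
import statistics

def rround(x, rng):
    # Randomly round to one of the two neighbouring floating-point values,
    # emulating the random rounding mode of Discrete Stochastic Arithmetic.
    return math.nextafter(x, math.inf if rng.random() < 0.5 else -math.inf)

def unstable_sum(rng):
    # A catastrophically cancelling computation: (1e16 + 1.0) - 1e16,
    # perturbed at each operation.
    s = rround(1e16 + 1.0, rng)
    return rround(s - 1e16, rng)

rng = random.Random(42)
samples = [unstable_sum(rng) for _ in range(10)]
mean, spread = statistics.mean(samples), statistics.pstdev(samples)
if spread == 0:
    digits = 15.9                      # all runs agreed to full double precision
elif mean == 0:
    digits = 0.0
else:
    digits = max(0.0, math.log10(abs(mean) / spread))
print(f"estimated significant digits: {digits:.1f}")
```

The exact answer is 1.0, but each perturbed run returns roughly +2 or -2 (the spacing of doubles near 1e16), so the estimated number of significant digits is essentially zero, exposing the instability that plain floating-point arithmetic would hide.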
Solving multiconstraint assignment problems using learning automata.
Horn, Geir; Oommen, B John
2010-02-01
This paper considers the NP-hard problem of object assignment with respect to multiple constraints: assigning a set of elements (or objects) into mutually exclusive classes (or groups), where the elements which are "similar" to each other are hopefully located in the same class. The literature reports solutions in which the similarity constraint consists of a single index that is inappropriate for the type of multiconstraint problems considered here and where the constraints could simultaneously be contradictory. This feature, where we permit possibly contradictory constraints, distinguishes this paper from the state of the art. Indeed, we are aware of no learning automata (or other heuristic) solutions which solve this problem in its most general setting. Such a scenario is illustrated with the static mapping problem, which consists of distributing the processes of a parallel application onto a set of computing nodes. This is a classical and yet very important problem within the areas of parallel computing, grid computing, and cloud computing. We have developed four learning-automata (LA)-based algorithms to solve this problem: First, a fixed-structure stochastic automata algorithm is presented, where the processes try to form pairs to go onto the same node. This algorithm solves the problem, although it requires some centralized coordination. As it is desirable to avoid centralized control, we subsequently present three different variable-structure stochastic automata (VSSA) algorithms, which have superior partitioning properties in certain settings, although they forfeit some of the scalability features of the fixed-structure algorithm. All three VSSA algorithms model the processes as automata having first the hosting nodes as possible actions; second, the processes as possible actions; and, third, attempting to estimate the process communication digraph prior to probabilistically mapping the processes. 
This paper, which, we believe, comprehensively reports the pioneering LA solutions to this problem, unequivocally demonstrates that LA can play an important role in solving complex combinatorial and integer optimization problems.
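A minimal sketch of the variable-structure idea, using the classic linear reward-inaction (L_RI) update (the two-action environment and its penalty probabilities are invented for illustration; the paper's automata act on a much richer multiconstraint mapping problem):

```python
import random

# Linear reward-inaction (L_RI) learning automaton: keep a probability
# vector over actions, and move probability mass toward an action only
# when the environment rewards it; penalties leave the vector unchanged.

def run_lri(penalty_probs, rate=0.05, steps=3000, seed=7):
    rng = random.Random(seed)
    p = [1.0 / len(penalty_probs)] * len(penalty_probs)
    for _ in range(steps):
        action = rng.choices(range(len(p)), weights=p)[0]
        rewarded = rng.random() >= penalty_probs[action]
        if rewarded:
            # Reward update: p_i <- p_i + rate*(1 - p_i); others shrink by (1 - rate).
            p = [pj * (1 - rate) for pj in p]
            p[action] += rate
    return p

# Action 0 is penalised 20% of the time, action 1 80% of the time.
p = run_lri([0.2, 0.8])
print("P(action 0) =", round(p[0], 3))
```

With high probability the automaton converges to the action with the lower penalty probability; the update keeps the vector on the probability simplex by construction.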
Rebuilding the NAVSEA Early Stage Ship Design Environment
2010-04-01
rules-of-thumb to base these crucial decisions upon. With High Performance Computing (HPC) as an enabler, the vision is to explore all downstream...the results of the analysis back into LEAPS. Another software development worthy of discussion here is Intelligent Ship Arrangements (ISA), which...constraints and rules set by the users ahead of time. When used in a systematic and stochastic way, and when integrated using LEAPS, having this
Population dynamics of obligate cooperators
Courchamp, F.; Grenfell, B.; Clutton-Brock, T.
1999-01-01
Obligate cooperative breeding species demonstrate a high rate of group extinction, which may be due to the existence of a critical number of helpers below which the group cannot subsist. Through a simple model, we study the population dynamics of obligate cooperative breeding species, taking into account the existence of a lower threshold below which the instantaneous growth rate becomes negative. The model successively incorporates (i) a distinction between species that need helpers for reproduction, survival or both, (ii) the existence of a migration rate accounting for dispersal, and (iii) stochastic mortality to simulate the effects of random catastrophic events. Our results suggest that the need for a minimum number of helpers increases the risk of extinction for obligate cooperative breeding species. The constraint imposed by this threshold is higher when helpers are needed for reproduction only or for both reproduction and survival. By driving them below this lower threshold, stochastic mortality of lower amplitude and/or lower frequency than for non-cooperative breeders may be sufficient to cause the extinction of obligate cooperative breeding groups. Migration may have a buffering effect only for groups where immigration is higher than emigration; otherwise (when immigrants from nearby groups are not available) it lowers the difference between actual group size and critical threshold, thereby constituting a higher constraint.
A guide to differences between stochastic point-source and stochastic finite-fault simulations
Atkinson, G.M.; Assatourians, K.; Boore, D.M.; Campbell, K.; Motazedian, D.
2009-01-01
Why do stochastic point-source and finite-fault simulation models not agree on the predicted ground motions for moderate earthquakes at large distances? This question was posed by Ken Campbell, who attempted to reproduce the Atkinson and Boore (2006) ground-motion prediction equations for eastern North America using the stochastic point-source program SMSIM (Boore, 2005) in place of the finite-source stochastic program EXSIM (Motazedian and Atkinson, 2005) that was used by Atkinson and Boore (2006) in their model. His comparisons suggested that a higher stress drop is needed in the context of SMSIM to produce an average match, at larger distances, with the model predictions of Atkinson and Boore (2006) based on EXSIM; this is so even for moderate magnitudes, which should be well-represented by a point-source model. Why? The answer to this question is rooted in significant differences between point-source and finite-source stochastic simulation methodologies, specifically as implemented in SMSIM (Boore, 2005) and EXSIM (Motazedian and Atkinson, 2005) to date. Point-source and finite-fault methodologies differ in general in several important ways: (1) the geometry of the source; (2) the definition and application of duration; and (3) the normalization of finite-source subsource summations. Furthermore, the specific implementation of the methods may differ in their details. The purpose of this article is to provide a brief overview of these differences, their origins, and implications. This sets the stage for a more detailed companion article, "Comparing Stochastic Point-Source and Finite-Source Ground-Motion Simulations: SMSIM and EXSIM," in which Boore (2009) provides modifications and improvements in the implementations of both programs that narrow the gap and result in closer agreement. 
These issues are important because both SMSIM and EXSIM have been widely used in the development of ground-motion prediction equations and in modeling the parameters that control observed ground motions.
NASA Technical Reports Server (NTRS)
Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.
1999-01-01
In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact (F-P-K) solutions. The formulation of a relatively new version of the stochastic linearization method (energy-based version) is generalized to the MDOF system case. Also, a new method for determination of nonlinear stiffness coefficients for MDOF structures is demonstrated. This method in combination with the equivalent linearization technique is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained by using the new program and an existing in-house code are compared for two examples of beam-like structures.
NASA Astrophysics Data System (ADS)
Wang, Meng; Zhang, Huaiqiang; Zhang, Kan
2017-10-01
This paper addresses weapons portfolio planning, in which short-term equipment usage demand and long-term development demand must be planned jointly, together with the practical problem of fuzziness in the definition of equipment capability demand. The expression of demand is assumed to be an interval number or a discrete number. Using the epoch-era analysis method, a long planning cycle is broken into several short planning cycles with different demand values. A multi-stage stochastic programming model is built that maximises satisfaction of long-term planning-cycle demand under constraints on budget, equipment development time, and short planning-cycle demand. A scenario tree is used to discretise the interval values of the demand, and a genetic algorithm is designed to solve the problem. Finally, a case study demonstrates the feasibility and effectiveness of the proposed model.
Hu, X H; Li, Y P; Huang, G H; Zhuang, X W; Ding, X W
2016-05-01
In this study, a Bayesian-based two-stage inexact optimization (BTIO) method is developed for supporting water quality management through coupling Bayesian analysis with interval two-stage stochastic programming (ITSP). The BTIO method is capable of addressing uncertainties caused by insufficient inputs in the water quality model as well as uncertainties expressed as probabilistic distributions and interval numbers. The BTIO method is applied to a real case of water quality management for the Xiangxi River basin in the Three Gorges Reservoir region to seek optimal water quality management schemes under various uncertainties. Interval solutions for production patterns under a range of probabilistic water quality constraints have been generated. The results obtained demonstrate compromises between the system benefit and the system failure risk due to inherent uncertainties that exist in various system components. Moreover, information about pollutant emissions is obtained, which would help managers adjust the production patterns of regional industry and local policies considering the interactions of water quality requirements, economic benefit, and industry structure.
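The two-stage recourse structure underlying ITSP-style models can be sketched on a toy problem (the benefit, penalty, and scenario values below are invented; the paper's model additionally carries interval and Bayesian uncertainty): a first-stage production target is chosen before the random allowable level is known, and any excess incurs a second-stage penalty.

```python
# Tiny two-stage stochastic program with recourse. First stage: commit to a
# production target x. Second stage: if the random allowable amount turns
# out lower than x, the shortfall in permission is penalised.

def expected_net_benefit(x, benefit, penalty, scenarios):
    # scenarios: list of (probability, allowable_amount) pairs.
    recourse = sum(pr * penalty * max(0.0, x - allowed)
                   for pr, allowed in scenarios)
    return benefit * x - recourse

scenarios = [(0.2, 3.0), (0.6, 5.0), (0.2, 8.0)]
best = max(range(0, 11),
           key=lambda x: expected_net_benefit(x, 4.0, 10.0, scenarios))
print("optimal first-stage target:", best)  # → 5
```

Targeting the middle scenario (x = 5) wins here: going higher gains 4 per unit but loses an expected 8 per unit of likely violation, which is the benefit-versus-violation-risk tradeoff the abstract describes.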
An inexact reverse logistics model for municipal solid waste management systems.
Zhang, Yi Mei; Huang, Guo He; He, Li
2011-03-01
This paper proposed an inexact reverse logistics model for municipal solid waste management systems (IRWM). Waste managers, suppliers, industries and distributors were involved in strategic planning and operational execution through reverse logistics management. All the parameters were assumed to be intervals to quantify the uncertainties in the optimization process and solutions in IRWM. To solve this model, a piecewise interval programming was developed to deal with Min-Min functions in both objectives and constraints. The application of the model was illustrated through a classical municipal solid waste management case. With different cost parameters for landfill and the WTE, two scenarios were analyzed. The IRWM could reflect the dynamic and uncertain characteristics of MSW management systems, and could facilitate the generation of desired management plans. The model could be further advanced through incorporating methods of stochastic or fuzzy parameters into its framework. Design of multi-waste, multi-echelon, multi-uncertainty reverse logistics model for waste management network would also be preferred. Copyright © 2010 Elsevier Ltd. All rights reserved.
Chance-Constrained Day-Ahead Hourly Scheduling in Distribution System Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard
This paper proposes a two-step approach for day-ahead hourly scheduling in distribution system operation, which considers operation costs at both the substation level and the feeder level. In the first step, the objective is to minimize the electric power purchased from the day-ahead market, using stochastic optimization. The historical data of day-ahead hourly electric power consumption are used to provide forecast results with a forecasting error, which is represented by a chance constraint and reformulated into a deterministic form by a Gaussian mixture model (GMM). In the second step, the objective is to minimize the system loss. Considering the nonconvexity of the three-phase balanced AC optimal power flow problem in distribution systems, a second-order cone program (SOCP) is used to relax the problem. Then, a distributed optimization approach is built based on the alternating direction method of multipliers (ADMM). The results show the validity and effectiveness of the proposed method.
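The chance-constraint reformulation in the first step can be illustrated for the single-Gaussian case (an assumption for illustration; the paper uses a Gaussian mixture for the forecast error): requiring the purchased power to cover the forecast with probability 1 - epsilon reduces to adding a quantile margin.

```python
from statistics import NormalDist

# Deterministic reformulation of a Gaussian chance constraint:
#     Pr(forecast + error <= purchase) >= 1 - epsilon,  error ~ N(0, sigma^2)
# is equivalent to
#     purchase >= forecast + z_{1-epsilon} * sigma.

def required_purchase(forecast_mw, sigma_mw, epsilon):
    z = NormalDist().inv_cdf(1 - epsilon)  # standard normal quantile
    return forecast_mw + z * sigma_mw

# Hourly forecast of 100 MW with a 5 MW error standard deviation and a
# 5% allowed violation probability (values invented for illustration).
print(round(required_purchase(100.0, 5.0, 0.05), 2), "MW")  # → 108.22 MW
```

Tightening epsilon raises the margin monotonically; a mixture model generalises this by solving for the quantile of the mixture distribution instead of a single Gaussian.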
Modelisations et inversions tri-dimensionnelles en prospections gravimetrique et electrique
NASA Astrophysics Data System (ADS)
Boulanger, Olivier
The aim of this thesis is the application of gravity and resistivity methods for mining prospecting. The objectives of the present study are: (1) to build a fast gravity inversion method to interpret surface data; (2) to develop a tool for modelling the electrical potential acquired at surface and in boreholes when the resistivity distribution is heterogeneous; and (3) to define and implement a stochastic inversion scheme allowing the estimation of the subsurface resistivity from electrical data. The first technique concerns the elaboration of a three-dimensional (3D) inversion program allowing the interpretation of gravity data using a selection of constraints such as the minimum distance, the flatness, the smoothness and the compactness. These constraints are integrated in a Lagrangian formulation. A multi-grid technique is also implemented to resolve separately large and short gravity wavelengths. The subsurface in the survey area is divided into juxtaposed rectangular prismatic blocks. The problem is solved by calculating the model parameters, i.e. the densities of each block. Weights are given to each block depending on depth, a priori information on density, and the density range allowed for the region under investigation. The present code is tested on synthetic data. Advantages and behaviour of each method are compared in the 3D reconstruction. Recovery of the geometry (depth, size) and density distribution of the original model depends on the set of constraints used. The best combination of constraints tested seems to be flatness and minimum volume for multiple bodies. The inversion method is also tested on real gravity data. The second tool developed in this thesis is a three-dimensional electrical resistivity modelling code to interpret surface and subsurface data. Based on the integral equation, it calculates the charge density caused by conductivity gradients at each interface of the mesh, allowing an exact estimation of the potential.
Modelling generates a huge matrix of Green's functions, which is stored using a method of pyramidal compression. The third method consists of interpreting electrical potential measurements using a non-linear geostatistical approach including new constraints. This method estimates an analytical covariance model for the resistivity parameters from the potential data. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Zhu, Z. W.; Zhang, W. D.; Xu, J.
2014-03-01
The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential items were introduced to interpret the hysteretic phenomena of the GMF, and the non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The condition of stochastic Hopf bifurcation and noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed using the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases when the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were verified by experiments. The results are helpful in the engineering applications of the GMF.
ASSESSING RESIDENTIAL EXPOSURE USING THE STOCHASTIC HUMAN EXPOSURE AND DOSE SIMULATION (SHEDS) MODEL
As part of a workshop sponsored by the Environmental Protection Agency's Office of Research and Development and Office of Pesticide Programs, the Aggregate Stochastic Human Exposure and Dose Simulation (SHEDS) Model was used to assess potential aggregate residential pesticide e...
Stochastic models of the Social Security trust funds.
Burdick, Clark; Manchester, Joyce
Each year in March, the Board of Trustees of the Social Security trust funds reports on the current and projected financial condition of the Social Security programs. Those programs, which pay monthly benefits to retired workers and their families, to the survivors of deceased workers, and to disabled workers and their families, are financed through the Old-Age, Survivors, and Disability Insurance (OASDI) Trust Funds. In their 2003 report, the Trustees present, for the first time, results from a stochastic model of the combined OASDI trust funds. Stochastic modeling is an important new tool for Social Security policy analysis and offers the promise of valuable new insights into the financial status of the OASDI trust funds and the effects of policy changes. The results presented in this article demonstrate that several stochastic models deliver broadly consistent results even though they use very different approaches and assumptions. However, they also show that the variation in trust fund outcomes differs as the approach and assumptions are varied. Which approach and assumptions are best suited for Social Security policy analysis remains an open question. Further research is needed before the promise of stochastic modeling is fully realized. For example, neither parameter uncertainty nor variability in ultimate assumption values is recognized explicitly in the analyses. Despite this caveat, stochastic modeling results are already shedding new light on the range and distribution of trust fund outcomes that might occur in the future.
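A stochastic trust fund projection can be sketched as a Monte Carlo recursion on the fund balance (all numbers below are invented for illustration; the Trustees' models are far richer, covering demographic and economic assumptions jointly):

```python
import random
import statistics

# Monte Carlo sketch of a stochastic fund projection: each year the fund
# earns a random real return and pays out a fixed net outgo (benefits
# minus payroll-tax income).

def project_fund(balance, net_outgo, years, mean_r, sd_r, rng):
    for _ in range(years):
        balance = balance * (1 + rng.gauss(mean_r, sd_r)) - net_outgo
    return balance

rng = random.Random(2003)
finals = [project_fund(1000.0, 40.0, 30, 0.03, 0.02, rng) for _ in range(5000)]
med = statistics.median(finals)
exhausted = sum(b < 0 for b in finals) / len(finals)
print("median final balance:", round(med, 1))
print("fraction of paths below zero at year 30:", exhausted)
```

The point the article makes is visible even in this toy: a single deterministic projection gives one number, whereas the stochastic version yields a whole distribution of outcomes, whose spread depends on the assumed return variability.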
On stochastic control and optimal measurement strategies. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Kramer, L. C.
1971-01-01
The control of stochastic dynamic systems is studied with particular emphasis on those which influence the quality or nature of the measurements which are made to effect control. Four main areas are discussed: (1) the meaning of stochastic optimality and the means by which dynamic programming may be applied to solve a combined control/measurement problem; (2) a technique by which it is possible to apply deterministic methods, specifically the minimum principle, to the study of stochastic problems; (3) the methods described are applied to linear systems with Gaussian disturbances to study the structure of the resulting control system; and (4) several applications are considered.
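Area (1), dynamic programming for stochastic control, can be illustrated on a tiny discrete problem (the two-state system and its costs are invented; the thesis treats continuous combined control/measurement problems): the value function is computed by backward recursion over the horizon.

```python
# Finite-horizon dynamic programming for a small stochastic control problem.
# States and actions are discrete; transitions are stochastic; we minimise
# expected total cost by backward recursion on the value function.

def backward_dp(states, actions, P, cost, horizon):
    # P[s][a] = list of (prob, next_state); cost[s][a] = stage cost.
    V = {s: 0.0 for s in states}                 # terminal cost is zero
    policy = []
    for _ in range(horizon):
        Q = {s: {a: cost[s][a] + sum(pr * V[sn] for pr, sn in P[s][a])
                 for a in actions} for s in states}
        policy.insert(0, {s: min(Q[s], key=Q[s].get) for s in states})
        V = {s: min(Q[s].values()) for s in states}
    return V, policy

# Two states: 0 = "on target", 1 = "off target". Action "correct" is costly
# but likely returns the system to target; "coast" is free but drifts.
states, actions = [0, 1], ["coast", "correct"]
P = {0: {"coast": [(0.8, 0), (0.2, 1)], "correct": [(1.0, 0)]},
     1: {"coast": [(1.0, 1)], "correct": [(0.9, 0), (0.1, 1)]}}
cost = {0: {"coast": 0.0, "correct": 1.0}, 1: {"coast": 2.0, "correct": 3.0}}
V, policy = backward_dp(states, actions, P, cost, horizon=10)
print("V(0) =", round(V[0], 2), " first action off target:", policy[0][1])
```

The measurement-control coupling the thesis studies adds a layer on top of this: the controller's information state, not the physical state, drives the recursion, but the backward DP machinery is the same.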
Stochastic Robust Mathematical Programming Model for Power System Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Cong; Changhyeok, Lee; Haoyong, Chen
2016-01-01
This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
Choi, Mi-Ri; Jeon, Sang-Wan; Yi, Eun-Surk
2018-04-01
The purpose of this study is to analyze the differences among hospitalized cancer patients in their perception of exercise and physical activity constraints based on their medical history. The study used a questionnaire survey as the measurement tool for 194 cancer patients (male or female, aged 20 or older) living in the Seoul metropolitan area (Seoul, Gyeonggi, Incheon). The collected data were analyzed using frequency analysis, exploratory factor analysis, reliability analysis, t-test, and one-way ANOVA with the statistical program SPSS 18.0. The following results were obtained. First, there was no statistically significant difference between cancer stage and exercise recognition/physical activity constraint. Second, there was a significant difference between cancer stage and sociocultural constraint/facility constraint/program constraint. Third, there was a significant difference between cancer operation history and physical/socio-cultural/facility/program constraint. Fourth, there was a significant difference between cancer operation history and negative perception/facility/program constraint. Fifth, there was a significant difference between ancillary cancer treatment method and negative perception/facility/program constraint. Sixth, there was a significant difference between hospitalization period and positive perception/negative perception/physical constraint/cognitive constraint. In conclusion, this study will provide information necessary to create a patient-centered healthcare service system by analyzing the exercise recognition of hospitalized cancer patients based on their medical history and by investigating the constraint factors that prevent patients from actually making efforts to exercise.
NASA Technical Reports Server (NTRS)
Cairns, Iver H.; Robinson, P. A.
1998-01-01
Existing, competing theories for coronal and interplanetary type III solar radio bursts appeal to one or more of modulational instability, electrostatic (ES) decay processes, or stochastic growth physics to preserve the electron beam, limit the levels of Langmuir-like waves driven by the beam, and produce wave spectra capable of coupling nonlinearly to generate the observed radio emission. Theoretical constraints exist on the wavenumbers and relative sizes of the wave bandwidth and nonlinear growth rate for which Langmuir waves are subject to modulational instability and the parametric and random phase versions of ES decay. A constraint also exists on whether stochastic growth theory (SGT) is appropriate. These constraints are evaluated here using the beam, plasma, and wave properties (1) observed in specific interplanetary type III sources, (2) predicted nominally for the corona, and (3) predicted at heliocentric distances greater than a few solar radii by power-law models based on interplanetary observations. It is found that the Langmuir waves driven directly by the beam have wavenumbers that are almost always too large for modulational instability but are appropriate to ES decay. Even for waves scattered to lower wavenumbers (by ES decay, for instance), the wave bandwidths are predicted to be too large and the nonlinear growth rates too small for modulational instability to occur for the specific interplanetary events studied or the great majority of Langmuir wave packets in type III sources at arbitrary heliocentric distances. Possible exceptions are for very rare, unusually intense, narrowband wave packets, predominantly close to the Sun, and for the front portion of very fast beams traveling through unusually dilute, cold solar wind plasmas. Similar arguments demonstrate that the ES decay should proceed almost always as a random phase process rather than a parametric process, with similar exceptions. 
These results imply that it is extremely rare for modulational instability or parametric decay to proceed in type III sources at any heliocentric distance: theories for type III bursts based on modulational instability or parametric decay are therefore not viable in general. In contrast, the constraint on SGT can be satisfied and random phase ES decay can proceed at all heliocentric distances under almost all circumstances. (The contrary circumstances involve unusually slow, broad beams moving through unusually hot regions of the corona.) The analyses presented here strongly justify extending the existing SGT-based model for interplanetary type III bursts (which includes SGT physics, random phase ES decay, and specific electromagnetic emission mechanisms) into a general theory for type III bursts from the corona to beyond 1 AU. This extended theory enjoys strong theoretical support, explains the characteristics of specific interplanetary type III bursts very well, and can account for the detailed dynamic spectra of type III bursts from the lower corona and solar wind.
Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G
2017-08-01
Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
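History matching rules out input regions whose emulated outputs are implausibly far from the data. A minimal sketch of the standard implausibility measure follows, using a made-up one-input toy emulator whose variance depends on the input (the paper's key adaptation for stochastic models); all names and numbers here are illustrative, not taken from the HIV model:

```python
import math

def implausibility(z_obs, em_mean, em_var, model_var, obs_var):
    """Standard history-matching implausibility I(x)."""
    return abs(z_obs - em_mean) / math.sqrt(em_var + model_var + obs_var)

# Toy 1-D stand-in for an emulated stochastic simulator: mean f(x) = x^2,
# with an input-dependent variance (the dependence emulated in the paper).
def emulator_mean(x):
    return x * x

def emulator_var(x):
    return 0.01 + 0.1 * abs(x)

z_obs, obs_var = 4.0, 0.05
cutoff = 3.0   # conventional 3-sigma implausibility cutoff

# Inputs that are not ruled out survive to the next wave of refinement.
candidates = [i * 0.1 for i in range(-40, 41)]
not_ruled_out = [
    x for x in candidates
    if implausibility(z_obs, emulator_mean(x), emulator_var(x), 0.0, obs_var) < cutoff
]
```

Inputs with implausibility below the cutoff are retained for the next wave; everything else is discarded, which is how the method shrinks the 22-dimensional input space.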
Noise effects in nonlinear biochemical signaling
NASA Astrophysics Data System (ADS)
Bostani, Neda; Kessler, David A.; Shnerb, Nadav M.; Rappel, Wouter-Jan; Levine, Herbert
2012-01-01
It has been generally recognized that stochasticity can play an important role in the information processing accomplished by reaction networks in biological cells. Most treatments of that stochasticity employ Gaussian noise even though it is a priori obvious that this approximation can violate physical constraints, such as the positivity of chemical concentrations. Here, we show that even when such nonphysical fluctuations are rare, an exact solution of the Gaussian model shows that the model can yield unphysical results. This is done in the context of a simple incoherent-feedforward model which exhibits perfect adaptation in the deterministic limit. We show how one can use the natural separation of time scales in this model to yield an approximate model, that is analytically solvable, including its dynamical response to an environmental change. Alternatively, one can employ a cutoff procedure to regularize the Gaussian result.
Plasma Equilibrium in a Magnetic Field with Stochastic Regions
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.A. Krommes and Allan H. Reiman
The nature of plasma equilibrium in a magnetic field with stochastic regions is examined. It is shown that the magnetic differential equation that determines the equilibrium Pfirsch-Schlüter currents can be cast in a form similar to various nonlinear equations for a turbulent plasma, allowing application of the mathematical methods of statistical turbulence theory. An analytically tractable model, previously studied in the context of resonance-broadening theory, is applied with particular attention paid to the periodicity constraints required in toroidal configurations. It is shown that even a very weak radial diffusion of the magnetic field lines can have a significant effect on the equilibrium in the neighborhood of the rational surfaces, strongly modifying the near-resonant Pfirsch-Schlüter currents. Implications for the numerical calculation of 3D equilibria are discussed.
Integration of progressive hedging and dual decomposition in stochastic integer programs
Watson, Jean-Paul; Guo, Ge; Hackebeil, Gabriel; ...
2015-04-07
We present a method for integrating the Progressive Hedging (PH) algorithm and the Dual Decomposition (DD) algorithm of Carøe and Schultz for stochastic mixed-integer programs. Based on the correspondence between lower bounds obtained with PH and DD, a method to transform weights from PH to Lagrange multipliers in DD is found. Fast progress in early iterations of PH speeds up convergence of DD to an exact solution. Finally, we report computational results on server location and unit commitment instances.
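As a toy illustration of the PH mechanics (not the mixed-integer instances of the paper), consider the scenario problem min E[(x - xi)^2] with a single first-stage variable: the scenario subproblems have closed-form minimisers, PH drives them to a common value, and the converged weights are exactly the nonanticipativity multipliers of the kind the paper transforms into DD Lagrange multipliers. The data below are invented:

```python
# Invented scenario data: each scenario wants the first-stage variable x near
# xi[s]; p[s] are scenario probabilities; rho is the PH penalty parameter.
xi = [1.0, 3.0, 8.0]
p = [0.5, 0.3, 0.2]
rho = 1.0
S = len(xi)

w = [0.0] * S                     # PH weights, one per scenario
xbar = 0.0                        # nonanticipative average
for _ in range(200):
    # Scenario subproblem: argmin_x (x - xi_s)^2 + w_s*x + (rho/2)*(x - xbar)^2
    xs = [(2 * xi[s] - w[s] + rho * xbar) / (2 + rho) for s in range(S)]
    xbar = sum(p[s] * xs[s] for s in range(S))
    w = [w[s] + rho * (xs[s] - xbar) for s in range(S)]

# The nonanticipative optimum of min E[(x - xi)^2] is the weighted mean, and
# the converged weights w_s = 2*(xi_s - xbar) are the nonanticipativity
# (Lagrange-like) multipliers handed over to DD.
target = sum(p[s] * xi[s] for s in range(S))   # = 3.0
```

The probability-weighted sum of the weights stays zero throughout, a standard invariant of the PH update.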
Learning Structured Classifiers with Dual Coordinate Ascent
2010-06-01
stochastic gradient descent (SGD) [LeCun et al., 1998], and the margin infused relaxed algorithm (MIRA) [Crammer et al., 2006]. This paper presents a...evaluate these methods on the Prague Dependency Treebank using online large-margin learning techniques (Crammer et al., 2003; McDonald et al., 2005...between two kinds of factors: hard constraint factors, which are used to rule out forbidden partial assignments by mapping them to zero potential values
CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties
2017-03-01
inverse tangent characteristics at varying input voltage (VIN) [Fig. 3], making it suitable for kernel function implementation. By varying bias...cost function/constraint variables are generated based on the inverse transform of the CDF. In Fig. 5, F⁻¹(u) for a uniformly distributed random number u ∈ [0, 1...extracts random samples of x varying with the CDF F(x). In Fig. 6, we present a successive approximation (SA) circuit to evaluate inverse
Gaussian Random Fields Methods for Fork-Join Network with Synchronization Constraints
2014-12-22
substantial efforts were dedicated to the study of the max-plus recursions [21, 3, 12]. More recently, Atar et al. [2] have studied a fork-join...feedback and NES, Atar et al. [2] show that a dynamic priority discipline achieves throughput optimality asymptotically in the conventional heavy...2011) Patient flow in hospitals: a data-based queueing-science perspective. Submitted to Stochastic Systems, 20. [2] R. Atar, A. Mandelbaum and A
Logistical constraints lead to an intermediate optimum in outbreak response vaccination
Shea, Katriona; Ferrari, Matthew
2018-01-01
Dynamic models in disease ecology have historically evaluated vaccination strategies under the assumption that they are implemented homogeneously in space and time. However, this approach fails to formally account for operational and logistical constraints inherent in the distribution of vaccination to the population at risk. Thus, feedback between the dynamic processes of vaccine distribution and transmission might be overlooked. Here, we present a spatially explicit, stochastic Susceptible-Infected-Recovered-Vaccinated model that highlights the density-dependence and spatial constraints of various diffusive strategies of vaccination during an outbreak. The model integrates an agent-based process of disease spread with a partial differential process of vaccination deployment. We characterize the vaccination response in terms of a diffusion rate that describes the distribution of vaccination to the population at risk from a central location. This generates an explicit trade-off between slow diffusion, which concentrates effort near the central location, and fast diffusion, which spreads a fixed vaccination effort thinly over a large area. We use stochastic simulation to identify the optimum vaccination diffusion rate as a function of population density, interaction scale, transmissibility, and vaccine intensity. Our results show that, conditional on a timely response, the optimal strategy for minimizing outbreak size is to distribute vaccination resource at an intermediate rate: fast enough to outpace the epidemic, but slow enough to achieve local herd immunity. If the response is delayed, however, the optimal strategy for minimizing outbreak size changes to a rapidly diffusive distribution of vaccination effort. The latter may also result in significantly larger outbreaks, thus suggesting a benefit of allocating resources to timely outbreak detection and response. PMID:29791432
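A chain-binomial sketch of one run of a (non-spatial) stochastic SIRV process is given below, with invented parameter values; the paper's model additionally couples an agent-based spatial process with a PDE for vaccine diffusion, which this toy omits:

```python
import random

# One run of a chain-binomial stochastic SIRV model with invented parameters;
# vaccination reaches remaining susceptibles at per-step rate nu.
def outbreak(N=1000, I0=5, beta=0.3, gamma=0.1, nu=0.02, seed=1):
    rng = random.Random(seed)
    S, I, R, V = N - I0, I0, 0, 0
    while I > 0:
        p_inf = 1.0 - (1.0 - beta / N) ** I            # per-susceptible risk
        new_inf = sum(rng.random() < p_inf for _ in range(S))
        new_vac = sum(rng.random() < nu for _ in range(S - new_inf))
        new_rec = sum(rng.random() < gamma for _ in range(I))
        S -= new_inf + new_vac
        V += new_vac
        I += new_inf - new_rec
        R += new_rec
    return S, I, R, V

S, I, R, V = outbreak()
```

Averaged over seeds, faster vaccination simply shrinks the final size in this non-spatial toy; the intermediate-rate optimum found in the paper only emerges once the spatial deployment trade-off is modelled.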
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bokanowski, Olivier, E-mail: boka@math.jussieu.fr; Picarelli, Athena, E-mail: athena.picarelli@inria.fr; Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr
2015-02-15
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachability analysis are included to illustrate our approach.
Search Planning Under Incomplete Information Using Stochastic Optimization and Regression
2011-09-01
solve since they involve uncertainty and unknown parameters (see for example Shapiro et al., 2009; Wallace & Ziemba, 2005). One application area is...Wallace, S. W., & Ziemba, W. T. (2005). Applications of stochastic programming. Philadelphia, PA: Society for Industrial and Applied
Chen, Jianjun; Frey, H Christopher
2004-12-15
Methods for optimization of process technologies considering the distinction between variability and uncertainty are developed and applied to case studies of NOx control for Integrated Gasification Combined Cycle systems. Existing methods of stochastic optimization (SO) and stochastic programming (SP) are demonstrated. A comparison of SO and SP results provides the value of collecting additional information to reduce uncertainty. For example, an expected annual benefit of 240,000 dollars is estimated if uncertainty can be reduced before a final design is chosen. SO and SP are typically applied to uncertainty. However, when applied to variability, the benefit of dynamic process control is obtained. For example, an annual savings of 1 million dollars could be achieved if the system is adjusted to changes in process conditions. When variability and uncertainty are treated distinctively, a coupled stochastic optimization and programming method and a two-dimensional stochastic programming method are demonstrated via a case study. For the case study, the mean annual benefit of dynamic process control is estimated to be 700,000 dollars, with a 95% confidence range of 500,000 dollars to 940,000 dollars. These methods are expected to be of greatest utility for problems involving a large commitment of resources, for which small differences in designs can produce large cost savings.
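The comparison of SO and SP results that yields the value of collecting additional information is, in essence, an expected-value-of-perfect-information calculation. A minimal sketch with invented costs for two hypothetical designs (not the IGCC case-study numbers):

```python
# Invented annualised costs (arbitrary units) for two hypothetical NOx-control
# designs under two equally likely scenarios for an uncertain cost parameter.
costs = {"A": {"low": 10.0, "high": 20.0},
         "B": {"low": 16.0, "high": 17.0}}
prob = {"low": 0.5, "high": 0.5}

def expected(design):
    return sum(prob[s] * costs[design][s] for s in prob)

# Here-and-now (no extra information): commit to one design up front.
here_and_now = min(expected(d) for d in costs)        # design A, cost 15.0

# Wait-and-see (uncertainty resolved first): best design in each scenario.
wait_and_see = sum(prob[s] * min(costs[d][s] for d in costs) for s in prob)

# Expected value of perfect information: the most one should pay to resolve
# the uncertainty before the design is frozen.
evpi = here_and_now - wait_and_see                    # 15.0 - 13.5 = 1.5
```

The paper's 240,000-dollar figure is the analogue of `evpi` for the full IGCC model; the toy only shows the structure of the comparison.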
XMDS2: Fast, scalable simulation of coupled stochastic partial differential equations
NASA Astrophysics Data System (ADS)
Dennis, Graham R.; Hope, Joseph J.; Johnsson, Mattias T.
2013-01-01
XMDS2 is a cross-platform, GPL-licensed, open source package for numerically integrating initial value problems that range from a single ordinary differential equation up to systems of coupled stochastic partial differential equations. The equations are described in a high-level XML-based script, and the package generates low-level optionally parallelised C++ code for the efficient solution of those equations. It combines the advantages of high-level simulations, namely fast and low-error development, with the speed, portability and scalability of hand-written code. XMDS2 is a complete redesign of the XMDS package, and features support for a much wider problem space while also producing faster code.
Program summary
Program title: XMDS2
Catalogue identifier: AENK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, version 2
No. of lines in distributed program, including test data, etc.: 872490
No. of bytes in distributed program, including test data, etc.: 45522370
Distribution format: tar.gz
Programming language: Python and C++.
Computer: Any computer with a Unix-like system, a C++ compiler and Python.
Operating system: Any Unix-like system; developed under Mac OS X and GNU/Linux.
RAM: Problem dependent (roughly 50 bytes per grid point)
Classification: 4.3, 6.5.
External routines: The external libraries required are problem-dependent. Uses FFTW3 Fourier transforms (used only for FFT-based spectral methods), dSFMT random number generation (used only for stochastic problems), MPI message-passing interface (used only for distributed problems), HDF5, GNU Scientific Library (used only for Bessel-based spectral methods) and a BLAS implementation (used only for non-FFT-based spectral methods).
Nature of problem: General coupled initial-value stochastic partial differential equations.
Solution method: Spectral method with method-of-lines integration.
Running time: Determined by the size of the problem.
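For orientation, the simplest member of the problem class XMDS2 targets is a single stochastic ODE; a hand-coded Euler-Maruyama integrator for the Ornstein-Uhlenbeck process is sketched below with invented parameters (XMDS2 itself generates optimised, optionally parallel C++ from the XML description rather than running Python):

```python
import random, math, statistics

# Hand-coded Euler-Maruyama for the Ornstein-Uhlenbeck SDE
#   dX = -gamma*X dt + sigma dW
# with invented parameter values.
random.seed(42)
gamma, sigma = 1.0, 0.5
dt, steps, paths = 0.01, 2000, 400

finals = []
for _ in range(paths):
    x = 0.0
    for _ in range(steps):
        x += -gamma * x * dt + sigma * random.gauss(0.0, math.sqrt(dt))
    finals.append(x)

# t = 20 is many relaxation times, so the ensemble should be near the
# stationary law: mean 0, variance sigma**2 / (2 * gamma) = 0.125.
var_est = statistics.pvariance(finals)
mean_est = statistics.mean(finals)
```

Averaging moments over many stochastic paths is exactly the kind of post-processing XMDS2 automates for its stochastic problem class.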
Constraint Logic Programming approach to protein structure prediction.
Dal Palù, Alessandro; Dovier, Agostino; Fogolari, Federico
2004-11-30
The protein structure prediction problem is one of the most challenging problems in biological sciences. Many approaches have been proposed using database information and/or simplified protein models. The protein structure prediction problem can be cast in the form of an optimization problem. Notwithstanding its importance, the problem has very seldom been tackled by Constraint Logic Programming, a declarative programming paradigm suitable for solving combinatorial optimization problems. Constraint Logic Programming techniques have been applied to the protein structure prediction problem on the face-centered cube lattice model. Molecular dynamics techniques, endowed with the notion of constraint, have also been exploited. Even using a very simplified model, Constraint Logic Programming on the face-centered cube lattice model allowed us to obtain acceptable results for a few small proteins. As a test implementation, their (known) secondary structure and the presence of disulfide bridges are used as constraints. Simplified structures obtained in this way have been converted to all-atom models with plausible structure. Results have been compared with a similar approach using a well-established technique such as molecular dynamics. The results obtained on small proteins show that Constraint Logic Programming techniques can be employed for studying simplified protein models, which can be converted into realistic all-atom models. The advantage of Constraint Logic Programming over other, much more explored, methodologies resides in the rapid software prototyping, in the easy way of encoding heuristics, and in exploiting all the advances made in this research area, e.g. in constraint propagation and its use for pruning the huge search space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Z. W., E-mail: zhuzhiwen@tju.edu.cn; Tianjin Key Laboratory of Non-linear Dynamics and Chaos Control, 300072, Tianjin; Zhang, W. D., E-mail: zhangwenditju@126.com
2014-03-15
The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential items were introduced to interpret the hysteretic phenomena of the GMF, and the non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The condition of stochastic Hopf bifurcation and noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed in the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases when the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were confirmed by experiments. The results are helpful in the engineering applications of GMF.
NASA Astrophysics Data System (ADS)
Champion, Billy Ray
Energy Conservation Measure (ECM) project selection is made difficult given real-world constraints, limited resources to implement savings retrofits, various suppliers in the market and project financing alternatives. Many of these energy efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of ECMs available, federal, state and local agencies must determine how to implement projects at lowest costs. The most common methods of implementation planning are suboptimal relative to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment over traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Any public building in a region with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code for any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies concerned with making the most cost-effective selection leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2).
An additional model presented leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk aversion method imposes a minimum on the number of projects completed in each stage. A comparative method using Conditional Value at Risk is analyzed, and time consistency is addressed. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach makes use of inequalities (McCormick, 1976) to re-express constraints that involve the product of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single congressional appropriations framework.
Control of Vibratory Energy Harvesters in the Presence of Nonlinearities and Power-Flow Constraints
NASA Astrophysics Data System (ADS)
Cassidy, Ian L.
Over the past decade, a significant amount of research activity has been devoted to developing electromechanical systems that can convert ambient mechanical vibrations into usable electric power. Such systems, referred to as vibratory energy harvesters, have a number of useful applications, ranging in scale from self-powered wireless sensors for structural health monitoring in bridges and buildings to energy harvesting from ocean waves. One of the most challenging aspects of this technology concerns the efficient extraction and transmission of power from transducer to storage. Maximizing the rate of power extraction from vibratory energy harvesters is further complicated by the stochastic nature of the disturbance. The primary purpose of this dissertation is to develop feedback control algorithms which optimize the average power generated from stochastically-excited vibratory energy harvesters. This dissertation will illustrate the performance of various controllers using two vibratory energy harvesting systems: an electromagnetic transducer embedded within a flexible structure, and a piezoelectric bimorph cantilever beam. Compared with piezoelectric systems, large-scale electromagnetic systems have received much less attention in the literature despite their ability to generate power at the watt-kilowatt scale. Motivated by this observation, the first part of this dissertation focuses on developing an experimentally validated predictive model of an actively controlled electromagnetic transducer. Following this experimental analysis, linear-quadratic-Gaussian control theory is used to compute unconstrained state feedback controllers for two ideal vibratory energy harvesting systems. This theory is then augmented to account for competing objectives, nonlinearities in the harvester dynamics, and non-quadratic transmission loss models in the electronics.
In many vibratory energy harvesting applications, employing a bi-directional power electronic drive to actively control the harvester is infeasible due to the high levels of parasitic power required to operate the drive. For the case where a single-directional drive is used, a constraint on the directionality of power-flow is imposed on the system, which necessitates the use of nonlinear feedback. As such, a sub-optimal controller for power-flow-constrained vibratory energy harvesters is presented, which is analytically guaranteed to outperform the optimal static admittance controller. Finally, the last section of this dissertation explores a numerical approach to compute optimal discretized control manifolds for systems with power-flow constraints. Unlike the sub-optimal nonlinear controller, the numerical controller satisfies the necessary conditions for optimality by solving the stochastic Hamilton-Jacobi equation.
Optimizing Wind And Hydropower Generation Within Realistic Reservoir Operating Policy
NASA Astrophysics Data System (ADS)
Magee, T. M.; Clement, M. A.; Zagona, E. A.
2012-12-01
Previous studies have evaluated the benefits of utilizing the flexibility of hydropower systems to balance the variability and uncertainty of wind generation. However, previous hydropower and wind coordination studies have simplified non-power constraints on reservoir systems. For example, some studies have only included hydropower constraints on minimum and maximum storage volumes and minimum and maximum plant discharges. The methodology presented here utilizes the pre-emptive linear goal programming optimization solver in RiverWare to model hydropower operations with a set of prioritized policy constraints and objectives based on realistic policies that govern the operation of actual hydropower systems, including licensing constraints, environmental constraints, water management and power objectives. This approach accounts for the fact that not all policy constraints are of equal importance. For example, target environmental flow levels may not be satisfied if it would require violating license minimum or maximum storages (pool elevations), but environmental flow constraints will be satisfied before optimizing power generation. Additionally, this work not only models the economic value of energy from the combined hydropower and wind system, but also captures the economic value of ancillary services provided by the hydropower resources. It is recognized that the increased variability and uncertainty inherent with increased wind penetration levels requires an increase in ancillary services. In regions with liberalized markets for ancillary services, a significant portion of hydropower revenue can result from providing ancillary services. Thus, ancillary services should be accounted for when determining the total value of a hydropower system integrated with wind generation. This research shows that the end value of integrated hydropower and wind generation is dependent on a number of factors that can vary by location.
Wind factors include wind penetration level, variability due to geographic distribution of wind resources, and forecast error. Electric power system factors include the mix of thermal generation resources, available transmission, demand patterns, and market structures. Hydropower factors include relative storage capacity, reservoir operating policies and hydrologic conditions. In addition, the wind, power system, and hydropower factors are often interrelated because stochastic weather patterns can simultaneously influence wind generation, power demand, and hydrologic inflows. One of the central findings is that the sensitivity of the model to changes cannot be performed one factor at a time because the impact of the factors is highly interdependent. For example, the net value of wind generation may be very sensitive to changes in transmission capacity under some hydrologic conditions, but not at all under others.
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
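The batch-processing pattern can be sketched with the standard library alone; here a trivial stand-in replaces the stochastic-field generation and MODFLOW run of each realization, and threads replace the Java Parallel Processing Framework cluster (a process pool or cluster would be used for genuinely CPU-bound simulators):

```python
import math, random
from concurrent.futures import ThreadPoolExecutor

# One "realization": a toy stand-in for generating a stochastic conductivity
# field and running a MODFLOW-style forward model; here it just draws a
# hypothetical lognormal particle travel time (days). Seeding per realization
# keeps the batch reproducible regardless of scheduling order.
def run_realization(seed):
    rng = random.Random(seed)
    return math.exp(rng.gauss(2.0, 0.5))

# Batch-process 500 realizations concurrently, mirroring the paper's
# distribution of realizations over worker threads/nodes.
with ThreadPoolExecutor(max_workers=10) as pool:
    travel_times = list(pool.map(run_realization, range(500)))

# Capture-zone-style probabilistic summary, e.g. the 95th-percentile time.
travel_times.sort()
p95 = travel_times[int(0.95 * len(travel_times))]
```

Because realizations are independent, speed-up is limited mainly by worker count and per-run overhead, which is why the paper's 50-thread cluster cut run time to 3% of serial execution.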
Parameter-based stochastic simulation of selection and breeding for multiple traits
Jennifer Myszewski; Thomas Byram; Floyd Bridgwater
2006-01-01
To increase the adaptability and economic value of plantations, tree improvement professionals often manage multiple traits in their breeding programs. When these traits are unfavorably correlated, breeders must weigh the economic importance of each trait and select for a desirable aggregate phenotype. Stochastic simulation allows breeders to test the effects of...
Efficient physics-based tracking of heart surface motion for beating heart surgery robotic systems.
Bogatyrenko, Evgeniya; Pompey, Pascal; Hanebeck, Uwe D
2011-05-01
Tracking of beating heart motion in a robotic surgery system is required for complex cardiovascular interventions. A heart surface motion tracking method is developed, including a stochastic physics-based heart surface model and an efficient reconstruction algorithm. The algorithm uses the constraints provided by the model that exploits the physical characteristics of the heart. The main advantage of the model is that it is more realistic than most standard heart models. Additionally, no explicit matching between the measurements and the model is required. The application of meshless methods significantly reduces the complexity of physics-based tracking. Based on the stochastic physical model of the heart surface, this approach considers the motion of the intervention area and is robust to occlusions and reflections. The tracking algorithm is evaluated in simulations and experiments on an artificial heart. Providing higher accuracy than the standard model-based methods, it successfully copes with occlusions and provides high performance even when not all measurements are available. Combining the physical and stochastic description of the heart surface motion ensures physically correct and accurate prediction. Automatic initialization of the physics-based cardiac motion tracking enables system evaluation in a clinical environment.
Optimal Control via Self-Generated Stochasticity
NASA Technical Reports Server (NTRS)
Zak, Michail
2011-01-01
The problem of global maxima of functionals has been examined. Mathematical roots of local maxima are the same as those for a much simpler problem of finding the global maximum of a multi-dimensional function. The second problem is instability: even if an optimal trajectory is found, there is no guarantee that it is stable. As a result, a fundamentally new approach is introduced to optimal control based upon two new ideas. The first idea is to represent the functional to be maximized as a limit of a probability density governed by the appropriately selected Liouville equation. Then, the corresponding ordinary differential equations (ODEs) become stochastic, and that sample of the solution that has the largest value will have the highest probability to appear in ODE simulation. The main advantages of the stochastic approach are that it is not sensitive to local maxima, the function to be maximized must be only integrable but not necessarily differentiable, and global equality and inequality constraints do not cause any significant obstacles. The second idea is to remove possible instability of the optimal solution by equipping the control system with a self-stabilizing device. The applications of the proposed methodology will optimize the performance of NASA spacecraft, as well as robot performance.
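The claimed insensitivity to local maxima can be illustrated with a plain sampling scheme, a crude stand-in for the Liouville-equation construction: a deterministic hill climb stalls on a local bump of a non-differentiable objective, while drawing from a broad density and keeping the best sample finds the global maximum. The objective below is invented:

```python
import math, random

# Invented multimodal, non-differentiable objective: global maximum near
# x = 4, a lower local bump near x = -3, plus small |sin| ripples.
def f(x):
    return (3.0 * math.exp(-(x - 4.0) ** 2)
            + 2.0 * math.exp(-(x + 3.0) ** 2)
            - 0.1 * abs(math.sin(x)))

# A deterministic hill climb started at -3 stays on the local bump...
x, step = -3.0, 0.01
for _ in range(1000):
    x = max((x - step, x, x + step), key=f)
local = x

# ...while drawing from a broad density and keeping the best-scoring sample
# lands on the global maximum, indifferent to smoothness or local structure.
random.seed(0)
best = max((random.uniform(-10.0, 10.0) for _ in range(5000)), key=f)
```

The sampling route only requires evaluating f, matching the abstract's point that the functional need only be integrable, not differentiable.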
Basic Research in Digital Stochastic Model Algorithmic Control.
1980-11-01
IDCOM Description; 8.2 Basic Control Computation; 8.3 Gradient Algorithm; 8.4 Simulation Model; 8.5 Model Modifications; 8.6 Summary...constraints, and 3) control trajectory computation. 2.1.1 Internal Model of the System: The multivariable system to be controlled is represented by a...more flexible and adaptive, since the model, criteria, and sampling rates can be adjusted on-line. This flexibility comes from the use of the impulse
Constraints in Genetic Programming
NASA Technical Reports Server (NTRS)
Janikow, Cezary Z.
1996-01-01
Genetic programming refers to a class of genetic algorithms utilizing generic representation in the form of program trees. For a particular application, one needs to provide the set of functions, whose compositions determine the space of program structures being evolved, and the set of terminals, which determine the space of specific instances of those programs. The algorithm searches the space for the best program for a given problem, applying evolutionary mechanisms borrowed from nature. Genetic algorithms have shown great capabilities in approximately solving optimization problems which could not be approximated or solved with other methods. Genetic programming extends their capabilities to deal with a broader variety of problems. However, it also extends the size of the search space, which often becomes too large to be effectively searched even by evolutionary methods. Therefore, our objective is to utilize problem constraints, if such can be identified, to restrict this space. In this publication, we propose a generic constraint specification language, powerful enough for a broad class of problem constraints. This language has two elements -- one reduces only the number of program instances, the other reduces both the space of program structures as well as their instances. With this language, we define the minimal set of complete constraints, and a set of operators guaranteeing offspring validity from valid parents. We also show that these operators are not less efficient than the standard genetic programming operators if one preprocesses the constraints - the necessary mechanisms are identified.
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Solikhin
2016-06-01
In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control problem and supplier selection problem where the demand and purchasing cost parameters are random. For each time period, using the proposed model, we select the optimal supplier and calculate the optimal product volume purchased from that supplier, so that the inventory level will be located as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. The results show that, for each time period, the proposed model generated the optimal supplier and the inventory level tracked the reference point well.
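The backward recursion described above can be sketched on a toy instance. Everything below is hypothetical and simplified relative to the paper's model: a scalar inventory state, two suppliers with equally likely unit-cost scenarios, equally likely demand scenarios, and an absolute-deviation penalty standing in for the reference-tracking objective.

```python
import itertools

# Hypothetical data: supplier -> equally likely unit-cost scenarios.
SUPPLIERS = {"S1": [9, 11], "S2": [8, 14]}
DEMANDS = [3, 5]                 # equally likely demand scenarios
REF, MAX_INV, T = 6, 12, 3       # reference level, capacity, planning horizon
DEV = 1.0                        # penalty per unit deviation from REF

def clamp(level):
    return max(0, min(MAX_INV, level))

def stage_cost(inv, supplier, q):
    """Expected purchase cost plus reference-deviation penalty."""
    combos = list(itertools.product(SUPPLIERS[supplier], DEMANDS))
    return sum(c * q + DEV * abs(clamp(inv + q - d) - REF)
               for c, d in combos) / len(combos)

# Backward recursion: V[t][inv] = min over (supplier, q) of
# expected stage cost + expected cost-to-go over demand scenarios.
V = [{i: 0.0 for i in range(MAX_INV + 1)} for _ in range(T + 1)]
policy = [dict() for _ in range(T)]
for t in reversed(range(T)):
    for inv in range(MAX_INV + 1):
        best = None
        for s in SUPPLIERS:
            for q in range(MAX_INV - inv + max(DEMANDS) + 1):
                cont = sum(V[t + 1][clamp(inv + q - d)]
                           for d in DEMANDS) / len(DEMANDS)
                total = stage_cost(inv, s, q) + cont
                if best is None or total < best[0]:
                    best = (total, s, q)
        V[t][inv], policy[t][inv] = best[0], (best[1], best[2])
```

`policy[t][inv]` then gives the cost-minimising supplier and order quantity for each period and inventory level, which is the shape of answer the abstract describes.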
GillesPy: A Python Package for Stochastic Model Building and Simulation.
Abel, John H; Drawert, Brian; Hellander, Andreas; Petzold, Linda R
2016-09-01
GillesPy is an open-source Python package for model construction and simulation of stochastic biochemical systems. GillesPy consists of a Python framework for model building and an interface to the StochKit2 suite of efficient simulation algorithms based on the Gillespie stochastic simulation algorithms (SSA). To enable intuitive model construction and seamless integration into the scientific Python stack, we present an easy to understand, action-oriented programming interface. Here, we describe the components of this package and provide a detailed example relevant to the computational biology community.
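The StochKit2 solvers that GillesPy wraps are based on Gillespie's direct method, which itself fits in a few lines of plain Python. The sketch below is an illustrative reimplementation of the direct method, not GillesPy's actual API; the decay example at the end is invented.

```python
import math
import random

def ssa_direct(x0, reactions, propensity, t_max, seed=0):
    """Gillespie direct-method SSA (minimal sketch).

    x0         -- initial copy numbers, e.g. {"A": 50}
    reactions  -- list of state-change dicts, e.g. [{"A": -1}]
    propensity -- function (state, j) -> rate of reaction j
    """
    rng = random.Random(seed)
    t, x = 0.0, dict(x0)
    trajectory = [(t, dict(x))]
    while True:
        rates = [propensity(x, j) for j in range(len(reactions))]
        total = sum(rates)
        if total == 0.0:            # no reaction can fire: system exhausted
            break
        dt = -math.log(1.0 - rng.random()) / total   # exponential waiting time
        if t + dt > t_max:
            break
        t += dt
        # choose reaction j with probability rates[j] / total
        r, acc = rng.random() * total, 0.0
        for j, a in enumerate(rates):
            acc += a
            if r < acc:
                break
        for species, change in reactions[j].items():
            x[species] += change
        trajectory.append((t, dict(x)))
    return trajectory

# Example: pure decay A -> 0 firing at rate k * A
k = 0.5
traj = ssa_direct({"A": 50}, [{"A": -1}], lambda x, j: k * x["A"], t_max=100.0)
```

GillesPy's contribution is wrapping this kind of kernel in an object-oriented model-building layer and handing simulation off to StochKit2's optimized implementations.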
GillesPy: A Python Package for Stochastic Model Building and Simulation
Abel, John H.; Drawert, Brian; Hellander, Andreas; Petzold, Linda R.
2017-01-01
GillesPy is an open-source Python package for model construction and simulation of stochastic biochemical systems. GillesPy consists of a Python framework for model building and an interface to the StochKit2 suite of efficient simulation algorithms based on the Gillespie stochastic simulation algorithms (SSA). To enable intuitive model construction and seamless integration into the scientific Python stack, we present an easy to understand, action-oriented programming interface. Here, we describe the components of this package and provide a detailed example relevant to the computational biology community. PMID:28630888
Optimization of Operations Resources via Discrete Event Simulation Modeling
NASA Technical Reports Server (NTRS)
Joshi, B.; Morris, D.; White, N.; Unal, R.
1996-01-01
The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
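The simulation-optimization loop the abstract describes can be sketched generically: a genetic algorithm searching integer resource levels against a noisy cost function. Here `simulate_cost` is a hypothetical stand-in for the discrete event simulation (its required-resource vector and penalty weights are invented), and replications are averaged to tame the stochastic objective.

```python
import random

def simulate_cost(levels, rng):
    """Hypothetical stand-in for a discrete-event simulation:
    understaffing is expensive, each resource unit has a carrying
    cost, and the outcome is noisy."""
    required = [4, 7, 2, 5]
    shortage = sum(max(0, r - x) for r, x in zip(required, levels))
    return 100 * shortage + 3 * sum(levels) + rng.gauss(0, 1)

def genetic_search(n_genes=4, bounds=(0, 10), pop_size=30,
                   generations=60, n_reps=5, seed=1):
    rng = random.Random(seed)

    def fitness(ind):  # average several replications against the noise
        return sum(simulate_cost(ind, rng) for _ in range(n_reps)) / n_reps

    pop = [[rng.randint(*bounds) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_genes)
            child = a[:cut] + b[cut:]               # one-point crossover
            if rng.random() < 0.3:                  # integer mutation
                g = rng.randrange(n_genes)
                child[g] = min(bounds[1], max(bounds[0],
                               child[g] + rng.choice([-1, 1])))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = genetic_search()
```

As the abstract notes, nothing here requires continuity or differentiability of the objective, which is what makes the approach usable on simulation outputs.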
Stochastic models for tumoral growth
NASA Astrophysics Data System (ADS)
Escudero, Carlos
2006-02-01
Strong experimental evidence has indicated that tumor growth belongs to the molecular beam epitaxy universality class. This type of growth is characterized by the constraint of cell proliferation to the tumor border and the surface diffusion of cells at the growing edge. Tumor growth is thus conceived as a competition for space between the tumor and the host, and cell diffusion at the tumor border is an optimal strategy adopted for minimizing the pressure and helping tumor development. Two stochastic partial differential equations are reported in this paper in order to correctly model the physical properties of tumoral growth in (1+1) and (2+1) dimensions. The advantage of these models is that they reproduce the correct geometry of the tumor and are defined in terms of polar variables. An analysis of these models allows us to quantitatively estimate the response of the tumor to an unfavorable perturbation during growth.
Constraining stochastic gravitational wave background from weak lensing of CMB B-modes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaikh, Shabbir; Mukherjee, Suvodip; Souradeep, Tarun
2016-09-01
A stochastic gravitational wave background (SGWB) will affect the CMB anisotropies via weak lensing. Unlike weak lensing due to large scale structure, which only deflects photon trajectories, a SGWB has an additional effect of rotating the polarization vector along the trajectory. We study the relative importance of these two effects, deflection and rotation, specifically in the context of E-mode to B-mode power transfer caused by weak lensing due to SGWB. Using weak lensing distortion of the CMB as a probe, we derive constraints on the spectral energy density (Ω_GW) of the SGWB, sourced at different redshifts, without assuming any particular model for its origin. We present these bounds on Ω_GW for different power-law models characterizing the SGWB, indicating the threshold above which observable imprints of SGWB must be present in CMB.
NASA Astrophysics Data System (ADS)
Eichhorn, Ralf; Aurell, Erik
2014-04-01
'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. 
This debate culminated in the development of linear response theory for small deviations from equilibrium, in which a general framework is constructed from the analysis of non-equilibrium states close to equilibrium. In a next step, Prigogine and others developed linear irreversible thermodynamics, which establishes relations between transport coefficients and entropy production on a phenomenological level in terms of thermodynamic forces and fluxes. However, beyond the realm of linear response no general theoretical results were available for quite a long time. This situation has changed drastically over the last 20 years with the development of stochastic thermodynamics, revealing that the range of validity of thermodynamic statements can indeed be extended deep into the non-equilibrium regime. Early developments in that direction trace back to the observations of symmetry relations between the probabilities for entropy production and entropy annihilation in non-equilibrium steady states [5-8] (nowadays categorized in the class of so-called detailed fluctuation theorems), and the derivations of the Bochkov-Kuzovlev [9, 10] and Jarzynski relations [11] (which are now classified as so-called integral fluctuation theorems). Apart from its fundamental theoretical interest, the developments in stochastic thermodynamics have experienced an additional boost from the recent experimental progress in fabricating, manipulating, controlling and observing systems on the micro- and nano-scale. These advances are not only of formidable use for probing and monitoring biological processes on the cellular, sub-cellular and molecular level, but even include the realization of a microscopic thermodynamic heat engine [12] or the experimental verification of Landauer's principle in a colloidal system [13]. 
The scientific program Stochastic Thermodynamics held between 4 and 15 March 2013, and hosted by The Nordic Institute for Theoretical Physics (Nordita), was attended by more than 50 scientists from the Nordic countries and elsewhere, amongst them many leading experts in the field. During the program, the most recent developments, open questions and new ideas in stochastic thermodynamics were presented and discussed. From the talks and debates, the notion of information in stochastic thermodynamics, the fundamental properties of entropy production (rate) in non-equilibrium, the efficiency of small thermodynamic machines and the characteristics of optimal protocols for the applied (cyclic) forces were crystallizing as main themes. Surprisingly, the long-studied adiabatic piston, its peculiarities and its relation to stochastic thermodynamics were also the subject of intense discussions. The comment on the Nordita program Stochastic Thermodynamics published in this issue of Physica Scripta exploits the Jarzynski relation for determining free energy differences in the adiabatic piston. This scientific program and the contribution presented here were made possible by the financial and administrative support of The Nordic Institute for Theoretical Physics.
Energetic and ecological constraints on population density of reef fishes.
Barneche, D R; Kulbicki, M; Floeter, S R; Friedlander, A M; Allen, A P
2016-01-27
Population ecology has classically focused on pairwise species interactions, hindering the description of general patterns and processes of population abundance at large spatial scales. Here we use the metabolic theory of ecology as a framework to formulate and test a model that yields predictions linking population density to the physiological constraints of body size and temperature on individual metabolism, and the ecological constraints of trophic structure and species richness on energy partitioning among species. Our model was tested by applying Bayesian quantile regression to a comprehensive reef-fish community database, from which we extracted density data for 5609 populations spread across 49 sites around the world. Our results indicate that population density declines markedly with increases in community species richness and that, after accounting for richness, energetic constraints are manifested most strongly for the most abundant species, which generally are of small body size and occupy lower trophic groups. Overall, our findings suggest that, at the global scale, factors associated with community species richness are the major drivers of variation in population density. Given that populations of species-rich tropical systems exhibit markedly lower maximum densities, they may be particularly susceptible to stochastic extinction. © 2016 The Author(s).
Energetic and ecological constraints on population density of reef fishes
Barneche, D. R.; Kulbicki, M.; Floeter, S. R.; Friedlander, A. M.; Allen, A. P.
2016-01-01
Population ecology has classically focused on pairwise species interactions, hindering the description of general patterns and processes of population abundance at large spatial scales. Here we use the metabolic theory of ecology as a framework to formulate and test a model that yields predictions linking population density to the physiological constraints of body size and temperature on individual metabolism, and the ecological constraints of trophic structure and species richness on energy partitioning among species. Our model was tested by applying Bayesian quantile regression to a comprehensive reef-fish community database, from which we extracted density data for 5609 populations spread across 49 sites around the world. Our results indicate that population density declines markedly with increases in community species richness and that, after accounting for richness, energetic constraints are manifested most strongly for the most abundant species, which generally are of small body size and occupy lower trophic groups. Overall, our findings suggest that, at the global scale, factors associated with community species richness are the major drivers of variation in population density. Given that populations of species-rich tropical systems exhibit markedly lower maximum densities, they may be particularly susceptible to stochastic extinction. PMID:26791611
NASA Astrophysics Data System (ADS)
Nagar, Lokesh; Dutta, Pankaj; Jain, Karuna
2014-05-01
In the present-day business scenario, instant changes in market demand, different sources of materials and manufacturing technologies force many companies to change their supply chain planning in order to tackle real-world uncertainty. The purpose of this paper is to develop a multi-objective two-stage stochastic programming supply chain model that incorporates imprecise production rate and supplier capacity under scenario-dependent fuzzy random demand associated with new product supply chains. The objectives are to maximise the supply chain profit, achieve the desired service level and minimise financial risk. The proposed model allows simultaneous determination of optimum supply chain design, procurement and production quantities across the different plants, and trade-offs between inventory and transportation modes for both inbound and outbound logistics. Analogous to chance constraints, we have used the possibility measure to quantify the demand uncertainties, and the model is solved using a fuzzy linear programming approach. An illustration is presented to demonstrate the effectiveness of the proposed model. Sensitivity analysis is performed for maximisation of the supply chain profit with respect to different confidence levels of service, risk and possibility measure. It is found that when one considers the service level and risk as robustness measures, the variability in profit is reduced.
Stochastic Geometric Models with Non-stationary Spatial Correlations in Lagrangian Fluid Flows
NASA Astrophysics Data System (ADS)
Gay-Balmaz, François; Holm, Darryl D.
2018-01-01
Inspired by spatiotemporal observations from satellites of the trajectories of objects drifting near the surface of the ocean in the National Oceanic and Atmospheric Administration's "Global Drifter Program", this paper develops data-driven stochastic models of geophysical fluid dynamics (GFD) with non-stationary spatial correlations representing the dynamical behaviour of oceanic currents. Three models are considered. Model 1 from Holm (Proc R Soc A 471:20140963, 2015) is reviewed, in which the spatial correlations are time independent. Two new models, called Model 2 and Model 3, introduce two different symmetry breaking mechanisms by which the spatial correlations may be advected by the flow. These models are derived using reduction by symmetry of stochastic variational principles, leading to stochastic Hamiltonian systems, whose momentum maps, conservation laws and Lie-Poisson bracket structures are used in developing the new stochastic Hamiltonian models of GFD.
Stochastic Watershed Models for Risk Based Decision Making
NASA Astrophysics Data System (ADS)
Vogel, R. M.
2017-12-01
Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology, providing stochastic streamflow models (SSMs) that could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed 'stochastic watershed models' (SWMs), useful as input to nearly all modern risk based water resource decision making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters and model errors, to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.
NASA Technical Reports Server (NTRS)
Kerstman, Eric; Minard, Charles; Saile, Lynn; deCarvalho, Mary Freire; Myers, Jerry; Walton, Marlei; Butler, Douglas; Iyengar, Sriram; Johnson-Throop, Kathy; Baumann, David
2009-01-01
The Integrated Medical Model (IMM) is a decision support tool that is useful to mission planners and medical system designers in assessing risks and designing medical systems for space flight missions. The IMM provides an evidence based approach for optimizing medical resources and minimizing risks within space flight operational constraints. The mathematical relationships among mission and crew profiles, medical condition incidence data, in-flight medical resources, potential crew functional impairments, and clinical end-states are established to determine probable mission outcomes. Stochastic computational methods are used to forecast probability distributions of crew health and medical resource utilization, as well as estimates of medical evacuation and loss of crew life. The IMM has been used in support of the International Space Station (ISS) medical kit redesign, the medical component of the ISS Probabilistic Risk Assessment, and the development of the Constellation Medical Conditions List. The IMM also will be used to refine medical requirements for the Constellation program. The IMM outputs for ISS and Constellation design reference missions will be presented to demonstrate the potential of the IMM in assessing risks, planning missions, and designing medical systems. The implementation of the IMM verification and validation plan will be reviewed. Additional planned capabilities of the IMM, including optimization techniques and the inclusion of a mission timeline, will be discussed. Given the space flight constraints of mass, volume, and crew medical training, the IMM is a valuable risk assessment and decision support tool for medical system design and mission planning.
Stochastic Education in Childhood: Examining the Learning of Teachers and Students
ERIC Educational Resources Information Center
de Souza, Antonio Carlos; Lopes, Celi Espasandin; de Oliveira, Débora
2014-01-01
This paper presents discussions on stochastic education in early childhood, based on two doctoral research projects carried out with groups of preschool teachers from public schools in the Brazilian cities of Suzano and São Paulo who were participating in a continuing education program. The objective is to reflect on the analysis of two didactic…
Bruhn, Peter; Geyer-Schulz, Andreas
2002-01-01
In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.
Self-organization, collective decision making and resource exploitation strategies in social insects
NASA Astrophysics Data System (ADS)
Nicolis, S. C.; Dussutour, A.
2008-10-01
Amplifying communications are a ubiquitous characteristic of group-living animals. This work is concerned with their role in the processes of food recruitment and resource exploitation by social insects. The collective choices made by ants faced with different food sources are analyzed using both a mean field description and a stochastic approach. Emphasis is placed on the possibility of optimizing the recruitment and exploitation strategies through an appropriate balance between individual variability, cooperative interactions and environmental constraints.
The role of predictive uncertainty in the operational management of reservoirs
NASA Astrophysics Data System (ADS)
Todini, E.
2014-09-01
The present work deals with the operational management of multi-purpose reservoirs, whose optimisation-based rules are derived, in the planning phase, via deterministic (linear and nonlinear programming, dynamic programming, etc.) or via stochastic (generally stochastic dynamic programming) approaches. In operation, the resulting deterministic or stochastic optimised operating rules are then triggered based on inflow predictions. In order to fully benefit from predictions, one must avoid using them as direct inputs to the reservoirs, but rather assess the "predictive knowledge" in terms of a predictive probability density to be operationally used in the decision making process for the estimation of expected benefits and/or expected losses. Using a theoretical and extremely simplified case, it will be shown why directly using model forecasts instead of the full predictive density leads to less robust reservoir management decisions. Moreover, the effectiveness and the tangible benefits for using the entire predictive probability density instead of the model predicted values will be demonstrated on the basis of the Lake Como management system, operational since 1997, as well as on the basis of a case study on the lake of Aswan.
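The core argument, that a decision optimised against the full predictive density outperforms one made by plugging a point forecast into the rule, can be illustrated with a toy single-release decision under an asymmetric loss. All numbers below are hypothetical, not taken from the Lake Como or Aswan studies.

```python
# Asymmetric loss (hypothetical): spill (inflow above the release decision)
# costs far more per unit than releasing water unnecessarily.
SPILL, WASTE = 10.0, 1.0

def loss(release, inflow):
    return (SPILL * max(0.0, inflow - release)
            + WASTE * max(0.0, release - inflow))

# Discretized predictive density of tomorrow's inflow: values, probabilities.
inflows = [2.0, 4.0, 6.0, 8.0, 10.0]
probs   = [0.1, 0.2, 0.4, 0.2, 0.1]

def expected_loss(release):
    return sum(p * loss(release, q) for p, q in zip(probs, inflows))

# Decision 1: plug the point forecast (the predictive mean) into the rule.
mean_inflow = sum(p * q for p, q in zip(probs, inflows))
plug_in_loss = expected_loss(mean_inflow)

# Decision 2: choose the release that minimises expected loss under the
# full predictive density (here by grid search over candidate releases).
candidates = [r / 10.0 for r in range(0, 121)]
best_release = min(candidates, key=expected_loss)
density_loss = expected_loss(best_release)
# The density-based decision hedges against the costly spill tail and
# cannot do worse in expectation than the plug-in decision.
```

With these numbers the plug-in rule releases at the mean inflow and incurs more than twice the expected loss of the density-based decision, which is exactly the robustness gap the paper demonstrates on its theoretical case.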
Yedid, G; Ofria, C A; Lenski, R E
2008-09-01
Re-evolution of complex biological features following the extinction of taxa bearing them remains one of evolution's most interesting phenomena, but is not amenable to study in fossil taxa. We used communities of digital organisms (computer programs that self-replicate, mutate and evolve), subjected to periods of low resource availability, to study the evolution, loss and re-evolution of a complex computational trait, the function EQU (bit-wise logical equals). We focused our analysis on cases where the pre-extinction EQU clade had surviving descendents at the end of the extinction episode. To see if these clades retained the capacity to re-evolve EQU, we seeded one set of multiple subreplicate 'replay' populations using the most abundant survivor of the pre-extinction EQU clade, and another set with the actual end-extinction ancestor of the organism in which EQU re-evolved following the extinction episode. Our results demonstrate that stochastic, historical, genomic and ecological factors can lead to constraints on further adaptation, and facilitate or hinder re-evolution of a complex feature.
Global optimization methods for engineering design
NASA Technical Reports Server (NTRS)
Arora, Jasbir S.
1990-01-01
The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality; however, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, existence of a global minimum is guaranteed. However, in view of the fact that no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy the inequality. The exhaustive search can be organized in such a way that the entire design space need not be searched for the solution, which somewhat reduces the computational burden. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods. More testing is needed, and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations; since the feasible set keeps shrinking, a good algorithm to find an initial feasible point is required. Such algorithms need to be developed and evaluated.
Locating PHEV exchange stations in V2G
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Feng; Bent, Russell; Berscheid, Alan
2010-01-01
Plug-in hybrid electric vehicles (PHEVs) are an environment-friendly modern transportation method and have been rapidly penetrating the transportation system. Renewable energy is another contributor to clean power, but the associated intermittence increases the uncertainty in power generation. As a foreseen benefit of a vehicle-to-grid (V2G) system, PHEV supporting infrastructures like battery exchange stations can provide battery service to PHEV customers as well as being plugged into a power grid as energy sources and stabilizers. The locations of exchange stations are important for these two objectives under constraints from both the transportation system and the power grid. To model this location problem and to understand and analyze the benefit of a V2G system, we develop a two-stage stochastic program to optimally locate the stations prior to the realizations of battery demands, loads, and generation capacity of renewable power sources. Based on this model, we use two data sets to construct the V2G systems and test the benefit and the performance of these systems.
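A minimal version of such a two-stage stochastic location model can be sketched as follows. The instance is entirely hypothetical (invented site costs, capacities, and demand scenarios), the recourse stage is collapsed to a single unmet-demand penalty, and the first stage is solved by brute-force enumeration, which only works at toy scale.

```python
from itertools import chain, combinations

# Hypothetical instance: candidate exchange-station sites, each with a
# build cost and a battery-service capacity, plus demand scenarios.
SITES = {"A": (50, 30), "B": (70, 50), "C": (40, 20)}   # site: (cost, capacity)
SCENARIOS = [(0.5, 60), (0.3, 90), (0.2, 40)]           # (probability, demand)
PENALTY = 5.0                                           # per unit unmet demand

def recourse_cost(open_sites, demand):
    """Second stage: serve demand up to installed capacity, pay for the rest."""
    capacity = sum(SITES[s][1] for s in open_sites)
    return PENALTY * max(0, demand - capacity)

def total_cost(open_sites):
    """First-stage build cost plus expected second-stage recourse cost."""
    build = sum(SITES[s][0] for s in open_sites)
    return build + sum(p * recourse_cost(open_sites, d) for p, d in SCENARIOS)

# First stage: enumerate all site subsets; realistic instances need a
# decomposition method (e.g. the L-shaped method) instead of enumeration.
all_subsets = chain.from_iterable(
    combinations(SITES, k) for k in range(len(SITES) + 1))
best_sites = min(all_subsets, key=total_cost)
```

The point of the two-stage structure is visible even at this scale: the siting decision is committed before demand is realized, and it is evaluated by its expected recourse cost across scenarios rather than against any single forecast.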
Barnett, Jason; Watson, Jean -Paul; Woodruff, David L.
2016-11-27
Progressive hedging (PH), though an effective heuristic for solving stochastic mixed integer programs (SMIPs), is not guaranteed to converge in the mixed integer case. Here, we describe BBPH, a branch and bound algorithm that uses PH at each node in the search tree such that, given sufficient time, it will always converge to a globally optimal solution. In addition to providing a theoretically convergent "wrapper" for PH applied to SMIPs, computational results demonstrate that for some difficult problem instances branch and bound can find improved solutions after exploring only a few nodes.
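The basic progressive hedging loop that BBPH wraps can be illustrated on a toy convex problem, where PH does converge without any branching. The scenario subproblems below are quadratic, so the proximal step solves in closed form; this is only the plain PH iteration on an invented instance, not the BBPH algorithm itself.

```python
def solve_subproblem(xi, w, rho, xbar):
    # argmin_x (x - xi)**2 + w*x + (rho/2)*(x - xbar)**2, in closed form
    return (2.0 * xi - w + rho * xbar) / (2.0 + rho)

def progressive_hedging(scenarios, rho=1.0, iters=200):
    """scenarios: list of (probability, xi) pairs.
    Returns the consensus first-stage decision xbar."""
    w = [0.0] * len(scenarios)      # scenario dual weights
    xbar = 0.0
    for _ in range(iters):
        xs = [solve_subproblem(xi, w[s], rho, xbar)
              for s, (_, xi) in enumerate(scenarios)]
        xbar = sum(p * xs[s] for s, (p, _) in enumerate(scenarios))
        for s in range(len(scenarios)):
            w[s] += rho * (xs[s] - xbar)   # push scenarios toward consensus
    return xbar

# min_x E[(x - xi)^2] has the closed-form optimum E[xi] = 3.7 here.
scenarios = [(0.5, 1.0), (0.3, 4.0), (0.2, 10.0)]
x_star = progressive_hedging(scenarios)
```

Once integer restrictions are added to the subproblems, this loop can cycle or stall at suboptimal consensus points, which is exactly the gap the branch and bound wrapper in the paper is designed to close.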
Programming Probabilistic Structural Analysis for Parallel Processing Computer
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Chamis, Christos C.; Murthy, Pappu L. N.
1991-01-01
The ultimate goal of this research program is to make Probabilistic Structural Analysis (PSA) computationally efficient and hence practical for the design environment by achieving large scale parallelism. The paper identifies the multiple levels of parallelism in PSA, identifies methodologies for exploiting this parallelism, describes the development of a parallel stochastic finite element code, and presents results of two example applications. It is demonstrated that speeds within five percent of those theoretically possible can be achieved. A special-purpose numerical technique, the stochastic preconditioned conjugate gradient method, is also presented and demonstrated to be extremely efficient for certain classes of PSA problems.
Simulation-based planning for theater air warfare
NASA Astrophysics Data System (ADS)
Popken, Douglas A.; Cox, Louis A., Jr.
2004-08-01
Planning for Theatre Air Warfare can be represented as a hierarchy of decisions. At the top level, surviving airframes must be assigned to roles (e.g., Air Defense, Counter Air, Close Air Support, and AAF Suppression) in each time period in response to changing enemy air defense capabilities, remaining targets, and roles of opposing aircraft. At the middle level, aircraft are allocated to specific targets to support their assigned roles. At the lowest level, routing and engagement decisions are made for individual missions. The decisions at each level form a set of time-sequenced Courses of Action taken by opposing forces. This paper introduces a set of simulation-based optimization heuristics operating within this planning hierarchy to optimize allocations of aircraft. The algorithms estimate distributions for stochastic outcomes of the pairs of Red/Blue decisions. Rather than using traditional stochastic dynamic programming to determine optimal strategies, we use an innovative combination of heuristics, simulation-optimization, and mathematical programming. Blue decisions are guided by a stochastic hill-climbing search algorithm while Red decisions are found by optimizing over a continuous representation of the decision space. Stochastic outcomes are then provided by fast, Lanchester-type attrition simulations. This paper summarizes preliminary results from top and middle level models.
Zhang, Dan; Wang, Qing-Guo; Srinivasan, Dipti; Li, Hongyi; Yu, Li
2018-05-01
This paper is concerned with asynchronous state estimation for a class of discrete-time switched complex networks with communication constraints. An asynchronous estimator is designed to overcome the difficulty that each node cannot access the topology/coupling information. Event-based communication, signal quantization, and random packet dropout are also studied, owing to limited communication resources. With the help of switched system theory and stochastic system analysis methods, a sufficient condition is proposed to guarantee the exponential stability of the estimation error system in the mean-square sense, and a prescribed performance level is also ensured. The characterization of the desired estimator gains is derived in terms of the solution to a convex optimization problem. Finally, the effectiveness of the proposed design approach is demonstrated by a simulation example.
Multiscale Cues Drive Collective Cell Migration
NASA Astrophysics Data System (ADS)
Nam, Ki-Hwan; Kim, Peter; Wood, David K.; Kwon, Sunghoon; Provenzano, Paolo P.; Kim, Deok-Ho
2016-07-01
To investigate complex biophysical relationships driving directed cell migration, we developed a biomimetic platform that allows perturbation of microscale geometric constraints with concomitant nanoscale contact guidance architectures. This permits us to elucidate the influence, and parse out the relative contribution, of multiscale features, and define how these physical inputs are jointly processed with oncogenic signaling. We demonstrate that collective cell migration is profoundly enhanced by the addition of contact guidance cues when not otherwise constrained. However, while nanoscale cues promoted migration in all cases, microscale directed migration cues are dominant as the geometric constraint narrows, a behavior that is well explained by stochastic diffusion anisotropy modeling. Further, oncogene activation (i.e. mutant PIK3CA) resulted in profoundly increased migration, where extracellular multiscale directed migration cues and intrinsic signaling synergistically conspire to greatly outperform normal cells or any extracellular guidance cues in isolation.
Constrained optimization via simulation models for new product innovation
NASA Astrophysics Data System (ADS)
Pujowidianto, Nugroho A.
2017-11-01
We consider the problem of constrained optimization where decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The review starts by laying out the different possible methods and the reasons for using constrained optimization via simulation models. It then reviews the different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
Equilibrium Reconstruction on the Large Helical Device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samuel A. Lazerson, D. Gates, D. Monticello, H. Neilson, N. Pomphrey, A. Reiman, S. Sakakibara, and Y. Suzuki
Equilibrium reconstruction is commonly applied to axisymmetric toroidal devices. Recent advances in computational power and equilibrium codes have allowed for reconstructions of three-dimensional fields in stellarators and heliotrons. We present the first reconstructions of finite beta discharges in the Large Helical Device (LHD). The plasma boundary and magnetic axis are constrained by the pressure profile from Thomson scattering. This results in a calculation of plasma beta without a-priori assumptions of the equipartition of energy between species. Saddle loop arrays place additional constraints on the equilibrium. These reconstructions utilize STELLOPT, which calls VMEC. The VMEC equilibrium code assumes good nested flux surfaces. Reconstructed magnetic fields are fed into the PIES code, which relaxes this constraint, allowing for the examination of the effect of islands and stochastic regions on the magnetic measurements.
[Stochastic model of infectious diseases transmission].
Ruiz-Ramírez, Juan; Hernández-Rodríguez, Gabriela Eréndira
2009-01-01
We propose a mathematical model that shows how population structure affects the size of infectious disease epidemics. This study was conducted during 2004 at the University of Colima. A generalized small-world network topology was used to represent contacts occurring within and between families. To that end, two MATLAB programs were written to calculate the efficiency of the network. A program in the C language was also developed to implement the stochastic susceptible-infectious-removed (SIR) model, and results were obtained for the number of infected people. Increasing the number of families connected by meeting sites increased the size of epidemics by roughly 400%. Population structure influences the rapid spread of infectious diseases, which can reach epidemic proportions.
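A minimal version of such a stochastic SIR simulation on a network with long-range shortcuts can be sketched as follows (the topology generator and all parameters are invented for illustration, not the study's Colima data):

```python
import random

# Hedged sketch: discrete-time stochastic SIR on a ring network with a
# few random shortcuts (a small-world-flavoured topology). All
# parameters are illustrative, not the study's values.

def ring_with_shortcuts(n, k=2, shortcuts=5, seed=1):
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for i in range(n):                      # ring lattice: link k nearest
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    for _ in range(shortcuts):              # long-range "meeting site" links
        a, b = rng.sample(range(n), 2)
        adj[a].add(b)
        adj[b].add(a)
    return [sorted(s) for s in adj]

def run_sir(adj, p_infect, p_recover, steps=200, seed=0):
    rng = random.Random(seed)
    state = ["S"] * len(adj)
    state[0] = "I"                          # patient zero
    for _ in range(steps):
        nxt = list(state)
        for i, s in enumerate(state):
            if s == "I":
                for j in adj[i]:            # infect susceptible neighbours
                    if state[j] == "S" and rng.random() < p_infect:
                        nxt[j] = "I"
                if rng.random() < p_recover:
                    nxt[i] = "R"
        state = nxt
        if "I" not in state:                # epidemic over
            break
    return state

final = run_sir(ring_with_shortcuts(50), p_infect=0.3, p_recover=0.2)
```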
Variable-Metric Algorithm For Constrained Optimization
NASA Technical Reports Server (NTRS)
Frick, James D.
1989-01-01
Variable Metric Algorithm for Constrained Optimization (VMACO) is a nonlinear computer program developed to calculate the least value of a function of n variables subject to general constraints, both equalities and inequalities. The first set of constraints comprises the equalities and the remaining constraints are inequalities. The program uses an iterative method to seek the optimal solution. Written in ANSI Standard FORTRAN 77.
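VMACO itself implements a variable-metric (quasi-Newton) update; as a rough stand-in that only illustrates the problem class it addresses (least value of a function subject to an equality constraint), a quadratic-penalty gradient descent with numerical gradients might look like this. The function, constraint, and all tuning values are invented for the example:

```python
# Hedged stand-in for the problem class VMACO solves: minimise f subject
# to equality constraints. This uses a simple quadratic penalty plus
# gradient descent, NOT VMACO's variable-metric update.

def num_grad(g, x, h=1e-6):
    """Central-difference numerical gradient."""
    return [(g(x[:i] + [x[i] + h] + x[i+1:]) -
             g(x[:i] + [x[i] - h] + x[i+1:])) / (2 * h)
            for i in range(len(x))]

def penalty_minimize(f, eq_cons, x0, mu=50.0, lr=0.005, iters=500):
    """Minimise f(x) + mu * sum c(x)^2 over the equality constraints c."""
    phi = lambda x: f(x) + mu * sum(c(x) ** 2 for c in eq_cons)
    x = list(x0)
    for _ in range(iters):
        g = num_grad(phi, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Example: minimise x^2 + y^2 subject to x + y = 1. The true optimum is
# (0.5, 0.5); the penalty solution is biased slightly toward the
# unconstrained minimum, by an amount shrinking as mu grows.
x = penalty_minimize(lambda v: v[0] ** 2 + v[1] ** 2,
                     [lambda v: v[0] + v[1] - 1.0],
                     [0.0, 0.0])
```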
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lemoine, Martin; Martin, Jerome; Yokoyama, Jun'ichi
2009-12-15
We set constraints on moduli cosmology from the production of dark matter-radiation and baryon-radiation isocurvature fluctuations through modulus decay, assuming the modulus remains light during inflation. We find that the moduli problem becomes worse at the perturbative level, as a significant part of the parameter space m_σ (modulus mass) - σ_inf (modulus vacuum expectation value at the end of inflation) is constrained by the nonobservation of significant isocurvature fluctuations. We discuss in detail the evolution of the modulus vacuum expectation value and perturbations, in particular the consequences of Hubble-scale corrections to the modulus potential, and the stochastic motion of the modulus during inflation. We show, in particular, that a high modulus mass scale m_σ ≳ 100 TeV, which allows the modulus to evade big bang nucleosynthesis constraints, is strongly constrained at the perturbative level. We find that generically, solving the moduli problem requires the inflationary scale to be much smaller than 10^13 GeV.
NASA Astrophysics Data System (ADS)
Kurdhi, N. A.; Nurhayati, R. A.; Wiyono, S. B.; Handajani, S. S.; Martini, T. S.
2017-01-01
In this paper, we develop an integrated inventory model considering imperfect quality items, inspection errors, controllable lead time, and a budget capacity constraint. The imperfect items are uniformly distributed and detected in the screening process, which is subject to two types of error: type I inspection error (a non-defective item classified as defective) and type II inspection error (a defective item classified as non-defective). The demand during the lead time is unknown and follows the normal distribution. The lead time can be controlled by adding a crashing cost. Furthermore, the budget capacity constraint arises from the limited purchasing budget. The purposes of this research are to modify the integrated vendor-buyer inventory model, to establish the optimal solution using the Kuhn-Tucker conditions, and to apply the models. Based on the results of the application and the sensitivity analysis, the integrated inventory model yields a lower total cost than managing the inventories separately.
Diffusive processes in a stochastic magnetic field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, H.; Vlad, M.; Vanden Eijnden, E.
1995-05-01
The statistical representation of a fluctuating (stochastic) magnetic field configuration is studied in detail. The Eulerian correlation functions of the magnetic field are determined, taking into account all geometrical constraints: these objects form a nondiagonal matrix. The Lagrangian correlations, within the reasonable Corrsin approximation, are reduced to a single scalar function, determined by an integral equation. The mean square perpendicular deviation of a geometrical point moving along a perturbed field line is determined by a nonlinear second-order differential equation. The separation of neighboring field lines in a stochastic magnetic field is studied. We find exponentiation lengths of both signs describing, in particular, a decay (on the average) of any initial anisotropy. The vanishing sum of these exponentiation lengths ensures the existence of an invariant which was overlooked in previous works. Next, the separation of a particle's trajectory from the magnetic field line to which it was initially attached is studied by a similar method. Here too an initial phase of exponential separation appears. Assuming the existence of a final diffusive phase, anomalous diffusion coefficients are found for both weakly and strongly collisional limits. The latter is identical to the well known Rechester-Rosenbluth coefficient, which is obtained here by a more quantitative (though not entirely deductive) treatment than in earlier works.
Maximum entropy principle for stationary states underpinned by stochastic thermodynamics.
Ford, Ian J
2015-11-01
The selection of an equilibrium state by maximizing the entropy of a system, subject to certain constraints, is often powerfully motivated as an exercise in logical inference, a procedure where conclusions are reached on the basis of incomplete information. But such a framework can be more compelling if it is underpinned by dynamical arguments, and we show how this can be provided by stochastic thermodynamics, where an explicit link is made between the production of entropy and the stochastic dynamics of a system coupled to an environment. The separation of entropy production into three components allows us to select a stationary state by maximizing the change, averaged over all realizations of the motion, in the principal relaxational or nonadiabatic component, equivalent to requiring that this contribution to the entropy production should become time independent for all realizations. We show that this recovers the usual equilibrium probability density function (pdf) for a conservative system in an isothermal environment, as well as the stationary nonequilibrium pdf for a particle confined to a potential under nonisothermal conditions, and a particle subject to a constant nonconservative force under isothermal conditions. The two remaining components of entropy production account for a recently discussed thermodynamic anomaly between over- and underdamped treatments of the dynamics in the nonisothermal stationary state.
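For the classical core of this construction, maximizing entropy subject to normalization and a fixed mean energy yields the Gibbs form p_i ∝ exp(-βE_i). A small numerical sketch (the energy levels and target mean energy are invented for illustration) solves for the Lagrange multiplier β by bisection:

```python
import math

# Hedged sketch of the classical constrained maximum-entropy problem:
# maximise S = -sum p_i ln p_i subject to sum p_i = 1 and a fixed mean
# energy. The stationary solution is the Gibbs form p_i ∝ exp(-beta*E_i);
# the multiplier beta is found here by bisection on the mean energy.

def gibbs_distribution(energies, mean_energy, beta_lo=-50.0, beta_hi=50.0):
    def mean_at(beta):
        w = [math.exp(-beta * e) for e in energies]
        z = sum(w)
        return sum(e * wi for e, wi in zip(energies, w)) / z

    lo, hi = beta_lo, beta_hi        # mean_at is decreasing in beta
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_at(mid) > mean_energy:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return beta, [wi / z for wi in w]

# Example: three levels with an illustrative target mean energy of 0.8.
beta, p = gibbs_distribution([0.0, 1.0, 2.0], 0.8)
```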
NASA Astrophysics Data System (ADS)
Ramos, José A.; Mercère, Guillaume
2016-12-01
In this paper, we present an algorithm for identifying two-dimensional (2D) causal, recursive and separable-in-denominator (CRSD) state-space models in the Roesser form with deterministic-stochastic inputs. The algorithm implements the N4SID, PO-MOESP and CCA methods, which are well known in the literature on 1D system identification, but here we do so for the 2D CRSD Roesser model. The algorithm solves the 2D system identification problem by maintaining the constraint structure imposed by the problem (i.e. Toeplitz and Hankel) and computes the horizontal and vertical system orders, system parameter matrices and covariance matrices of a 2D CRSD Roesser model. From a computational point of view, the algorithm has been presented in a unified framework, where the user can select which of the three methods to use. Furthermore, the identification task is divided into three main parts: (1) computing the deterministic horizontal model parameters, (2) computing the deterministic vertical model parameters and (3) computing the stochastic components. Specific attention has been paid to the computation of a stabilised Kalman gain matrix and a positive real solution when required. The efficiency and robustness of the unified algorithm have been demonstrated via a thorough simulation example.
Maxwell's demon and the management of ignorance in stochastic thermodynamics
NASA Astrophysics Data System (ADS)
Ford, Ian J.
2016-07-01
It is nearly 150 years since Maxwell challenged the validity of the second law of thermodynamics by imagining a tiny creature who could sort the molecules of a gas in such a way that would decrease entropy without exerting any work. The demon has been discussed largely using thought experiments, but it has recently become possible to exert control over nanoscale systems, just as Maxwell imagined, and the status of the second law has become a more practical matter, raising the issue of how measurements manage our ignorance in a way that can be exploited. The framework of stochastic thermodynamics extends macroscopic concepts such as heat, work, entropy and irreversibility to small systems and allows us to explore the matter. Some arguments against a successful demon imply a second law that can be suspended indefinitely until we dissipate energy in order to remove the records of his operations. In contrast, under stochastic thermodynamics, the demon fails because on average, more work is performed upfront in making a measurement than can be extracted by exploiting the outcome. This requires us to exclude systems and a demon that evolve under what might be termed self-sorting dynamics, and we reflect on the constraints on control that this implies while still working within a thermodynamic framework.
First assembly times and equilibration in stochastic coagulation-fragmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
D’Orsogna, Maria R.; Department of Mathematics, CSUN, Los Angeles, California 91330-8313; Lei, Qi
2015-07-07
We develop a fully stochastic theory for coagulation and fragmentation (CF) in a finite system with a maximum cluster size constraint. The process is modeled using a high-dimensional master equation for the probabilities of cluster configurations. For certain realizations of total mass and maximum cluster sizes, we find exact analytical results for the expected equilibrium cluster distributions. If coagulation is fast relative to fragmentation and if the total system mass is indivisible by the mass of the largest allowed cluster, we find a mean cluster-size distribution that is strikingly broader than that predicted by the corresponding mass-action equations. Combinations of total mass and maximum cluster size under which equilibration is accelerated, eluding late-stage coarsening, are also delineated. Finally, we compute the mean time it takes particles to first assemble into a maximum-sized cluster. Through careful state-space enumeration, the scaling of mean assembly times is derived for all combinations of total mass and maximum cluster size. We find that CF accelerates assembly relative to monomer kinetics only in special cases. All of our results hold in the infinite system limit and can only be derived from a high-dimensional discrete stochastic model, highlighting how classical mass-action models of self-assembly can fail.
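The finite-system process described above can be mimicked with a small Gillespie-style simulation. The rates, sizes, and size-independent kernels below are invented for illustration; the paper's master equation is far more general. Note how the maximum-cluster-size constraint simply removes coagulation events from the event list:

```python
import random

# Hedged Gillespie-style sketch of stochastic coagulation-fragmentation
# (CF) with a maximum cluster size. Size-independent rate kernels are
# assumed for simplicity; all parameters are invented.

def simulate_cf(total_mass, max_size, k_coag=1.0, k_frag=0.1,
                t_end=20.0, seed=0):
    rng = random.Random(seed)
    clusters = [1] * total_mass                  # start as monomers
    t = 0.0
    while t < t_end:
        # coagulation events respecting the max-size constraint
        pairs = [(i, j) for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))
                 if clusters[i] + clusters[j] <= max_size]
        frags = [i for i, c in enumerate(clusters) if c > 1]
        rate = k_coag * len(pairs) + k_frag * len(frags)
        if rate == 0:
            break
        t += rng.expovariate(rate)               # Gillespie waiting time
        if rng.random() < k_coag * len(pairs) / rate:
            i, j = rng.choice(pairs)             # merge two clusters
            merged = clusters[i] + clusters[j]
            clusters = [c for k, c in enumerate(clusters)
                        if k not in (i, j)] + [merged]
        else:
            i = rng.choice(frags)                # split one cluster
            c = clusters.pop(i)
            cut = rng.randint(1, c - 1)
            clusters += [cut, c - cut]
    return clusters

clusters = simulate_cf(total_mass=8, max_size=3)
```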
Spreading paths in partially observed social networks
NASA Astrophysics Data System (ADS)
Onnela, Jukka-Pekka; Christakis, Nicholas A.
2012-03-01
Understanding how and how far information, behaviors, or pathogens spread in social networks is an important problem, having implications for both predicting the size of epidemics, as well as for planning effective interventions. There are, however, two main challenges for inferring spreading paths in real-world networks. One is the practical difficulty of observing a dynamic process on a network, and the other is the typical constraint of only partially observing a network. Using static, structurally realistic social networks as platforms for simulations, we juxtapose three distinct paths: (1) the stochastic path taken by a simulated spreading process from source to target; (2) the topologically shortest path in the fully observed network, and hence the single most likely stochastic path, between the two nodes; and (3) the topologically shortest path in a partially observed network. In a sampled network, how closely does the partially observed shortest path (3) emulate the unobserved spreading path (1)? Although partial observation inflates the length of the shortest path, the stochastic nature of the spreading process also frequently derails the dynamic path from the shortest path. We find that the partially observed shortest path does not necessarily give an inflated estimate of the length of the process path; in fact, partial observation may, counterintuitively, make the path seem shorter than it actually is.
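The contrast between paths (2) and (3) can be made concrete with a toy graph (the 6-node graph below is invented): removing an unobserved node from the graph lengthens the shortest path recoverable from the sampled network.

```python
from collections import deque

# Hedged illustration of paths (2) and (3): BFS shortest-path length in
# a fully observed graph versus the same graph with an unobserved node
# removed. The 6-node graph is invented for illustration.

def shortest_path_len(adj, src, dst, removed=frozenset()):
    if src in removed or dst in removed:
        return None
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return dist[u]
        for v in adj[u]:
            if v not in removed and v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None                                  # dst unreachable

adj = {0: [1, 4], 1: [0, 2, 3], 2: [1, 3],
       3: [1, 2, 5], 4: [0, 5], 5: [4, 3]}
full = shortest_path_len(adj, 0, 3)                  # shortcut via node 1
partial = shortest_path_len(adj, 0, 3, removed={1})  # node 1 unobserved
```

Here the fully observed shortest path (0-1-3) has length 2, while the partially observed network forces the detour 0-4-5-3 of length 3; a simulated stochastic spreading process could realize either route.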
Groundwater management under uncertainty using a stochastic multi-cell model
NASA Astrophysics Data System (ADS)
Joodavi, Ata; Zare, Mohammad; Ziaei, Ali Naghi; Ferré, Ty P. A.
2017-08-01
The optimization of spatially complex groundwater management models over long time horizons requires the use of computationally efficient groundwater flow models. This paper presents a new stochastic multi-cell lumped-parameter aquifer model that explicitly considers uncertainty in groundwater recharge. To achieve this, the multi-cell model is combined with the constrained-state formulation method. In this method, the lower and upper bounds of groundwater heads are incorporated into the mass balance equation using indicator functions. This provides expressions for the means, variances and covariances of the groundwater heads, which can be included in the constraint set in an optimization model. This method was used to formulate two separate stochastic models: (i) groundwater flow in a two-cell aquifer model with normal and non-normal distributions of groundwater recharge; and (ii) groundwater management in a multiple cell aquifer in which the differences between groundwater abstractions and water demands are minimized. The comparison between the results obtained from the proposed modeling technique with those from Monte Carlo simulation demonstrates the capability of the proposed models to approximate the means, variances and covariances. Significantly, considering covariances between the heads of adjacent cells allows a more accurate estimate of the variances of the groundwater heads. Moreover, this modeling technique requires no discretization of state variables, thus offering an efficient alternative to computationally demanding methods.
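The role of recharge uncertainty in a lumped-parameter cell can be illustrated with a one-cell Monte Carlo sketch (all coefficients invented, and the head-bound indicator functions of the paper are omitted): for this linear balance the final head has analytically known mean and variance, which the sampled statistics should approximate.

```python
import random

# Hedged one-cell sketch of the lumped water-balance idea: head updates
# as h <- h + (recharge - pumping)/storage with Gaussian recharge. All
# coefficients are invented. For this linear model the final head has
# mean h0 + T*(mu - q)/s and variance T*sigma^2/s^2, so Monte Carlo
# statistics can be checked against the analytic values.

def monte_carlo_heads(h0, mu, sigma, q, s, steps, n_samples, seed=0):
    rng = random.Random(seed)
    finals = []
    for _ in range(n_samples):
        h = h0
        for _ in range(steps):
            h += (rng.gauss(mu, sigma) - q) / s
        finals.append(h)
    mean = sum(finals) / n_samples
    var = sum((x - mean) ** 2 for x in finals) / (n_samples - 1)
    return mean, var

H0, MU, SIGMA, Q, S, T = 100.0, 2.0, 0.5, 1.5, 10.0, 12
mean_mc, var_mc = monte_carlo_heads(H0, MU, SIGMA, Q, S, T, n_samples=5000)
mean_an = H0 + T * (MU - Q) / S          # analytic mean: 100.6
var_an = T * SIGMA ** 2 / S ** 2         # analytic variance: 0.03
```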
A programing system for research and applications in structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Rogers, J. L., Jr.
1981-01-01
The flexibility necessary for such diverse utilizations is achieved by combining, in a modular manner, a state-of-the-art optimization program, a production level structural analysis program, and user supplied and problem dependent interface programs. Standard utility capabilities in modern computer operating systems are used to integrate these programs. This approach results in flexibility of the optimization procedure organization and versatility in the formulation of constraints and design variables. Features shown in numerical examples include: variability of structural layout and overall shape geometry, static strength and stiffness constraints, local buckling failure, and vibration constraints.
NASA Astrophysics Data System (ADS)
Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi
2017-01-01
This paper discusses the design of the optimal preview controller for a linear continuous-time stochastic control system over a finite-time horizon, using the method of the augmented error system. First, an assistant system is introduced for state shifting. Then, to overcome the difficulty that the state equation of the stochastic control system cannot be differentiated because of Brownian motion, an integrator is introduced. Thus, the augmented error system, which contains the integrator vector, control input, reference signal, error vector and state of the system, is constructed. This transforms the tracking problem of optimal preview control of the linear stochastic control system into the optimal output tracking problem of the augmented error system. With the method of dynamic programming from stochastic control theory, the optimal controller of the augmented error system, which incorporates previewable signals and is equivalent to the controller of the original system, is obtained. Finally, numerical simulations show the effectiveness of the controller.
Huang, Wei; Shi, Jun; Yen, R T
2012-12-01
The objective of our study was to develop a program for computing the transit-time frequency distributions of red blood cells in the human pulmonary circulation, based on our anatomical and elasticity data for blood vessels in the human lung. A stochastic simulation model was introduced to simulate blood flow in the human pulmonary circulation; in this model, the connectivity data of pulmonary blood vessels in the human lung were converted into a probability matrix. Based on this model, the transit time of red blood cells in the human pulmonary circulation and the output blood pressure were studied. Additionally, the stochastic simulation model can be used to predict changes of blood flow in the human pulmonary circulation, with the advantages of lower computing cost and higher flexibility. In conclusion, a stochastic simulation approach was introduced to simulate blood flow in the hierarchical structure of the pulmonary circulation system, and to calculate the transit-time distributions and the blood pressure outputs.
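The probability-matrix idea can be illustrated with a tiny absorbing Markov chain (the 4-state topology and residence times below are invented, not the paper's vessel data): the expected transit time from each state satisfies t_i = tau_i + sum_j P[i][j]*t_j, solvable here by fixed-point iteration.

```python
# Hedged sketch of the probability-matrix idea: blood vessels as states
# of an absorbing Markov chain. Expected transit time from each state
# satisfies t_i = tau_i + sum_j P[i][j]*t_j, solved by fixed-point
# iteration. The 4-state topology and residence times are invented.

def expected_transit(P, tau, start, iters=200):
    n = len(tau)
    t = [0.0] * n
    for _ in range(iters):
        t = [tau[i] + sum(P[i][j] * t[j] for j in range(n)) for i in range(n)]
    return t[start]

# States: 0 = arteriole, 1 and 2 = two capillary routes, 3 = venous exit.
P = [[0.0, 0.5, 0.5, 0.0],    # arteriole branches evenly
     [0.0, 0.0, 0.0, 1.0],
     [0.0, 0.0, 0.0, 1.0],
     [0.0, 0.0, 0.0, 0.0]]    # absorbing exit
tau = [1.0, 2.0, 4.0, 0.0]    # residence time per state
mean_transit = expected_transit(P, tau, start=0)
```

For this chain the expectation can be checked by hand: 1.0 + 0.5*2.0 + 0.5*4.0 = 4.0.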
2017-01-05
SUBJECT TERMS: logistics, attrition, discrete event simulation, Simkit, LBC. ...stochastics, and discrete event model programmed in Java, building largely on the Simkit library. The primary purpose of the LBC model is to support...equations makes them incompatible with the discrete event construct of LBC. Bullard further advances this methodology by developing a stochastic...
Toward Control of Universal Scaling in Critical Dynamics
2016-01-27
Uwe C. Täuber, Michel Pleimling, Daniel J. Stilwell
This program aims to synergistically combine two powerful and very successful theories for non-linear stochastic dynamics of cooperative multi-component systems, namely...
Mathematical Sciences Division 1992 Programs
1992-10-01
...statistical theory that underlies modern signal analysis. There is a strong emphasis on stochastic processes and time series, particularly those which...include optimal resource planning and real-time scheduling of stochastic shop-floor processes. Scheduling systems will be developed that can adapt to...make forecasts for the length-of-service time series. Protocol analysis of these sessions will be used to identify relevant contextual features and to...
NASA Astrophysics Data System (ADS)
Neate, Andrew; Truman, Aubrey
2016-05-01
Little is known about dark matter particles save that their most important interactions with ordinary matter are gravitational and that, if they exist, they are stable, slow moving and relatively massive. Based on these assumptions, a semiclassical approximation to the Schrödinger equation under the action of a Coulomb potential should be relevant for modelling their behaviour. We investigate the semiclassical limit of the Schrödinger equation for a particle of mass M under a Coulomb potential in the context of Nelson's stochastic mechanics. This is done using a Freidlin-Wentzell asymptotic series expansion in the parameter ε = √(ħ/M) for the Nelson diffusion. It is shown that for wave functions ψ ∼ exp((R + iS)/ε²), where R and S are real valued, the ε = 0 behaviour is governed by a constrained Hamiltonian system with Hamiltonian H^r and constraint H^i = 0, where the superscripts r and i denote the real and imaginary parts of the Bohr correspondence limit of the quantum mechanical Hamiltonian, independent of Nelson's ideas. Nelson's stochastic mechanics is restored in dealing with the nodal surface singularities and by computing (correct to first order in ε) the relevant diffusion process in terms of Jacobi fields, thereby revealing Kepler's laws in a new light. The key here is that the constrained Hamiltonian system has just two solutions, corresponding to the forward and backward drifts in Nelson's stochastic mechanics. We discuss the application of this theory to modelling dark matter particles under the influence of a large gravitating point mass.
Jenkins, Dafyd J; Stekel, Dov J
2010-02-01
Gene regulation is one important mechanism in producing observed phenotypes and heterogeneity. Consequently, the study of gene regulatory network (GRN) architecture, function and evolution now forms a major part of modern biology. However, it is impossible to experimentally observe the evolution of GRNs on the timescales on which living species evolve. In silico evolution provides an approach to studying the long-term evolution of GRNs, but many models have either considered network architecture from non-adaptive evolution, or evolution to non-biological objectives. Here, we address a number of important modelling and biological questions about the evolution of GRNs to the realistic goal of biomass production. Can different commonly used simulation paradigms, in particular deterministic and stochastic Boolean networks, with and without basal gene expression, be used to compare adaptive with non-adaptive evolution of GRNs? Are these paradigms together with this goal sufficient to generate a range of solutions? Will the interaction between a biological goal and evolutionary dynamics produce trade-offs between growth and mutational robustness? We show that stochastic basal gene expression forces shrinkage of genomes due to energetic constraints and is a prerequisite for some solutions. In systems that are able to evolve rates of basal expression, two optima, one with and one without basal expression, are observed. Simulation paradigms without basal expression generate bloated networks with non-functional elements. Further, a range of functional solutions was observed under identical conditions only in stochastic networks. Moreover, there are trade-offs between efficiency and yield, indicating an inherent intertwining of fitness and evolutionary dynamics.
Runway Operations Planning: A Two-Stage Solution Methodology
NASA Technical Reports Server (NTRS)
Anagnostakis, Ioannis; Clarke, John-Paul
2003-01-01
The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. Thus, ROP is a critical component of airport operations planning in general and surface operations planning in particular. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, may be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. Generating optimal runway operations plans was approached in earlier work with a 'one-stage' optimization routine that considered all the desired objectives and constraints, and the characteristics of each aircraft (weight class, destination, ATC constraints) at the same time. Since, however, at any given point in time, there is less uncertainty in the predicted demand for departure resources in terms of weight class than in terms of specific aircraft, the ROP problem can be parsed into two stages. In the context of the Departure Planner (DP) research project, this paper introduces ROP as part of the wider Surface Operations Optimization (SOO) and describes a proposed 'two-stage' heuristic algorithm for solving the ROP problem. Focus is specifically given to including runway crossings in the planning process of runway operations. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions.
In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft, by solving an integer program. Preliminary results from the algorithm implementation on real-world traffic data are included.
NASA Technical Reports Server (NTRS)
Kerstman, Eric; Saile, Lynn; Freire de Carvalho, Mary; Myers, Jerry; Walton, Marlei; Butler, Douglas; Lopez, Vilma
2011-01-01
Introduction The Integrated Medical Model (IMM) is a decision support tool that is useful to space flight mission managers and medical system designers in assessing risks and optimizing medical systems. The IMM employs an evidence-based, probabilistic risk assessment (PRA) approach within the operational constraints of space flight. Methods Stochastic computational methods are used to forecast probability distributions of medical events, crew health metrics, medical resource utilization, and probability estimates of medical evacuation and loss of crew life. The IMM can also optimize medical kits within the constraints of mass and volume for specified missions. The IMM was used to forecast medical evacuation and loss of crew life probabilities, as well as crew health metrics for a near-earth asteroid (NEA) mission. An optimized medical kit for this mission was proposed based on the IMM simulation. Discussion The IMM can provide information to the space program regarding medical risks, including crew medical impairment, medical evacuation and loss of crew life. This information is valuable to mission managers and the space medicine community in assessing risk and developing mitigation strategies. Exploration missions such as NEA missions will have significant mass and volume constraints applied to the medical system. Appropriate allocation of medical resources will be critical to mission success. The IMM capability of optimizing medical systems based on specific crew and mission profiles will be advantageous to medical system designers. Conclusion The IMM is a decision support tool that can provide estimates of the impact of medical events on human space flight missions, such as crew impairment, evacuation, and loss of crew life. It can be used to support the development of mitigation strategies and to propose optimized medical systems for specified space flight missions. 
Learning Objectives The audience will learn how an evidence-based decision support tool can be used to help assess risk, develop mitigation strategies, and optimize medical systems for exploration space flight missions.
Decentralized stochastic control
NASA Technical Reports Server (NTRS)
Speyer, J. L.
1980-01-01
Decentralized stochastic control is characterized by being decentralized in that the information available to one controller is not the same as the information available to another controller. The system, including its information, has a stochastic or uncertain component. This complicates the development of decision rules, which one would otherwise determine under the assumption that the system is deterministic. The system is dynamic, meaning that present decisions affect future system responses and the information in the system. This circumstance presents a complex problem in which tools like dynamic programming are no longer applicable. These difficulties are discussed from an intuitive viewpoint. Particular assumptions are introduced which allow a limited theory that produces mechanizable affine decision rules.
A Discussion of Issues in Integrity Constraint Monitoring
NASA Technical Reports Server (NTRS)
Fernandez, Francisco G.; Gates, Ann Q.; Cooke, Daniel E.
1998-01-01
In the development of large-scale software systems, analysts, designers, and programmers identify properties of data objects in the system. The ability to check those assertions during runtime is desirable as a means of verifying the integrity of the program. Typically, programmers ensure the satisfaction of such properties through some form of manually embedded assertion check. The disadvantage of this approach is that these assertions become entangled within the program code. The goal of the research is to develop an integrity constraint monitoring mechanism whereby software system properties (called integrity constraints) held in a repository are automatically inserted into the program by the mechanism to check for incorrect program behaviors. Such a mechanism would overcome many of the deficiencies of manually embedded assertion checks. This paper gives an overview of the preliminary work performed toward this goal. The manual instrumentation of constraint checking on a series of test programs is discussed. This review is then used as the basis for a discussion of issues to be considered in developing an automated integrity constraint monitor.
NASA Technical Reports Server (NTRS)
Mehra, R. K.; Rouhani, R.; Jones, S.; Schick, I.
1980-01-01
A model to assess the value of improved information regarding the inventories, production, exports, and imports of crops on a worldwide basis is discussed. A previously proposed model is interpreted in a stochastic control setting and the underlying assumptions of the model are revealed. In solving the stochastic optimization problem, the Markov programming approach is much more powerful and exact compared with the dynamic programming-simulation approach of the original model. The convergence of a dual variable Markov programming algorithm is shown to be fast and efficient. A computer program for the general multicountry, multiperiod model is developed. As an example, the one-country, two-period case is treated and the results are presented in detail. A comparison with the original model results reveals certain interesting aspects of the algorithms and the dependence of the value of information on the incremental cost function.
Condition-dependent mate choice: A stochastic dynamic programming approach.
Frame, Alicia M; Mills, Alex F
2014-09-01
We study how changing female condition during the mating season and condition-dependent search costs impact female mate choice, and what strategies a female could employ in choosing mates to maximize her own fitness. We address this problem via a stochastic dynamic programming model of mate choice. In the model, a female encounters males sequentially and must choose whether to mate or continue searching. As the female searches, her own condition changes stochastically, and she incurs condition-dependent search costs. The female attempts to maximize the quality of the offspring, which is a function of the female's condition at mating and the quality of the male with whom she mates. The mating strategy that maximizes the female's net expected reward is a quality threshold. We compare the optimal policy with other well-known mate choice strategies, and we use simulations to examine how well the optimal policy fares under imperfect information. Copyright © 2014 Elsevier Inc. All rights reserved.
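The backward-induction structure described in this abstract can be sketched in a few lines. The horizon, quality levels, condition dynamics, search cost, and the reward c × q below are illustrative assumptions, not the authors' parameterization; the point is only to show how the optimal accept/reject rule emerges as a quality threshold from a stochastic dynamic program.

```python
# Hedged sketch of a finite-horizon mate-choice dynamic program.
# All parameters are invented for illustration.

T = 10                           # periods left in the mating season
QUALITIES = [1, 2, 3, 4, 5]      # male qualities, met uniformly at random
CONDITIONS = list(range(1, 11))  # female condition levels
COST = 1                         # condition lost per period of searching

def reward(c, q):
    # offspring quality as a function of female condition and male quality
    return c * q

def solve():
    # V[t][c] = expected reward with t periods left and condition c;
    # backward induction over the finite horizon
    V = {0: {c: 0.0 for c in CONDITIONS}}
    policy = {}
    for t in range(1, T + 1):
        V[t] = {}
        for c in CONDITIONS:
            # value of searching on: condition drops by COST (floored)
            cont = V[t - 1][max(c - COST, CONDITIONS[0])]
            # accept male of quality q iff immediate reward beats searching
            vals = [max(reward(c, q), cont) for q in QUALITIES]
            V[t][c] = sum(vals) / len(QUALITIES)
            policy[(t, c)] = min((q for q in QUALITIES if reward(c, q) >= cont),
                                 default=None)
    return V, policy

V, policy = solve()
```

`policy[(t, c)]` is the quality threshold: with t periods left and condition c, the female accepts the first male whose quality reaches it; in the last period the threshold collapses to the lowest quality, as expected.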
Stochastic Optimization for Unit Commitment-A Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Qipeng P.; Wang, Jianhui; Liu, Andrew L.
2015-07-01
Optimization models have been widely used in the power industry to aid the decision-making process of scheduling and dispatching electric power generation resources, a process known as unit commitment (UC). Since UC's birth, there have been two major waves of revolution in UC research and real-life practice. The first wave made mixed integer programming stand out from the early solution and modeling approaches for deterministic UC, such as priority lists, dynamic programming, and Lagrangian relaxation. With the high penetration of renewable energy, increasing deregulation of the electricity industry, and growing demands on system reliability, the next wave is focused on transitioning from traditional deterministic approaches to stochastic optimization for unit commitment. Since the literature has grown rapidly in the past several years, this paper reviews the works that have contributed to the modeling and computational aspects of stochastic optimization (SO) based UC. Relevant lines of future research are also discussed to help transform research advances into real-world applications.
A supplier selection and order allocation problem with stochastic demands
NASA Astrophysics Data System (ADS)
Zhou, Yun; Zhao, Lei; Zhao, Xiaobo; Jiang, Jianhua
2011-08-01
We consider a system comprising a retailer and a set of candidate suppliers that operates within a finite planning horizon of multiple periods. The retailer replenishes its inventory from the suppliers and satisfies stochastic customer demands. At the beginning of each period, the retailer makes decisions on the replenishment quantity, supplier selection and order allocation among the selected suppliers. An optimisation problem is formulated to minimise the total expected system cost, which includes an outer level stochastic dynamic program for the optimal replenishment quantity and an inner level integer program for supplier selection and order allocation with a given replenishment quantity. For the inner level subproblem, we develop a polynomial algorithm to obtain optimal decisions. For the outer level subproblem, we propose an efficient heuristic for the system with integer-valued inventory, based on the structural properties of the system with real-valued inventory. We investigate the efficiency of the proposed solution approach, as well as the impact of parameters on the optimal replenishment decision with numerical experiments.
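The structure of the inner-level subproblem above (supplier selection plus order allocation for a given replenishment quantity) can be illustrated with a small brute-force sketch. The supplier costs and capacities are invented; note that the paper's algorithm is polynomial, whereas this subset enumeration is exponential and is only meant to expose the problem's structure.

```python
from itertools import combinations

# Toy inner-level subproblem: given replenishment quantity Q, select
# suppliers and split the order to minimize fixed + variable cost under
# capacity limits. Data below are illustrative assumptions.

SUPPLIERS = {               # name: (fixed ordering cost, unit cost, capacity)
    "s1": (50.0, 4.0, 60),
    "s2": (30.0, 5.0, 40),
    "s3": (80.0, 3.0, 50),
}

def best_allocation(Q):
    best = (float("inf"), None)
    names = list(SUPPLIERS)
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            if sum(SUPPLIERS[s][2] for s in subset) < Q:
                continue  # this subset cannot cover the order
            # within a fixed subset, filling cheapest unit costs first is
            # optimal because costs are linear and only capacities bind
            cost, left, alloc = sum(SUPPLIERS[s][0] for s in subset), Q, {}
            for s in sorted(subset, key=lambda s: SUPPLIERS[s][1]):
                q = min(left, SUPPLIERS[s][2])
                alloc[s], left = q, left - q
                cost += q * SUPPLIERS[s][1]
            if cost < best[0]:
                best = (cost, alloc)
    return best

cost, alloc = best_allocation(90)
```

Subsets that leave a selected supplier unused never need to be considered separately, since the cheaper subset without that supplier is enumerated anyway; this keeps the greedy fill within each subset exact.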
NASA Astrophysics Data System (ADS)
Suo, M. Q.; Li, Y. P.; Huang, G. H.
2011-09-01
In this study, an inventory-theory-based interval-parameter two-stage stochastic programming (IB-ITSP) model is proposed by integrating inventory theory into an interval-parameter two-stage stochastic optimization framework. This method can not only address system uncertainties with complex presentation but also reflect the transfer batch (the quantity transferred at one time) and period (the corresponding cycle time) in decision-making problems. A case of water allocation in water resources management planning is studied to demonstrate the applicability of this method. Under different flow levels, different transfer measures are generated by this method when the promised water cannot be met. Moreover, interval solutions associated with different transfer costs are also provided. They can be used for generating decision alternatives and thus help water resources managers to identify desired policies. Compared with the ITSP method, the IB-ITSP model can provide a positive measure for solving water shortage problems and afford useful information for decision makers under uncertainty.
Optimization for routing vehicles of seafood product transportation
NASA Astrophysics Data System (ADS)
Soenandi, I. A.; Juan, Y.; Budi, M.
2017-12-01
Recently, increasing usage of marine products is creating new transportation challenges for marine-product businesses, which must carry products such as seafood to the main warehouse. This becomes a problem when the carrier fleet is limited and there are time constraints related to the freshness of the marine product. There are many ways to solve this problem, including the optimization of vehicle routing. In this study, the strategy is implemented for a marine-product business in Indonesia, with the aim of optimizing the company's transportation routing under time and capacity windows. Until now, the company has not used a scientific method to manage the routing of its vehicles from the warehouse to the marine-product sources. This study solves a stochastic Vehicle Routing Problem (VRP) with time and capacity windows by comparing six methods and selecting the best result, so that the company can choose the most suitable method for its existing conditions. In this research, we compared optimization methods including branch and bound, dynamic programming and Ant Colony Optimization (ACO). The best result was obtained with the ACO algorithm run on existing travel-time data: it reduced vehicle travel time by 3189.65 minutes, about 23% less than the existing schedule, under a time constraint of 2 days (including rest time for the driver), using a truck of 28 tons capacity, with the company needing two vehicles for transportation.
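As a concrete illustration of the ACO approach that performed best in this comparison, the following is a minimal ant-colony sketch for a small symmetric routing instance. The distance matrix, colony size and pheromone parameters are invented for the example and are unrelated to the company's data.

```python
import random

# Minimal ant colony optimization on a toy single-vehicle routing instance.
random.seed(0)

DIST = [               # symmetric travel times: depot (0) and 4 pickup sites
    [0, 4, 9, 7, 5],
    [4, 0, 3, 8, 6],
    [9, 3, 0, 4, 7],
    [7, 8, 4, 0, 2],
    [5, 6, 7, 2, 0],
]
N = len(DIST)
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 10.0  # pheromone/heuristic weights,
                                           # evaporation rate, deposit amount

def tour_length(tour):
    return sum(DIST[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def run_aco(ants=20, iters=50):
    tau = [[1.0] * N for _ in range(N)]    # pheromone on each edge
    best, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(ants):
            tour, unvisited = [0], set(range(1, N))
            while unvisited:
                i, cand = tour[-1], sorted(unvisited)
                # next-site probability ~ tau^alpha * (1/distance)^beta
                w = [tau[i][j] ** ALPHA / DIST[i][j] ** BETA for j in cand]
                nxt = random.choices(cand, weights=w)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)
            if tour_length(tour) < best_len:
                best, best_len = tour, tour_length(tour)
        # evaporate, then let each ant deposit pheromone along its tour
        tau = [[(1 - RHO) * x for x in row] for row in tau]
        for t in tours:
            for a, b in zip(t, t[1:] + t[:1]):
                tau[a][b] += Q / tour_length(t)
                tau[b][a] = tau[a][b]
    return best, best_len

best_tour, best_len = run_aco()
```

Real VRP instances add capacity and time-window feasibility checks inside the tour-construction loop; the pheromone update logic stays the same.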
NASA Astrophysics Data System (ADS)
Hejazi, Mohamad I.; Cai, Ximing
2011-06-01
In this paper, we promote a novel approach to develop reservoir operation routines by learning from historical hydrologic information and reservoir operations. The proposed framework involves a knowledge discovery step to learn the real drivers of reservoir decision making and to subsequently build a more realistic (enhanced) model formulation using stochastic dynamic programming (SDP). The enhanced SDP model is compared to two classic SDP formulations using Lake Shelbyville, a reservoir on the Kaskaskia River in Illinois, as a case study. From a data mining procedure with monthly data, the past month's inflow (Q_{t-1}), current month's inflow (Q_t), past month's release (R_{t-1}), and past month's Palmer drought severity index (PDSI_{t-1}) are identified as important state variables in the enhanced SDP model for Shelbyville Reservoir. When compared to a weekly enhanced SDP model of the same case study, a different set of state variables and constraints is extracted; thus different time scales for the model require different information. We demonstrate that adding state variables improves the solution by shifting the Pareto front as expected, while using new constraints and the correct objective function can significantly reduce the difference between derived policies and historical practices. The study indicates that the monthly enhanced SDP model resembles historical records more closely and yet provides lower expected average annual costs than either of the two classic formulations (25.4% and 4.5% reductions, respectively). Comparing the weekly enhanced SDP model to the monthly one shows that acquiring the correct temporal scale is crucial to modelling reservoir operation for particular objectives.
NASA Astrophysics Data System (ADS)
Panda, Satyasen
2018-05-01
This paper proposes a modified artificial bee colony (ABC) optimization algorithm based on levy flight swarm intelligence, referred to as artificial bee colony levy flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve an asset assignment problem based on signal to noise ratio (SNR) optimization in OCDMA networks with quality of service constraints. The proposed optimization using the ABC-LFSW algorithm provides methods for minimizing various noises and interferences, regulating the transmitted power and optimizing the network design to improve the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving the network performance with optimized input parameters. A detailed discussion and simulation results based on transmitted power allocation and power efficiency of OCPs are included. The experimental results prove the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.
Quantifying parameter uncertainty in stochastic models using the Box-Cox transformation
NASA Astrophysics Data System (ADS)
Thyer, Mark; Kuczera, George; Wang, Q. J.
2002-08-01
The Box-Cox transformation is widely used to transform hydrological data to make it approximately Gaussian. Bayesian evaluation of parameter uncertainty in stochastic models using the Box-Cox transformation is hindered by the fact that there is no analytical solution for the posterior distribution. However, the Markov chain Monte Carlo method known as the Metropolis algorithm can be used to simulate the posterior distribution. This method properly accounts for the nonnegativity constraint implicit in the Box-Cox transformation. Nonetheless, a case study using the AR(1) model uncovered a practical problem with the implementation of the Metropolis algorithm. The use of a multivariate Gaussian jump distribution resulted in unacceptable convergence behaviour. This was rectified by developing suitable parameter transformations for the mean and variance of the AR(1) process to remove the strong nonlinear dependencies with the Box-Cox transformation parameter. Applying this methodology to the Sydney annual rainfall data and the Burdekin River annual runoff data illustrates the efficacy of these parameter transformations and demonstrates the value of quantifying parameter uncertainty.
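The Metropolis step at the core of this methodology is easy to sketch. The target below is a stand-in, an unnormalized Gamma density on the nonnegative half-line rather than the paper's joint AR(1)/Box-Cox posterior, but the mechanics are the same: symmetric Gaussian jumps, accept/reject on the log-density ratio, and automatic handling of a nonnegativity constraint via a log-density of minus infinity.

```python
import math
import random

# Random-walk Metropolis sketch; the Gamma(3, 1) target is an illustrative
# stand-in for a posterior with no analytical form.
random.seed(1)

def log_target(x, shape=3.0, rate=1.0):
    # unnormalized log Gamma(shape, rate) density; -inf enforces x > 0,
    # mirroring a nonnegativity constraint on the sampled parameter
    if x <= 0:
        return -math.inf
    return (shape - 1) * math.log(x) - rate * x

def metropolis(n=20000, step=1.0, x0=1.0):
    samples, x, lp = [], x0, log_target(x0)
    for _ in range(n):
        prop = x + random.gauss(0.0, step)        # symmetric Gaussian jump
        lp_prop = log_target(prop)
        # accept with probability min(1, target(prop) / target(x))
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

draws = metropolis()
burned = draws[len(draws) // 2:]          # discard burn-in
mean = sum(burned) / len(burned)          # Gamma(3, 1) has mean 3
```

The convergence problem the paper reports corresponds to a poorly matched jump distribution; their fix, reparameterizing to weaken nonlinear dependencies, amounts to choosing coordinates in which this simple symmetric jump mixes well.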
NASA Astrophysics Data System (ADS)
Firpo, M.-C.; Constantinescu, D.
2011-03-01
The issue of magnetic confinement in magnetic fusion devices is addressed within a purely magnetic approach. Using some Hamiltonian models for the magnetic field lines, the dual impact of low magnetic shear is shown in a unified way. Away from resonances, it induces a drastic enhancement of magnetic confinement that favors robust internal transport barriers (ITBs) and stochastic transport reduction. When low shear occurs for values of the winding of the magnetic field lines close to low-order rationals, the amplitude thresholds of the resonant modes that break internal transport barriers by allowing a radial stochastic transport of the magnetic field lines may be quite low. The approach can be applied to assess the robustness versus magnetic perturbations of general (almost) integrable magnetic steady states, including nonaxisymmetric ones such as the important single-helicity steady states. This analysis puts a constraint on the tolerable mode amplitudes compatible with ITBs and may be proposed as a possible explanation of diverse experimental and numerical signatures of their collapses.
Faster PET reconstruction with a stochastic primal-dual hybrid gradient method
NASA Astrophysics Data System (ADS)
Ehrhardt, Matthias J.; Markiewicz, Pawel; Chambolle, Antonin; Richtárik, Peter; Schott, Jonathan; Schönlieb, Carola-Bibiane
2017-08-01
Image reconstruction in positron emission tomography (PET) is computationally challenging due to Poisson noise, constraints and potentially non-smooth priors, let alone the sheer size of the problem. An algorithm that can cope well with the first three of these challenges is the primal-dual hybrid gradient algorithm (PDHG) studied by Chambolle and Pock in 2011. However, PDHG updates all variables in parallel and is therefore computationally demanding on the large problem sizes encountered with modern PET scanners, where the number of dual variables easily exceeds 100 million. In this work, we numerically study SPDHG, a stochastic extension of PDHG that is still guaranteed to converge to a solution of the deterministic optimization problem, with rates similar to those of PDHG. Numerical results on a clinical data set show that by introducing randomization into PDHG, results similar to those of the deterministic algorithm can be achieved using only around 10% of the operator evaluations, making significant progress towards the feasibility of sophisticated mathematical models in a clinical setting.
New window into stochastic gravitational wave background.
Rotti, Aditya; Souradeep, Tarun
2012-11-30
A stochastic gravitational wave background (SGWB) would gravitationally lens the cosmic microwave background (CMB) photons. We correct the results provided in existing literature for modifications to the CMB polarization power spectra due to lensing by gravitational waves. Weak lensing by gravitational waves distorts all four CMB power spectra; however, its effect is most striking in the mixing of power between the E mode and B mode of CMB polarization. This suggests the possibility of using measurements of the CMB angular power spectra to constrain the energy density (Ω(GW)) of the SGWB. Using current data sets (QUAD, WMAP, and ACT), we find that the most stringent constraints on the present Ω(GW) come from measurements of the angular power spectra of CMB temperature anisotropies. In the near future, more stringent bounds on Ω(GW) can be expected with improved upper limits on the B modes of CMB polarization. Any detection of B modes of CMB polarization above the expected signal from large scale structure lensing could be a signal for a SGWB.
Role of sufficient statistics in stochastic thermodynamics and its implication to sensory adaptation
NASA Astrophysics Data System (ADS)
Matsumoto, Takumi; Sagawa, Takahiro
2018-04-01
A sufficient statistic is a significant concept in statistics: a random variable that carries all the information required for a given inference task. We investigate the roles of sufficient statistics and related quantities in stochastic thermodynamics. Specifically, we prove that for general continuous-time bipartite networks, the existence of a sufficient statistic implies that an informational quantity called the sensory capacity takes its maximum value. Since maximal sensory capacity imposes the constraint that the energetic efficiency cannot exceed one-half, our result implies that the existence of a sufficient statistic is inevitably accompanied by energetic dissipation. We also show that, in a particular parameter region of linear Langevin systems, there exists an optimal noise intensity at which the sensory capacity, the information-thermodynamic efficiency, and the total entropy production are optimized simultaneously. We apply our general result to a model of sensory adaptation in E. coli and find that the sensory capacity is nearly maximal with experimentally realistic parameters.
Constraints on Cosmic Strings from the LIGO-Virgo Gravitational-Wave Detectors
NASA Astrophysics Data System (ADS)
Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T.; Abernathy, M. R.; Accadia, T.; Acernese, F.; Adams, C.; Adams, T.; Adhikari, R. X.; Affeldt, C.; Agathos, M.; Aggarwal, N.; Aguiar, O. D.; Ajith, P.; Allen, B.; Allocca, A.; Amador Ceron, E.; Amariutei, D.; Anderson, R. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C.; Areeda, J.; Ast, S.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Austin, L.; Aylott, B. E.; Babak, S.; Baker, P. T.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barker, D.; Barnum, S. H.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th. S.; Bebronne, M.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Bell, C.; Belopolski, I.; Bergmann, G.; Berliner, J. M.; Bersanetti, D.; Bertolini, A.; Bessis, D.; Betzwieser, J.; Beyersdorf, P. T.; Bhadbhade, T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Blom, M.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogan, C.; Bond, C.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, S.; Bosi, L.; Bowers, J.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brannen, C. A.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brückner, F.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calderón Bustillo, J.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Castiglia, A.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. 
S.; Chow, J.; Christensen, N.; Chu, Q.; Chua, S. S. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. E.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Colombini, M.; Constancio, M.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M. W.; Coulon, J.-P.; Countryman, S.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Craig, K.; Creighton, J. D. E.; Creighton, T. D.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Canton, T. Dal; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Dayanga, T.; De Rosa, R.; Debreczeni, G.; Degallaix, J.; Del Pozzo, W.; Deleeuw, E.; Deléglise, S.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; DeRosa, R.; DeSalvo, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Virgilio, A.; Díaz, M.; Dietz, A.; Dmitry, K.; Donovan, F.; Dooley, K. L.; Doravari, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edwards, M.; Effler, A.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Endrőczi, G.; Essick, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W.; Favata, M.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R.; Flaminio, R.; Foley, E.; Foley, S.; Forsi, E.; Fotopoulos, N.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fujimoto, M.-K.; Fulda, P.; Fyffe, M.; Gair, J.; Gammaitoni, L.; Garcia, J.; Garufi, F.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; Gergely, L.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. 
D.; Giazotto, A.; Gil-Casanova, S.; Gill, C.; Gleason, J.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gordon, N.; Gorodetsky, M. L.; Gossan, S.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Griffo, C.; Groot, P.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hall, B.; Hall, E.; Hammer, D.; Hammond, G.; Hanke, M.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haughian, K.; Hayama, K.; Heefner, J.; Heidmann, A.; Heintze, M.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Holtrop, M.; Hong, T.; Hooper, S.; Horrom, T.; Hosken, D. J.; Hough, J.; Howell, E. J.; Hu, Y.; Hua, Z.; Huang, V.; Huerta, E. A.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh, M.; Huynh-Dinh, T.; Iafrate, J.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Iyer, B. R.; Izumi, K.; Jacobson, M.; James, E.; Jang, H.; Jang, Y. J.; Jaranowski, P.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kasprzack, M.; Kasturi, R.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kaufman, K.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kells, W.; Keppel, D. G.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, B. K.; Kim, C.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Koehlenbeck, S.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kremin, A.; Kringel, V.; Królak, A.; Kucharczyk, C.; Kudla, S.; Kuehn, G.; Kumar, A.; Kumar, P.; Kumar, R.; Kurdyumov, R.; Kwee, P.; Landry, M.; Lantz, B.; Larson, S.; Lasky, P. 
D.; Lawrie, C.; Lazzarini, A.; Le Roux, A.; Leaci, P.; Lebigot, E. O.; Lee, C.-H.; Lee, H. K.; Lee, H. M.; Lee, J.; Lee, J.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levine, B.; Lewis, J. B.; Lhuillier, V.; Li, T. G. F.; Lin, A. C.; Littenberg, T. B.; Litvine, V.; Liu, F.; Liu, H.; Liu, Y.; Liu, Z.; Lloyd, D.; Lockerbie, N. A.; Lockett, V.; Lodhia, D.; Loew, K.; Logue, J.; Lombardi, A. L.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Luan, J.; Lubinski, M. J.; Lück, H.; Lundgren, A. P.; Macarthur, J.; Macdonald, E.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magana-Sandoval, F.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Manca, G. M.; Mandel, I.; Mandic, V.; Mangano, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Martinelli, L.; Martynov, D.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; May, G.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; Meacher, D.; Meadors, G. D.; Mehmet, M.; Meidam, J.; Meier, T.; Melatos, A.; Mendell, G.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Mikhailov, E. E.; Milano, L.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohapatra, S. R. P.; Mokler, F.; Moraru, D.; Moreno, G.; Morgado, N.; Mori, T.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nagy, M. F.; Nanda Kumar, D.; Nardecchia, I.; Nash, T.; Naticchioni, L.; Nayak, R.; Necula, V.; Nelemans, G.; Neri, I.; Neri, M.; Newton, G.; Nguyen, T.; Nishida, E.; Nishizawa, A.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L. 
K.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oppermann, P.; O'Reilly, B.; Ortega Larcher, W.; O'Shaughnessy, R.; Osthelder, C.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Ou, J.; Overmier, H.; Owen, B. J.; Padilla, C.; Pai, A.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Paris, H.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Peiris, P.; Penn, S.; Perreca, A.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pinard, L.; Pindor, B.; Pinto, I. M.; Pitkin, M.; Poeld, J.; Poggiani, R.; Poole, V.; Poux, C.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Quintero, E.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Raja, S.; Rajalakshmi, G.; Rakhmanov, M.; Ramet, C.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Robertson, N. A.; Robinet, F.; Rocchi, A.; Roddy, S.; Rodriguez, C.; Rodruck, M.; Roever, C.; Rolland, L.; Rollins, J. G.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sandberg, V.; Sanders, J.; Sannibale, V.; Santiago-Prieto, I.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schreiber, E.; Schuette, D.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Seifert, F.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sergeev, A.; Shaddock, D.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siellez, K.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G. R.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, R. J. E.; Smith-Lefebvre, N. 
D.; Soden, K.; Son, E. J.; Sorazu, B.; Souradeep, T.; Sperandio, L.; Staley, A.; Steinert, E.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stevens, D.; Stochino, A.; Stone, R.; Strain, K. A.; Straniero, N.; Strigin, S.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Szeifert, G.; Tacca, M.; Talukder, D.; Tang, L.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thirugnanasambandam, M. P.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. I.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Unnikrishnan, C. S.; Vahlbruch, H.; Vajente, G.; Vallisneri, M.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van der Sluys, M. V.; van Heijningen, J.; van Veggel, A. A.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Verma, S.; Vetrano, F.; Viceré, A.; Vincent-Finley, R.; Vinet, J.-Y.; Vitale, S.; Vlcek, B.; Vo, T.; Vocca, H.; Vorvick, C.; Vousden, W. D.; Vrinceanu, D.; Vyachanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Waldman, S. J.; Walker, M.; Wallace, L.; Wan, Y.; Wang, J.; Wang, M.; Wang, X.; Wanner, A.; Ward, R. L.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wibowo, S.; Wiesner, K.; Wilkinson, C.; Williams, L.; Williams, R.; Williams, T.; Willis, J. L.; Willke, B.; Wimmer, M.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yancey, C. C.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yum, H.; Yvert, M.; ZadroŻny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, F.; Zhang, L.; Zhao, C.; Zhu, H.; Zhu, X. J.; Zotov, N.; Zucker, M. 
E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration
2014-04-01
Cosmic strings can give rise to a large variety of interesting astrophysical phenomena. Among them, powerful bursts of gravitational waves (GWs) produced by cusps are a promising observational signature. In this Letter we present a search for GWs from cosmic string cusps in data collected by the LIGO and Virgo gravitational wave detectors between 2005 and 2010, with over 625 days of live time. We find no evidence of GW signals from cosmic strings. From this result, we derive new constraints on cosmic string parameters, which complement and improve existing limits from previous searches for a stochastic background of GWs from cosmic microwave background measurements and pulsar timing data. In particular, if the size of loops is given by the gravitational backreaction scale, we place upper limits on the string tension Gμ below 10^-8 in some regions of the cosmic string parameter space.
Constraints on Cosmic Strings from the LIGO-Virgo Gravitational-Wave Detectors
NASA Technical Reports Server (NTRS)
Aasi, J.; Abadie, J.; Abbott, B.P.; Abbott, R.; Abbott, T.; Abernathy, M.R.; Accadia, T.; Adams, C.; Adams, T.; Adhikari, R.X.;
2014-01-01
Cosmic strings can give rise to a large variety of interesting astrophysical phenomena. Among them, powerful bursts of gravitational waves (GWs) produced by cusps are a promising observational signature. In this Letter we present a search for GWs from cosmic string cusps in data collected by the LIGO and Virgo gravitational wave detectors between 2005 and 2010, with over 625 days of live time. We find no evidence of GW signals from cosmic strings. From this result, we derive new constraints on cosmic string parameters, which complement and improve existing limits from previous searches for a stochastic background of GWs from cosmic microwave background measurements and pulsar timing data. In particular, if the size of loops is given by the gravitational backreaction scale, we place upper limits on the string tension (Newton's Constant x mass per unit length) below 10(exp -8) in some regions of the cosmic string parameter space.
Reliability evaluation of a multistate network subject to time constraint under routing policy
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei
2013-08-01
A multistate network is a stochastic network composed of multistate arcs, each of which has several possible capacities and may fail owing to breakdown, maintenance, etc. The quality of a multistate network depends on how well it meets the customer's requirements and provides the service in time. The system reliability, the probability that a given amount of data can be transmitted through a pair of minimal paths (MPs) simultaneously under the time constraint, is a suitable index for evaluating the quality of a multistate network. An efficient solution procedure is first proposed to calculate it. To further enhance the system reliability, the network administrator decides the routing policy in advance, designating first- and second-priority pairs of MPs; the second-priority pair takes over the transmission duty if the first fails. The system reliability under the routing policy can then be evaluated.
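The reliability index described above can be illustrated by brute-force enumeration of arc states. The capacity model, the ceil(d/c) transmission-time rule, the demand split, and all names below are assumptions of this sketch, not Lin's actual solution procedure (which works with minimal paths precisely to avoid full enumeration):

```python
import math
from itertools import product

def system_reliability(arc_caps, arc_probs, paths, demand, t_max):
    """Probability that `demand` data units can be sent through a pair of
    minimal paths within `t_max` time units, by enumerating all arc states.

    arc_caps[i]  -- possible capacities of arc i (one per state)
    arc_probs[i] -- matching state probabilities
    paths        -- (path1, path2), each a list of arc indices
    Assumed time model: sending d units over a path whose minimum arc
    capacity is c takes ceil(d / c) time units.
    """
    r = 0.0
    for state in product(*[range(len(c)) for c in arc_caps]):
        p = 1.0
        for i, s in enumerate(state):
            p *= arc_probs[i][s]
        caps = [min(arc_caps[i][state[i]] for i in path) for path in paths]
        # feasible if some split (d1, d2) of the demand meets the deadline
        ok = any(
            (d1 == 0 or (caps[0] > 0 and math.ceil(d1 / caps[0]) <= t_max))
            and (d2 == 0 or (caps[1] > 0 and math.ceil(d2 / caps[1]) <= t_max))
            for d1 in range(demand + 1)
            for d2 in [demand - d1]
        )
        if ok:
            r += p
    return r
```

For two single-arc paths with capacities 0/1/2 (probabilities 0.1/0.3/0.6), demand 2, and a one-period deadline, this sums the probability that the two capacities total at least 2.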
Influence of radiation on metastability-based TRNG
NASA Astrophysics Data System (ADS)
Wieczorek, Piotr Z.; Wieczorek, Zbigniew
2017-08-01
This paper presents a True Random Number Generator (TRNG) based on flip-flops with violated timing constraints, implemented in a Xilinx Spartan 6 device. The TRNG circuit utilizes the metastability phenomenon as a source of randomness, so the paper discusses the influence of timing constraints on flip-flop metastability proximity. Because the metastable range of operation enhances the influence of noise on flip-flop behavior, the effect of an external stochastic source on flip-flop operation is also investigated; for this purpose a radioactive source was used. According to the results shown in the paper, the radiation increases the unpredictability of the metastable process of the flip-flops serving as the randomness source in the TRNG. The statistical properties of the TRNG operating under increased radiation conditions were verified with the NIST battery of statistical tests.
Constraints on cosmic strings from the LIGO-Virgo gravitational-wave detectors.
Aasi, J; Abadie, J; Abbott, B P; Abbott, R; Abbott, T; Abernathy, M R; Accadia, T; Acernese, F; Adams, C; Adams, T; Adhikari, R X; Affeldt, C; Agathos, M; Aggarwal, N; Aguiar, O D; Ajith, P; Allen, B; Allocca, A; Amador Ceron, E; Amariutei, D; Anderson, R A; Anderson, S B; Anderson, W G; Arai, K; Araya, M C; Arceneaux, C; Areeda, J; Ast, S; Aston, S M; Astone, P; Aufmuth, P; Aulbert, C; Austin, L; Aylott, B E; Babak, S; Baker, P T; Ballardin, G; Ballmer, S W; Barayoga, J C; Barker, D; Barnum, S H; Barone, F; Barr, B; Barsotti, L; Barsuglia, M; Barton, M A; Bartos, I; Bassiri, R; Basti, A; Batch, J; Bauchrowitz, J; Bauer, Th S; Bebronne, M; Behnke, B; Bejger, M; Beker, M G; Bell, A S; Bell, C; Belopolski, I; Bergmann, G; Berliner, J M; Bersanetti, D; Bertolini, A; Bessis, D; Betzwieser, J; Beyersdorf, P T; Bhadbhade, T; Bilenko, I A; Billingsley, G; Birch, J; Bitossi, M; Bizouard, M A; Black, E; Blackburn, J K; Blackburn, L; Blair, D; Blom, M; Bock, O; Bodiya, T P; Boer, M; Bogan, C; Bond, C; Bondu, F; Bonelli, L; Bonnand, R; Bork, R; Born, M; Boschi, V; Bose, S; Bosi, L; Bowers, J; Bradaschia, C; Brady, P R; Braginsky, V B; Branchesi, M; Brannen, C A; Brau, J E; Breyer, J; Briant, T; Bridges, D O; Brillet, A; Brinkmann, M; Brisson, V; Britzger, M; Brooks, A F; Brown, D A; Brown, D D; Brückner, F; Bulik, T; Bulten, H J; Buonanno, A; Buskulic, D; Buy, C; Byer, R L; Cadonati, L; Cagnoli, G; Calderón Bustillo, J; Calloni, E; Camp, J B; Campsie, P; Cannon, K C; Canuel, B; Cao, J; Capano, C D; Carbognani, F; Carbone, L; Caride, S; Castiglia, A; Caudill, S; Cavaglià, M; Cavalier, F; Cavalieri, R; Cella, G; Cepeda, C; Cesarini, E; Chakraborty, R; Chalermsongsak, T; Chao, S; Charlton, P; Chassande-Mottin, E; Chen, X; Chen, Y; Chincarini, A; Chiummo, A; Cho, H S; Chow, J; Christensen, N; Chu, Q; Chua, S S Y; Chung, S; Ciani, G; Clara, F; Clark, D E; Clark, J A; Cleva, F; Coccia, E; Cohadon, P-F; Colla, A; Colombini, M; Constancio, M; Conte, A; Conte, R; Cook, D; Corbitt, T 
R; Cordier, M; Cornish, N; Corsi, A; Costa, C A; Coughlin, M W; Coulon, J-P; Countryman, S; Couvares, P; Coward, D M; Cowart, M; Coyne, D C; Craig, K; Creighton, J D E; Creighton, T D; Crowder, S G; Cumming, A; Cunningham, L; Cuoco, E; Dahl, K; Dal Canton, T; Damjanic, M; Danilishin, S L; D'Antonio, S; Danzmann, K; Dattilo, V; Daudert, B; Daveloza, H; Davier, M; Davies, G S; Daw, E J; Day, R; Dayanga, T; De Rosa, R; Debreczeni, G; Degallaix, J; Del Pozzo, W; Deleeuw, E; Deléglise, S; Denker, T; Dent, T; Dereli, H; Dergachev, V; DeRosa, R; DeSalvo, R; Dhurandhar, S; Di Fiore, L; Di Lieto, A; Di Palma, I; Di Virgilio, A; Díaz, M; Dietz, A; Dmitry, K; Donovan, F; Dooley, K L; Doravari, S; Drago, M; Drever, R W P; Driggers, J C; Du, Z; Dumas, J-C; Dwyer, S; Eberle, T; Edwards, M; Effler, A; Ehrens, P; Eichholz, J; Eikenberry, S S; Endrőczi, G; Essick, R; Etzel, T; Evans, K; Evans, M; Evans, T; Factourovich, M; Fafone, V; Fairhurst, S; Fang, Q; Farinon, S; Farr, B; Farr, W; Favata, M; Fazi, D; Fehrmann, H; Feldbaum, D; Ferrante, I; Ferrini, F; Fidecaro, F; Finn, L S; Fiori, I; Fisher, R; Flaminio, R; Foley, E; Foley, S; Forsi, E; Fotopoulos, N; Fournier, J-D; Franco, S; Frasca, S; Frasconi, F; Frede, M; Frei, M; Frei, Z; Freise, A; Frey, R; Fricke, T T; Fritschel, P; Frolov, V V; Fujimoto, M-K; Fulda, P; Fyffe, M; Gair, J; Gammaitoni, L; Garcia, J; Garufi, F; Gehrels, N; Gemme, G; Genin, E; Gennai, A; Gergely, L; Ghosh, S; Giaime, J A; Giampanis, S; Giardina, K D; Giazotto, A; Gil-Casanova, S; Gill, C; Gleason, J; Goetz, E; Goetz, R; Gondan, L; González, G; Gordon, N; Gorodetsky, M L; Gossan, S; Goßler, S; Gouaty, R; Graef, C; Graff, P B; Granata, M; Grant, A; Gras, S; Gray, C; Greenhalgh, R J S; Gretarsson, A M; Griffo, C; Groot, P; Grote, H; Grover, K; Grunewald, S; Guidi, G M; Guido, C; Gushwa, K E; Gustafson, E K; Gustafson, R; Hall, B; Hall, E; Hammer, D; Hammond, G; Hanke, M; Hanks, J; Hanna, C; Hanson, J; Harms, J; Harry, G M; Harry, I W; Harstad, E D; Hartman, M 
T; Haughian, K; Hayama, K; Heefner, J; Heidmann, A; Heintze, M; Heitmann, H; Hello, P; Hemming, G; Hendry, M; Heng, I S; Heptonstall, A W; Heurs, M; Hild, S; Hoak, D; Hodge, K A; Holt, K; Holtrop, M; Hong, T; Hooper, S; Horrom, T; Hosken, D J; Hough, J; Howell, E J; Hu, Y; Hua, Z; Huang, V; Huerta, E A; Hughey, B; Husa, S; Huttner, S H; Huynh, M; Huynh-Dinh, T; Iafrate, J; Ingram, D R; Inta, R; Isogai, T; Ivanov, A; Iyer, B R; Izumi, K; Jacobson, M; James, E; Jang, H; Jang, Y J; Jaranowski, P; Jiménez-Forteza, F; Johnson, W W; Jones, D; Jones, D I; Jones, R; Jonker, R J G; Ju, L; K, Haris; Kalmus, P; Kalogera, V; Kandhasamy, S; Kang, G; Kanner, J B; Kasprzack, M; Kasturi, R; Katsavounidis, E; Katzman, W; Kaufer, H; Kaufman, K; Kawabe, K; Kawamura, S; Kawazoe, F; Kéfélian, F; Keitel, D; Kelley, D B; Kells, W; Keppel, D G; Khalaidovski, A; Khalili, F Y; Khazanov, E A; Kim, B K; Kim, C; Kim, K; Kim, N; Kim, W; Kim, Y-M; King, E J; King, P J; Kinzel, D L; Kissel, J S; Klimenko, S; Kline, J; Koehlenbeck, S; Kokeyama, K; Kondrashov, V; Koranda, S; Korth, W Z; Kowalska, I; Kozak, D; Kremin, A; Kringel, V; Królak, A; Kucharczyk, C; Kudla, S; Kuehn, G; Kumar, A; Kumar, P; Kumar, R; Kurdyumov, R; Kwee, P; Landry, M; Lantz, B; Larson, S; Lasky, P D; Lawrie, C; Lazzarini, A; Le Roux, A; Leaci, P; Lebigot, E O; Lee, C-H; Lee, H K; Lee, H M; Lee, J; Lee, J; Leonardi, M; Leong, J R; Leroy, N; Letendre, N; Levine, B; Lewis, J B; Lhuillier, V; Li, T G F; Lin, A C; Littenberg, T B; Litvine, V; Liu, F; Liu, H; Liu, Y; Liu, Z; Lloyd, D; Lockerbie, N A; Lockett, V; Lodhia, D; Loew, K; Logue, J; Lombardi, A L; Lorenzini, M; Loriette, V; Lormand, M; Losurdo, G; Lough, J; Luan, J; Lubinski, M J; Lück, H; Lundgren, A P; Macarthur, J; Macdonald, E; Machenschalk, B; MacInnis, M; Macleod, D M; Magana-Sandoval, F; Mageswaran, M; Mailand, K; Majorana, E; Maksimovic, I; Malvezzi, V; Man, N; Manca, G M; Mandel, I; Mandic, V; Mangano, V; Mantovani, M; Marchesoni, F; Marion, F; Márka, S; Márka, Z; 
Markosyan, A; Maros, E; Marque, J; Martelli, F; Martin, I W; Martin, R M; Martinelli, L; Martynov, D; Marx, J N; Mason, K; Masserot, A; Massinger, T J; Matichard, F; Matone, L; Matzner, R A; Mavalvala, N; May, G; Mazumder, N; Mazzolo, G; McCarthy, R; McClelland, D E; McGuire, S C; McIntyre, G; McIver, J; Meacher, D; Meadors, G D; Mehmet, M; Meidam, J; Meier, T; Melatos, A; Mendell, G; Mercer, R A; Meshkov, S; Messenger, C; Meyer, M S; Miao, H; Michel, C; Mikhailov, E E; Milano, L; Miller, J; Minenkov, Y; Mingarelli, C M F; Mitra, S; Mitrofanov, V P; Mitselmakher, G; Mittleman, R; Moe, B; Mohan, M; Mohapatra, S R P; Mokler, F; Moraru, D; Moreno, G; Morgado, N; Mori, T; Morriss, S R; Mossavi, K; Mours, B; Mow-Lowry, C M; Mueller, C L; Mueller, G; Mukherjee, S; Mullavey, A; Munch, J; Murphy, D; Murray, P G; Mytidis, A; Nagy, M F; Nanda Kumar, D; Nardecchia, I; Nash, T; Naticchioni, L; Nayak, R; Necula, V; Nelemans, G; Neri, I; Neri, M; Newton, G; Nguyen, T; Nishida, E; Nishizawa, A; Nitz, A; Nocera, F; Nolting, D; Normandin, M E; Nuttall, L K; Ochsner, E; O'Dell, J; Oelker, E; Ogin, G H; Oh, J J; Oh, S H; Ohme, F; Oppermann, P; O'Reilly, B; Ortega Larcher, W; O'Shaughnessy, R; Osthelder, C; Ott, C D; Ottaway, D J; Ottens, R S; Ou, J; Overmier, H; Owen, B J; Padilla, C; Pai, A; Palomba, C; Pan, Y; Pankow, C; Paoletti, F; Paoletti, R; Papa, M A; Paris, H; Pasqualetti, A; Passaquieti, R; Passuello, D; Pedraza, M; Peiris, P; Penn, S; Perreca, A; Phelps, M; Pichot, M; Pickenpack, M; Piergiovanni, F; Pierro, V; Pinard, L; Pindor, B; Pinto, I M; Pitkin, M; Poeld, J; Poggiani, R; Poole, V; Poux, C; Predoi, V; Prestegard, T; Price, L R; Prijatelj, M; Principe, M; Privitera, S; Prix, R; Prodi, G A; Prokhorov, L; Puncken, O; Punturo, M; Puppo, P; Quetschke, V; Quintero, E; Quitzow-James, R; Raab, F J; Rabeling, D S; Rácz, I; Radkins, H; Raffai, P; Raja, S; Rajalakshmi, G; Rakhmanov, M; Ramet, C; Rapagnani, P; Raymond, V; Re, V; Reed, C M; Reed, T; Regimbau, T; Reid, S; Reitze, D 
H; Ricci, F; Riesen, R; Riles, K; Robertson, N A; Robinet, F; Rocchi, A; Roddy, S; Rodriguez, C; Rodruck, M; Roever, C; Rolland, L; Rollins, J G; Romano, R; Romanov, G; Romie, J H; Rosińska, D; Rowan, S; Rüdiger, A; Ruggi, P; Ryan, K; Salemi, F; Sammut, L; Sandberg, V; Sanders, J; Sannibale, V; Santiago-Prieto, I; Saracco, E; Sassolas, B; Sathyaprakash, B S; Saulson, P R; Savage, R; Schilling, R; Schnabel, R; Schofield, R M S; Schreiber, E; Schuette, D; Schulz, B; Schutz, B F; Schwinberg, P; Scott, J; Scott, S M; Seifert, F; Sellers, D; Sengupta, A S; Sentenac, D; Sergeev, A; Shaddock, D; Shah, S; Shahriar, M S; Shaltev, M; Shapiro, B; Shawhan, P; Shoemaker, D H; Sidery, T L; Siellez, K; Siemens, X; Sigg, D; Simakov, D; Singer, A; Singer, L; Sintes, A M; Skelton, G R; Slagmolen, B J J; Slutsky, J; Smith, J R; Smith, M R; Smith, R J E; Smith-Lefebvre, N D; Soden, K; Son, E J; Sorazu, B; Souradeep, T; Sperandio, L; Staley, A; Steinert, E; Steinlechner, J; Steinlechner, S; Steplewski, S; Stevens, D; Stochino, A; Stone, R; Strain, K A; Straniero, N; Strigin, S; Stroeer, A S; Sturani, R; Stuver, A L; Summerscales, T Z; Susmithan, S; Sutton, P J; Swinkels, B; Szeifert, G; Tacca, M; Talukder, D; Tang, L; Tanner, D B; Tarabrin, S P; Taylor, R; ter Braack, A P M; Thirugnanasambandam, M P; Thomas, M; Thomas, P; Thorne, K A; Thorne, K S; Thrane, E; Tiwari, V; Tokmakov, K V; Tomlinson, C; Toncelli, A; Tonelli, M; Torre, O; Torres, C V; Torrie, C I; Travasso, F; Traylor, G; Tse, M; Ugolini, D; Unnikrishnan, C S; Vahlbruch, H; Vajente, G; Vallisneri, M; van den Brand, J F J; Van Den Broeck, C; van der Putten, S; van der Sluys, M V; van Heijningen, J; van Veggel, A A; Vass, S; Vasúth, M; Vaulin, R; Vecchio, A; Vedovato, G; Veitch, J; Veitch, P J; Venkateswara, K; Verkindt, D; Verma, S; Vetrano, F; Viceré, A; Vincent-Finley, R; Vinet, J-Y; Vitale, S; Vlcek, B; Vo, T; Vocca, H; Vorvick, C; Vousden, W D; Vrinceanu, D; Vyachanin, S P; Wade, A; Wade, L; Wade, M; Waldman, S J; Walker, 
M; Wallace, L; Wan, Y; Wang, J; Wang, M; Wang, X; Wanner, A; Ward, R L; Was, M; Weaver, B; Wei, L-W; Weinert, M; Weinstein, A J; Weiss, R; Welborn, T; Wen, L; Wessels, P; West, M; Westphal, T; Wette, K; Whelan, J T; Whitcomb, S E; White, D J; Whiting, B F; Wibowo, S; Wiesner, K; Wilkinson, C; Williams, L; Williams, R; Williams, T; Willis, J L; Willke, B; Wimmer, M; Winkelmann, L; Winkler, W; Wipf, C C; Wittel, H; Woan, G; Worden, J; Yablon, J; Yakushin, I; Yamamoto, H; Yancey, C C; Yang, H; Yeaton-Massey, D; Yoshida, S; Yum, H; Yvert, M; Zadrożny, A; Zanolin, M; Zendri, J-P; Zhang, F; Zhang, L; Zhao, C; Zhu, H; Zhu, X J; Zotov, N; Zucker, M E; Zweizig, J
2014-04-04
Cosmic strings can give rise to a large variety of interesting astrophysical phenomena. Among them, powerful bursts of gravitational waves (GWs) produced by cusps are a promising observational signature. In this Letter we present a search for GWs from cosmic string cusps in data collected by the LIGO and Virgo gravitational wave detectors between 2005 and 2010, with over 625 days of live time. We find no evidence of GW signals from cosmic strings. From this result, we derive new constraints on cosmic string parameters, which complement and improve existing limits from previous searches for a stochastic background of GWs from cosmic microwave background measurements and pulsar timing data. In particular, if the size of loops is given by the gravitational backreaction scale, we place upper limits on the string tension Gμ below 10(-8) in some regions of the cosmic string parameter space.
Nonparametric instrumental regression with non-convex constraints
NASA Astrophysics Data System (ADS)
Grasmair, M.; Scherzer, O.; Vanhems, A.
2013-03-01
This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition.
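The Tikhonov step at the heart of such an estimator can be shown in a finite-dimensional sketch. Here `T` stands in for a discretized version of the operator linking the regression function to the instrument, and the paper's non-convex shape constraints and source conditions are omitted entirely; this is only the unconstrained regularized least-squares core:

```python
import numpy as np

def tikhonov(T, r, alpha):
    """Minimizer of ||T f - r||^2 + alpha * ||f||^2 in finite dimensions:
    f = (T'T + alpha I)^{-1} T' r.
    `alpha` trades data fit against the size of f, stabilizing the
    ill-posed inversion that instrumental regression entails."""
    n = T.shape[1]
    return np.linalg.solve(T.T @ T + alpha * np.eye(n), T.T @ r)
```

With `alpha = 0` and a well-conditioned `T` this reduces to ordinary least squares; increasing `alpha` shrinks the solution toward zero.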
NASA Astrophysics Data System (ADS)
Otake, Y.; Leonard, S.; Reiter, A.; Rajan, P.; Siewerdsen, J. H.; Ishii, M.; Taylor, R. H.; Hager, G. D.
2015-03-01
We present a system for registering the coordinate frame of an endoscope to pre- or intra-operatively acquired CT data, based on optimizing the similarity metric between an endoscopic image and an image predicted via rendering of CT. Our method is robust and semi-automatic because it takes into account physical constraints, specifically collisions between the endoscope and the anatomy, to initialize and constrain the search. The proposed optimization method is based on a stochastic optimization algorithm that evaluates a large number of similarity metric functions in parallel on a graphics processing unit. Images from a cadaver and a patient were used for evaluation. The registration error was 0.83 mm and 1.97 mm for cadaver and patient images, respectively. The average registration time for 60 trials was 4.4 seconds. The patient study demonstrated robustness of the proposed algorithm against moderate anatomical deformation.
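The stochastic search over pose parameters can be sketched as a simple evolution strategy: sample a population of candidates around the current estimate, evaluate the similarity metric on all of them (the step the paper parallelizes on a GPU), and recenter on the best. The population size, annealing rate, and acceptance rule below are generic assumptions, not the authors' implementation:

```python
import numpy as np

def es_maximize(f, x0, sigma=1.0, pop=64, iters=60, seed=0):
    """Minimal evolution-strategy sketch: maximize similarity metric f
    over a pose vector, sampling `pop` Gaussian candidates per iteration
    and shrinking the search radius geometrically."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        cand = x + sigma * rng.standard_normal((pop, x.size))
        vals = np.array([f(c) for c in cand])
        i = vals.argmax()
        if vals[i] > fx:          # greedy: keep the best candidate so far
            x, fx = cand[i], vals[i]
        sigma *= 0.95             # gentle annealing of the search radius
    return x
```

On a toy quadratic "similarity" peaked at a known pose, the loop converges to the optimum within the final search radius.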
Relevance of quantum mechanics on some aspects of ion channel function
Roy, Sisir
2010-01-01
Mathematical modeling of ionic diffusion along K ion channels indicates that such diffusion is oscillatory, at the weak non-Markovian limit. This finding leads us to derive a Schrödinger–Langevin equation for this kind of system within the framework of stochastic quantization. Planck's constant is shown to be relevant to the Lagrangian action at the level of a single ion channel. This sheds new light on the issue of applicability of quantum formalism to ion channel dynamics and to the physical constraints of the selectivity filter. PMID:19520314
Additive manufacturing: Toward holistic design
Jared, Bradley H.; Aguilo, Miguel A.; Beghini, Lauren L.; ...
2017-03-18
Here, additive manufacturing offers unprecedented opportunities to design complex structures optimized for performance envelopes inaccessible under conventional manufacturing constraints. Additive processes also promote realization of engineered materials with microstructures and properties that are impossible via traditional synthesis techniques. Enthused by these capabilities, optimization design tools have experienced a recent revival. The current capabilities of additive processes and optimization tools are summarized briefly, while an emerging opportunity is discussed to achieve a holistic design paradigm whereby computational tools are integrated with stochastic process and material awareness to enable the concurrent optimization of design topologies, material constructs and fabrication processes.
Learning process mapping heuristics under stochastic sampling overheads
NASA Technical Reports Server (NTRS)
Ieumwananonthachai, Arthur; Wah, Benjamin W.
1991-01-01
A statistical method was developed previously for improving process mapping heuristics. The method systematically explores the space of possible heuristics under a specified time constraint; its goal is to find the best possible heuristics while trading off the solution quality of the process mapping heuristics against their execution time. Here, the statistical selection method is extended to take into account variations in the amount of time used to evaluate heuristics on a problem instance. The performance improvement obtained under this more realistic assumption is presented, along with methods that alleviate the additional complexity.
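The setting can be made concrete with a toy budgeted-selection loop: each evaluation of a heuristic returns a quality and a stochastic time cost, and candidates are sampled until the budget is exhausted. This plain round-robin sketch is illustrative only and is not the paper's statistical selection method:

```python
import statistics

def select_heuristic(evaluators, budget):
    """Evaluate candidate heuristics round-robin under a time budget.
    Each evaluator() returns (quality, time_spent); evaluation stops when
    the next sample would exceed the budget, and the candidate with the
    best mean observed quality is returned."""
    samples = [[] for _ in evaluators]
    spent, i = 0.0, 0
    while True:
        k = i % len(evaluators)
        quality, cost = evaluators[k]()
        if spent + cost > budget:
            break
        spent += cost
        samples[k].append(quality)
        i += 1
    means = [statistics.mean(s) if s else float("-inf") for s in samples]
    return max(range(len(means)), key=means.__getitem__)
```

A smarter allocator would spend more of the budget on candidates whose quality estimates are still uncertain, which is the direction the paper's extension takes.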
Dwarf galaxies: a lab to investigate the neutron capture elements production
NASA Astrophysics Data System (ADS)
Cescutti, Gabriele
2018-06-01
In this contribution, I focus on the neutron capture elements observed in the spectra of old stars in the Galactic halo and in ultra-faint galaxies. Adopting a stochastic chemical evolution model and the Galactic halo as a benchmark, I present new constraints on the rate and time scales of r-process events, based on the discovery of r-process-rich stars in the ultra-faint galaxy Reticulum 2. I also show that an s-process activated by rotation in massive stars can play an important role in the production of heavy elements.
Taylor, Stephen R; Simon, Joseph; Sampson, Laura
2017-05-05
We introduce a technique for gravitational-wave analysis, where Gaussian process regression is used to emulate the strain spectrum of a stochastic background by training on population-synthesis simulations. This leads to direct Bayesian inference on astrophysical parameters. For pulsar timing arrays specifically, we interpolate over the parameter space of supermassive black-hole binary environments, including three-body stellar scattering, and evolving orbital eccentricity. We illustrate our approach on mock data, and assess the prospects for inference with data similar to the NANOGrav 9-yr data release.
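The emulation idea, Gaussian process regression trained on simulator outputs, can be sketched with a plain RBF kernel. The kernel choice, length scale, and noise level below are illustrative assumptions, not the pipeline's actual configuration:

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, noise=1e-6):
    """Noise-free-ish GP regression posterior mean with an RBF kernel:
    train on (parameter, simulated output) pairs, predict at new
    parameter points. `noise` is a small jitter for numerical stability."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return k(X_test, X_train) @ alpha
```

Trained on a smooth function sampled at a handful of "simulation" points, the emulator interpolates accurately between them, which is exactly what makes direct Bayesian inference over the parameter space tractable.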
1999-11-26
basic goal of the analysis. In other respects, however, the two approaches differ. Harper and Labianca began by modeling the input stochastic processes... contribution. To facilitate the analysis, however, he placed the receivers at a common depth and was, thus, unable to examine the vertical aspects of... (equation unrecoverable) 4-6.5 Bragg-Only Constraint: for v < 1 - U
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jared, Bradley H.; Aguilo, Miguel A.; Beghini, Lauren L.
Here, additive manufacturing offers unprecedented opportunities to design complex structures optimized for performance envelopes inaccessible under conventional manufacturing constraints. Additive processes also promote realization of engineered materials with microstructures and properties that are impossible via traditional synthesis techniques. Enthused by these capabilities, optimization design tools have experienced a recent revival. The current capabilities of additive processes and optimization tools are summarized briefly, while an emerging opportunity is discussed to achieve a holistic design paradigm whereby computational tools are integrated with stochastic process and material awareness to enable the concurrent optimization of design topologies, material constructs and fabrication processes.
Multistage Stochastic Programming and its Applications in Energy Systems Modeling and Optimization
NASA Astrophysics Data System (ADS)
Golari, Mehdi
Electric energy is crucial to almost every aspect of modern life. Modern electric power systems face several challenges, such as efficiency, economics, sustainability, and reliability. Increasing electrical energy demand, distributed generation, integration of uncertain renewable energy resources, and demand-side management are among the main underlying reasons for this growing complexity. Additionally, the elements of power systems are often vulnerable to failures for many reasons, such as system limits, weak conditions, unexpected events, hidden failures, human errors, terrorist attacks, and natural disasters. One common factor complicating the operation of electrical power systems is the underlying uncertainty in the demands, supplies, and failures of system components. Stochastic programming provides a mathematical framework for decision making under uncertainty, enabling a decision maker to incorporate some knowledge of the intrinsic uncertainty into the decision-making process. In this dissertation, we focus on the application of two-stage and multistage stochastic programming approaches to electric energy systems modeling and optimization. In particular, we develop models and algorithms addressing sustainability and reliability issues in power systems. First, we consider how to improve the reliability of power systems under severe failures or contingencies prone to cascading blackouts by so-called islanding operations. We present a two-stage stochastic mixed-integer model to find optimal islanding operations as a powerful preventive action against cascading failures in case of extreme contingencies. Further, we study the properties of this problem and propose efficient solution methods for large-scale power systems. We present numerical results showing the effectiveness of the model and investigate the performance of the solution methods. 
Next, we address the sustainability issue by considering the integration of renewable energy resources into the production planning of energy-intensive manufacturing industries. A growing number of manufacturing companies are considering renewable energies to meet their energy requirements, both to move towards green manufacturing and to decrease their energy costs. However, the intermittent nature of renewable energies imposes several difficulties in long-term planning of how to exploit renewables efficiently. In this study, we propose a scheme for manufacturing companies to satisfy their energy requirements using onsite and grid renewable energies, provided by their own investments and by energy utilities, as well as conventional grid energy. We propose a multistage stochastic programming model and study an efficient solution method for this problem. We examine the proposed framework on a test case simulated from a real-world semiconductor company, and we evaluate the long-term profitability of such a scheme via the so-called value of multistage stochastic programming.
Strategies and trajectories of coral reef fish larvae optimizing self-recruitment.
Irisson, Jean-Olivier; LeVan, Anselme; De Lara, Michel; Planes, Serge
2004-03-21
Like many marine organisms, most coral reef fishes have a dispersive larval phase. The fate of this phase is of great concern for their ecology as it may determine population demography and connectivity. As direct study of the larval phase is difficult, we tackle the question of dispersion from an opposite point of view and study self-recruitment. In this paper, we propose a mathematical model of the pelagic phase, parameterized by a limited number of factors (currents, predator and prey distributions, energy budgets) and which focuses on the behavioral response of the larvae to these factors. We evaluate optimal behavioral strategies of the larvae (i.e. strategies that maximize the probability of return to the natal reef) and examine the trajectories of dispersal that they induce. Mathematically, larval behavior is described by a controlled Markov process. A strategy induces a sequence, indexed by time steps, of "decisions" (e.g. looking for food, swimming in a given direction). Biological, physical and topographic constraints are captured through the transition probabilities and the sets of possible decisions. Optimal strategies are found by means of the so-called stochastic dynamic programming equation. A computer program is developed and optimal decisions and trajectories are numerically derived. We conclude that this technique can be considered as a good tool to represent plausible larval behaviors and that it has great potential in terms of theoretical investigations and also for field applications.
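The stochastic dynamic programming equation mentioned above is solved by backward induction over a finite horizon. The following is a minimal sketch on a toy controlled Markov chain (two states, two decisions), not the paper's larval model with its currents, prey fields, and energy budgets:

```python
import numpy as np

def optimal_policy(P, terminal_reward, T):
    """Backward induction for a finite-horizon controlled Markov chain.
    P[a][s, s']        -- transition probability under decision a
    terminal_reward[s] -- e.g. 1 if state s is the natal reef, else 0
    Returns V[t, s] (maximal expected terminal reward, i.e. return
    probability, from state s at time t) and policy[t, s]."""
    nA, nS = len(P), P[0].shape[0]
    V = np.zeros((T + 1, nS))
    V[T] = terminal_reward
    policy = np.zeros((T, nS), dtype=int)
    for t in range(T - 1, -1, -1):
        Q = np.stack([P[a] @ V[t + 1] for a in range(nA)])  # (nA, nS)
        policy[t] = Q.argmax(axis=0)
        V[t] = Q.max(axis=0)
    return V, policy
```

With state 0 = "away", state 1 = "reef" (absorbing), decision 0 = drift (stay put) and decision 1 = swim (reach the reef with probability 0.5 per step), two time steps give a 0.75 return probability from "away", and the optimal decision is to swim.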
A disturbance based control/structure design algorithm
NASA Technical Reports Server (NTRS)
Mclaren, Mark D.; Slater, Gary L.
1989-01-01
Some authors take a classical approach to the simultaneous structure/control optimization by attempting to simultaneously minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization will be based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response to excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, will be considered. These are full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is inserted by the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results will be obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.
Vieluf, Solveig; Sleimen-Malkoun, Rita; Voelcker-Rehage, Claudia; Jirsa, Viktor; Reuter, Eva-Maria; Godde, Ben; Temprado, Jean-Jacques; Huys, Raoul
2017-07-01
From the conceptual and methodological framework of the dynamical systems approach, force control results from complex interactions of various subsystems yielding observable behavioral fluctuations, which comprise both deterministic (predictable) and stochastic (noise-like) dynamical components. Here, we investigated these components contributing to the observed variability in force control in groups of participants differing in age and expertise level. To this aim, young (18-25 yr) as well as late middle-aged (55-65 yr) novices and experts (precision mechanics) performed a force maintenance and a force modulation task. Results showed that whereas the amplitude of force variability did not differ across groups in the maintenance tasks, in the modulation task it was higher for late middle-aged novices than for experts and higher for both these groups than for young participants. Within both tasks and for all groups, stochastic fluctuations were lowest where the deterministic influence was smallest. However, although all groups showed similar dynamics underlying force control in the maintenance task, a group effect was found for deterministic and stochastic fluctuations in the modulation task. The latter findings imply that both components were involved in the observed group differences in the variability of force fluctuations in the modulation task. These findings suggest that between groups the general characteristics of the dynamics do not differ in either task and that force control is more affected by age than by expertise. However, expertise seems to counteract some of the age effects. NEW & NOTEWORTHY Stochastic and deterministic dynamical components contribute to force production. Dynamical signatures differ between force maintenance and cyclic force modulation tasks but hardly between age and expertise groups. 
Differences in both stochastic and deterministic components are associated with group differences in behavioral variability, and observed behavioral variability is more strongly task dependent than person dependent. Copyright © 2017 the American Physiological Society.
Nemo: an evolutionary and population genetics programming framework.
Guillaume, Frédéric; Rougemont, Jacques
2006-10-15
Nemo is an individual-based, genetically explicit and stochastic population computer program for the simulation of population genetics and life-history trait evolution in a metapopulation context. It comes as both a C++ programming framework and an executable program file. Its object-oriented programming design gives it the flexibility and extensibility needed to implement a large variety of forward-time evolutionary models. It provides developers with abstract models allowing them to implement their own life-history traits and life-cycle events. Nemo offers a large panel of population models, from the Island model to lattice models with demographic or environmental stochasticity and a variety of already implemented traits (deleterious mutations, neutral markers and more), life-cycle events (mating, dispersal, aging, selection, etc.) and output operators for saving data and statistics. It runs on all major computer platforms including parallel computing environments. The source code, binaries and documentation are available under the GNU General Public License at http://nemo2.sourceforge.net.
Stochastic hyperfine interactions modeling library
NASA Astrophysics Data System (ADS)
Zacate, Matthew O.; Evenson, William E.
2011-04-01
The stochastic hyperfine interactions modeling library (SHIML) provides a set of routines to assist in the development and application of stochastic models of hyperfine interactions. The library provides routines written in the C programming language that (1) read a text description of a model for fluctuating hyperfine fields, (2) set up the Blume matrix, upon which the evolution operator of the system depends, and (3) find the eigenvalues and eigenvectors of the Blume matrix so that theoretical spectra of experimental techniques that measure hyperfine interactions can be calculated. The optimized vector and matrix operations of the BLAS and LAPACK libraries are utilized; however, there was a need to develop supplementary code to find an orthonormal set of (left and right) eigenvectors of complex, non-Hermitian matrices. In addition, example code is provided to illustrate the use of SHIML to generate perturbed angular correlation spectra for the special case of polycrystalline samples when anisotropy terms of higher order than A22 can be neglected.
Program summary
Program title: SHIML
Catalogue identifier: AEIF_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIF_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPL 3
No. of lines in distributed program, including test data, etc.: 8224
No. of bytes in distributed program, including test data, etc.: 312 348
Distribution format: tar.gz
Programming language: C
Computer: Any
Operating system: LINUX, OS X
RAM: Varies
Classification: 7.4
External routines: TAPP [1], BLAS [2], a C-interface to BLAS [3], and LAPACK [4]
Nature of problem: In condensed matter systems, hyperfine methods such as nuclear magnetic resonance (NMR), Mössbauer effect (ME), muon spin rotation (μSR), and perturbed angular correlation spectroscopy (PAC) measure electronic and magnetic structure within Angstroms of nuclear probes through the hyperfine interaction.
When interactions fluctuate at rates comparable to the time scale of a hyperfine method, there is a loss in signal coherence, and spectra are damped. The degree of damping can be used to determine fluctuation rates, provided that theoretical expressions for spectra can be derived for relevant physical models of the fluctuations. SHIML provides routines to help researchers quickly develop code to incorporate stochastic models of fluctuating hyperfine interactions in calculations of hyperfine spectra.
Solution method: Calculations are based on the method for modeling stochastic hyperfine interactions for PAC by Winkler and Gerdau [5]. The method is extended to include other hyperfine methods following the work of Dattagupta [6]. The code provides routines for reading model information from text files, allowing researchers to develop new models quickly without the need to modify computer code for each new model to be considered.
Restrictions: In the present version of the code, only methods that measure the hyperfine interaction on one probe spin state, such as PAC, μSR, and NMR, are supported.
Running time: Varies
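The supplementary eigenvector step described above, finding a biorthonormal set of left and right eigenvectors of a complex non-Hermitian matrix, can be sketched in NumPy. This is an illustrative reimplementation under the assumption of distinct eigenvalues, not the library's C code; the 3x3 "Blume-like" matrix is hypothetical:

```python
import numpy as np

def left_right_eig(B):
    """Biorthonormal right/left eigenvectors of a non-defective, non-Hermitian matrix.

    Returns eigenvalues w, a matrix L whose rows are left eigenvectors
    (l_i B = w_i l_i) and a matrix R whose columns are right eigenvectors
    (B r_i = w_i r_i), scaled so that L @ R = I.
    """
    w, R = np.linalg.eig(B)
    wl, Lh = np.linalg.eig(B.conj().T)          # eigenpairs of B^H give left eigenvectors
    order = [int(np.argmin(np.abs(wl.conj() - wi))) for wi in w]
    L = Lh[:, order].conj().T                   # rows l_i satisfy l_i B = w_i l_i
    L = L / np.diag(L @ R)[:, None]             # biorthonormalise: L @ R = I
    return w, L, R

# hypothetical 3x3 relaxation ("Blume-like") matrix, for illustration only
B = np.array([[-0.20, 0.10, 0.00],
              [0.05, -0.30, 0.10],
              [0.00, 0.20, -0.10]], dtype=complex)
w, L, R = left_right_eig(B)
B_rec = (R * w) @ L                             # spectral reconstruction sum_i w_i r_i l_i
```

The biorthonormal pairs let spectra be written as sums over eigenmodes, which is what the evolution-operator calculation needs.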
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources, due to the high computational cost of Monte Carlo workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
Probabilistic DHP adaptive critic for nonlinear stochastic control systems.
Herzallah, Randa
2013-06-01
Following the recently developed algorithms for fully probabilistic control design for general dynamic stochastic systems (Herzallah & Kárný, 2011; Kárný, 1996), this paper presents the solution to the probabilistic dual heuristic programming (DHP) adaptive critic method (Herzallah & Kárný, 2011) and a randomized control algorithm for stochastic nonlinear dynamical systems. The purpose of the randomized control input design is to make the joint probability density function of the closed loop system as close as possible to a predetermined ideal joint probability density function. This paper completes the previous work (Herzallah & Kárný, 2011; Kárný, 1996) by formulating and solving the fully probabilistic control design problem for the more general case of nonlinear stochastic discrete time systems. A simulated example is used to demonstrate the use of the algorithm, and encouraging results have been obtained. Copyright © 2013 Elsevier Ltd. All rights reserved.
Coupled stochastic soil moisture simulation-optimization model of deficit irrigation
NASA Astrophysics Data System (ADS)
Alizadeh, Hosein; Mousavi, S. Jamshid
2013-07-01
This study presents an explicit stochastic optimization-simulation model of short-term deficit irrigation management for large-scale irrigation districts. The model, a nonlinear nonconvex program with an economic objective function, is built on an agrohydrological simulation component. The simulation component integrates (1) an explicit stochastic model of soil moisture dynamics of the crop-root zone considering the interaction of stochastic rainfall and irrigation with shallow water table effects, (2) a conceptual root zone salt balance model, and (3) the FAO crop yield model. A Particle Swarm Optimization algorithm, linked to the simulation component, solves the resulting nonconvex program with significantly better computational performance compared to a Monte Carlo-based implicit stochastic optimization model. The model has been tested first by applying it to single-crop irrigation problems, through which the effects of the severity of water deficit on the objective function (net benefit), root-zone water balance, and irrigation water needs have been assessed. Then, the model has been applied to the Dasht-e-Abbas and Ein-khosh Fakkeh Irrigation Districts (DAID and EFID) of the Karkheh Basin in the southwest of Iran. While the maximum net benefit has been obtained for a stress-avoidance (SA) irrigation policy, the highest water profitability has resulted when only about 60% of the water used in the SA policy is applied. The DAID, with respectively 33% of total cultivated area and 37% of total applied water, has produced only 14% of the total net benefit due to low-valued crops and adverse soil and shallow water table conditions.
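The Particle Swarm Optimization step used above can be sketched generically. This is a minimal PSO for box-constrained minimisation, not the paper's agrohydrological model; the Himmelblau function stands in for the nonconvex net-benefit surface, and all swarm parameters are illustrative defaults:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimiser for minimisation on a box [lo, hi]^d."""
    d = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, d))    # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()                                  # personal best positions
    pval = np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)]                        # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                    # enforce the box constraint
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[np.argmin(pval)]
    return g, float(pval.min())

# toy nonconvex objective as a stand-in for the net-benefit surface (NOT the paper's model)
himmelblau = lambda z: (z[0] ** 2 + z[1] - 11.0) ** 2 + (z[0] + z[1] ** 2 - 7.0) ** 2
best, val = pso(himmelblau, np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
```

In the paper's setting, `f` would be the expensive simulation component evaluated per candidate irrigation schedule.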
NASA Astrophysics Data System (ADS)
Adams, Mike; Smalian, Silva
2017-09-01
For nuclear waste packages, the expected dose rates and nuclide inventory are calculated in advance. Depending on the packaging of the nuclear waste, deterministic programs like MicroShield® provide a range of results for each type of packaging. Stochastic programs like the "Monte-Carlo N-Particle Transport Code System" (MCNP®), on the other hand, provide reliable results for complex geometries. However, this type of program requires a fully trained operator, and calculations are time consuming. The problem here is to choose an appropriate program for a specific geometry. Therefore, we compared the results of deterministic programs like MicroShield® and stochastic programs like MCNP®. These comparisons enable us to make a statement about the applicability of the various programs for chosen types of containers. As a conclusion, we found that for thin-walled geometries deterministic programs like MicroShield® are well suited to calculate the dose rate. For cylindrical containers with inner shielding, however, deterministic programs hit their limits. Furthermore, we investigate the effect of an inhomogeneous material and activity distribution on the results. The calculations are still ongoing. Results will be presented in the final abstract.
Program manual for ASTOP, an Arbitrary space trajectory optimization program
NASA Technical Reports Server (NTRS)
Horsewood, J. L.
1974-01-01
The ASTOP program (an Arbitrary Space Trajectory Optimization Program) designed to generate optimum low-thrust trajectories in an N-body field while satisfying selected hardware and operational constraints is presented. The trajectory is divided into a number of segments or arcs over which the control is held constant. This constant control over each arc is optimized using a parameter optimization scheme based on gradient techniques. A modified Encke formulation of the equations of motion is employed. The program provides a wide range of constraint, end conditions, and performance index options. The basic approach is conducive to future expansion of features such as the incorporation of new constraints and the addition of new end conditions.
Using stochastic dynamic programming to support catchment-scale water resources management in China
NASA Astrophysics Data System (ADS)
Davidsen, Claus; Pereira-Cardenal, Silvio Javier; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter
2013-04-01
A hydro-economic modelling approach is used to optimize reservoir management at river basin level. We demonstrate the potential of this integrated approach on the Ziya River basin, a complex basin on the North China Plain south-east of Beijing. The area is subject to severe water scarcity due to low and extremely seasonal precipitation, and the intense agricultural production is highly dependent on irrigation. Large reservoirs provide water storage for dry months, while groundwater and the external South-to-North Water Transfer Project are alternative sources of water. An optimization model based on stochastic dynamic programming has been developed. The objective function is to minimize the total cost of supplying water to the users, while satisfying minimum ecosystem flow constraints. Each user group (agriculture, domestic and industry) is characterized by fixed demands, fixed water allocation costs for the different water sources (surface water, groundwater and external water) and fixed costs of water supply curtailment. The multiple reservoirs in the basin are aggregated into a single reservoir to reduce the dimensionality of the decision problem. Water availability is estimated using a hydrological model. The hydrological model is based on the Budyko framework and is forced with 51 years of observed daily rainfall and temperature data. 23 years of observed discharge from an in-situ station located downstream of a remote mountainous catchment is used for model calibration. Runoff serial correlation is described by a Markov chain that is used to generate monthly runoff scenarios for the reservoir. The optimal costs at a given reservoir state and stage were calculated as the minimum sum of immediate and future costs. Based on the total costs for all states and stages, water value tables were generated which contain the marginal value of stored water as a function of the month, the inflow state and the reservoir state.
The water value tables are used to guide allocation decisions in simulation mode. The performance of the operation rules based on water value tables was evaluated. The approach was successfully used to assess the performance of alternative development scenarios and infrastructure projects in the case study region.
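The backward stochastic-dynamic-programming recursion that produces such water value tables can be sketched on a toy discretisation. Everything numerical here (storage grid, inflow states, transition matrix, costs, demand) is hypothetical, not the Ziya basin data:

```python
import numpy as np

# hypothetical discretisation (illustration only, not the Ziya basin data):
S = np.linspace(0.0, 100.0, 21)            # reservoir storage states
Q = np.array([5.0, 25.0])                  # monthly inflow in the dry / wet Markov state
P = np.array([[0.7, 0.3],                  # inflow-state transition probabilities
              [0.4, 0.6]])
demand = 30.0                              # monthly water demand
release_cost, curtail_cost = 1.0, 5.0      # cost per unit released / curtailed
beta = 0.95                                # monthly discount factor

F = np.zeros((len(S), len(Q)))             # cost-to-go table, terminal cost = 0
for _ in range(120):                       # backward recursion until (near) stationary
    Fn = np.empty_like(F)
    for i, s in enumerate(S):
        for k, q in enumerate(Q):
            best = np.inf
            for r in np.linspace(0.0, min(s + q, demand), 11):   # release decision
                j = int(np.argmin(np.abs(S - np.clip(s + q - r, S[0], S[-1]))))
                cost = release_cost * r + curtail_cost * (demand - r) + beta * (P[k] @ F[j])
                best = min(best, cost)
            Fn[i, k] = best
    F = Fn

# water value table: marginal value of stored water (negative storage-slope of cost-to-go)
water_value = -(F[1:, :] - F[:-1, :]) / (S[1] - S[0])
```

In simulation mode, the operator releases water whenever its immediate value exceeds the tabulated marginal value of keeping it in storage.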
Optimal GENCO bidding strategy
NASA Astrophysics Data System (ADS)
Gao, Feng
Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example: Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, as well as Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation. The stochastic property makes evolutionary algorithms robust and adaptive enough to solve a non-convex optimization problem. This research implements GA, EP, and PS algorithms for economic dispatch with Combined Cycle units, and makes a comparison with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants the ability to build optimal bidding strategies based on Microeconomics analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model, which can be applied to a multiple period situation. The equilibrium condition using discrete time optimal control is then developed for fuel resource constraints. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators to understand market performance and make better decisions. 
A traditional optimization model may not suffice to capture the distributed, large-scale, and complex energy market. This research compares the performance and search paths of different artificial life techniques such as Genetic Algorithm (GA), Evolutionary Programming (EP), and Particle Swarm (PS), and looks for a suitable method to emulate Generation Companies' (GENCOs) bidding strategies. After deregulation, GENCOs face risk and uncertainty associated with the fast-changing market environment. A profit-based bidding decision support system is critical for GENCOs to keep a competitive position in the new environment. Most past research does not pay special attention to the piecewise staircase characteristic of generator offer curves. This research proposes an optimal bidding strategy based on Parametric Linear Programming. The proposed algorithm is able to handle actual piecewise staircase energy offer curves. The proposed method is then extended to incorporate incomplete information based on Decision Analysis. Finally, the author develops an optimal bidding tool (GenBidding) and applies it to the RTS96 test system.
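To make the "piecewise staircase offer curve" concrete, here is a minimal merit-order market clearing over staircase offers. This is an illustrative sketch of how such curves are dispatched, not the dissertation's Parametric Linear Programming algorithm; the two GENCOs' offer blocks are hypothetical:

```python
def clear_market(offers, demand):
    """Merit-order clearing of piecewise-staircase energy offers.

    offers: per unit, a list of (price, quantity) blocks.
    Returns (dispatch per unit, marginal clearing price)."""
    blocks = sorted(
        ((price, qty, unit) for unit, curve in enumerate(offers) for price, qty in curve),
        key=lambda blk: blk[0],
    )
    dispatch = [0.0] * len(offers)
    remaining, price = demand, 0.0
    for p, q, u in blocks:                 # accept cheapest blocks first
        if remaining <= 0:
            break
        take = min(q, remaining)
        dispatch[u] += take
        remaining -= take
        price = p                          # clearing price = last accepted block
    return dispatch, price

# hypothetical three-block staircase offers for two GENCOs: (price $/MWh, quantity MW)
offers = [[(10.0, 50.0), (20.0, 50.0), (40.0, 50.0)],
          [(15.0, 60.0), (25.0, 60.0), (35.0, 60.0)]]
dispatch, price = clear_market(offers, 140.0)
```

A bidding strategy then amounts to choosing the block prices and quantities so as to maximise expected profit against the anticipated clearing outcome.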
NASA Astrophysics Data System (ADS)
Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui
2016-07-01
Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises drill core data collection, karst cave stochastic model generation, SLIDE simulation and bisection method optimization. Borehole investigations are performed, and the statistical results show that the length of the karst caves fits a negative exponential distribution model, but the length of carbonatite does not exactly follow any standard distribution. The inverse transform method and acceptance-rejection method are used to reproduce the lengths of the karst caves and carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
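The two sampling techniques named above can be sketched directly. This is a generic illustration, not the KCSMG code: the exponential mean and the triangular stand-in density for carbonatite lengths are hypothetical (the paper reports that carbonatite follows no standard distribution, which is exactly why acceptance-rejection is used):

```python
import math
import random

random.seed(1)

def cave_length(mean_len):
    """Inverse-transform sample from the negative exponential cave-length model."""
    return -mean_len * math.log(1.0 - random.random())

def carbonatite_length(pdf, x_max, f_max):
    """Acceptance-rejection sample from an arbitrary length density on [0, x_max]."""
    while True:
        x = random.uniform(0.0, x_max)
        if random.uniform(0.0, f_max) <= pdf(x):   # accept with probability pdf(x)/f_max
            return x

# hypothetical triangular density for carbonatite lengths (illustration only)
tri_pdf = lambda x: 0.2 * (1.0 - x / 10.0)          # integrates to 1 on [0, 10]

cave_samples = [cave_length(3.0) for _ in range(20000)]
carb_samples = [carbonatite_length(tri_pdf, 10.0, 0.2) for _ in range(5000)]
```

Sampled lengths like these are what a cave-generation code places into the slope model before each limit-equilibrium run.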
It Takes a Village: Network Effects on Rural Education in Afghanistan. PRGS Dissertation
ERIC Educational Resources Information Center
Hoover, Matthew Amos
2014-01-01
Often, development organizations confront a tradeoff between program priorities and operational constraints. These constraints may be financial, capacity, or logistical; regardless, the tradeoff often requires sacrificing portions of a program. This work is concerned with figuring out how, when constrained, an organization or program manager can…
DOT National Transportation Integrated Search
2003-01-01
This study evaluated existing traffic signal optimization programs including Synchro,TRANSYT-7F, and genetic algorithm optimization using real-world data collected in Virginia. As a first step, a microscopic simulation model, VISSIM, was extensively ...
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
An Approach for Dynamic Optimization of Prevention Program Implementation in Stochastic Environments
NASA Astrophysics Data System (ADS)
Kang, Yuncheol; Prabhu, Vittal
The science of preventing youth problems has significantly advanced in developing evidence-based prevention programs (EBPs) through the use of randomized clinical trials. Effective EBPs can reduce delinquency, aggression, violence, bullying and substance abuse among youth. Unfortunately, the outcomes of EBPs implemented in natural settings usually tend to be lower than in clinical trials, which has motivated the need to study EBP implementations. In this paper we propose to model EBP implementations in natural settings as stochastic dynamic processes. Specifically, we propose a Markov Decision Process (MDP) for modeling and dynamic optimization of such EBP implementations. We illustrate these concepts using simple numerical examples and discuss potential challenges in using such approaches in practice.
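An MDP of the kind proposed above can be sketched with value iteration on a toy model. All states, actions, probabilities and rewards below are hypothetical illustrations, not the paper's numerical examples:

```python
import numpy as np

# hypothetical 3-state implementation-fidelity MDP (illustration, not a calibrated model)
# states: 0 = poor, 1 = adequate, 2 = high fidelity; actions: 0 = no support, 1 = coaching
P = np.array([[[0.8, 0.2, 0.0],    # P[a, s, s']: transition probabilities
               [0.3, 0.6, 0.1],
               [0.1, 0.3, 0.6]],
              [[0.4, 0.5, 0.1],
               [0.1, 0.6, 0.3],
               [0.0, 0.2, 0.8]]])
R = np.array([[0.0, 1.0, 3.0],     # prevention outcome value per state, no support
              [-0.5, 0.5, 2.5]])   # coaching: same value minus its cost
gamma = 0.95                       # discount factor

V = np.zeros(3)
for _ in range(500):               # value iteration
    Q = R + gamma * (P @ V)        # state-action values, Q[a, s]
    V = Q.max(axis=0)
policy = Q.argmax(axis=0)          # best action in each state
```

The resulting policy says, for each observed fidelity level, whether investing in coaching is worth its cost over the remaining implementation horizon.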
Digital program for solving the linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, B.
1975-01-01
A computer program is described which solves the linear stochastic optimal control and estimation (LSOCE) problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a performance index which is quadratic in state and control variables. The LSOCE problem and solution are outlined; brief descriptions are given of the solution algorithms, and complete descriptions of each subroutine, including usage information and digital listings, are provided. A test case is included, as well as information on the IBM 7090-7094 DCS time and storage requirements.
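The control half of an LSOCE-type problem reduces to a discrete-time Riccati iteration for the optimal feedback gain. The sketch below is an illustrative NumPy version with a hypothetical two-state system, not the original IBM 7090-7094 program:

```python
import numpy as np

# illustrative discrete-time LQ regulator (not the original program's test case):
# state x = [position, velocity], x_{k+1} = A x_k + B u_k + w_k, cost sum x'Qx + u'Ru
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])

P = Q.copy()
for _ in range(1000):   # backward Riccati iteration to the steady-state solution
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal feedback gain, u = -K x
    P = Q + A.T @ P @ (A - B @ K)
rho = max(abs(np.linalg.eigvals(A - B @ K)))            # closed-loop spectral radius
```

By the separation principle, the full stochastic problem pairs this gain with a Kalman filter estimate of the state driven by the white-noise disturbance model.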
Multiscale Hy3S: hybrid stochastic simulation for supercomputers.
Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N
2006-02-24
Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcription and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of the NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems and analyze data.
We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.
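The exact jump-Markov description that hybrid methods build on is the Gillespie stochastic simulation algorithm (SSA). A minimal sketch on a toy birth-death network (hypothetical rate constants, not a Hy3S benchmark) looks like this:

```python
import random

random.seed(0)

def ssa_birth_death(x, k_deg, k_syn, t_end):
    """Exact SSA (Gillespie) for the toy network: A -> 0 (rate k_deg*A), 0 -> A (rate k_syn)."""
    t, traj = 0.0, [(0.0, x)]
    while t < t_end:
        a1, a2 = k_deg * x, k_syn          # reaction propensities
        a0 = a1 + a2
        t += random.expovariate(a0)        # exponential waiting time to the next reaction
        if random.random() * a0 < a1:      # pick the firing reaction proportional to propensity
            x -= 1
        else:
            x += 1
        traj.append((t, x))
    return traj

traj = ssa_birth_death(x=0, k_deg=0.1, k_syn=5.0, t_end=2000.0)
# the stationary distribution of this network is Poisson with mean k_syn / k_deg = 50
```

Hybrid methods replace fast subsets of such a network with Poisson, Langevin, or deterministic approximations, firing only the slow reactions event by event.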
NASA Astrophysics Data System (ADS)
Kefayati, Mahdi; Baldick, Ross
2015-07-01
Flexible loads, i.e. loads whose power trajectory is not bound to a specific profile, constitute a sizable portion of current and future electric demand. This flexibility can be used to improve the performance of the grid, should the right incentives be in place. In this paper, we consider the optimal decision making problem faced by a flexible load, demanding a certain amount of energy over its availability period, subject to rate constraints. The load is also capable of providing ancillary services (AS) by decreasing or increasing its consumption in response to signals from the independent system operator (ISO). Under arbitrarily distributed and correlated Markovian energy and AS prices, we obtain the optimal policy for minimising expected total cost, which includes the cost of energy and benefits from AS provision, assuming no capacity reservation requirement for AS provision. We also prove that the optimal policy has a multi-threshold form and can be computed, stored and operated efficiently. We further study the effectiveness of our proposed optimal policy and its impact on the grid. We show that, while optimal simultaneous consumption and AS provision under real-time stochastic prices are achievable with acceptable computational burden, the impact of adopting such real-time pricing schemes on the network might not be as good as suggested by the majority of the existing literature. In fact, we show that such price responsive loads are likely to induce peak-to-average ratios much higher than those observed in current distribution networks and adversely affect the grid.
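The core decision problem (buy a required amount of energy before a deadline, under a rate limit and Markovian prices) can be sketched as a small finite-horizon dynamic program. This simplified illustration omits the AS-provision part of the paper, and every number in it is hypothetical:

```python
import numpy as np

# hypothetical discretisation (illustration only; AS provision is omitted here):
T, E_req, r_max = 8, 6, 2                  # hours available, energy required, max rate
prices = np.array([20.0, 60.0])            # energy price in the low / high Markov state
Pp = np.array([[0.8, 0.2],                 # price-state transition probabilities
               [0.3, 0.7]])
penalty = 1.0e4                            # cost per unit of energy unmet at the deadline

# V[t, e, p]: minimal expected cost with e units still required at hour t, price state p
V = np.zeros((T + 1, E_req + 1, 2))
V[T] = penalty * np.arange(E_req + 1)[:, None]
policy = np.zeros((T, E_req + 1, 2), dtype=int)
for t in range(T - 1, -1, -1):
    for e in range(E_req + 1):
        for p in range(2):
            costs = [prices[p] * u + Pp[p] @ V[t + 1, e - u]
                     for u in range(min(r_max, e) + 1)]      # u = energy bought this hour
            policy[t, e, p] = int(np.argmin(costs))
            V[t, e, p] = min(costs)
```

The paper's structural result is that the optimal `policy` slice at each (hour, remaining-energy) pair is characterised by price thresholds, which is what makes it cheap to store and operate.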
Stochastic Computations in Cortical Microcircuit Models
Maass, Wolfgang
2013-01-01
Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving. PMID:24244126
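The convergence-to-a-stationary-distribution claim can be illustrated on a miniature stochastic network. The sketch below uses a 3-unit binary network with Gibbs sampling as a stand-in for the paper's detailed microcircuit models; weights and biases are hypothetical:

```python
import itertools
import math
import random

random.seed(0)

# tiny symmetric network of 3 binary units (a stand-in for a microcircuit model)
W = [[0.0, 0.8, -0.4],
     [0.8, 0.0, 0.6],
     [-0.4, 0.6, 0.0]]
b = [0.1, -0.2, 0.0]
n = 3

def energy(state):
    return -sum(W[i][j] * state[i] * state[j] for i in range(n) for j in range(i)) \
           - sum(b[i] * state[i] for i in range(n))

# exact stationary (Boltzmann) distribution over the 2^3 network states
states = list(itertools.product([0, 1], repeat=n))
Z = sum(math.exp(-energy(st)) for st in states)
p_exact = {st: math.exp(-energy(st)) / Z for st in states}

# Gibbs sampling: the chain converges to p_exact from any initial state
s = [0] * n
counts = {st: 0 for st in states}
n_sweeps = 100000
for _ in range(n_sweeps):
    for i in range(n):
        field = b[i] + sum(W[i][j] * s[j] for j in range(n))
        s[i] = 1 if random.random() < 1.0 / (1.0 + math.exp(-field)) else 0
    counts[tuple(s)] += 1
p_emp = {st: c / n_sweeps for st, c in counts.items()}
```

Unlike this reversible toy chain, the cortical-microcircuit dynamics in the paper are non-reversible, but the paper's point is that a unique stationary distribution (and fast convergence to it) survives that generalisation.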
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taranenko, Y.; Barnes, C.
1996-12-31
This paper deals with further developments of the new theory that applies stochastic differential geometry (SDG) to the dynamics of interest rates. We examine mathematical constraints on the evolution of interest rate volatilities that arise from stochastic differential calculus under assumptions of an arbitrage-free evolution of zero coupon bonds and developed markets (i.e., no single party/factor can drive the whole market). The resulting new theory incorporates the Heath-Jarrow-Morton (HJM) model of interest rates and provides new equations for volatilities, which makes the system of equations for interest rates and volatilities complete and self-consistent. It requires a much smaller amount of volatility data to be guessed for the SDG model as compared to the HJM model. Limited analysis of the market volatility data suggests that the assumption of the developed market is violated around maturities of two years. Such maturities, where the assumptions of the SDG model are violated, are suggested to serve as boundaries at which volatilities should be specified independently from the model. Our numerical example with two boundaries (two years and five years) qualitatively resembles the market behavior. Under some conditions, solutions of the SDG model become singular, which may indicate market crashes. More detailed comparison with the data is needed before the theory can be established or refuted.
Stochastic resonance in a tumor-immune system subject to bounded noises and time delay
NASA Astrophysics Data System (ADS)
Guo, Wei; Mei, Dong-Cheng
2014-12-01
Immunotherapy is one of the most recent approaches in cancer therapy. A mathematical model of tumor-immune interaction, subject to a periodic immunotherapy treatment (imitated by a periodic signal), correlative and bounded stochastic fluctuations and time delays, is investigated by numerical simulations for its signal power amplification (SPA). Within the tailored parameter regime, the synchronous response of tumor growth to the immunotherapy, i.e. stochastic resonance (SR), versus both the noises and delays is obtained. The details are as follows: (i) the peak values of SPA versus the noise intensity A in the proliferation term of tumor cells decrease as the frequency of the periodic signal increases, i.e. an increase of the frequency restrains the SR; (ii) an increase of the amplitude of the periodic signal restrains the SR versus A, but enhances the SR versus the noise intensity B in the immune term; (iii) there is an optimum cross-correlation degree between the two bounded noises, at which the system exhibits the strongest SR versus the delay time τα (the reaction time of the tumor cell population to their surrounding environmental constraints); (iv) upon increasing the delay time τα, double SR versus the delay time τβ (the time taken by both tumor antigen identification and tumor-stimulated proliferation of effectors) emerges. These results may be helpful for designing immunotherapy treatments.
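The stochastic resonance effect itself is easy to demonstrate on the standard bistable benchmark rather than the tumor-immune model: a subthreshold periodic drive produces a much stronger output component at the drive frequency when noise of the right intensity is added. All parameters below are the conventional double-well illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(42)

def drive_response(D, A=0.25, omega=0.1, dt=0.01, T=2000.0):
    """Euler-Maruyama for the bistable SR benchmark
    dx = (x - x**3 + A*cos(omega*t)) dt + sqrt(2*D) dW;
    returns the output amplitude at the drive frequency."""
    n = int(T / dt)
    t = np.arange(n) * dt
    kicks = rng.normal(0.0, np.sqrt(2.0 * D * dt), n)   # Brownian increments
    x = np.empty(n)
    x[0] = 1.0
    for k in range(n - 1):
        x[k + 1] = x[k] + (x[k] - x[k] ** 3 + A * np.cos(omega * t[k])) * dt + kicks[k]
    return 2.0 * abs(np.mean(x * np.exp(-1j * omega * t)))   # Fourier amplitude at omega

amp_weak = drive_response(D=0.01)   # too little noise: no interwell hopping, weak response
amp_opt = drive_response(D=0.12)    # near-matched noise: hopping synchronises with the drive
```

The SPA curves in the paper are this amplitude-versus-noise dependence, computed for the tumor-immune dynamics with bounded correlated noises and delays.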
Johnson, Paul; Howell, Sydney; Duck, Peter
2017-08-13
A mixed financial/physical partial differential equation (PDE) can optimize the joint earnings of a single wind power generator (WPG) and a generic energy storage device (ESD). Physically, the PDE includes constraints on the ESD's capacity, efficiency and maximum speeds of charge and discharge. There is a mean-reverting daily stochastic cycle for WPG power output. Physically, energy can only be produced or delivered at finite rates. All suppliers must commit hourly to a finite rate of delivery C, which is a continuous control variable that is changed hourly. Financially, we assume heavy 'system balancing' penalties in continuous time for deviations of the output rate from the commitment C. Also, the electricity spot price follows a mean-reverting stochastic cycle with a strong evening peak, when system balancing penalties also peak. Hence the economic goal of the WPG plus ESD, at each decision point, is to maximize the expected net present value (NPV) of all earnings (arbitrage) minus the NPV of all expected system balancing penalties, along all financially/physically feasible future paths through state space. Given the capital costs for the various combinations of the physical parameters, the design and operating rules for a WPG plus ESD in a finite market may be jointly optimizable. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
NASA Astrophysics Data System (ADS)
Dehbi, Y.; Haunert, J.-H.; Plümer, L.
2017-10-01
3D city and building models according to CityGML encode the geometry and structure and semantically model relevant building parts such as doors, windows and balconies. Building information models support building design, construction and facility management. In contrast to CityGML, they also include objects which cannot be observed from the outside. Three-dimensional indoor models constitute a missing link between both worlds. Their derivation, however, is expensive. The automatic semantic interpretation of 3D point clouds of indoor environments is a methodically demanding task, and the data acquisition is costly and difficult; laser scanners and image-based methods require access to every room. Building on an approach which does not require an additional geometry acquisition of building indoors, we propose an attempt to fill the gap between 3D building models and building information models. Based on sparse observations such as the building footprint and room areas, 3D indoor models are generated using combinatorial and stochastic reasoning. The derived models are expanded with structures that are not observable a priori, such as electrical installations. Gaussian mixtures and linear and bi-linear constraints are used to represent the background knowledge and structural regularities. The derivation of hypothesised models is performed by stochastic reasoning using graphical models, Gauss-Markov models and MAP-estimators.
An observational method for fast stochastic X-ray polarimetry timing
NASA Astrophysics Data System (ADS)
Ingram, Adam R.; Maccarone, Thomas J.
2017-11-01
The upcoming launch of the first space-based X-ray polarimeter in ˜40 yr will provide powerful new diagnostic information to study accreting compact objects. In particular, analysis of rapid variability of the polarization degree and angle will provide the opportunity to probe the relativistic motions of material in the strong gravitational fields close to the compact objects, and enable new methods to measure black hole and neutron star parameters. However, polarization properties are measured in a statistical sense, and a statistically significant polarization detection requires a fairly long exposure, even for the brightest objects. Therefore, the sub-minute time-scales of interest are not accessible using a direct time-resolved analysis of polarization degree and angle. Phase-folding can be used for coherent pulsations, but not for stochastic variability such as quasi-periodic oscillations. Here, we introduce a Fourier method that enables statistically robust detection of stochastic polarization variability for arbitrarily short variability time-scales. Our method is analogous to commonly used spectral-timing techniques. We find that it should be possible in the near future to detect the quasi-periodic swings in polarization angle predicted by Lense-Thirring precession of the inner accretion flow. This is contingent on the mean polarization degree of the source being greater than ˜4-5 per cent, which is consistent with the best current constraints on Cygnus X-1 from the late 1970s.
NASA Astrophysics Data System (ADS)
Bäumer, Richard; Terrill, Richard; Wollnack, Simon; Werner, Herbert; Starossek, Uwe
2018-01-01
The twin rotor damper (TRD), an active mass damper, uses the centrifugal forces of two eccentrically rotating control masses. In the continuous rotation mode, the preferred mode of operation, the two eccentric control masses rotate with a constant angular velocity about two parallel axes, creating, under further operational constraints, a harmonic control force in a single direction. In previous theoretical work, it was shown that this mode of operation is effective for the damping of large, harmonic vibrations of a single degree of freedom (SDOF) oscillator. In this paper, the SDOF oscillator is assumed to be affected by a stochastic excitation force and consequently responds with several frequencies. Therefore, the TRD must deviate from the continuous rotation mode to ensure the anti-phasing between the harmonic control force of the TRD and the velocity of the SDOF oscillator. It is found that the required deviation from the continuous rotation mode increases with lower vibration amplitude. Therefore, an operation of the TRD in the continuous rotation mode is no longer efficient below a specific vibration-amplitude threshold. To additionally dampen vibrations below this threshold, the TRD can switch to another, more energy-consuming mode of operation, the swinging mode in which both control masses oscillate about certain angular positions. A power-efficient control algorithm is presented which uses the continuous rotation mode for large vibrations and the swinging mode for small vibrations. To validate the control algorithm, numerical and experimental investigations are performed for a single degree of freedom oscillator under stochastic excitation. Using both modes of operation, it is shown that the control algorithm is effective for the cases of free and stochastically forced vibrations of arbitrary amplitude.
Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission
NASA Astrophysics Data System (ADS)
Huang, Yuechen; Li, Haiyang
2018-06-01
This paper presents the reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, the modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle comprising SO, reliability assessment and constraint update is repeated in the RBSO until the reliability requirements on constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.
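The "nonintrusive" part of nonintrusive PCE can be shown in one dimension: the model is treated as a black box and only evaluated at quadrature nodes. The sketch below projects y = exp(ξ), ξ ~ N(0,1), onto the first three probabilists' Hermite polynomials with 3-point Gauss-Hermite quadrature; the test model and truncation order are our own choices, not the paper's entry-dynamics setting.

```python
import math

# Non-intrusive PCE sketch: nodes/weights are the 3-point Gauss-Hermite
# rule for a standard normal; H0=1, H1=x, H2=x^2-1 are probabilists'
# Hermite polynomials with norms E[H_k^2] = 1, 1, 2.
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]
H = [lambda x: 1.0, lambda x: x, lambda x: x * x - 1.0]
norms = [1.0, 1.0, 2.0]

model = math.exp   # black-box model, evaluated only at the nodes
coeffs = [sum(w * model(x) * H[k](x) for w, x in zip(weights, nodes)) / norms[k]
          for k in range(3)]
pce_mean = coeffs[0]   # the mean of the output is the 0th coefficient
```

The exact mean is E[exp(ξ)] = exp(1/2) ≈ 1.6487; the 3-point surrogate already lands within about 1% of it, which is why PCE can replace expensive Monte Carlo sampling inside an optimization loop.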
Integrated Control Using the SOFFT Control Structure
NASA Technical Reports Server (NTRS)
Halyo, Nesim
1996-01-01
The need for integrated/constrained control systems has become clearer as advanced aircraft introduced new coupled subsystems such as new propulsion subsystems with thrust vectoring and new aerodynamic designs. In this study, we develop an integrated control design methodology which accommodates constraints among subsystem variables while using the Stochastic Optimal Feedforward/Feedback Control Technique (SOFFT) thus maintaining all the advantages of the SOFFT approach. The Integrated SOFFT Control methodology uses a centralized feedforward control and a constrained feedback control law. The control thus takes advantage of the known coupling among the subsystems while maintaining the identity of subsystems for validation purposes and the simplicity of the feedback law to understand the system response in complicated nonlinear scenarios. The Variable-Gain Output Feedback Control methodology (including constant gain output feedback) is extended to accommodate equality constraints. A gain computation algorithm is developed. The designer can set the cross-gains between two variables or subsystems to zero or another value and optimize the remaining gains subject to the constraint. An integrated control law is designed for a modified F-15 SMTD aircraft model with coupled airframe and propulsion subsystems using the Integrated SOFFT Control methodology to produce a set of desired flying qualities.
Evaluation of Electric Power Procurement Strategies by Stochastic Dynamic Programming
NASA Astrophysics Data System (ADS)
Saisho, Yuichi; Hayashi, Taketo; Fujii, Yasumasa; Yamaji, Kenji
In deregulated electricity markets, the role of a distribution company is to purchase electricity from the wholesale electricity market at randomly fluctuating prices and to provide it to its customers at a given fixed price. The company therefore has to bear the risk stemming from the uncertainties of electricity prices and/or demand fluctuation instead of the customers. The way to avoid this risk is to make a bilateral contract with generating companies or to install its own power generation facility, which creates the need for a method of forming an optimal strategy for electric power procurement. In this context, this research proposes a mathematical method, based on stochastic dynamic programming and additionally considering the characteristics of the start-up cost of electric power generation facilities, to evaluate strategies that combine bilateral contracts and auto-generation with the company's own facility for procuring electric power in a deregulated electricity market. We first propose two approaches to solving the stochastic dynamic programming problem: a Monte Carlo simulation method, and a finite difference method that derives the solution of a partial differential equation for the total procurement cost of electric power. Finally, we discuss the influence of price uncertainty on optimal power procurement strategies.
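The contract-versus-spot decision can be sketched as a stochastic dynamic program solved by backward induction. The prices, two-state Markov spot-price model and 24-hour horizon below are illustrative assumptions of ours, not the paper's model (which works in continuous time with start-up costs).

```python
# Toy SDP: each hour, buy at the fixed bilateral-contract price or at the
# stochastic spot price, whose regime follows a two-state Markov chain.
STATES = ("low", "high")
SPOT = {"low": 20.0, "high": 60.0}   # $/MWh spot price in each regime
CONTRACT = 35.0                       # fixed bilateral-contract price
P = {"low": {"low": 0.8, "high": 0.2},
     "high": {"low": 0.3, "high": 0.7}}
T = 24  # planning horizon in hours

# Backward induction on the Bellman equation:
#   V[t][s] = min_a { cost(a, s) + E[ V[t+1][s'] | s ] }
V = {T: {s: 0.0 for s in STATES}}
policy = {}
for t in range(T - 1, -1, -1):
    V[t] = {}
    for s in STATES:
        cont = sum(P[s][s2] * V[t + 1][s2] for s2 in STATES)
        spot_cost = SPOT[s] + cont
        contract_cost = CONTRACT + cont
        if spot_cost <= contract_cost:
            V[t][s], policy[(t, s)] = spot_cost, "spot"
        else:
            V[t][s], policy[(t, s)] = contract_cost, "contract"
```

With no start-up costs the optimal policy here is myopic (spot when cheap, contract when the regime is high); adding a start-up cost to the state, as the paper does, is exactly what makes the dynamic program non-trivial.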
Joint Chance-Constrained Dynamic Programming
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob
2012-01-01
This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by a standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
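The key reformulation step, writing a joint chance constraint P(any-stage violation) ≤ Δ as an expectation over an indicator function, can be checked numerically on a toy process of our own (a 1-D random walk that must stay inside a band; none of these numbers come from the paper).

```python
import random

random.seed(0)
# Joint chance constraint: P( any_t |x_t| > LIMIT ) <= Delta.
# Rewritten as E[ 1{ some stage violates } ], it becomes an expectation,
# which is the form a dynamic-programming cost function can absorb
# once the problem is dualized.
T, LIMIT, N = 10, 2.0, 20000

def run_episode():
    x, violated = 0.0, 0
    for _ in range(T):
        x += random.gauss(0, 0.5)   # uncontrolled disturbance
        if abs(x) > LIMIT:
            violated = 1            # indicator of joint failure
    return violated

p_fail = sum(run_episode() for _ in range(N)) / N   # Monte Carlo estimate
```

In the paper this expectation enters the Lagrangian with a dual multiplier that is tuned by root finding until the estimated failure probability meets the bound; the sketch only shows the indicator-expectation identity itself.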
King, Laurie A; Horak, Fay B
2009-01-01
This article introduces a new framework for therapists to develop an exercise program to delay mobility disability in people with Parkinson disease (PD). Mobility, or the ability to efficiently navigate and function in a variety of environments, requires balance, agility, and flexibility, all of which are affected by PD. This article summarizes recent research identifying how constraints on mobility specific to PD, such as rigidity, bradykinesia, freezing, poor sensory integration, inflexible program selection, and impaired cognitive processing, limit mobility in people with PD. Based on these constraints, a conceptual framework for exercises to maintain and improve mobility is presented. An example of a constraint-focused agility exercise program, incorporating movement principles from tai chi, kayaking, boxing, lunges, agility training, and Pilates exercises, is presented. This new constraint-focused agility exercise program is based on a strong scientific framework and includes progressive levels of sensorimotor, resistance, and coordination challenges that can be customized for each patient while maintaining fidelity. Principles for improving mobility presented here can be incorporated into an ongoing or long-term exercise program for people with PD. PMID:19228832
A heuristic constraint programmed planner for deep space exploration problems
NASA Astrophysics Data System (ADS)
Jiang, Xiao; Xu, Rui; Cui, Pingyuan
2017-10-01
In recent years, the increasing numbers of scientific payloads and growing constraints on the probe have made constraint processing technology a hotspot in the deep space planning field. In the planning procedure, the ordering of variables and values plays a vital role. In this paper we present two heuristic ordering methods, one for variables and one for values, and on this basis propose a graphplan-like constraint-programmed planner. In the planner we convert the traditional constraint satisfaction problem (CSP) to a time-tagged form with different levels. Inspired by the most-constrained-first principle in CSP solving, the variable heuristic is based on the number of unassigned variables in each constraint, and the value heuristic on the completion degree of the support set. Simulation experiments show that the proposed planner is effective and that its performance is competitive with other kinds of planners.
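The most-constrained-first principle that inspires the planner's variable heuristic is the classic minimum-remaining-values (MRV) ordering for CSP backtracking. A minimal sketch, on a map-colouring instance of our own invention rather than a planning problem:

```python
# Minimal backtracking CSP solver with "most constrained first" (MRV)
# variable ordering: always branch on the variable with the fewest
# consistent values left, pruning the search tree early.
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q", "NSW"], "Q": ["NT", "SA", "NSW"],
             "NSW": ["SA", "Q"]}
colors = ["red", "green", "blue"]

def consistent(var, val, assignment):
    return all(assignment.get(n) != val for n in neighbors[var])

def solve(assignment):
    unassigned = [v for v in neighbors if v not in assignment]
    if not unassigned:
        return assignment
    # MRV: pick the variable with the fewest remaining consistent values.
    var = min(unassigned,
              key=lambda v: sum(consistent(v, c, assignment) for c in colors))
    for val in colors:
        if consistent(var, val, assignment):
            result = solve({**assignment, var: val})
            if result:
                return result
    return None

solution = solve({})
```

The paper's heuristic counts unassigned variables per constraint in a time-tagged CSP rather than remaining domain values, but the pruning rationale is the same.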
Stochastic search in structural optimization - Genetic algorithms and simulated annealing
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1993-01-01
An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic-search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.
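The advantage the abstract claims for stochastic search, escaping local optima in a multimodal design space, is easy to see in a minimal simulated annealing loop. The objective and cooling schedule below are our own illustrative choices, not from the paper.

```python
import math, random

random.seed(1)
# Simulated annealing on a 1-D multimodal function: uphill moves are
# accepted with Boltzmann probability exp(-delta/temp), so the search
# can cross barriers that trap a pure descent method.
def f(x):
    return x * x + 10 * math.sin(3 * x)   # several local minima

x = 4.0            # start in the basin of a poor local minimum
best = x
temp = 5.0
for step in range(5000):
    cand = x + random.gauss(0, 0.5)
    delta = f(cand) - f(x)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = cand
    if f(x) < f(best):
        best = x
    temp *= 0.999   # geometric cooling schedule
```

A genetic algorithm replaces the single walker with a population plus crossover and mutation, but both methods trade gradient information for a higher probability of locating the global optimum, exactly the point the abstract makes.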
Stochastic-Strength-Based Damage Simulation of Ceramic Matrix Composite Laminates
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Mital, Subodh K.; Murthy, Pappu L. N.; Bednarcyk, Brett A.; Pineda, Evan J.; Bhatt, Ramakrishna T.; Arnold, Steven M.
2016-01-01
The Finite Element Analysis-Micromechanics Analysis Code/Ceramics Analysis and Reliability Evaluation of Structures (FEAMAC/CARES) program was used to characterize and predict the progressive damage response of silicon-carbide-fiber-reinforced reaction-bonded silicon nitride matrix (SiC/RBSN) composite laminate tensile specimens. Studied were unidirectional laminates [0]_8, [10]_8, [45]_8, and [90]_8; cross-ply laminates [0_2/90_2]_s; angle-ply laminates [+45_2/-45_2]_s; double-edge-notched [0]_8 laminates; and central-hole laminates. Results correlated well with the experimental data. This work was performed as a validation and benchmarking exercise of the FEAMAC/CARES program. FEAMAC/CARES simulates stochastic-based discrete-event progressive damage of ceramic matrix composite and polymer matrix composite material structures. It couples three software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating-unit-cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC, and Abaqus is used to model the overall composite structure. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events that incrementally progress until ultimate structural failure.
Economic consequences of paratuberculosis control in dairy cattle: A stochastic modeling study.
Smith, R L; Al-Mamun, M A; Gröhn, Y T
2017-03-01
The cost of paratuberculosis to dairy herds, through decreased milk production, early culling, and poor reproductive performance, has been well-studied. The benefit of control programs, however, has been debated. A recent stochastic compartmental model for paratuberculosis transmission in US dairy herds was modified to predict herd net present value (NPV) over 25 years in herds of 100 and 1000 dairy cattle with endemic paratuberculosis at initial prevalence of 10% and 20%. Control programs were designed by combining 5 tests (none, fecal culture, ELISA, PCR, or calf testing), 3 test-related culling strategies (all test-positive, high-positive, or repeated positive), 2 test frequencies (annual and biannual), 3 hygiene levels (standard, moderate, or improved), and 2 cessation decisions (testing ceased after 5 negative whole-herd tests or testing continued). Stochastic dominance was determined for each herd scenario; no control program was fully dominant for maximizing herd NPV in any scenario. Use of the ELISA test was generally preferred in all scenarios, but no paratuberculosis control was highly preferred for the small herd with 10% initial prevalence and was frequently preferred in other herd scenarios. Based on their effect on paratuberculosis alone, hygiene improvements were not found to be as cost-effective as test-and-cull strategies in most circumstances. Global sensitivity analysis found that economic parameters, such as the price of milk, had more influence on NPV than control program-related parameters. We conclude that paratuberculosis control can be cost effective, and multiple control programs can be applied for equivalent economic results. Copyright © 2017 Elsevier B.V. All rights reserved.
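The paper's economic comparison rests on Monte Carlo estimation of discounted herd net present value under alternative control programs. The toy below reproduces only that skeleton; every parameter (milk price distribution, testing cost, prevalence dynamics) is invented for illustration and is far simpler than the paper's compartmental transmission model.

```python
import random, statistics

random.seed(2)
# Hedged toy: NPV over 25 years of "no control" vs "test-and-cull",
# with a stochastic milk price as the dominant economic parameter
# (the paper's global sensitivity analysis found exactly that).
YEARS, DISCOUNT, RUNS = 25, 0.95, 2000

def npv(control):
    total = 0.0
    prevalence = 0.10
    for year in range(YEARS):
        milk_price = random.gauss(0.35, 0.05)              # $/kg, stochastic
        revenue = 9000 * milk_price * (1 - 0.2 * prevalence)  # losses scale with prevalence
        cost = 25.0 if control else 0.0                    # annual per-cow program cost
        total += (revenue - cost) * DISCOUNT ** year
        if control:
            prevalence = max(0.0, prevalence - 0.01)       # culling lowers prevalence
        # without control, prevalence stays endemic
    return total

npv_none = statistics.mean(npv(False) for _ in range(RUNS))
npv_ctrl = statistics.mean(npv(True) for _ in range(RUNS))
```

Comparing the two NPV distributions run-by-run, rather than only their means, is what the paper's stochastic-dominance test formalizes.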
Chen, Cong; Zhu, Ying; Zeng, Xueting; Huang, Guohe; Li, Yongping
2018-07-15
Contradictions of increasing carbon mitigation pressure and electricity demand have been aggravated significantly. A heavy emphasis is placed on analyzing the carbon mitigation potential of electric energy systems via tradable green certificates (TGC). This study proposes a tradable green certificate (TGC)-fractional fuzzy stochastic robust optimization (FFSRO) model through integrating fuzzy possibilistic, two-stage stochastic and stochastic robust programming techniques into a linear fractional programming framework. The framework can address uncertainties expressed as stochastic and fuzzy sets, and effectively deal with issues of multi-objective tradeoffs between the economy and environment. The proposed model is applied to the major economic center of China, the Beijing-Tianjin-Hebei region. The generated results of proposed model indicate that a TGC mechanism is a cost-effective pathway to cope with carbon reduction and support the sustainable development pathway of electric energy systems. In detail, it can: (i) effectively promote renewable power development and reduce fossil fuel use; (ii) lead to higher CO 2 mitigation potential than non-TGC mechanism; and (iii) greatly alleviate financial pressure on the government to provide renewable energy subsidies. The TGC-FFSRO model can provide a scientific basis for making related management decisions of electric energy systems. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehrotra, Sanjay
2016-09-07
The support from this grant resulted in seven published papers and a technical report. Two papers are published in SIAM J. on Optimization [87, 88]; two papers are published in IEEE Transactions on Power Systems [77, 78]; one paper is published in Smart Grid [79]; one paper is published in Computational Optimization and Applications [44]; and one in INFORMS J. on Computing [67]. The works in [44, 67, 87, 88] were funded primarily by this DOE grant. The applied papers in [77, 78, 79] were also supported through a subcontract from the Argonne National Lab. We start by presenting our main research results on the scenario generation problem in Sections 1-2. We present our algorithmic results on interior point methods for convex optimization problems in Section 3. We describe a new 'central' cutting surface algorithm developed for solving large scale convex programming problems (as is the case with our proposed research) with a semi-infinite number of constraints in Section 4. In Sections 5-6 we present our work on two application problems of interest to DOE.
Runway Operations Planning: A Two-Stage Heuristic Algorithm
NASA Technical Reports Server (NTRS)
Anagnostakis, Ioannis; Clarke, John-Paul
2003-01-01
The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two-stage heuristic algorithm for solving the Runway Operations Planning (ROP) problem. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft, by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.
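The two-stage decomposition can be sketched on a four-aircraft toy instance of our own: stage 1 ranks orderings of departure weight classes by makespan, and stage 2 fills the winning class slots with specific flights. Exhaustive search stands in for the paper's Branch & Bound, and the separation times and flight data are invented.

```python
import itertools

# Stage 1 data: wake-separation seconds between consecutive weight
# classes (H = heavy, L = light); values are illustrative only.
SEPARATION = {("H", "H"): 90, ("H", "L"): 120, ("L", "H"): 60, ("L", "L"): 60}

def makespan(class_seq):
    return sum(SEPARATION[pair] for pair in zip(class_seq, class_seq[1:]))

# Stage 1: rank class sequences (two heavies, two lights) by throughput.
sequences = set(itertools.permutations("HHLL"))
best_seq = min(sequences, key=makespan)

# Stage 2 data: flight -> (weight class, requested pushback time in s).
flights = {"AA1": ("H", 0), "BA2": ("H", 60), "LH3": ("L", 30), "AF4": ("L", 90)}

def delay(assignment):
    t, total = 0, 0
    for slot_class, flight in zip(best_seq, assignment):
        total += abs(t - flights[flight][1])   # deviation from request
        t += 90                                # nominal slot spacing
    return total

# Populate the class slots: only class-feasible assignments are candidates.
candidates = [p for p in itertools.permutations(flights)
              if all(flights[f][0] == c for f, c in zip(p, best_seq))]
best_assignment = min(candidates, key=delay)
```

Separating "which class pattern" from "which aircraft in which slot" shrinks the second-stage integer program dramatically, which is the point of the heuristic.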
Control of Networked Traffic Flow Distribution - A Stochastic Distribution System Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hong; Aziz, H M Abdul; Young, Stan
Networked traffic flow is a common scenario for urban transportation, where the distribution of vehicle queues either at controlled intersections or on highway segments reflects the smoothness of the traffic flow in the network. At signalized intersections, the traffic queues are governed by traffic signal control settings, and effective traffic light control would realize both smooth traffic flow and minimal fuel consumption. Funded by the Energy Efficient Mobility Systems (EEMS) program of the Vehicle Technologies Office of the US Department of Energy, we performed a preliminary investigation of the modelling and control framework in the context of an urban network of signalized intersections. Specifically, we developed recursive input-output traffic queueing models. Queue formation can be modeled as a stochastic process where the number of vehicles entering each intersection is a random number. Further, we proposed a preliminary B-spline stochastic model for a one-way single-lane corridor traffic system based on the theory of stochastic distribution control. It has been shown that the developed stochastic model provides the optimal probability density function (PDF) of the traffic queueing length as a dynamic function of the traffic signal setting parameters. Based upon such a stochastic distribution model, we proposed a preliminary closed-loop framework for stochastic distribution control of the traffic queueing system, making the traffic queueing length PDF follow a target PDF that potentially realizes a smooth traffic flow distribution in the corridor concerned.
Do rational numbers play a role in selection for stochasticity?
Sinclair, Robert
2014-01-01
When a given tissue must, to be able to perform its various functions, consist of different cell types, each fairly evenly distributed and with specific probabilities, then there are at least two quite different developmental mechanisms which might achieve the desired result. Let us begin with the case of two cell types, and first imagine that the proportion of numbers of cells of these types should be 1:3. Clearly, a regular structure composed of repeating units of four cells, three of which are of the dominant type, will easily satisfy the requirements, and a deterministic mechanism may lend itself to the task. What if, however, the proportion should be 10:33? The same simple, deterministic approach would now require a structure of repeating units of 43 cells, and this certainly seems to require a far more complex and potentially prohibitive deterministic developmental program. Stochastic development, replacing regular units with random distributions of given densities, might not be evolutionarily competitive in comparison with the deterministic program when the proportions should be 1:3, but it has the property that, whatever developmental mechanism underlies it, its complexity does not need to depend very much upon target cell densities at all. We are immediately led to speculate that proportions which correspond to fractions with large denominators (such as the 33 of 10/33) may be more easily achieved by stochastic developmental programs than by deterministic ones, and this is the core of our thesis: that stochastic development may tend to occur more often in cases involving rational numbers with large denominators. To be imprecise: that simple rationality and determinism belong together, as do irrationality and randomness.
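The article's arithmetic can be made concrete: the repeating-unit size a deterministic program needs is the denominator of the reduced target fraction, while a stochastic program needs only a single sampling probability regardless of that denominator. The sketch below uses Python's exact fractions; the sampling experiment is our own illustration.

```python
from fractions import Fraction
import random

# Deterministic mechanism: an a:b mixture of two cell types needs a
# repeating unit of (a+b)/gcd(a,b) cells -- the reduced denominator.
def unit_size(a, b):
    return Fraction(a, a + b).denominator

unit_simple = unit_size(1, 3)    # 1:3  -> units of 4 cells
unit_large = unit_size(10, 33)   # 10:33 -> units of 43 cells

# Stochastic mechanism: the same coin-flip program hits any target
# density; only the probability parameter changes, not the complexity.
random.seed(3)
p = Fraction(10, 43)
sample = sum(random.random() < p for _ in range(100000)) / 100000
```

The contrast is exactly the article's thesis: the deterministic program's complexity grows with the denominator (4 versus 43), while the stochastic program's does not.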
Research Breathes New Life Into Senior Travel Program.
ERIC Educational Resources Information Center
Blazey, Michael
1986-01-01
A survey of older citizens concerning travel interests revealed constraints to participation in a travel program. A description is given of how research on attitudes and life styles indicated ways in which these constraints could be lessened. (JD)
Stochastic dynamic programming illuminates the link between environment, physiology, and evolution.
Mangel, Marc
2015-05-01
I describe how stochastic dynamic programming (SDP), a method for stochastic optimization that evolved from the work of Hamilton and Jacobi on variational problems, allows us to connect the physiological state of organisms, the environment in which they live, and how evolution by natural selection acts on trade-offs that all organisms face. I first derive the two canonical equations of SDP. These are valuable because although they apply to no system in particular, they share commonalities with many systems (as do frictionless springs). After that, I show how we used SDP in insect behavioral ecology. I describe the puzzles that needed to be solved, the SDP equations we used to solve the puzzles, and the experiments that we used to test the predictions of the models. I then briefly describe two other applications of SDP in biology: first, understanding the developmental pathways followed by steelhead trout in California, and second, skipped spawning by Norwegian cod. I close with lessons learned and advice for the young mathematical biologists.
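A minimal instance of the canonical SDP equation in behavioral ecology: an organism with energy reserves x chooses a foraging patch each period, trading predation risk against food gain, and fitness is computed by backward induction from a terminal condition. All parameter values below are illustrative assumptions, not from the paper.

```python
# Canonical SDP: F(x, t) = max over patches of
#   (1 - risk) * [ p_food * F(x', t+1) + (1 - p_food) * F(x'', t+1) ]
# with a metabolic cost of 1 reserve unit per period.
T, XMAX = 10, 5
# patch: (predation risk, probability of finding food, energy gain)
patches = [(0.00, 0.0, 0), (0.02, 0.4, 2), (0.10, 0.8, 3)]

def fitness():
    F = [[0.0] * (XMAX + 1) for _ in range(T + 1)]
    F[T] = [1.0 if x > 0 else 0.0 for x in range(XMAX + 1)]  # survive with reserves
    for t in range(T - 1, -1, -1):
        for x in range(XMAX + 1):
            if x == 0:
                continue  # starved: fitness stays 0
            best = 0.0
            for risk, p_food, gain in patches:
                x_fed = min(x - 1 + gain, XMAX)
                x_starved = x - 1
                val = (1 - risk) * (p_food * F[t + 1][x_fed]
                                    + (1 - p_food) * F[t + 1][x_starved])
                best = max(best, val)
            F[t][x] = best
    return F

F = fitness()
```

The optimal patch typically depends on both reserves and time, which is how SDP links physiological state to behavior: a nearly starved animal accepts the risky, food-rich patch that a well-fed one would avoid.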
Automatic data partitioning on distributed memory multicomputers. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Gupta, Manish
1992-01-01
Distributed-memory parallel computers are increasingly being used to provide high levels of performance for scientific applications. Unfortunately, such machines are not very easy to program. A number of research efforts seek to alleviate this problem by developing compilers that take over the task of generating communication. The communication overheads and the extent of parallelism exploited in the resulting target program are determined largely by the manner in which data is partitioned across different processors of the machine. Most of the compilers provide no assistance to the programmer in the crucial task of determining a good data partitioning scheme. A novel approach is presented, the constraints-based approach, to the problem of automatic data partitioning for numeric programs. In this approach, the compiler identifies some desirable requirements on the distribution of various arrays being referenced in each statement, based on performance considerations. These desirable requirements are referred to as constraints. For each constraint, the compiler determines a quality measure that captures its importance with respect to the performance of the program. The quality measure is obtained through static performance estimation, without actually generating the target data-parallel program with explicit communication. Each data distribution decision is taken by combining all the relevant constraints. The compiler attempts to resolve any conflicts between constraints such that the overall execution time of the parallel program is minimized. This approach has been implemented as part of a compiler called Paradigm, that accepts Fortran 77 programs, and specifies the partitioning scheme to be used for each array in the program. We have obtained results on some programs taken from the Linpack and Eispack libraries, and the Perfect Benchmarks. 
These results are quite promising, and demonstrate the feasibility of automatic data partitioning for a significant class of scientific application programs with regular computations.
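The constraints-based selection step can be sketched in miniature: each array has candidate distributions, each constraint carries a quality measure (estimated time saved), and the compiler picks the combination maximising total quality. Arrays, distribution options and weights below are invented for illustration; Paradigm's real quality measures come from static performance estimation.

```python
import itertools

# Candidate distributions per array, and constraints of the form
# "statement S runs communication-free if A has dist d1 and B has d2",
# each weighted by its estimated performance benefit.
arrays = ["A", "B"]
options = ["row", "col"]
constraints = [
    ("A", "row", "B", "row", 10.0),   # e.g. A[i][j] = B[i][j] wants aligned rows
    ("A", "row", "B", "col", 4.0),    # e.g. A[i][j] = B[j][i] wants transposed dists
]

def quality(scheme):
    return sum(q for a1, d1, a2, d2, q in constraints
               if scheme[a1] == d1 and scheme[a2] == d2)

# Resolve conflicting constraints by maximising total satisfied quality.
best = max((dict(zip(arrays, combo))
            for combo in itertools.product(options, repeat=len(arrays))),
           key=quality)
```

The two constraints here conflict on B's distribution; the heavier one wins, which mirrors how the compiler resolves conflicts to minimise overall execution time (on realistic programs the search is heuristic rather than exhaustive).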
Probabilistic dual heuristic programming-based adaptive critic
NASA Astrophysics Data System (ADS)
Herzallah, Randa
2010-02-01
Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct from current approaches, the proposed probabilistic DHP AC method takes the uncertainties of the forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.
Economic efficiency and risk character of fire management programs, Northern Rocky Mountains
Thomas J. Mills; Frederick W. Bratten
1988-01-01
Economic efficiency and risk have long been considered during the selection of fire management programs and the design of fire management policies. The risk considerations were largely subjective, however, and efficiency has only recently been calculated for selected portions of the fire management program. The highly stochastic behavior of the fire system and the high...
Learning abstract visual concepts via probabilistic program induction in a Language of Thought.
Overlan, Matthew C; Jacobs, Robert A; Piantadosi, Steven T
2017-11-01
The ability to learn abstract concepts is a powerful component of human cognition. It has been argued that variable binding is the key element enabling this ability, but the computational aspects of variable binding remain poorly understood. Here, we address this shortcoming by formalizing the Hierarchical Language of Thought (HLOT) model of rule learning. Given a set of data items, the model uses Bayesian inference to infer a probability distribution over stochastic programs that implement variable binding. Because the model makes use of symbolic variables as well as Bayesian inference and programs with stochastic primitives, it combines many of the advantages of both symbolic and statistical approaches to cognitive modeling. To evaluate the model, we conducted an experiment in which human subjects viewed training items and then judged which test items belong to the same concept as the training items. We found that the HLOT model provides a close match to human generalization patterns, significantly outperforming two variants of the Generalized Context Model, one variant based on string similarity and the other based on visual similarity using features from a deep convolutional neural network. Additional results suggest that variable binding happens automatically, implying that binding operations do not add complexity to people's hypothesized rules. Overall, this work demonstrates that a cognitive model combining symbolic variables with Bayesian inference and stochastic program primitives provides a new perspective for understanding people's patterns of generalization. Copyright © 2017 Elsevier B.V. All rights reserved.
Essays on variational approximation techniques for stochastic optimization problems
NASA Astrophysics Data System (ADS)
Deride Silva, Julio A.
This dissertation presents five essays on approximation and modeling techniques, based on variational analysis, applied to stochastic optimization problems. It is divided into two parts, where the first is devoted to equilibrium problems and maxinf optimization, and the second corresponds to two essays in statistics and uncertainty modeling. Stochastic optimization lies at the core of this research as we were interested in relevant equilibrium applications that contain an uncertain component, and the design of a solution strategy. In addition, every stochastic optimization problem relies heavily on the underlying probability distribution that models the uncertainty. We studied these distributions, in particular, their design process and theoretical properties such as their convergence. Finally, the last aspect of stochastic optimization that we covered is the scenario creation problem, in which we described a procedure based on a probabilistic model to create scenarios for the applied problem of power estimation of renewable energies. In the first part, Equilibrium problems and maxinf optimization, we considered three Walrasian equilibrium problems: from economics, we studied a stochastic general equilibrium problem in a pure exchange economy, described in Chapter 3, and a stochastic general equilibrium with financial contracts, in Chapter 4; finally from engineering, we studied an infrastructure planning problem in Chapter 5. We stated these problems as belonging to the maxinf optimization class and, in each instance, we provided an approximation scheme based on the notion of lopsided convergence and non-concave duality. This strategy is the foundation of the augmented Walrasian algorithm, whose convergence is guaranteed by lopsided convergence, that was implemented computationally, obtaining numerical results for relevant examples. 
The second part, Essays about statistics and uncertainty modeling, contains two essays covering a convergence problem for a sequence of estimators, and a problem for creating probabilistic scenarios on renewable energies estimation. In Chapter 7 we re-visited one of the "folk theorems" in statistics, where a family of Bayes estimators under 0-1 loss functions is claimed to converge to the maximum a posteriori estimator. This assertion is studied under the scope of the hypo-convergence theory, and the density functions are included in the class of upper semicontinuous functions. We conclude this chapter with an example in which the convergence does not hold true, and we provided sufficient conditions that guarantee convergence. The last chapter, Chapter 8, addresses the important topic of creating probabilistic scenarios for solar power generation. Scenarios are a fundamental input for the stochastic optimization problem of energy dispatch, especially when incorporating renewables. We proposed a model designed to capture the constraints induced by physical characteristics of the variables based on the application of an epi-spline density estimation along with a copula estimation, in order to account for partial correlations between variables.
MCdevelop - a universal framework for Stochastic Simulations
NASA Astrophysics Data System (ADS)
Slawinska, M.; Jadach, S.
2011-03-01
We present MCdevelop, a universal computer framework for developing and exploiting the wide class of Stochastic Simulations (SS) software. This powerful universal SS software development tool has been derived from a series of scientific projects for precision calculations in high energy physics (HEP), which feature a wide range of functionality in the SS software needed for advanced precision Quantum Field Theory calculations for the past LEP experiments and for the ongoing LHC experiments at CERN, Geneva. MCdevelop is a "spin-off" product of HEP to be exploited in other areas, while it will still serve to develop new SS software for HEP experiments. Typically SS involve independent generation of large sets of random "events", often requiring considerable CPU power. Since SS jobs usually do not share memory, they are easy to parallelize. Efficient development, testing and parallel running of SS software requires a convenient framework to develop software source code, deploy and monitor batch jobs, and merge and analyse results from multiple parallel jobs, even before the production runs are terminated. Throughout the years of development of stochastic simulations for HEP, a sophisticated framework featuring all the above mentioned functionality has been implemented. MCdevelop represents its latest version, written mostly in C++ (GNU compiler gcc). It uses Autotools to build binaries (optionally managed within the KDevelop 3.5.3 Integrated Development Environment (IDE)). It uses the open-source ROOT package for histogramming, graphics and the mechanism of persistency for the C++ objects. MCdevelop helps to run multiple parallel jobs on any computer cluster with NQS-type batch system.
Program summary
Program title: MCdevelop
Catalogue identifier: AEHW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 48 136
No. of bytes in distributed program, including test data, etc.: 355 698
Distribution format: tar.gz
Programming language: ANSI C++
Computer: Any computer system or cluster with C++ compiler and UNIX-like operating system.
Operating system: Most UNIX systems, Linux. The application programs were thoroughly tested under Ubuntu 7.04, 8.04 and CERN Scientific Linux 5.
Has the code been vectorised or parallelised?: Tools (scripts) for optional parallelisation on a PC farm are included.
RAM: 500 bytes
Classification: 11.3
External routines: ROOT package version 5.0 or higher (http://root.cern.ch/drupal/).
Nature of problem: Developing any type of stochastic simulation program for high energy physics and other areas.
Solution method: Object Oriented programming in C++ with added persistency mechanism, batch scripts for running on PC farms, and Autotools.
Scale-invariance underlying the logistic equation and its social applications
NASA Astrophysics Data System (ADS)
Hernando, A.; Plastino, A.
2013-01-01
On the basis of dynamical principles we i) advance a derivation of the Logistic Equation (LE), widely employed (among multiple applications) in the simulation of population growth, and ii) demonstrate that scale-invariance and a mean-value constraint are necessary and sufficient conditions for obtaining it. We also generalize the LE to multi-component systems and show that the above dynamical mechanisms underlie a large number of scale-free processes. Examples are presented regarding city-populations, diffusion in complex networks, and popularity of technological products, all of them obeying the multi-component logistic equation in either a stochastic or a deterministic way.
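The logistic equation the authors derive and generalize has a standard population-dynamics reading. A minimal simulation sketch (all parameter values below are arbitrary illustrations, not from the paper) showing both the deterministic and a simple stochastic variant:

```python
import random

def logistic_step(x, r, K, dt=0.01):
    """One Euler step of the logistic equation dx/dt = r*x*(1 - x/K)."""
    return x + dt * r * x * (1 - x / K)

def simulate(x0, r, K, steps, noise=0.0, seed=0):
    """Simulate the logistic equation; a nonzero 'noise' level adds
    multiplicative fluctuations, giving a stochastic variant."""
    rng = random.Random(seed)
    x = x0
    traj = [x]
    for _ in range(steps):
        x = logistic_step(x, r, K)
        if noise:
            x *= 1.0 + noise * rng.gauss(0.0, 1.0)
        x = max(x, 0.0)   # populations stay non-negative
        traj.append(x)
    return traj

# Deterministic run: the trajectory saturates at the carrying capacity K.
traj = simulate(x0=1.0, r=2.0, K=100.0, steps=2000)
```

With noise=0 the trajectory converges to K; the stochastic variant fluctuates around it, which is the either/or behavior the abstract mentions.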
2003-04-01
any of the P interfering sources, and [H_kt^(1) ... H_kt^(P)]^T is defined below. The P-variate vector ε_kt = [ε_kt^(1), ..., ε_kt^(P)]^T consists of complex waveforms radiated by...line. More precisely, the (i, j)th element of the matrix H_kt is a complex 4×4 coefficient which is practically constant over the kth PRI, and is a...multivariate auto-regressive (AR) model of order n: Y_kt + Σ_{j=1}^{n} B_j Y_{k-j,t} = ε_kt (25). In the above equation, the B_j are the M-variate matrices which are the
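The order-n multivariate AR model in equation (25), Y_kt + Σ_j B_j Y_{k-j,t} = ε_kt, can be simulated directly by rearranging for Y_kt. Below is a minimal pure-Python sketch; the AR(1) coefficient matrix is an invented stable example, not a value from the snippet:

```python
import random

def simulate_var(B, steps, M, seed=0):
    """Simulate Y_k = -sum_j B[j] @ Y_{k-j} + eps_k, i.e. eq. (25) rearranged.
    B is a list of n MxM coefficient matrices; eps_k is i.i.d. Gaussian noise."""
    rng = random.Random(seed)
    n = len(B)
    history = [[0.0] * M for _ in range(n)]  # zero initial conditions
    out = []
    for _ in range(steps):
        eps = [rng.gauss(0.0, 1.0) for _ in range(M)]
        y = [eps[i] - sum(B[j][i][m] * history[-1 - j][m]
                          for j in range(n) for m in range(M))
             for i in range(M)]
        history.append(y)
        out.append(y)
    return out

# AR(1) example with M = 2 and a stable (hypothetical) coefficient matrix:
# Y1_k = 0.5*Y1_{k-1} + noise;  Y2_k = 0.3*Y2_{k-1} + noise.
B1 = [[-0.5, 0.0], [0.0, -0.3]]
ys = simulate_var([B1], steps=500, M=2)
```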
Grand unification scale primordial black holes: consequences and constraints.
Anantua, Richard; Easther, Richard; Giblin, John T
2009-09-11
A population of very light primordial black holes which evaporate before nucleosynthesis begins is unconstrained unless the decaying black holes leave stable relics. We show that gravitons Hawking radiated from these black holes would source a substantial stochastic background of high-frequency gravitational waves (10^12 Hz or more) in the present Universe. These black holes may lead to a transient period of matter-dominated expansion. In this case the primordial Universe could be temporarily dominated by large clusters of "Hawking stars" and the resulting gravitational wave spectrum is independent of the initial number density of primordial black holes.
Duality in non-linear programming
NASA Astrophysics Data System (ADS)
Jeyalakshmi, K.
2018-04-01
In this paper we consider duality and converse duality for a programming problem involving convex objective and constraint functions with finite dimensional range. We do not assume any constraint qualification. The dual is presented by reducing the problem to a standard Lagrange multiplier problem.
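The Lagrange multiplier reduction the abstract refers to can be made concrete. The following is the generic convex-programming form of the primal/dual pair (a standard sketch, not the paper's exact formulation, which notably dispenses with constraint qualifications):

```latex
\begin{aligned}
&\text{Primal:} && \min_{x}\; f(x) \quad \text{s.t.} \quad g(x) \le 0,\\
&\text{Lagrangian:} && L(x,\lambda) = f(x) + \lambda^{\top} g(x), \qquad \lambda \ge 0,\\
&\text{Dual:} && \max_{\lambda \ge 0}\; \inf_{x}\, L(x,\lambda),\\
&\text{Weak duality:} && \max_{\lambda \ge 0} \inf_{x} L(x,\lambda)
  \;\le\; \min_{x:\, g(x) \le 0} f(x).
\end{aligned}
```

Converse duality then asks when a dual solution recovers a primal solution with equal objective value.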
The sequence relay selection strategy based on stochastic dynamic programming
NASA Astrophysics Data System (ADS)
Zhu, Rui; Chen, Xihao; Huang, Yangchao
2017-07-01
Relay-assisted (RA) networks with relay node selection are an effective way to improve channel capacity and convergence performance. However, most existing research on relay selection does not consider the statistical channel state information or the selection cost. This shortcoming limits the performance and applicability of RA networks in practical scenarios. To overcome this drawback, a sequence relay selection strategy (SRSS) is proposed, and its performance upper bound is also analyzed in this paper. Furthermore, to make SRSS more practical, a novel threshold determination algorithm based on stochastic dynamic programming (SDP) is given to work with SRSS. Numerical results are also presented to exhibit the performance of SRSS with SDP.
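The abstract does not spell out the SDP threshold computation, but threshold-based sequential selection is typically derived by backward induction: accept the current relay if its gain exceeds the expected value of continuing. A generic sketch (exponential channel gains and a fixed probing cost are assumptions for illustration, not the paper's SRSS algorithm):

```python
import random

def thresholds(num_relays, sample_gain, cost, n_mc=20000, seed=0):
    """Backward-induction acceptance thresholds for sequential relay selection.
    V = expected reward when continuing to probe the remaining relays; the
    current relay is accepted iff its gain exceeds V. Gain distribution and
    cost model are illustrative assumptions."""
    rng = random.Random(seed)
    samples = [sample_gain(rng) for _ in range(n_mc)]
    V = 0.0  # after the last relay there is nothing left to probe
    th = []
    for _ in range(num_relays):
        V = sum(max(g, V) for g in samples) / n_mc - cost
        th.append(V)
    th.reverse()  # th[k] = continuation value when probing relay k
    return th

# Five relays, unit-mean exponential gains, hypothetical probing cost 0.05.
th = thresholds(5, lambda r: r.expovariate(1.0), cost=0.05)
# Thresholds decrease as fewer relays remain, so later relays are accepted
# more readily -- the usual shape of an optimal-stopping rule.
```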
Factors leading to different viability predictions for a grizzly bear data set
Mills, L.S.; Hayes, S.G.; Wisdom, M.J.; Citta, J.; Mattson, D.J.; Murphy, K.
1996-01-01
Population viability analysis programs are being used increasingly in research and management applications, but there has not been a systematic study of the congruence of different program predictions based on a single data set. We performed such an analysis using four population viability analysis computer programs: GAPPS, INMAT, RAMAS/AGE, and VORTEX. The standardized demographic rates used in all programs were generalized from hypothetical increasing and decreasing grizzly bear (Ursus arctos horribilis) populations. Idiosyncrasies of input format for each program led to minor differences in intrinsic growth rates that translated into striking differences in estimates of extinction rates and expected population size. In contrast, the addition of demographic stochasticity, environmental stochasticity, and inbreeding costs caused only a small divergence in viability predictions. However, the addition of density dependence caused large deviations between the programs despite our best attempts to use the same density-dependent functions. Population viability programs differ in how density dependence is incorporated, and the necessary functions are difficult to parameterize accurately. Thus, we recommend that unless data clearly suggest a particular density-dependent model, predictions based on population viability analysis should include at least one scenario without density dependence. Further, we describe output metrics that may differ between programs; development of future software could benefit from standardized input and output formats across different programs.
Ennis, Erin J; Foley, Joe P
2016-07-15
A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach
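The constant-peak-width ("gradient") case lends itself to a compact Monte Carlo sketch of the stochastic approach: drop m component positions uniformly on a normalized retention axis and count the fraction of trials in which every adjacent pair is at least one peak width apart. This is an illustrative simplification, not the authors' exact procedure:

```python
import random

def p_separation(m, peak_capacity, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that m randomly positioned
    components are all resolved, for a given peak capacity. Assumes constant
    peak width (the 'gradient' case) and uniform retention positions."""
    rng = random.Random(seed)
    min_gap = 1.0 / peak_capacity   # required spacing between adjacent peaks
    hits = 0
    for _ in range(trials):
        pos = sorted(rng.random() for _ in range(m))
        if all(b - a >= min_gap for a, b in zip(pos, pos[1:])):
            hits += 1
    return hits / trials

# For a fixed peak capacity, the success probability falls steeply with the
# number of components -- the qualitative trend the abstract describes.
p5 = p_separation(5, peak_capacity=100)
p20 = p_separation(20, peak_capacity=100)
```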
NASA Astrophysics Data System (ADS)
Srinivasan, Gopalakrishnan; Sengupta, Abhronil; Roy, Kaushik
2016-07-01
Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, the general purpose computing platforms and the custom hardware architectures implemented using standard CMOS technology, have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance driven long-term short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device to system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses.
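The stochastic-switching idea behind the MTJ synapse can be sketched abstractly: the probability of toggling the binary conductance state grows with the effective programming pulse, which in turn depends on pre/post spike-timing correlation. Everything below (the exponential switching model, the pulse mapping, and all constants) is a hypothetical illustration, not device data or the paper's exact plasticity rule:

```python
import math
import random

def switch_probability(pulse_width, tau):
    """Probability that the MTJ toggles its conductance state for a given
    programming pulse width (illustrative exponential model)."""
    return 1.0 - math.exp(-pulse_width / tau)

def stdp_update(weight, dt_spike, rng, tau_stdp=20.0, tau_dev=50.0):
    """Stochastic binary plasticity sketch: the closer the pre/post spike pair
    (small |dt_spike|, in ms), the wider the effective programming pulse and
    hence the higher the switching probability. All constants are hypothetical."""
    pulse = 100.0 * math.exp(-abs(dt_spike) / tau_stdp)  # hypothetical mapping
    if rng.random() < switch_probability(pulse, tau_dev):
        weight = 1 - weight  # binary synapse toggles between 0 and 1
    return weight

rng = random.Random(42)
w = stdp_update(0, dt_spike=1.0, rng=rng)   # strongly correlated spike pair
```

A real device would potentiate or depress depending on current polarity rather than simply toggle; the sketch only captures the probabilistic, binary nature of the update.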
Solving a Class of Stochastic Mixed-Integer Programs With Branch and Price
2006-01-01
a two-dimensional knapsack problem, but for a given m, the objective value g_i does not depend on the variance index v. This will be used in a final...optimization. Journal of Multicriteria Decision Analysis 11, 139-150 (2002). 29. Ford, L.R., Fulkerson, D.R.: A suggested computation for the maximal...for solution by a branch-and-price algorithm (B&P). We then survey a number of examples, and use a stochastic facility-location problem (SFLP) for a
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2017-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948
NASA Astrophysics Data System (ADS)
Wang, Ting; Plecháč, Petr
2017-12-01
Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
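The Schlögl model mentioned in the abstract is the canonical bistable reaction network, and sampling its trajectories uses the standard Gillespie stochastic simulation algorithm (SSA). A minimal single-replica sketch (the rate constants below are commonly used illustrative values, not necessarily the paper's):

```python
import random

def gillespie_schlogl(x0, t_end, seed=0):
    """Gillespie SSA for the Schlögl model:
    A + 2X -> 3X, 3X -> A + 2X, B -> X, X -> B, with buffered A and B."""
    rng = random.Random(seed)
    c1, c2, c3, c4 = 3e-7, 1e-4, 1e-3, 3.5   # illustrative rate constants
    n1, n2 = 1e5, 2e5                        # buffered copy numbers of A, B
    t, x = 0.0, x0
    traj = [(t, x)]
    while t < t_end:
        a = [c1 / 2 * n1 * x * (x - 1),        # A + 2X -> 3X   (x += 1)
             c2 / 6 * x * (x - 1) * (x - 2),   # 3X -> A + 2X   (x -= 1)
             c3 * n2,                          # B -> X         (x += 1)
             c4 * x]                           # X -> B         (x -= 1)
        a0 = sum(a)
        if a0 == 0:
            break
        t += rng.expovariate(a0)               # time to next reaction
        r = rng.random() * a0                  # pick which reaction fires
        if r < a[0]:
            x += 1
        elif r < a[0] + a[1]:
            x -= 1
        elif r < a[0] + a[1] + a[2]:
            x += 1
        else:
            x -= 1
        traj.append((t, x))
    return traj

traj = gillespie_schlogl(x0=250, t_end=5.0)
```

The parallel replica method of the paper runs many such replicas concurrently to accelerate sampling of the rare transitions between the two metastable states; this sketch is just the underlying single-chain simulator.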
Casein Kinase II Regulation of the Hot1 Transcription Factor Promotes Stochastic Gene Expression*
Burns, Laura T.; Wente, Susan R.
2014-01-01
In Saccharomyces cerevisiae, Hog1 MAPK is activated and induces a transcriptional program in response to hyperosmotic stress. Several Hog1-responsive genes exhibit stochastic transcription, resulting in cell-to-cell variability in mRNA and protein levels. However, the mechanisms governing stochastic gene activity are not fully defined. Here we uncover a novel role for casein kinase II (CK2) in the cellular response to hyperosmotic stress. CK2 interacts with and phosphorylates the Hot1 transcription factor; however, Hot1 phosphorylation is not sufficient for controlling the stochastic response. The CK2 protein itself is required to negatively regulate mRNA expression of Hot1-responsive genes and Hot1 enrichment at target promoters. Single-cell gene expression analysis reveals altered activation of Hot1-targeted STL1 in ck2 mutants, resulting in a bimodal to unimodal shift in expression. Together, this work reveals a novel CK2 function during the hyperosmotic stress response that promotes cell-to-cell variability in gene expression. PMID:24817120
Teaching People to Manage Constraints: Effects on Creative Problem-Solving
ERIC Educational Resources Information Center
Peterson, David R.; Barrett, Jamie D.; Hester, Kimberly S.; Robledo, Issac C.; Hougen, Dean F.; Day, Eric A.; Mumford, Michael D.
2013-01-01
Constraints often inhibit creative problem-solving. This study examined the impact of training strategies for managing constraints on creative problem-solving. Undergraduates, 218 in all, were asked to work through 1 to 4 self-paced instructional programs focused on constraint management strategies. The quality, originality, and elegance of…
Solution of Stochastic Capital Budgeting Problems in a Multidivisional Firm.
1980-06-01
linear programming with simple recourse (see, for example, Dantzig (9) or Ziemba (35)) and has been applied to capital budgeting problems with...New York, 1972. 34. Weingartner, H.M., Mathematical Programming and Analysis of Capital Budgeting Problems, Markham Pub. Co., Chicago, 1967. 35. Ziemba
Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.
Dhar, Amrit; Minin, Vladimir N
2017-05-01
Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.
Optimal route discovery for soft QOS provisioning in mobile ad hoc multimedia networks
NASA Astrophysics Data System (ADS)
Huang, Lei; Pan, Feng
2007-09-01
In this paper, we propose an optimal route discovery algorithm for ad hoc multimedia networks whose resources keep changing. First, we use stochastic models to measure network resource availability, based on information about the location and moving patterns of the nodes, as well as the link conditions between neighboring nodes. Then, for a multimedia packet flow to be transmitted from a source to a destination, we formulate the optimal soft-QoS provisioning problem as finding the best route that maximizes the probability of satisfying the desired QoS requirements in terms of maximum delay constraints. Based on the stochastic network resource model, we developed three approaches to solve the formulated problem: a centralized approach serving as the theoretical reference, a distributed approach that is more suitable for practical real-time deployment, and a distributed dynamic approach that utilizes updated time information to optimize the routing for each individual packet. Numerical results demonstrate that, using the route discovered by our distributed algorithm in a changing network environment, multimedia applications can achieve statistically better QoS.
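The core objective, maximizing the probability that a route's total delay meets a deadline, is easy to sketch with Monte Carlo over per-hop delay distributions. The exponential jitter model and all route numbers below are hypothetical stand-ins for the paper's stochastic link model:

```python
import random

def p_on_time(route_links, deadline, trials=5000, seed=0):
    """Monte Carlo estimate of the probability that a route's total delay
    meets the deadline. Each hop is (mean_delay, jitter); per-hop delay is
    modeled as mean plus exponential jitter (an illustrative assumption)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        delay = sum(mean + rng.expovariate(1.0 / jitter)
                    for mean, jitter in route_links)
        if delay <= deadline:
            ok += 1
    return ok / trials

# Two candidate routes with illustrative per-hop (mean, jitter) values.
route_a = [(10, 2), (12, 3)]        # two hops, larger jitter
route_b = [(8, 1), (9, 1), (8, 1)]  # three hops, tighter links
best = max([route_a, route_b], key=lambda r: p_on_time(r, deadline=30))
```

Soft-QoS route selection then picks the candidate with the highest on-time probability rather than the smallest mean delay; here the three-hop route wins despite having more hops because its jitter is smaller.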
Non-Pharmacological Countermeasure to Decrease Landing Sickness and Improve Functional Performance
NASA Technical Reports Server (NTRS)
Rosenberg, M. J. F.; Kreutzberg, G. A.; Galvan-Garza, R. C.; Mulavara, A. P.; Reschke, M. F.
2017-01-01
Upon return from long-duration spaceflight, 100% of crewmembers experience motion sickness (MS) symptoms. The interactions between crewmembers' adaptation to a gravitational transition, the performance decrements resulting from MS and/or use of promethazine (PMZ), and the constraints imposed by mission task demands could significantly challenge and limit an astronaut's ability to perform functional tasks during gravitational transitions. Stochastic resonance (SR) is a "noise benefit": adding noise to a system might increase the information it conveys. Stochastic vestibular stimulation (SVS), or low levels of noise applied to the vestibular system, improves balance and locomotor performance (Goel et al. 2015, Mulavara et al. 2011, 2015). In hemi-lesioned rat models, Samoudi et al. 2012 found that SVS increased GABA release on the lesioned, but not the intact, side. Activation of the GABA pathway is important in modulating MS and promoting adaptability (Cohen 2008) and was seen to reverse MS symptoms in rats after unilateral labyrinthectomy (Magnusson et al. 2000). Thus, SVS could be used to promote GABA pathways to reduce MS and promote adaptability, eliminating the need for PMZ or other performance-inhibiting drugs.
Where do the Field Plots Belong? A Multiple-Constraint Sampling Design for the BigFoot Project
NASA Astrophysics Data System (ADS)
Kennedy, R. E.; Cohen, W. B.; Kirschbaum, A. A.; Gower, S. T.
2002-12-01
A key component of a MODIS validation project is effective characterization of biophysical measures on the ground. Fine-grain ecological field measurements must be placed strategically to capture variability at the scale of the MODIS imagery. Here we describe the BigFoot project's revised sampling scheme, designed to simultaneously meet three important goals: capture landscape variability, avoid spatial autocorrelation between field plots, and minimize time and expense of field sampling. A stochastic process places plots in clumped constellations to reduce field sampling costs, while minimizing spatial autocorrelation. This stochastic process is repeated, creating several hundred realizations of plot constellations. Each constellation is scored and ranked according to its ability to match landscape variability in several Landsat-based spectral indices, and its ability to minimize field sampling costs. We show how this approach has recently been used to place sample plots at the BigFoot project's two newest study areas, one in a desert system and one in a tundra system. We also contrast this sampling approach to that already used at the four prior BigFoot project sites.
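The realization-and-ranking scheme described above can be sketched in a few lines: generate many clumped plot constellations, score each for landscape coverage minus sampling cost, and keep the best. The clump geometry, the one-dimensional stand-in for the Landsat spectral indices, and the cost weight below are all invented for illustration:

```python
import random

def make_constellation(n_clumps, plots_per_clump, spread, rng):
    """One realization: clumped plot locations in a unit-square study area."""
    plots = []
    for _ in range(n_clumps):
        cx, cy = rng.random(), rng.random()
        for _ in range(plots_per_clump):
            plots.append((cx + rng.uniform(-spread, spread),
                          cy + rng.uniform(-spread, spread)))
    return plots

def score(plots, landscape_values):
    """Toy score: coverage of landscape variability (a hypothetical spectral
    index sampled on a 10-cell grid) minus a travel-cost penalty."""
    sampled = [landscape_values[min(int(x * 10) % 10, 9)] for x, _ in plots]
    coverage = max(sampled) - min(sampled)
    travel = sum(abs(x1 - x2) + abs(y1 - y2)
                 for (x1, y1), (x2, y2) in zip(plots, plots[1:]))
    return coverage - 0.1 * travel

rng = random.Random(7)
landscape = [rng.random() for _ in range(10)]
realizations = [make_constellation(4, 5, 0.05, rng) for _ in range(200)]
best = max(realizations, key=lambda p: score(p, landscape))
```

The real design also penalizes spatial autocorrelation between plots; adding a minimum-separation term to the score would capture that third goal.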
NASA Astrophysics Data System (ADS)
Schaffrin, Burkhard
2008-02-01
In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.
NASA Astrophysics Data System (ADS)
Menafoglio, A.; Guadagnini, A.; Secchi, P.
2016-08-01
We address the problem of stochastic simulation of soil particle-size curves (PSCs) in heterogeneous aquifer systems. Unlike traditional approaches that focus solely on a few selected features of PSCs (e.g., selected quantiles), our approach considers the entire particle-size curve and can optionally include conditioning on available data. We rely on our prior work to model PSCs as cumulative distribution functions and interpret their density functions as functional compositions. We thus approximate the latter through an expansion over an appropriate basis of functions. This enables us to (a) effectively deal with the data dimensionality and constraints and (b) develop a simulation method for PSCs based upon a suitable and well-defined projection procedure. The new theoretical framework allows representing and reproducing the complete information content embedded in PSC data. As a first field application, we demonstrate the quality of unconditional and conditional simulations obtained with our methodology by considering a set of particle-size curves collected within a shallow alluvial aquifer in the Neckar river valley, Germany.
NASA Astrophysics Data System (ADS)
Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.
2012-10-01
A singular boundary value problem for a second-order linear integrodifferential equation with Volterra and non-Volterra integral operators is formulated and analyzed. The equation is defined on ℝ+, has a weak singularity at zero and a strong singularity at infinity, and depends on several positive parameters. Under natural constraints on the coefficients of the equation, existence and uniqueness theorems for this problem with given limit boundary conditions at singular points are proved, asymptotic representations of the solution are given, and an algorithm for its numerical determination is described. Numerical computations are performed and their interpretation is given. The problem arises in the study of the survival probability of an insurance company over infinite time (as a function of its initial surplus) in a dynamic insurance model that is a modification of the classical Cramer-Lundberg model with a stochastic process rate of premium under a certain investment strategy in the financial market. A comparative analysis of the results with those produced by the model with deterministic premiums is given.
Benedek, C; Descombes, X; Zerubia, J
2012-01-01
In this paper, we introduce a new probabilistic method which integrates building extraction with change detection in remotely sensed image pairs. A global optimization process attempts to find the optimal configuration of buildings, considering the observed data, prior knowledge, and interactions between the neighboring building parts. We present methodological contributions in three key issues: 1) We implement a novel object-change modeling approach based on Multitemporal Marked Point Processes, which simultaneously exploits low-level change information between the time layers and object-level building description to recognize and separate changed and unaltered buildings. 2) To answer the challenges of data heterogeneity in aerial and satellite image repositories, we construct a flexible hierarchical framework which can create various building appearance models from different elementary feature-based modules. 3) To simultaneously ensure the convergence, optimality, and computation complexity constraints raised by the increased data quantity, we adopt the quick Multiple Birth and Death optimization technique for change detection purposes, and propose a novel nonuniform stochastic object birth process which generates relevant objects with higher probability based on low-level image features.
Simulation of a proposed emergency outlet from Devils Lake, North Dakota
Vecchia, Aldo V.
2002-01-01
From 1993 to 2001, Devils Lake rose more than 25 feet, flooding farmland, roads, and structures around the lake and causing more than $400 million in damages in the Devils Lake Basin. In July 2001, the level of Devils Lake was at 1,448.0 feet above sea level, which was the highest lake level in more than 160 years. The lake could continue to rise to several feet above its natural spill elevation to the Sheyenne River (1,459 feet above sea level) in future years, causing extensive additional flooding in the basin and, in the event of an uncontrolled natural spill, downstream in the Red River of the North Basin as well. The outlet simulation model described in this report was developed to determine the potential effects of various outlet alternatives on the future lake levels and water quality of Devils Lake. Lake levels of Devils Lake are controlled largely by precipitation on the lake surface, evaporation from the lake surface, and surface inflow. For this study, a monthly water-balance model was developed to compute the change in total volume of Devils Lake, and a regression model was used to estimate monthly water-balance data on the basis of limited recorded data. Estimated coefficients for the regression model indicated fitted precipitation on the lake surface was greater than measured precipitation in most months, fitted evaporation from the lake surface was less than estimated evaporation in most months, and ungaged inflow was about 2 percent of gaged inflow in most months. Dissolved sulfate was considered to be the key water-quality constituent for evaluating the effects of a proposed outlet on downstream water quality.
Because large differences in sulfate concentrations existed among the various bays of Devils Lake, monthly water-balance data were used to develop detailed water and sulfate mass-balance models to compute changes in sulfate load for each of six major storage compartments in response to precipitation, evaporation, inflow, and outflow from each compartment. The storage compartments--five for Devils Lake and one for Stump Lake--were connected by bridge openings, culverts, or natural channels that restricted mixing between compartments. A numerical algorithm was developed to calculate inflow and outflow from each compartment. Sulfate loads for the storage compartments first were calculated using the assumptions that no interaction occurred between the bottom sediments and the water column and no wind- or buoyancy-induced mixing occurred between compartments. However, because the fitted sulfate loads did not agree with the estimated sulfate loads, which were obtained from recorded sulfate concentrations, components were added to the sulfate mass-balance model to account for the flux of sulfate between bottom sediments and the lake and for mixing between storage compartments. Mixing between compartments can occur during periods of open water because of wind and during periods of ice cover because of water-density differences between compartments. Sulfate loads calculated using the sulfate mass-balance model with sediment interaction and mixing between compartments closely matched sulfate loads computed from historical concentrations. The water and sulfate mass-balance models were used to calculate potential future lake levels and sulfate concentrations for Devils Lake and Stump Lake given potential future values of monthly precipitation, evaporation, and inflow. Potential future inputs were generated using a scenario approach and a stochastic approach. 
In the scenario approach, historical values of precipitation, evaporation, and inflow were repeated in the future for a particular sequence of historical years. In the stochastic approach, a statistical time-series model was developed to randomly generate potential future inputs. The scenario approach was used to evaluate the effectiveness of various outlet alternatives, and the stochastic approach was used to evaluate the hydrologic and water-quality effects of the potential outlet alternatives that were selected on the basis of the scenario analysis. Given potential future lake levels and sulfate concentrations generated using either the scenario or stochastic approach and potential future ambient flows and sulfate concentrations for the Sheyenne River receiving waters, daily outlet discharges could be calculated for virtually any outlet alternative. For the scenario approach, future ambient flows and sulfate concentrations for the Sheyenne River were generated using the same sequence of years used for generating water-balance data for Devils Lake. For the stochastic approach, a procedure was developed for generating daily Sheyenne River flows and sulfate concentrations that were "in-phase" with the generated water-balance data for Devils Lake. Simulation results for the scenario approach indicated that neither of the West Bay outlet alternatives provided effective flood-damage reduction without exceeding downstream water-quality constraints. However, both Pelican Lake outlet alternatives provided significant flood-damage reduction with only minor downstream water-quality changes. The most effective alternative for controlling rising lake levels was a Pelican Lake outlet with a 480-cubic-foot-per-second pump capacity and a 250-milligram-per-liter downstream sulfate constraint. However, this plan is costly because of the high pump capacity and the requirement of a control structure on Highway 19 to control the level of Pelican Lake. 
A less costly, though less effective, plan for flood-damage reduction is a Pelican Lake outlet with a 300-cubic-foot-per-second pump capacity and a 250-milligram-per-liter downstream sulfate constraint. The plan is less costly because the pump capacity is smaller and because the control structure on Highway 19 is not required. The less costly Pelican Lake alternative with a 450-milligram-per-liter downstream sulfate constraint rather than a 250-milligram-per-liter downstream sulfate constraint was identified by the U.S. Army Corps of Engineers as the preferred alternative for detailed design and engineering analysis. Simulation results for the stochastic approach indicated that the geologic history of lake-level fluctuations of Devils Lake for the past 2,500 years was consistent with a climatic history that consisted of two climate states--a wet state, similar to conditions during 1980-99, and a normal state, similar to conditions during 1950-78. The transition times between the wet and normal climatic periods occurred randomly. The average duration of the wet climatic periods was 20 years, and the average duration of the normal climatic periods was 120 years. The stochastic approach was used to generate 10,000 independent sequences of lake levels and sulfate concentrations for Devils Lake for water years 2001-50. Each trace began with the same starting conditions, and the duration of the current wet cycle was generated randomly for each trace. Each trace was generated for the baseline (natural) condition and for the Pelican Lake outlet with a 300-cubic-foot-per-second pump capacity and a 450-milligram-per-liter downstream sulfate constraint. The outlet significantly lowered the probabilities of future lake-level increases within the next 50 years and did not substantially increase the probabilities of reaching low lake levels or poor water-quality conditions during the same period.
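The monthly water-balance bookkeeping behind the Devils Lake model can be sketched in a few lines. This is a hypothetical, simplified form for illustration: the function name and arguments are invented, and only the report's headline findings (surface precipitation and evaporation scaling with lake area, ungaged inflow near 2 percent of gaged inflow) are carried over.

```python
def monthly_volume_change(precip_ft, evap_ft, lake_area_acres,
                          gaged_inflow_af, ungaged_frac=0.02):
    """Change in lake volume (acre-feet) over one month.

    Precipitation on and evaporation from the lake surface scale with
    lake area; ungaged inflow is taken as a fixed fraction (about 2%)
    of gaged inflow, as the fitted regression coefficients suggested.
    """
    surface_gain_af = (precip_ft - evap_ft) * lake_area_acres
    inflow_af = gaged_inflow_af * (1.0 + ungaged_frac)
    return surface_gain_af + inflow_af
```

A full model would also route outflow and sulfate load between the six storage compartments; those terms are omitted here.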
ERIC Educational Resources Information Center
Elsherif, Entisar
2017-01-01
This adaptive methodological inquiry explored the affordances and constraints of one TESOL teacher education program in Libya as a conflict zone. Data was collected through seven documents and 33 questionnaires. Questionnaires were gathered from the investigated program's teacher-educators, student-teachers, and graduates, who were in-service…
Investigation of air transportation technology at Princeton University, 1990-1991
NASA Technical Reports Server (NTRS)
Stengel, Robert F.
1991-01-01
The Air Transportation Technology Program at Princeton University is a program that emphasizes graduate and undergraduate student research. The program proceeded along six avenues during the past year: microburst hazards to aircraft, intelligent failure tolerant control, computer-aided heuristics for piloted flight, stochastic robustness of flight control systems, neural networks for flight control, and computer-aided control system design.
NASA Astrophysics Data System (ADS)
Lu, M.; Lall, U.
2013-12-01
In order to mitigate the impacts of climate change, proactive management strategies to operate reservoirs and dams are needed. A multi-time scale climate informed stochastic model is developed to optimize the operations for a multi-purpose single reservoir by simulating decadal, interannual, seasonal and sub-seasonal variability. We apply the model to a setting motivated by the largest multi-purpose dam in N. India, the Bhakhra reservoir on the Sutlej River, a tributary of the Indus. This leads to a focus on timing and amplitude of the flows for the monsoon and snowmelt periods. The flow simulations are constrained by multiple sources of historical data and GCM future projections, that are being developed through a NSF funded project titled 'Decadal Prediction and Stochastic Simulation of Hydroclimate Over Monsoon Asia'. The model presented is a multilevel, nonlinear programming model that aims to optimize the reservoir operating policy on a decadal horizon and the operation strategy on an updated annual basis. The model is hierarchical: two optimization models designated for different time scales are nested within one another like matryoshka dolls. The two optimization models have similar mathematical formulations with some modifications to meet the constraints within that time frame. The first level of the model is designated to provide optimization solution for policy makers to determine contracted annual releases to different uses with a prescribed reliability; the second level is a within-the-period (e.g., year) operation optimization scheme that allocates the contracted annual releases on a subperiod (e.g. monthly) basis, with additional benefit for extra release and penalty for failure. The model maximizes the net benefit of irrigation, hydropower generation and flood control in each of the periods. The model design thus facilitates the consistent application of weather and climate forecasts to improve operations of reservoir systems.
The decadal flow simulations are re-initialized every year with updated climate projections to improve the reliability of the operation rules for the next year, within which the seasonal operation strategies are nested. The multi-level structure can be repeated for monthly operation with weekly subperiods to take advantage of evolving weather forecasts and seasonal climate forecasts. As a result of the hierarchical structure, updates and adjustments can be made at sub-seasonal and even weather time scales. Given an ensemble of these scenarios, the McISH reservoir simulation-optimization model is able to derive the desired reservoir storage levels, including minimum and maximum, as a function of calendar date, and the associated release patterns. The multi-time scale approach allows adaptive management of water supplies acknowledging the changing risks, meeting both the objectives over the decade in expected value and controlling the near term and planning period risk through probabilistic reliability constraints. For the applications presented, the target season is the monsoon season from June to September. The model also includes a monthly flood volume forecast model, based on a Copula density fit to the monthly flow and the flood volume flow. This is used to guide dynamic allocation of the flood control volume given the forecasts.
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations including the unsampled location permits calculation of the conditional probability directly based on its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification to an initial estimated multivariate probability using lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher order marginal probability constraints as used in multiple point statistics. The theoretical framework is developed and illustrated with estimation and simulation examples.
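A minimal two-variable illustration of the iterative proportion fitting idea: starting from an initial joint table, alternately rescale it to honor each imposed marginal until the fit converges. In the article the constraints are bivariate marginals on a higher-dimensional distribution; in this two-variable sketch they collapse to row and column sums, and the sparse-matrix machinery is omitted.

```python
import numpy as np

def ipf_2d(p0, row_marg, col_marg, tol=1e-10, max_iter=500):
    """Iteratively rescale the joint table p0 until its row and
    column sums match the imposed marginal probabilities."""
    p = p0 / p0.sum()
    for _ in range(max_iter):
        p *= (row_marg / p.sum(axis=1))[:, None]   # enforce row sums
        p *= (col_marg / p.sum(axis=0))[None, :]   # enforce column sums
        if (np.abs(p.sum(axis=1) - row_marg).max() < tol
                and np.abs(p.sum(axis=0) - col_marg).max() < tol):
            break
    return p
```

Starting from a uniform table, the fitted joint is simply the product of the marginals; a non-uniform initial estimate preserves its interaction structure while honoring the constraints.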
Preemptive spatial competition under a reproduction-mortality constraint.
Allstadt, Andrew; Caraco, Thomas; Korniss, G
2009-06-21
Spatially structured ecological interactions can shape selection pressures experienced by a population's different phenotypes. We study spatial competition between phenotypes subject to antagonistic pleiotropy between reproductive effort and mortality rate. The constraint we invoke reflects a previous life-history analysis; the implied dependence indicates that although propagation and mortality rates both vary, their ratio is fixed. We develop a stochastic invasion approximation predicting that phenotypes with higher propagation rates will invade an empty environment (no biotic resistance) faster, despite their higher mortality rate. However, once population density approaches demographic equilibrium, phenotypes with lower mortality are favored, despite their lower propagation rate. We conducted a set of pairwise invasion analyses by simulating an individual-based model of preemptive competition. In each case, the phenotype with the lowest mortality rate and (via antagonistic pleiotropy) the lowest propagation rate qualified as evolutionarily stable among strategies simulated. This result, for a fixed propagation to mortality ratio, suggests that a selective response to spatial competition can extend the time scale of the population's dynamics, which in turn decelerates phenotypic evolution.
A Comparison of Techniques for Scheduling Earth-Observing Satellites
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2004-01-01
Scheduling observations by coordinated fleets of Earth Observing Satellites (EOS) involves large search spaces, complex constraints and poorly understood bottlenecks, conditions where evolutionary and related algorithms are often effective. However, there are many such algorithms and the best one to use is not clear. Here we compare multiple variants of the genetic algorithm: stochastic hill climbing, simulated annealing, squeaky wheel optimization and iterated sampling on ten realistically-sized EOS scheduling problems. Schedules are represented by a permutation (non-temporal ordering) of the observation requests. A simple deterministic scheduler assigns times and resources to each observation request in the order indicated by the permutation, discarding those that violate the constraints created by previously scheduled observations. Simulated annealing performed best, and random mutation outperformed a more 'intelligent' mutator. Furthermore, the best mutator, by a small margin, was a novel approach we call temperature dependent random sampling that makes large changes in the early stages of evolution and smaller changes towards the end of search.
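The permutation-plus-greedy-decoder scheme described above can be sketched as follows. This is a toy stand-in: a single capacity constraint replaces the real EOS timing and resource constraints, and the linear cooling schedule and parameters are invented for illustration.

```python
import math
import random

def schedule_value(perm, requests, capacity):
    """Greedy deterministic decoder: walk the permutation, keep a
    request if it still fits, discard it otherwise."""
    used = value = 0
    for i in perm:
        demand, reward = requests[i]
        if used + demand <= capacity:
            used += demand
            value += reward
    return value

def anneal(requests, capacity, steps=5000, t0=5.0, seed=1):
    """Simulated annealing over permutations with random swap moves."""
    rng = random.Random(seed)
    perm = list(range(len(requests)))
    cur = best = schedule_value(perm, requests, capacity)
    best_perm = perm[:]
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9        # linear cooling
        i, j = rng.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]      # random swap mutation
        new = schedule_value(perm, requests, capacity)
        if new >= cur or rng.random() < math.exp((new - cur) / t):
            cur = new
            if new > best:
                best, best_perm = new, perm[:]
        else:
            perm[i], perm[j] = perm[j], perm[i]  # reject: undo the swap
    return best, best_perm
```

Because the decoder repairs any permutation into a feasible schedule, the annealer never has to handle constraint violations explicitly, which is the main appeal of this representation.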
Bershtein, Shimon; Serohijos, Adrian W.R.; Shakhnovich, Eugene I.
2016-01-01
Bridging the gap between the molecular properties of proteins and organismal/population fitness is essential for understanding evolutionary processes. This task requires the integration of the several physical scales of biological organization, each defined by a distinct set of mechanisms and constraints, into a single unifying model. The molecular scale is dominated by the constraints imposed by the physico-chemical properties of proteins and their substrates, which give rise to trade-offs and epistatic (non-additive) effects of mutations. At the systems scale, biological networks modulate protein expression and can either buffer or enhance the fitness effects of mutations. The population scale is influenced by the mutational input, selection regimes, and stochastic changes affecting the size and structure of populations, which eventually determine the evolutionary fate of mutations. Here, we summarize the recent advances in theory, computer simulations, and experiments that advance our understanding of the links between various physical scales in biology. PMID:27810574
Bershtein, Shimon; Serohijos, Adrian Wr; Shakhnovich, Eugene I
2017-02-01
Bridging the gap between the molecular properties of proteins and organismal/population fitness is essential for understanding evolutionary processes. This task requires the integration of the several physical scales of biological organization, each defined by a distinct set of mechanisms and constraints, into a single unifying model. The molecular scale is dominated by the constraints imposed by the physico-chemical properties of proteins and their substrates, which give rise to trade-offs and epistatic (non-additive) effects of mutations. At the systems scale, biological networks modulate protein expression and can either buffer or enhance the fitness effects of mutations. The population scale is influenced by the mutational input, selection regimes, and stochastic changes affecting the size and structure of populations, which eventually determine the evolutionary fate of mutations. Here, we summarize the recent advances in theory, computer simulations, and experiments that advance our understanding of the links between various physical scales in biology. Copyright © 2016 Elsevier Ltd. All rights reserved.
Chance-Constrained System of Systems Based Operation of Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kargarian, Amin; Fu, Yong; Wu, Hongyu
In this paper, a chance-constrained system of systems (SoS) based decision-making approach is presented for stochastic scheduling of power systems encompassing active distribution grids. Based on the concept of SoS, the independent system operator (ISO) and distribution companies (DISCOs) are modeled as self-governing systems. These systems collaborate with each other to run the entire power system in a secure and economic manner. Each self-governing system accounts for its local reserve requirements and line flow constraints with respect to the uncertainties of load and renewable energy resources. A set of chance constraints are formulated to model the interactions between the ISO and DISCOs. The proposed model is solved by using analytical target cascading (ATC) method, a distributed optimization algorithm in which only a limited amount of information is exchanged between collaborative ISO and DISCOs. In this paper, a 6-bus system and a modified IEEE 118-bus system are studied to show the effectiveness of the proposed algorithm.
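For a single normally distributed uncertainty, a chance constraint of this kind admits a closed-form deterministic equivalent. The sketch below is the generic textbook reformulation, not this paper's SoS formulation: it computes the reserve margin needed so that P(load deviation <= reserve) >= 1 - epsilon.

```python
from statistics import NormalDist

def reserve_for_chance_constraint(mu, sigma, epsilon):
    """Smallest reserve r with P(deviation <= r) >= 1 - epsilon when
    the uncertain deviation is N(mu, sigma^2): the standard
    deterministic equivalent r = mu + sigma * z_{1-epsilon}."""
    z = NormalDist().inv_cdf(1.0 - epsilon)
    return mu + sigma * z
```

For example, allowing a 5% violation probability requires roughly 1.64 standard deviations of reserve above the mean deviation; such linearized equivalents are what make chance-constrained scheduling tractable inside distributed solvers like ATC.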
Development Optimization and Uncertainty Analysis Methods for Oil and Gas Reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ettehadtavakkol, Amin, E-mail: amin.ettehadtavakkol@ttu.edu; Jablonowski, Christopher; Lake, Larry
Uncertainty complicates the development optimization of oil and gas exploration and production projects, but methods have been devised to analyze uncertainty and its impact on optimal decision-making. This paper compares two methods for development optimization and uncertainty analysis: Monte Carlo (MC) simulation and stochastic programming. Two example problems for a gas field development and an oilfield development are solved and discussed to elaborate the advantages and disadvantages of each method. Development optimization involves decisions regarding the configuration of initial capital investment and subsequent operational decisions. Uncertainty analysis involves the quantification of the impact of uncertain parameters on the optimum design concept. The gas field development problem is designed to highlight the differences in the implementation of the two methods and to show that both methods yield the exact same optimum design. The results show that both MC optimization and stochastic programming provide unique benefits, and that the choice of method depends on the goal of the analysis. While the MC method generates more useful information, along with the optimum design configuration, the stochastic programming method is more computationally efficient in determining the optimal solution. Reservoirs comprise multiple compartments and layers with multiphase flow of oil, water, and gas. We present a workflow for development optimization under uncertainty for these reservoirs, and solve an example on the design optimization of a multicompartment, multilayer oilfield development.
Stochastic kinetic mean field model
NASA Astrophysics Data System (ADS)
Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.
2016-07-01
This paper introduces a new model for calculating the change in time of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach, however we have transformed that model into a stochastic approach by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to the lattice kinetic Monte Carlo (KMC). SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open-source program code is provided on the http://skmf.eu website). We will show that the result of one SKMF run may correspond to the average of several KMC runs. The number of KMC runs is inversely proportional to the amplitude square of the noise in SKMF. This makes SKMF an ideal tool also for statistical purposes.
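The step from KMF to SKMF is the dynamic Langevin noise added to the deterministic exchange terms. The sketch below is a heavily simplified one-dimensional caricature: linear inter-plane fluxes stand in for the full exchange rates of the published model, and all names and parameters are illustrative assumptions.

```python
import random

def skmf_step(c, dt=0.01, gamma=1.0, noise_amp=0.0, rng=random):
    """One explicit Euler step of a 1-D mean-field concentration
    profile, with additive Langevin noise on each inter-plane flux."""
    n = len(c)
    flux = [0.0] * (n + 1)                      # flux[i]: plane i-1 -> i
    for i in range(1, n):
        eta = rng.gauss(0.0, 1.0) * noise_amp   # dynamic Langevin noise
        flux[i] = gamma * (c[i - 1] - c[i]) + eta
    # concentration change = flux in minus flux out, clamped to [0, 1]
    return [min(1.0, max(0.0, c[i] + dt * (flux[i] - flux[i + 1])))
            for i in range(n)]
```

With noise_amp = 0 the step reduces to the deterministic mean-field update and conserves total concentration; with noise it produces fluctuating, KMC-like trajectories.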
Dal Palù, Alessandro; Pontelli, Enrico; He, Jing; Lu, Yonggang
2007-01-01
The paper describes a novel framework, constructed using Constraint Logic Programming (CLP) and parallelism, to determine the association between parts of the primary sequence of a protein and alpha-helices extracted from 3D low-resolution descriptions of large protein complexes. The association is determined by extracting constraints from the 3D information, regarding length, relative position and connectivity of helices, and solving these constraints with the guidance of a secondary structure prediction algorithm. Parallelism is employed to enhance performance on large proteins. The framework provides a fast, inexpensive alternative to determine the exact tertiary structure of unknown proteins.
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can outperform the feasible directions algorithm on design optimization problems with a large number of design variables and constraints. The second is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with a single constraint will reduce the cost of the total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
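The KS function itself is standard and compact enough to state: it aggregates many constraints g_i(x) <= 0 into one smooth, conservative envelope that an optimizer can treat as a single constraint. A minimal sketch follows; the shift by max(g_i) is a common numerical-stability trick, not something specific to this project.

```python
import math

def ks_aggregate(constraints, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of constraint values:
    KS(g) = max(g) + ln(sum_i exp(rho * (g_i - max(g)))) / rho.
    Always >= max(g_i), and approaches it as rho grows."""
    g_max = max(constraints)
    s = sum(math.exp(rho * (g - g_max)) for g in constraints)
    return g_max + math.log(s) / rho
```

Enforcing KS(g) <= 0 then guarantees every underlying g_i <= 0, at the cost of a slight conservatism controlled by rho.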
Boore, David M.
2000-01-01
A simple and powerful method for simulating ground motions is based on the assumption that the amplitude of ground motion at a site can be specified in a deterministic way, with a random phase spectrum modified such that the motion is distributed over a duration related to the earthquake magnitude and to distance from the source. This method of simulating ground motions often goes by the name "the stochastic method." It is particularly useful for simulating the higher-frequency ground motions of most interest to engineers, and it is widely used to predict ground motions for regions of the world in which recordings of motion from damaging earthquakes are not available. This simple method has been successful in matching a variety of ground-motion measures for earthquakes with seismic moments spanning more than 12 orders of magnitude. One of the essential characteristics of the method is that it distills what is known about the various factors affecting ground motions (source, path, and site) into simple functional forms that can be used to predict ground motions. SMSIM is a set of programs for simulating ground motions based on the stochastic method. This Open-File Report is a revision of an earlier report (Boore, 1996) describing a set of programs for simulating ground motions from earthquakes. The programs are based on modifications I have made to the stochastic method first introduced by Hanks and McGuire (1981). The report contains source codes, written in Fortran, and executables that can be used on a PC. Programs are included both for time-domain and for random vibration simulations. In addition, programs are included to produce Fourier amplitude spectra for the models used in the simulations and to convert shear velocity vs. depth into frequency-dependent amplification. The revision to the previous report is needed because the input and output files have changed significantly, and a number of new programs have been included in the set.
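The core of the stochastic method is a deterministic amplitude spectrum combined with a uniformly random phase spectrum. The sketch below shows just that kernel (numpy-based, with an invented function name); SMSIM additionally shapes the motion over a magnitude- and distance-dependent duration, which is omitted here.

```python
import numpy as np

def stochastic_motion(amp_spectrum, rng):
    """Real time series whose one-sided Fourier amplitude spectrum is
    prescribed deterministically and whose phases are random."""
    phases = rng.uniform(0.0, 2.0 * np.pi, size=amp_spectrum.size)
    spec = amp_spectrum * np.exp(1j * phases)
    spec[0] = 0.0                      # zero mean: no DC component
    return np.fft.irfft(spec)          # back to the time domain
```

Each call draws a new phase realization, so repeated calls give an ensemble of motions sharing the same target spectrum, which is exactly how the method is used to estimate ground-motion measures.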
Constraint Programming to Solve Maximal Density Still Life
NASA Astrophysics Data System (ADS)
Chu, Geoffrey; Petrie, Karen Elizabeth; Yorke-Smith, Neil
The Maximum Density Still Life problem fills a finite Game of Life board with a stable pattern of cells that has as many live cells as possible. Although simple to state, this problem is computationally challenging for any but the smallest sizes of board. Especially difficult is to prove that the maximum number of live cells has been found. Various approaches have been employed. The most successful are approaches based on Constraint Programming (CP). We describe the Maximum Density Still Life problem, introduce the concept of constraint programming, give an overview on how the problem can be modelled and solved with CP, and report on best-known results for the problem.
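Checking that a candidate pattern is a still life is the easy direction of the problem (the hard part, as noted above, is proving maximality). A brute-force stability check, assuming everything outside the finite board is dead:

```python
def is_still_life(board):
    """True if the finite 0/1 pattern is stable under Conway's rules,
    including the one-cell dead border (no births just outside)."""
    rows, cols = len(board), len(board[0])
    def alive(r, c):
        return 0 <= r < rows and 0 <= c < cols and board[r][c]
    for r in range(-1, rows + 1):
        for c in range(-1, cols + 1):
            n = sum(alive(r + dr, c + dc)
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            cur = alive(r, c)
            nxt = n == 3 or (cur and n == 2)   # birth / survival rules
            if nxt != cur:
                return False
    return True
```

A CP model encodes exactly these stability conditions as constraints and then maximizes the number of live cells, rather than checking a fixed pattern.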
A new model to predict weak-lensing peak counts. II. Parameter constraint strategies
NASA Astrophysics Data System (ADS)
Lin, Chieh-An; Kilbinger, Martin
2015-11-01
Context. Peak counts have been shown to be an excellent tool for extracting the non-Gaussian part of the weak lensing signal. Recently, we developed a fast stochastic forward model to predict weak-lensing peak counts. Our model is able to reconstruct the underlying distribution of observables for analysis. Aims: In this work, we explore and compare various strategies for constraining a parameter using our model, focusing on the matter density Ωm and the density fluctuation amplitude σ8. Methods: First, we examine the impact from the cosmological dependency of covariances (CDC). Second, we perform the analysis with the copula likelihood, a technique that makes a weaker assumption than does the Gaussian likelihood. Third, direct, non-analytic parameter estimations are applied using the full information of the distribution. Fourth, we obtain constraints with approximate Bayesian computation (ABC), an efficient, robust, and likelihood-free algorithm based on accept-reject sampling. Results: We find that neglecting the CDC effect enlarges parameter contours by 22% and that the covariance-varying copula likelihood is a very good approximation to the true likelihood. The direct techniques work well in spite of noisier contours. Concerning ABC, the iterative process converges quickly to a posterior distribution that is in excellent agreement with results from our other analyses. The time cost for ABC is reduced by two orders of magnitude. Conclusions: The stochastic nature of our weak-lensing peak count model allows us to use various techniques that approach the true underlying probability distribution of observables, without making simplifying assumptions. Our work can be generalized to other observables where forward simulations provide samples of the underlying distribution.
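The accept-reject ABC loop mentioned in the results is simple to sketch. The toy below infers a Gaussian mean from its sample mean; all names and the example setup are illustrative, not the weak-lensing pipeline (which compares peak-count summaries and refines the tolerance iteratively).

```python
import random
import statistics

def abc_rejection(observed, prior_sample, simulate, distance,
                  eps, n_draws, rng):
    """Plain accept-reject ABC: draw a parameter from the prior,
    simulate data, keep the draw when the summary distance to the
    observed statistic is within eps."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) <= eps:
            accepted.append(theta)
    return accepted

# Toy usage: recover the mean of a Gaussian from its sample mean.
rng = random.Random(42)
post = abc_rejection(
    observed=2.0,
    prior_sample=lambda r: r.uniform(-5.0, 5.0),
    simulate=lambda th, r: statistics.fmean(r.gauss(th, 1.0)
                                            for _ in range(50)),
    distance=lambda sim, obs: abs(sim - obs),
    eps=0.3, n_draws=2000, rng=rng)
```

No likelihood is ever evaluated; the accepted draws approximate the posterior, which is why the method suits forward models that only provide samples.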
Advanced data assimilation in strongly nonlinear dynamical systems
NASA Technical Reports Server (NTRS)
Miller, Robert N.; Ghil, Michael; Gauthiez, Francois
1994-01-01
Advanced data assimilation methods are applied to simple but highly nonlinear problems. The dynamical systems studied here are the stochastically forced double well and the Lorenz model. In both systems, linear approximation of the dynamics about the critical points near which regime transitions occur is not always sufficient to track their occurrence or nonoccurrence. Straightforward application of the extended Kalman filter yields mixed results. The ability of the extended Kalman filter to track transitions of the double-well system from one stable critical point to the other depends on the frequency and accuracy of the observations relative to the mean-square amplitude of the stochastic forcing. The ability of the filter to track the chaotic trajectories of the Lorenz model is limited to short times, as is the ability of strong-constraint variational methods. Examples are given to illustrate the difficulties involved, and qualitative explanations for these difficulties are provided. Three generalizations of the extended Kalman filter are described. The first is based on inspection of the innovation sequence, that is, the successive differences between observations and forecasts; it works very well for the double-well problem. The second, an extension to fourth-order moments, yields excellent results for the Lorenz model but will be unwieldy when applied to models with high-dimensional state spaces. A third, more practical method--based on an empirical statistical model derived from a Monte Carlo simulation--is formulated, and shown to work very well. Weak-constraint methods can be made to perform satisfactorily in the context of these simple models, but such methods do not seem to generalize easily to practical models of the atmosphere and ocean. In particular, it is shown that the equations derived in the weak variational formulation are difficult to solve conveniently for large systems.
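A scalar caricature of the first experiment helps make the extended Kalman filter concrete: tracking the stochastically forced double well dx = (x - x^3) dt + noise from direct observations. The Euler discretization, noise levels, and function name are illustrative assumptions, not the paper's exact setup.

```python
def ekf_double_well(y_obs, dt=0.1, q=0.2, r=0.1, x0=-1.0, p0=0.1):
    """Scalar extended Kalman filter for the double-well system with
    drift x - x^3 (stable wells at -1 and +1), observed directly with
    observation-error variance r."""
    x, p, est = x0, p0, []
    for y in y_obs:
        # forecast: Euler step of the drift, with linearized Jacobian
        f = 1.0 - 3.0 * x * x              # d/dx of (x - x^3)
        x = x + dt * (x - x ** 3)
        p = (1.0 + dt * f) ** 2 * p + q * dt
        # analysis: scalar Kalman update against the observation
        k = p / (p + r)
        x = x + k * (y - x)
        p = (1.0 - k) * p
        est.append(x)
    return est
```

With frequent, accurate observations the filter hops from the well at -1 to the well at +1; the paper's point is that with sparse or noisy observations relative to the forcing amplitude, this linearized tracking fails.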
Effluent trading in river systems through stochastic decision-making process: a case study.
Zolfagharipoor, Mohammad Amin; Ahmadi, Azadeh
2017-09-01
The objective of this paper is to provide an efficient framework for effluent trading in river systems. The proposed framework consists of two pessimistic and optimistic decision-making models to increase the executability of river water quality trading programs. The models used for this purpose are (1) stochastic fallback bargaining (SFB) to reach an agreement among wastewater dischargers and (2) stochastic multi-criteria decision-making (SMCDM) to determine the optimal treatment strategy. The Monte-Carlo simulation method is used to incorporate the uncertainty into the analysis. This uncertainty arises from the stochastic nature of the problem and from errors in the calculation of wastewater treatment costs. The results of the river water quality simulation model are used as inputs to the models. The proposed models are used in a case study on the Zarjoub River in northern Iran to determine the best solution for the pollution load allocation. The best treatment alternatives selected by each model are imported, as the initial pollution discharge permits, into an optimization model developed for trading of pollution discharge permits among pollutant sources. The results show that the SFB-based water pollution trading approach reduces the costs by US$ 14,834 while providing a relative consensus among pollutant sources. Meanwhile, the SMCDM-based water pollution trading approach reduces the costs by US$ 218,852, but it is less acceptable to pollutant sources. Therefore, it appears that giving due attention to stability, or in other words acceptability of pollution trading programs for all pollutant sources, is an essential element of their success.
Stochastic gravitational waves from cosmic string loops in scaling
NASA Astrophysics Data System (ADS)
Ringeval, Christophe; Suyama, Teruaki
2017-12-01
If cosmic strings are formed in the early universe, their associated loops emit gravitational waves during the whole cosmic history and contribute to the stochastic gravitational wave background at all frequencies. We provide a new estimate of the stochastic gravitational wave spectrum by considering a realistic cosmological loop distribution, in scaling, as it can be inferred from Nambu-Goto numerical simulations. Our result takes into account various effects neglected so far. We include both gravitational wave emission and backreaction effects on the loop distribution and show that they produce two distinct features in the spectrum. Concerning the string microstructure, in addition to the presence of cusps and kinks, we show that gravitational wave bursts created by the collision of kinks could dominate the signal for wiggly strings, a situation which may be favoured in the light of recent numerical simulations. In view of these new results, we propose four prototypical scenarios, within the margin of the remaining theoretical uncertainties, for which we derive the corresponding signal and estimate the constraints on the string tension put by both the LIGO and European Pulsar Timing Array (EPTA) observations. The least constrained of these scenarios is shown to have a string tension Gμ <= 7.2 × 10^-11 at 95% confidence. Smooth loops carrying two cusps per oscillation verify the two-sigma bound Gμ <= 1.0 × 10^-11, while the most constrained of all scenarios describes very kinky loops and satisfies Gμ <= 6.7 × 10^-14 at 95% confidence.
Potential landscape and flux field theory for turbulence and nonequilibrium fluid systems
NASA Astrophysics Data System (ADS)
Wu, Wei; Zhang, Feng; Wang, Jin
2018-02-01
Turbulence is a paradigm for far-from-equilibrium systems without time reversal symmetry. To capture the nonequilibrium irreversible nature of turbulence and investigate its implications, we develop a potential landscape and flux field theory for turbulent flow and more general nonequilibrium fluid systems governed by stochastic Navier-Stokes equations. We find that equilibrium fluid systems with time reversibility are characterized by a detailed balance constraint that quantifies the detailed balance condition. In nonequilibrium fluid systems with nonequilibrium steady states, detailed balance breaking leads directly to a pair of interconnected consequences, namely, the non-Gaussian potential landscape and the irreversible probability flux, forming a 'nonequilibrium trinity'. The nonequilibrium trinity characterizes the nonequilibrium irreversible essence of fluid systems with intrinsic time irreversibility and is manifested in various aspects of these systems. The nonequilibrium stochastic dynamics of fluid systems including turbulence with detailed balance breaking is shown to be driven by both the non-Gaussian potential landscape gradient and the irreversible probability flux, together with the reversible convective force and the stochastic stirring force. We reveal an underlying connection of the energy flux essential for turbulence energy cascade to the irreversible probability flux and the non-Gaussian potential landscape generated by detailed balance breaking. Using the energy flux as a center of connection, we demonstrate that the four-fifths law in fully developed turbulence is a consequence and reflection of the nonequilibrium trinity. We also show how the nonequilibrium trinity can affect the scaling laws in turbulence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.
2008-05-15
We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values, and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
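The MCMC machinery the authors rely on can be illustrated with a generic random-walk Metropolis sampler. This is a stand-in sketch, not the paper's SIP inversion: the forward model here is a trivial Gaussian mean (the real Cole-Cole forward model is a complex-valued impedance), and all tuning values are assumptions.

```python
import math
import random

def metropolis(data, n_iter=20000, step=0.3, sigma=1.0, seed=7):
    """Random-walk Metropolis sampler for the mean of Gaussian data under a
    flat prior. Proposals are accepted with probability min(1, L'/L), which
    is the same accept/reject core used to explore Cole-Cole posteriors."""
    rng = random.Random(seed)

    def log_like(mu):
        return -sum((d - mu) ** 2 for d in data) / (2 * sigma**2)

    mu = 0.0
    ll = log_like(mu)
    chain = []
    for _ in range(n_iter):
        cand = mu + rng.gauss(0, step)          # symmetric proposal
        ll_cand = log_like(cand)
        if math.log(rng.random()) < ll_cand - ll:
            mu, ll = cand, ll_cand              # accept
        chain.append(mu)                        # (repeat state if rejected)
    return chain

rng = random.Random(0)
data = [2.0 + rng.gauss(0, 1) for _ in range(50)]
chain = metropolis(data)
post_mean = sum(chain[5000:]) / len(chain[5000:])  # discard burn-in
```

The histogram of the retained chain approximates the marginal posterior, which is exactly the kind of global uncertainty information the abstract contrasts with a single Gauss-Newton point estimate.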
A simple model of bipartite cooperation for ecological and organizational networks.
Saavedra, Serguei; Reed-Tsochas, Felix; Uzzi, Brian
2009-01-22
In theoretical ecology, simple stochastic models that satisfy two basic conditions about the distribution of niche values and feeding ranges have proved successful in reproducing the overall structural properties of real food webs, using species richness and connectance as the only input parameters. Recently, more detailed models have incorporated higher levels of constraint in order to reproduce the actual links observed in real food webs. Here, building on previous stochastic models of consumer-resource interactions between species, we propose a highly parsimonious model that can reproduce the overall bipartite structure of cooperative partner-partner interactions, as exemplified by plant-animal mutualistic networks. Our stochastic model of bipartite cooperation uses simple specialization and interaction rules, and only requires three empirical input parameters. We test the bipartite cooperation model on ten large pollination data sets that have been compiled in the literature, and find that it successfully replicates the degree distribution, nestedness and modularity of the empirical networks. These properties are regarded as key to understanding cooperation in mutualistic networks. We also apply our model to an extensive data set of two classes of company engaged in joint production in the garment industry. Using the same metrics, we find that the network of manufacturer-contractor interactions exhibits similar structural patterns to plant-animal pollination networks. This surprising correspondence between ecological and organizational networks suggests that the simple rules of cooperation that generate bipartite networks may be generic, and could prove relevant in many different domains, ranging from biological systems to human society.
A Mars Exploration Discovery Program
NASA Astrophysics Data System (ADS)
Hansen, C. J.; Paige, D. A.
2000-07-01
The Mars Exploration Program should consider following the Discovery Program model. In the Discovery Program a team of scientists led by a PI develop the science goals of their mission, decide what payload achieves the necessary measurements most effectively, and then choose a spacecraft with the capabilities needed to carry the payload to the desired target body. The primary constraints associated with the Discovery missions are time and money. The proposer must convince reviewers that their mission has scientific merit and is feasible. Every Announcement of Opportunity has resulted in a collection of creative ideas that fit within advertised constraints. Following this model, a "Mars Discovery Program" would issue an Announcement of Opportunity for each launch opportunity with schedule constraints dictated by the launch window and fiscal constraints in accord with the program budget. All else would be left to the proposer to choose, based on the science the team wants to accomplish, consistent with the program theme of "Life, Climate and Resources". A proposer could propose a lander, an orbiter, a fleet of SCOUT vehicles or penetrators, an airplane, a balloon mission, a large rover, a small rover, etc. depending on what made the most sense for the science investigation and payload. As in the Discovery program, overall feasibility relative to cost, schedule and technology readiness would be evaluated and be part of the selection process.
A Mars Exploration Discovery Program
NASA Technical Reports Server (NTRS)
Hansen, C. J.; Paige, D. A.
2000-01-01
The Mars Exploration Program should consider following the Discovery Program model. In the Discovery Program a team of scientists led by a PI develop the science goals of their mission, decide what payload achieves the necessary measurements most effectively, and then choose a spacecraft with the capabilities needed to carry the payload to the desired target body. The primary constraints associated with the Discovery missions are time and money. The proposer must convince reviewers that their mission has scientific merit and is feasible. Every Announcement of Opportunity has resulted in a collection of creative ideas that fit within advertised constraints. Following this model, a "Mars Discovery Program" would issue an Announcement of Opportunity for each launch opportunity with schedule constraints dictated by the launch window and fiscal constraints in accord with the program budget. All else would be left to the proposer to choose, based on the science the team wants to accomplish, consistent with the program theme of "Life, Climate and Resources". A proposer could propose a lander, an orbiter, a fleet of SCOUT vehicles or penetrators, an airplane, a balloon mission, a large rover, a small rover, etc. depending on what made the most sense for the science investigation and payload. As in the Discovery program, overall feasibility relative to cost, schedule and technology readiness would be evaluated and be part of the selection process.
Symbolic PathFinder: Symbolic Execution of Java Bytecode
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Rungta, Neha
2010-01-01
Symbolic PathFinder (SPF) combines symbolic execution with model checking and constraint solving for automated test case generation and error detection in Java programs with unspecified inputs. In this tool, programs are executed on symbolic inputs representing multiple concrete inputs. Values of variables are represented as constraints generated from the analysis of Java bytecode. The constraints are solved using off-the-shelf solvers to generate test inputs guaranteed to achieve complex coverage criteria. SPF has been used successfully at NASA, in academia, and in industry.
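The "one test input per feasible path" idea behind SPF can be illustrated with a toy path explorer. This sketch is an assumption-laden cartoon: it enumerates a small input domain and records the first witness input for each execution path, where a real symbolic executor would instead solve the collected path constraints with an SMT solver.

```python
def paths_with_witnesses(prog, domain=range(-10, 11)):
    """Group candidate inputs by the execution path they exercise and keep
    one witness input per path -- a brute-force stand-in for symbolic
    execution plus constraint solving (toy program and domain)."""
    witnesses = {}
    for x in domain:
        trace = tuple(prog(x))              # the sequence of branches taken
        witnesses.setdefault(trace, x)      # first input reaching this path
    return witnesses

def example(x):
    # three feasible paths: x <= 3; x > 3 even; x > 3 odd
    if x > 3:
        yield "x>3"
        if x % 2 == 0:
            yield "even"
        else:
            yield "odd"
    else:
        yield "x<=3"

tests = paths_with_witnesses(example)
```

Each key of `tests` is a path (a conjunction of branch conditions) and each value is a concrete input covering it, mirroring how SPF turns path constraints into coverage-achieving test inputs.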
Annual Review of Research Under the Joint Services Electronics Program.
1978-10-01
Electronic Science at Texas Tech University. Specific topics covered include fault analysis, stochastic control and estimation, nonlinear control, multidimensional system theory, optical noise, and pattern recognition.
Nan, Feng; Moghadasi, Mohammad; Vakili, Pirooz; Vajda, Sandor; Kozakov, Dima; Ch. Paschalidis, Ioannis
2015-01-01
We propose a new stochastic global optimization method targeting protein docking problems. The method is based on finding a general convex polynomial underestimator to the binding energy function in a permissive subspace that possesses a funnel-like structure. We use Principal Component Analysis (PCA) to determine such permissive subspaces. The problem of finding the general convex polynomial underestimator is reduced into the problem of ensuring that a certain polynomial is a Sum-of-Squares (SOS), which can be done via semi-definite programming. The underestimator is then used to bias sampling of the energy function in order to recover a deep minimum. We show that the proposed method significantly improves the quality of docked conformations compared to existing methods. PMID:25914440
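The underestimator idea above can be sketched in one dimension. This is a deliberately simplified stand-in: the paper ensures convexity of a general polynomial via a Sum-of-Squares condition solved by semidefinite programming, whereas the sketch below brute-forces a convex quadratic q(x) = ax² + bx + c with a ≥ 0 over a small parameter grid; the test function and grids are invented.

```python
import math

def convex_underestimator(f, xs, curvatures, slopes):
    """Find the tightest convex quadratic q(x) = a x^2 + b x + c with
    q <= f on the sample points xs. Convexity is enforced by a >= 0; for
    fixed (a, b) the best offset is c = min_x [f(x) - a x^2 - b x], and we
    keep the (a, b) pair minimizing the total underestimation gap."""
    best = None
    for a in curvatures:                      # all candidates satisfy a >= 0
        for b in slopes:
            c = min(f(x) - a * x * x - b * x for x in xs)
            gap = sum(f(x) - (a * x * x + b * x + c) for x in xs)
            if best is None or gap < best[0]:
                best = (gap, a, b, c)
    _, a, b, c = best
    return a, b, c

# rugged energy-like function with a funnel bottom near x = 1 (illustrative)
f = lambda x: (x - 1) ** 2 + 0.3 * math.sin(8 * x)
xs = [i / 10 for i in range(-20, 41)]
a, b, c = convex_underestimator(
    f, xs,
    curvatures=[0.25 * k for k in range(0, 9)],
    slopes=[0.25 * k for k in range(-12, 13)],
)
```

The minimizer of the fitted quadratic then biases further sampling toward the funnel bottom, which is the role the underestimator plays in the docking method.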
TU-AB-303-01: A Feasibility Study for Dynamic Adaptive Therapy of Non-Small Cell Lung Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, M; Phillips, M
2015-06-15
Purpose: To compare plans for NSCLC optimized using Dynamic Adaptive Therapy (DAT) with conventional IMRT optimization. DAT adapts plans based on changes in the target volume by using dynamic programming techniques to incorporate expected changes into the optimization process. Information gathered during treatment, e.g. from CBCT, is incorporated into the optimization. Methods and materials: DAT is formulated using stochastic control formalism, which minimizes the total expected number of tumor cells at the end of a treatment course subject to uncertainty inherent in the tumor response and organs-at-risk (OAR) dose constraints. This formulation allows for a non-stationary dose distribution as well as non-stationary fractional dose as needed to achieve a series of optimal plans that are conformal to the tumor over time. Sixteen phantom cases with various sizes and locations of tumors, and OAR geometries were generated. Each case was planned with DAT and conventional IMRT (60Gy/30fx). Tumor volume change over time was obtained by using a daily MVCT-based, two-level cell population model. Monte Carlo simulations were performed for each treatment course to account for uncertainty in tumor response. The same OAR dose constraints were applied for both methods. The frequency of plan modification was varied to 1, 2, 5 (weekly), and 29 (daily). The final average tumor dose and OAR doses were compared to quantify the potential benefit of DAT. Results: The average tumor max, min, mean, and D95 resulting from DAT were 124.0–125.2%, 102.1–114.7%, 113.7–123.4%, and 102.0–115.9% (range dependent on the frequency of plan modification) of those from conventional IMRT. Cord max, esophagus max, lung mean, heart mean, and unspecified tissue D05 resulting from DAT were 84–102.4%, 99.8–106.9%, 66.9–85.6%, 58.2–78.8%, and 85.2–94.0% of those from conventional IMRT.
Conclusions: Significant tumor dose increase and OAR dose reduction, especially with parallel OARs with mean or dose-volume constraints, can be achieved using DAT.
The Development and Implementation of Outdoor-Based Secondary School Integrated Programs
ERIC Educational Resources Information Center
Comishin, Kelly; Dyment, Janet E.; Potter, Tom G.; Russell, Constance L.
2004-01-01
Four teachers share the challenges they faced when creating and running outdoor-focused secondary school integrated programs in British Columbia, Canada. The five most common challenges were funding constraints, insufficient support from administrators and colleagues, time constraints, liability and risk management, and inadequate skills and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hempling, Scott; Elefant, Carolyn; Cory, Karlynn
2010-01-01
This report details how state feed-in tariff (FIT) programs can be legally implemented and how they can comply with federal requirements. The report describes the federal constraints on FIT programs and identifies legal methods that are free of those constraints.
The Nature of Credit Constraints and Human Capital. NBER Working Paper No. 13912
ERIC Educational Resources Information Center
Lochner, Lance J.; Monge-Naranjo, Alexander
2008-01-01
This paper studies the nature and impact of credit constraints in the market for human capital. We derive endogenous constraints from the design of government student loan programs and from the limited repayment incentives in private lending markets. These constraints imply cross-sectional patterns for schooling, ability, and family income that…
NASA Astrophysics Data System (ADS)
Kurdhi, N. A.; Jamaluddin, A.; Jauhari, W. A.; Saputro, D. R. S.
2017-06-01
In this study, we consider a stochastic integrated manufacturer-retailer inventory model with a service level constraint. The model analyzed in this article considers the situation in which the vendor and the buyer establish a long-term contract and strategic partnership to jointly determine the best strategy. The lead time and setup cost are assumed to be controllable by an additional crashing cost and an investment, respectively. It is assumed that shortages are allowed and partially backlogged on the buyer’s side, and that the protection interval (i.e., review period plus lead time) demand distribution is unknown but has given finite first and second moments. The objective is to apply the minmax distribution free approach to simultaneously optimize the review period, the lead time, the setup cost, the safety factor, and the number of deliveries in order to minimize the joint total expected annual cost. The service level constraint guarantees that the service level requirement can be satisfied in the worst case. By constructing a Lagrange function, the analysis regarding the solution procedure is conducted, and a solution algorithm is then developed. Moreover, a numerical example and sensitivity analysis are given to illustrate the proposed model and to provide some observations and managerial implications.
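The worst-case logic of the minmax distribution free approach rests on Scarf-type moment bounds, which are easy to sketch. The code below is illustrative rather than the paper's full joint optimization: it uses the classical bound E[(X − r)⁺] ≤ (√(σ² + (r − μ)²) − (r − μ))/2, valid for any demand distribution with mean μ and standard deviation σ, and all numbers are invented.

```python
import math

def worst_case_shortage(mu, sigma, r):
    """Distribution-free upper bound on expected units short at reorder
    point r, given only the first two moments of protection-interval demand:
    E[(X - r)^+] <= (sqrt(sigma^2 + (r - mu)^2) - (r - mu)) / 2."""
    d = r - mu
    return 0.5 * (math.sqrt(sigma**2 + d * d) - d)

def min_safety_factor(mu, sigma, beta):
    """Smallest safety factor k (with r = mu + k*sigma) whose worst-case
    expected shortage stays below beta * mu -- a toy version of enforcing
    the service level constraint at the worst case."""
    k = 0.0
    while worst_case_shortage(mu, sigma, mu + k * sigma) > beta * mu:
        k += 0.01
    return round(k, 2)

k = min_safety_factor(mu=100.0, sigma=20.0, beta=0.01)
```

Because the bound holds for every distribution with the given moments, satisfying the constraint at this k guarantees the service level regardless of the true demand law, which is exactly the guarantee claimed in the abstract.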
On the structure of solar and stellar coronae - Loops and loop heat transport
NASA Technical Reports Server (NTRS)
Litwin, Christof; Rosner, Robert
1993-01-01
We discuss the principal constraints on mechanisms for structuring and heating the outer atmospheres - the coronae - of stars. We argue that the essential cause of highly localized heating in the coronae of stars like the sun is the spatially intermittent nature of stellar surface magnetic fields, and that the spatial scale of the resulting coronal structures is related to the spatial structure of the photospheric fields. We show that significant constraints on coronal heating mechanisms derive from the observed variations in coronal emission, and, in addition, show that the observed structuring perpendicular to coronal magnetic fields imposes severe constraints on mechanisms for heat dispersal in the low-beta atmosphere. In particular, we find that most of the commonly considered mechanisms for heat dispersal, such as anomalous diffusion due to plasma turbulence or magnetic field line stochasticity, are much too slow to account for the observed rapid heating of coronal loops. The most plausible mechanism appears to be reconnection at the interface between two adjacent coronal flux bundles. Based on a model invoking hyperresistivity, we show that such a mechanism naturally leads to dominance of isolated single bright coronal loops and to bright coronal plasma structures whose spatial scale transverse to the local magnetic field is comparable to observed dimensions of coronal X-ray loops.
Using Ant Colony Optimization for Routing in VLSI Chips
NASA Astrophysics Data System (ADS)
Arora, Tamanna; Moses, Melanie
2009-04-01
Rapid advances in VLSI technology have increased the number of transistors that fit on a single chip to about two billion. A frequent problem in the design of such high performance and high density VLSI layouts is that of routing wires that connect such large numbers of components. Most wire-routing problems are computationally hard. The quality of any routing algorithm is judged by the extent to which it satisfies routing constraints and design objectives. Some of the broader design objectives include minimizing total routed wire length, and minimizing total capacitance induced in the chip, both of which serve to minimize power consumed by the chip. Ant Colony Optimization algorithms (ACO) provide a multi-agent framework for combinatorial optimization by combining memory, stochastic decision and strategies of collective and distributed learning by ant-like agents. This paper applies ACO to the NP-hard problem of finding optimal routes for interconnect routing on VLSI chips. The constraints on interconnect routing are used by ants as heuristics which guide their search process. We found that ACO algorithms were able to successfully incorporate multiple constraints and route interconnects on a suite of benchmark chips. On average, the algorithm routed with total wire length 5.5% less than that of other established routing algorithms.
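The pheromone-plus-heuristic search loop at the heart of ACO can be sketched on a tiny shortest-path instance. This is a generic ACO sketch under assumed parameters, not the paper's router: the 1/length heuristic here stands in for the wire-length and capacitance heuristics used on real nets, and the four-node graph is invented.

```python
import random

def aco_shortest_path(graph, src, dst, n_ants=20, n_iter=40,
                      evap=0.5, alpha=1.0, beta=2.0, seed=3):
    """Minimal ant colony optimization for a shortest path. Ants build routes
    stochastically, weighting each edge by pheromone^alpha * (1/length)^beta;
    pheromone evaporates each iteration and is deposited in proportion to
    tour quality (all parameter values are illustrative)."""
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}   # initial pheromone
    best_path, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            node, path, visited = src, [src], {src}
            while node != dst:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:
                    path = None                             # dead end
                    break
                weights = [tau[(node, v)] ** alpha *
                           (1.0 / graph[node][v]) ** beta for v in choices]
                node = rng.choices(choices, weights)[0]
                path.append(node)
                visited.add(node)
            if path:
                length = sum(graph[a][b] for a, b in zip(path, path[1:]))
                tours.append((length, path))
                if length < best_len:
                    best_len, best_path = length, path
        for e in tau:                       # evaporation
            tau[e] *= (1.0 - evap)
        for length, path in tours:          # deposit proportional to quality
            for e in zip(path, path[1:]):
                tau[e] += 1.0 / length
    return best_path, best_len

g = {"a": {"b": 1, "c": 4}, "b": {"c": 1, "d": 5}, "c": {"d": 1}, "d": {}}
path, dist = aco_shortest_path(g, "a", "d")
```

Routing constraints enter this framework naturally: edges violating a constraint simply get zero (or heavily penalized) heuristic weight, which is how the paper's ants fold multiple objectives into one search.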
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
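The bound-and-prune mechanics can be illustrated on a one-dimensional cartoon of the problem class: minimizing a product of two linear functions over an interval. This sketch replaces the paper's two-phase linear relaxation with simple interval-arithmetic bounds, and the example instance is invented.

```python
def interval_mul(a, b):
    """Bounds [lo, hi] on x*y for x in interval a, y in interval b."""
    prods = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return min(prods), max(prods)

def branch_and_bound(box, f1, f2, tol=1e-4):
    """Globally minimize f(x) = f1(x) * f2(x) (f1, f2 linear) over an
    interval by bisection: an interval-product lower bound prunes
    subintervals that cannot beat the incumbent."""
    def lower_bound(lo, hi):
        # linear functions attain their extremes at interval endpoints
        r1 = (min(f1(lo), f1(hi)), max(f1(lo), f1(hi)))
        r2 = (min(f2(lo), f2(hi)), max(f2(lo), f2(hi)))
        return interval_mul(r1, r2)[0]
    incumbent = min(f1(x) * f2(x) for x in (box[0], box[1], sum(box) / 2))
    stack = [box]
    while stack:
        lo, hi = stack.pop()
        if lower_bound(lo, hi) > incumbent - tol:
            continue                         # prune: cannot improve enough
        mid = (lo + hi) / 2
        incumbent = min(incumbent, f1(mid) * f2(mid))
        if hi - lo > tol:
            stack += [(lo, mid), (mid, hi)]  # branch
    return incumbent

# minimize (x + 1)(x - 2) on [-3, 3]; the exact optimum is -2.25 at x = 0.5
val = branch_and_bound((-3.0, 3.0), lambda x: x + 1, lambda x: x - 2)
```

The lower and upper bounds tighten together as intervals shrink, which is the same convergence mechanism the paper proves for its linear relaxations in higher dimensions.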
NASA Astrophysics Data System (ADS)
Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Engelund Holm, Peter; Trapp, Stefan; Rosbjerg, Dan; Bauer-Gottwein, Peter
2015-04-01
Few studies address water quality in hydro-economic models, which often focus primarily on optimal allocation of water quantities. Water quality and water quantity are closely coupled, and optimal management with focus solely on either quantity or quality may cause large costs in terms of the other component. In this study, we couple water quality and water quantity in a joint hydro-economic catchment-scale optimization problem. Stochastic dynamic programming (SDP) is used to minimize the basin-wide total costs arising from water allocation, water curtailment and water treatment. The simple water quality module can handle conservative pollutants, first order depletion and non-linear reactions. For demonstration purposes, we model pollutant releases as biochemical oxygen demand (BOD) and use the Streeter-Phelps equation for oxygen deficit to compute the resulting minimum dissolved oxygen concentrations. Inelastic water demands, fixed water allocation curtailment costs and fixed wastewater treatment costs (before and after use) are estimated for the water users (agriculture, industry and domestic). If the BOD concentration exceeds a given user pollution threshold, the user will need to pay for pre-treatment of the water before use. Similarly, treatment of the return flow can reduce the BOD load to the river. A traditional SDP approach is used to solve one-step-ahead sub-problems for all combinations of discrete reservoir storage, Markov Chain inflow classes and monthly time steps. Pollution concentration nodes are introduced for each user group, and untreated return flow from the users contributes to increased BOD concentrations in the river. The pollutant concentrations in each node depend on multiple decision variables (allocation and wastewater treatment), rendering the objective function non-linear.
Therefore, the pollution concentration decisions are outsourced to a genetic algorithm, which calls a linear program to determine the remainder of the decision variables. This hybrid formulation keeps the optimization problem computationally feasible and represents a flexible and customizable method. The method has been applied to the Ziya River basin, an economic hotspot located on the North China Plain in Northern China. The basin is subject to severe water scarcity, and the rivers are heavily polluted with wastewater and nutrients from diffuse sources. The coupled hydro-economic optimization model can be used to assess the costs of meeting additional constraints such as minimum water quality, or to prioritize investments in wastewater treatment facilities based on economic criteria.
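The Streeter-Phelps step used for the water quality constraint is simple to reproduce. The closed-form oxygen-sag deficit is standard, D(t) = k_d L₀/(k_a − k_d)·(e^(−k_d t) − e^(−k_a t)) + D₀ e^(−k_a t); the coefficients below are illustrative and not calibrated to the Ziya basin.

```python
import math

def streeter_phelps_deficit(L0, D0, kd, ka, t):
    """Oxygen deficit at travel time t downstream of a BOD load L0, with
    initial deficit D0, deoxygenation rate kd and reaeration rate ka
    (classical Streeter-Phelps sag equation, kd != ka)."""
    return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
           + D0 * math.exp(-ka * t)

def min_dissolved_oxygen(DO_sat, L0, D0, kd, ka, horizon=20.0, dt=0.01):
    """Scan travel time for the sag minimum -- the quantity a minimum
    water quality constraint would compare against a DO standard."""
    return min(DO_sat - streeter_phelps_deficit(L0, D0, kd, ka, i * dt)
               for i in range(int(horizon / dt) + 1))

# illustrative numbers: saturation 9 mg/L, BOD load 15 mg/L, rates per day
do_min = min_dissolved_oxygen(DO_sat=9.0, L0=15.0, D0=1.0, kd=0.35, ka=0.7)
```

In the hydro-economic model, allocation and treatment decisions change L₀ at each concentration node; the resulting sag minimum is what makes the objective non-linear in those decisions and motivates the genetic-algorithm outer loop.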
Natural environment application for NASP-X-30 design and mission planning
NASA Technical Reports Server (NTRS)
Johnson, D. L.; Hill, C. K.; Brown, S. C.; Batts, G. W.
1993-01-01
The NASA/MSFC Mission Analysis Program has recently been utilized in various National Aero-Space Plane (NASP) mission and operational planning scenarios. This paper focuses on presenting various atmospheric constraint statistics based on assumed NASP mission phases using established natural environment design, parametric, threshold values. Probabilities of no-go are calculated using atmospheric parameters such as temperature, humidity, density altitude, peak/steady-state winds, cloud cover/ceiling, thunderstorms, and precipitation. The program, although developed to evaluate test or operational missions after flight constraints have been established, can provide valuable information in the design phase of the NASP X-30 program. Inputting the design values as flight constraints, the Mission Analysis Program returns the probability of no-go, or launch delay, by hour and by month. This output tells the X-30 program manager whether the design values are stringent enough to meet the required test flight schedules.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L.
1997-04-01
This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies show that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.
Programming languages for circuit design.
Pedersen, Michael; Yordanov, Boyan
2015-01-01
This chapter provides an overview of a programming language for Genetic Engineering of Cells (GEC). A GEC program specifies a genetic circuit at a high level of abstraction through constraints on otherwise unspecified DNA parts. The GEC compiler then selects parts which satisfy the constraints from a given parts database. GEC further provides more conventional programming language constructs for abstraction, e.g., through modularity. The GEC language and compiler is available through a Web tool which also provides functionality, e.g., for simulation of designed circuits.
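The compile step described above, selecting concrete parts that satisfy declarative constraints, can be sketched with a dictionary-backed parts database. This is a toy analogue of GEC's part selection, not its actual syntax or database: the part names and properties below are invented for illustration.

```python
def select_parts(parts_db, constraints):
    """Return the names of parts whose properties satisfy every constraint,
    a simplified stand-in for GEC's constraint-driven part selection."""
    return [name for name, props in parts_db.items()
            if all(props.get(key) == val for key, val in constraints.items())]

# hypothetical parts database: each entry maps a part to its properties
parts_db = {
    "P_tet": {"type": "promoter", "regulated_by": "tetR"},
    "P_lac": {"type": "promoter", "regulated_by": "lacI"},
    "rbs34": {"type": "rbs", "strength": "strong"},
}

# a program fragment constrains an unspecified promoter part
promoters = select_parts(parts_db, {"type": "promoter", "regulated_by": "lacI"})
```

A real GEC program composes many such constrained placeholders, and the compiler must satisfy all constraints jointly, so selection becomes a constraint-satisfaction problem rather than the independent filtering shown here.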
Software Tools for Stochastic Simulations of Turbulence
2015-08-28
client interface to FTI. Specific client programs using this interface include the weather forecasting code WRF; the high energy physics code FLASH; and two locally constructed fluid…
NASA Astrophysics Data System (ADS)
Mishra, Bhavya; Schütz, Gunter M.; Chowdhury, Debashish
2016-06-01
We develop a stochastic model for the programmed frameshift of ribosomes synthesizing a protein while moving along a mRNA template. Normally the reading frame of a ribosome decodes successive triplets of nucleotides on the mRNA in a step-by-step manner. We focus on the programmed shift of the ribosomal reading frame, forward or backward, by only one nucleotide which results in a fusion protein; it occurs when a ribosome temporarily loses its grip on its mRNA track. Special “slippery” sequences of nucleotides and also downstream secondary structures of the mRNA strand are believed to play key roles in programmed frameshift. Here we explore the role of a hitherto neglected parameter in regulating -1 programmed frameshift. Specifically, we demonstrate that the frameshift frequency can be strongly regulated also by the density of the ribosomes, all of which are engaged in simultaneous translation of the same mRNA, at and around the slippery sequence. Monte Carlo simulations support the analytical predictions obtained from a mean-field analysis of the stochastic dynamics.
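Ribosome traffic of this kind is commonly modeled as an exclusion process, which is easy to simulate. The sketch below is a generic TASEP-like Monte Carlo, not the authors' model: it uses a periodic ring instead of an open mRNA with initiation/termination, ignores the ribosome footprint and the frameshift move itself, and all parameters are invented; it merely illustrates measuring local density around a marked "slippery" site.

```python
import random

def local_density_at_slip_site(L=200, density=0.3, hop_p=1.0,
                               sweeps=2000, seed=5):
    """Monte Carlo of a TASEP-like lattice gas: hard-core particles
    (ribosomes) hop one site to the right when the target site is empty.
    Returns the time-averaged occupancy in a window around a marked
    'slippery' site (periodic ring, illustrative parameters)."""
    rng = random.Random(seed)
    n = int(L * density)
    lattice = [1] * n + [0] * (L - n)
    rng.shuffle(lattice)
    slip = L // 2
    acc = 0.0
    for _ in range(sweeps):
        for _ in range(L):                      # random sequential update
            i = rng.randrange(L)
            j = (i + 1) % L
            if lattice[i] and not lattice[j] and rng.random() < hop_p:
                lattice[i], lattice[j] = 0, 1   # hop right
        acc += sum(lattice[slip - 10:slip + 11]) / 21.0
    return acc / sweeps

rho = local_density_at_slip_site()
```

In the paper's setting, this local density (rather than the global ribosome coverage) is the quantity argued to modulate the -1 frameshift frequency at the slippery sequence.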
NASA Technical Reports Server (NTRS)
Horvath, Joan C.; Alkalaj, Leon J.; Schneider, Karl M.; Amador, Arthur V.; Spitale, Joseph N.
1993-01-01
Robotic spacecraft are controlled by sets of commands called 'sequences.' These sequences must be checked against mission constraints. Making our existing constraint checking program faster would enable new capabilities in our uplink process. Therefore, we are rewriting this program to run on a parallel computer. To do so, we had to determine how to run constraint-checking algorithms in parallel and create a new method of specifying spacecraft models and constraints. This new specification gives us a means of representing flight systems and their predicted response to commands which could be used in a variety of applications throughout the command process, particularly during anomaly or high-activity operations. This commonality could reduce operations cost and risk for future complex missions. Lessons learned in applying some parts of this system to the TOPEX/Poseidon mission will be described.
NASA Astrophysics Data System (ADS)
Zhao, Hui; Zheng, Mingwen; Li, Shudong; Wang, Weiping
2018-03-01
Some existing papers focused on finite-time parameter identification and synchronization but provided incomplete theoretical analyses. Such works incorporated conflicting constraints for parameter identification; therefore, their practical significance could not be fully demonstrated. To overcome such limitations, the present paper provides new results on parameter identification and synchronization for uncertain complex dynamical networks with impulsive effect and stochastic perturbation, based on finite-time stability theory. Novel parameter identification and synchronization control criteria are obtained in finite time by utilizing a Lyapunov function and linear matrix inequalities, respectively. Finally, numerical examples are presented to illustrate the effectiveness of our theoretical results.
NASA Astrophysics Data System (ADS)
Logunova, O. S.; Sibileva, N. S.
2017-12-01
The purpose of the study is to increase the efficiency of the steelmaking process in a large capacity arc furnace on the basis of the implementation of a new decision-making system for the composition of charge materials. The authors proposed an interactive builder for the formation of the optimization problem, taking into account the requirements of the customer, normative documents and stocks of charge materials in the warehouse. To implement the interactive builder, the sets of deterministic and stochastic model components are developed, as well as a list of preferences of criteria and constraints.
NASA Astrophysics Data System (ADS)
Ernst, Gerhard; Hüttemann, Andreas
2010-01-01
List of contributors; 1. Introduction Gerhard Ernst and Andreas Hüttemann; Part I. The Arrows of Time: 2. Does a low-entropy constraint prevent us from influencing the past? Mathias Frisch; 3. The past hypothesis meets gravity Craig Callender; 4. Quantum gravity and the arrow of time Claus Kiefer; Part II. Probability and Chance: 5. The natural-range conception of probability Jacob Rosenthal; 6. Probability in Boltzmannian statistical mechanics Roman Frigg; 7. Humean mechanics versus a metaphysics of powers Michael Esfeld; Part III. Reduction: 8. The crystallisation of Clausius's phenomenological thermodynamics C. Ulises Moulines; 9. Reduction and renormalization Robert W. Batterman; 10. Irreversibility in stochastic dynamics Jos Uffink; Index.
Robust Transceiver Design for Multiuser MIMO Downlink with Channel Uncertainties
NASA Astrophysics Data System (ADS)
Miao, Wei; Li, Yunzhou; Chen, Xiang; Zhou, Shidong; Wang, Jing
This letter addresses the problem of robust transceiver design for the multiuser multiple-input-multiple-output (MIMO) downlink where the channel state information at the base station (BS) is imperfect. A stochastic approach which minimizes the expectation of the total mean square error (MSE) of the downlink conditioned on the channel estimates under a total transmit power constraint is adopted. The iterative algorithm reported in [2] is improved to handle the proposed robust optimization problem. Simulation results show that our proposed robust scheme effectively reduces the performance loss due to channel uncertainties and outperforms existing methods, especially when the channel errors of the users are different.
NASA Astrophysics Data System (ADS)
Woldeyesus, Tibebe Argaw
Water supply constraints can significantly restrict electric power generation, and such constraints are expected to worsen with future climate change. The overarching goal of this thesis is to incorporate stochastic water-climate interactions into electricity portfolio models and evaluate various pathways for water savings in co-managed water-electric utilities. Colorado Springs Utilities (CSU) is used as a case study to explore the above issues. The thesis consists of three objectives: characterize seasonality of water withdrawal intensity factors (WWIF) for electric power generation and develop a risk assessment framework for water shortages; incorporate water constraints into electricity portfolio models and evaluate the impact of varying capital investments (in both power generation and cooling technologies) on water use and greenhouse gas emissions; and compare the unit cost and overall water savings from both water and electric sectors in co-managed utilities to facilitate overall water management. This thesis provided the first discovery and characterization of seasonality in WWIF, with distinct summertime and wintertime variations of +/-17% compared to the power plant average (0.64 gal/kWh), which itself is found to be significantly higher than the literature average (0.53 gal/kWh). Both the streamflow and WWIF are found to be highly correlated with monthly average temperature (r-sq = 89%) and monthly precipitation (r-sq = 38%), enabling stochastic simulation of future WWIF under a moderate climate change scenario. The future risk to electric power generation is also shown to be significantly underestimated when using either the literature average or the power plant average WWIF. Seasonal variation in WWIF, along with seasonality in streamflow, electricity demand and other municipal water demands, as well as storage, is shown to be an important factor for more realistic risk estimation.
Unlimited investment in power generation and/or cooling technologies is also found to save water and GHG emissions by 68% and 75%, respectively, at a marginal levelized cost increase of 12%. In contrast, the zero-investment scenario (which optimizes existing technologies to address water scarcity constraints on power generation) shows 50% water savings and a 23% GHG emissions reduction at a relatively high marginal levelized cost increase of 37%. Water saving strategies in the electric sector show a very high cost of water savings ($48,000 and $200,000/Mgal-year under the unlimited-investment and zero-investment scenarios, respectively), but they have a greater water saving impact of 6% of CSU municipal water demand; the individual water saving strategies from the water sector have a low cost of water savings, ranging from $37 to $1,500/Mgal-year, but with less than 0.5% water reduction impact for CSU due to their low penetration. On the other hand, use of reclaimed water for power plant cooling systems shows water savings of up to 92% against the BAU and a cost of water saving from $0 to $73,000/Mgal-year when integrated with the unlimited-investment and zero-investment water minimizing scenarios, respectively, in the electric sector. Overall, cities need to focus primarily on the use of reclaimed water and on investment in new generation technologies, including cooling system retrofits, while expanding the penetration rate of individual water saving strategies in the water sector.
WIDOWAC (Wing Design Optimization With Aeroelastic Constraints): Program manual
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Starnes, J. H., Jr.
1974-01-01
User and programmer documentation for the WIDOWAC programs is given. WIDOWAC may be used for the design of minimum-mass wing structures subjected to flutter, strength, and minimum gage constraints. The wing structure is modeled by finite elements, flutter conditions may be both subsonic and supersonic, and mathematical programming methods are used for the optimization procedure. The user documentation gives general directions on how the programs may be used and describes their limitations; in addition, program input and output are described, and example problems are presented. A discussion of computational algorithms and flow charts of the WIDOWAC programs and major subroutines is also given.
Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik
2009-06-01
The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling, although they violate the assumptions of many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood, which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions.
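The state-space idea in the abstract — Wiener noise in the system equation plus separate measurement noise — can be illustrated with a minimal Euler-Maruyama simulation of a hypothetical one-compartment elimination model. All rate constants and noise levels below are made-up illustration values, not parameters from the cited articles:

```python
import numpy as np

def simulate_sde_pk(ke=0.3, sigma_w=0.05, sigma_e=0.1, dose=10.0,
                    t_end=24.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of dA = -ke*A dt + sigma_w dW (system
    noise), observed hourly as y = A + e (measurement noise). The Wiener
    term makes residuals against a pure-ODE fit autocorrelated."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    a = np.empty(n + 1)
    a[0] = dose
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))            # Wiener increment
        a[i + 1] = a[i] - ke * a[i] * dt + sigma_w * dw
    t = np.linspace(0.0, t_end, n + 1)
    obs_idx = np.arange(0, n + 1, int(1.0 / dt))     # hourly sampling
    y = a[obs_idx] + rng.normal(0.0, sigma_e, obs_idx.size)
    return t[obs_idx], a[obs_idx], y
```

Fitting an ODE to `y` would lump both noise sources into the residual; the SDE formulation separates them, which is the motivation for the mixed-effects SDE machinery described in the abstract.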
Stochastic Multi-Commodity Facility Location Based on a New Scenario Generation Technique
NASA Astrophysics Data System (ADS)
Mahootchi, M.; Fattahi, M.; Khakbazan, E.
2011-11-01
This paper extends two models for the stochastic multi-commodity facility location problem. The problem is formulated as two-stage stochastic programming. As a main point of this study, a new algorithm is applied to efficiently generate scenarios for uncertain, correlated customer demands. This algorithm uses Latin Hypercube Sampling (LHS) and a scenario reduction approach. The relation between customer satisfaction level and cost is considered in Model I. A risk measure using Conditional Value-at-Risk (CVaR) is embedded into optimization Model II. Here, the structure of the network contains three facility layers: plants, distribution centers, and retailers. The first-stage decisions are the number, locations, and capacity of distribution centers. In the second stage, the decisions are production quantities and the volumes of transportation between plants and customers.
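The scenario-generation step can be sketched as follows. The Gaussian-copula construction below is an assumption for illustration (the paper's own LHS variant and its reduction step are not reproduced), and the Cholesky mixing relaxes the exact per-dimension stratification:

```python
import numpy as np
from statistics import NormalDist

def lhs_correlated_demands(n_scen, mean, cov, seed=0):
    """Latin Hypercube scenarios for correlated customer demands:
    stratified uniforms per dimension -> standard normals -> correlation
    imposed via the Cholesky factor of the target covariance -> shifted
    to the target means."""
    rng = np.random.default_rng(seed)
    d = len(mean)
    # one stratum per scenario, independently permuted in each dimension
    u = (np.array([rng.permutation(n_scen) for _ in range(d)]).T
         + rng.random((n_scen, d))) / n_scen
    z = np.vectorize(NormalDist().inv_cdf)(u)        # uniform -> normal
    L = np.linalg.cholesky(np.asarray(cov, dtype=float))
    return np.asarray(mean, dtype=float) + z @ L.T
```

A reduction step (e.g. clustering the generated scenarios and reweighting representatives) would then shrink the scenario set before the two-stage program is solved.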
Area of Stochastic Scrape-Off Layer for a Single-Null Divertor Tokamak Using Simple Map
NASA Astrophysics Data System (ADS)
Fisher, Tiffany; Verma, Arun; Punjabi, Alkesh
1996-11-01
The magnetic topology of a single-null divertor tokamak is represented by the Simple Map (Punjabi A, Verma A and Boozer A, Phys Rev Lett 69, 3322 (1992) and J Plasma Phys 52, 91 (1994)). The Simple Map is characterized by a single parameter k representing the toroidal asymmetry. The width of the stochastic scrape-off layer and its area vary with the map parameter k. We calculate the area of the stochastic scrape-off layer for different values of k and obtain a parametric expression for the area in terms of k and y_LastGoodSurface(k). This work is supported by US DOE OFES. Tiffany Fisher is a HU CFRT Summer Fusion High School Workshop Scholar from New Bern High School in North Carolina. She is supported by the NASA SHARP Plus Program.
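A rough numerical sketch of this kind of calculation: iterate a quadratic, area-preserving map of the Simple Map family and classify field lines as confined or lost. The map form and the escape criterion below (leaving a bounding region, a stand-in for striking the divertor plate) are illustrative assumptions and may differ from the exact form and constants used in the cited papers:

```python
def simple_map_orbit(x0, y0, k=0.6, nmax=1000, r_esc=10.0):
    """Iterate a quadratic area-preserving map of the Simple Map family:
        x' = x - k*y*(1 - y),   y' = y + k*x'
    Returns the iteration at which the orbit leaves the disc of radius
    r_esc (treated here as 'lost to the divertor'), or nmax if the
    field line stays confined for all nmax iterations."""
    x, y = x0, y0
    for n in range(nmax):
        x = x - k * y * (1.0 - y)
        y = y + k * x
        if x * x + y * y > r_esc * r_esc:
            return n + 1
    return nmax
```

Sweeping a grid of initial conditions and measuring the fraction that escape gives a Monte Carlo estimate of the stochastic-layer area as a function of k, in the spirit of the parametric study described above.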
Stochastic Evolutionary Algorithms for Planning Robot Paths
NASA Technical Reports Server (NTRS)
Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard
2006-01-01
A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
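A minimal sketch of the simulated-annealing idea described above, applied to a hypothetical planar two-link arm. The energy function, cooling schedule, and obstacle model are invented for illustration and are not the flight software:

```python
import math, random

def anneal_arm(target, obstacle, l1=1.0, l2=1.0, steps=20000, seed=1):
    """Simulated annealing over joint angles (th1, th2) of a 2-link arm.
    Energy = distance of the end-effector from the target, plus a penalty
    when the elbow or end-effector enters a circular obstacle (ox, oy, r).
    Worse moves are accepted with probability exp(-dE/T) to escape local
    minima; the temperature T is cooled linearly."""
    rng = random.Random(seed)
    ox, oy, orad = obstacle

    def energy(th1, th2):
        ex, ey = l1 * math.cos(th1), l1 * math.sin(th1)          # elbow
        tx = ex + l2 * math.cos(th1 + th2)                        # tip
        ty = ey + l2 * math.sin(th1 + th2)
        e = math.hypot(tx - target[0], ty - target[1])
        for px, py in ((ex, ey), (tx, ty)):                       # collisions
            d = math.hypot(px - ox, py - oy)
            if d < orad:
                e += 10.0 * (orad - d)
        return e

    th = [rng.uniform(-math.pi, math.pi) for _ in range(2)]
    e = energy(*th)
    for i in range(steps):
        T = 1.0 * (1.0 - i / steps) + 1e-3                        # cooling
        cand = [a + rng.gauss(0.0, 0.3 * T) for a in th]
        ec = energy(*cand)
        if ec < e or rng.random() < math.exp((e - ec) / T):
            th, e = cand, ec
    return th, e
```

Early on, large temperature means large random moves and frequent acceptance of worse configurations (global exploration); late on, the walk contracts to local refinement, matching the two roles of annealing described in the abstract.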
Exploring information transmission in gene networks using stochastic simulation and machine learning
NASA Astrophysics Data System (ADS)
Park, Kyemyung; Prüstel, Thorsten; Lu, Yong; Narayanan, Manikandan; Martins, Andrew; Tsang, John
How gene regulatory networks operate robustly despite environmental fluctuations and biochemical noise is a fundamental question in biology. Mathematically, the stochastic dynamics of a gene regulatory network can be modeled using the chemical master equation (CME), but nonlinearity and other challenges render analytical solutions of CMEs difficult to attain. While approximation and stochastic simulation approaches have been devised for simple models, obtaining a more global picture of a system's behaviors in high-dimensional parameter space without substantially simplifying the system remains a major challenge. Here we present a new framework for understanding and predicting the behaviors of gene regulatory networks in the context of information transmission among genes. Our approach uses stochastic simulation of the network followed by machine learning of the mapping between model parameters and network phenotypes such as information transmission behavior. We also devised ways to visualize high-dimensional phase spaces in intuitive and informative manners. We applied our approach to several gene regulatory circuit motifs, including both feedback and feedforward loops, to reveal underexplored aspects of their operational behaviors. This work is supported by the Intramural Program of NIAID/NIH.
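The stochastic-simulation step can be illustrated with a Gillespie SSA for the simplest CME, constitutive mRNA birth-death; the rates here are arbitrary illustration values, not from the work described:

```python
import random

def gillespie_birth_death(k_on=10.0, k_off=1.0, t_end=50.0, seed=42):
    """Exact stochastic simulation (Gillespie SSA) of the CME for
    0 -> mRNA at rate k_on, mRNA -> 0 at rate k_off * m.
    Returns event times and the mRNA copy number after each event."""
    rng = random.Random(seed)
    t, m = 0.0, 0
    ts, ms = [0.0], [0]
    while t < t_end:
        a1, a2 = k_on, k_off * m          # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)          # time to next reaction
        if rng.random() * a0 < a1:        # choose which reaction fires
            m += 1
        else:
            m -= 1
        ts.append(t)
        ms.append(m)
    return ts, ms
```

Running many such trajectories across a grid of rate parameters produces the (parameter -> phenotype) training pairs on which a machine-learning surrogate, as in the framework above, could be fitted.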
Construction of dynamic stochastic simulation models using knowledge-based techniques
NASA Technical Reports Server (NTRS)
Williams, M. Douglas; Shiva, Sajjan G.
1990-01-01
Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).
De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas
2015-03-01
Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food; handling such constraints would also help in implementation and in ensuring the feasibility of the suggested recommendations. The objective was to extend the previously used linear programming methodology from diet optimization to food formulation using consistency constraints, and to exemplify usability through the case of a porridge mix formulation for emergency situations in rural Mozambique. The linear programming method was extended with a consistency constraint based on previously published empirical studies of starch swelling in soft porridges. The new method was exemplified by formulating a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. A nutritious porridge fulfilling the consistency constraints was found; however, the minimum cost was unfeasible with local ingredients only, illustrating the challenges in formulating nutritious yet economically feasible foods from local ingredients. The high cost was caused by the high cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including supplements of zinc and calcium salts as ingredients. The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
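A toy version of the extended formulation, assuming hypothetical ingredient data and a scalar "swelling index" standing in for the empirical consistency constraint; none of these numbers come from the study:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical per-100 g ingredient data: cost, energy (kcal), protein (g),
# and a swelling index (viscosity contribution of starch on cooking).
names   = ["maize flour", "cowpea flour", "oil", "sugar"]
cost    = np.array([0.05, 0.12, 0.20, 0.08])   # currency units / 100 g
energy  = np.array([360., 340., 880., 400.])
protein = np.array([9., 24., 0., 0.])
swell   = np.array([15., 8., 0., 0.])

# Minimise cost subject to: energy >= 400 kcal, protein >= 12 g, and the
# consistency constraint swelling index <= 25 (so the porridge stays soft
# enough for a young child), with each amount between 0 and 3 units.
res = linprog(c=cost,
              A_ub=np.vstack([-energy, -protein, swell]),
              b_ub=np.array([-400., -12., 25.]),
              bounds=[(0, 3)] * 4, method="highs")
```

Without the `swell` row this is an ordinary diet LP; the extra row is what turns a nutrient-feasible mix into a food-feasible one, which is the methodological point of the paper.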
NASA Astrophysics Data System (ADS)
Brewster, J.; Oware, E. K.
2017-12-01
Groundwater hosted in fractured rocks constitutes almost 65% of the principal aquifers in the US. The exploitation and contaminant management of fractured aquifers require fracture flow and transport modeling, which in turn requires a detailed understanding of the structure of the aquifer. The widely used equivalent porous medium approach to modeling fractured aquifer systems is inadequate to accurately predict fracture transport processes due to the averaging of the sharp lithological contrast between the matrix and the fractures. The potential of geophysical imaging (GI) to estimate spatially continuous subsurface profiles in a minimally invasive fashion is well proven. Conventional deterministic GI strategies, however, produce geologically unrealistic, smoothed-out results due to commonly enforced smoothing constraints. Stochastic GI of fractured aquifers is becoming increasingly appealing due to its ability to recover realistic fracture features while providing multiple likely realizations that enable uncertainty assessment. Generating prior spatial features consistent with the expected target structures is crucial in stochastic imaging. We propose to utilize eigenvalue ratios to resolve the elongated fracture features expected in a fractured aquifer system. Eigenvalues capture the major and minor directions of variability in a region, which can be employed to evaluate shape descriptors, such as eccentricity (elongation) and orientation of features in the region. Eccentricity ranges from zero to one, representing a circular to a line-like feature, respectively. Here, we apply eigenvalue ratios to define a joint objective parameter consisting of eccentricity (shape) and direction terms to guide the generation of prior fracture-like features in predefined principal directions for stochastic GI. Preliminary unconditional synthetic experiments reveal the potential of the algorithm to simulate prior fracture-like features.
We illustrate the strategy with a 2D cross-borehole electrical resistivity tomography (ERT) survey in a fractured aquifer at the UB Environmental Geophysics Imaging Site, with tomograms validated against gamma and caliper logs obtained from the two ERT wells.
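The eigenvalue-based shape descriptors described above can be computed from the coordinate covariance of a binary feature. A minimal sketch (the joint objective combining the eccentricity and direction terms is not reproduced here):

```python
import numpy as np

def shape_descriptors(mask):
    """Eccentricity and orientation of a binary feature from the
    eigendecomposition of its pixel-coordinate covariance matrix.
    Eccentricity sqrt(1 - minor/major) is 0 for a circular blob and
    approaches 1 for a line-like (fracture-like) feature."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    cov = np.cov(pts.T)
    evals, evecs = np.linalg.eigh(cov)        # ascending: minor, major
    minor, major = evals
    ecc = np.sqrt(max(1.0 - minor / major, 0.0))
    # orientation of the major axis, in degrees
    angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))
    return ecc, angle
```

In a stochastic imaging loop, candidate prior realizations whose features score high eccentricity near a target orientation would be favored, steering the sampler toward fracture-like structures.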
Chance Constrained Programming Methods in Probabilistic Programming.
1982-03-01
Financial and Quantitative Analysis 2, 1967. Also reproduced in R. F. Byrne et al., eds., Studies in Budgeting (Amsterdam: North Holland, 1971). [3...Rules for the E-Model of Chance-Constrained Programming," Management Science, 17, 1971. [23] Garstka, S. J. "The Economic Equivalence of Several...Iowa City: The University of Iowa College of Business Administration, 1981). [29] Kall, P. and A. Prekopa, eds., Recent Results in Stochastic
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan J.; Walton, Owen J.; Arnold, Steven M.
2016-01-01
Stochastic-based, discrete-event progressive damage simulations of ceramic-matrix composite and polymer matrix composite material structures have been enabled through the development of a unique multiscale modeling tool. This effort involves coupling three independently developed software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis (FEA) program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC. Abaqus is used at the global scale to model the overall composite structure. An Abaqus user-defined material (UMAT) interface, referred to here as "FEAMAC/CARES," was developed that enables MAC/GMC and CARES/Life to operate seamlessly with the Abaqus FEA code. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events, which incrementally progress and lead to ultimate structural failure. This report describes the FEAMAC/CARES methodology and discusses examples that illustrate the performance of the tool. A comprehensive example problem, simulating the progressive damage of laminated ceramic matrix composites under various off-axis loading conditions and including a double notched tensile specimen geometry, is described in a separate report.
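The way stochastic brittle-material strength produces random, discrete damage events can be illustrated with a stand-alone sketch: Weibull-distributed element strengths under equal load sharing. This is an illustrative fiber-bundle model with invented parameters, not the FEAMAC/CARES algorithm:

```python
import numpy as np

def progressive_failure(n_elem=1000, m=8.0, s0=300.0, seed=3):
    """Each element draws its strength from a Weibull distribution
    (modulus m, scale s0, nominally MPa). As the applied load ramps up,
    elements fail one by one in order of strength, shedding load equally
    onto the survivors. The peak supportable load per element (bundle
    strength) emerges from the discrete failure sequence."""
    rng = np.random.default_rng(seed)
    strength = s0 * rng.weibull(m, n_elem)
    strength.sort()
    survivors = n_elem - np.arange(n_elem)     # elements left just before
                                               # the k-th weakest fails
    peak = (strength * survivors).max() / n_elem
    return strength, peak
```

Each run with a different seed yields a different failure sequence and peak load, which is the Monte Carlo character the abstract attributes to FEAMAC/CARES trials.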
Precision orbit raising trajectories. [solar electric propulsion orbital transfer program
NASA Technical Reports Server (NTRS)
Flanagan, P. F.; Horsewood, J. L.; Pines, S.
1975-01-01
A precision trajectory program has been developed to serve as a test bed for geocentric orbit raising steering laws. The steering laws to be evaluated have been developed using optimization methods employing averaging techniques. This program provides the capability of testing the steering laws in a precision simulation. The principal system models incorporated in the program are described, including the radiation environment, the solar array model, the thrusters and power processors, the geopotential, and the solar system. Steering and array orientation constraints are discussed, and the impact of these constraints on program design is considered.
Issues and Strategies in Solving Multidisciplinary Optimization Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya
2013-01-01
Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. The accumulated multidisciplinary design activity is collected under a testbed entitled COMETBOARDS. Several issues were encountered during the solution of the problems. Four issues and the strategies adopted for their resolution are discussed. This is followed by a discussion on analytical methods that is limited to structural design applications. An optimization process can lead to an inefficient local solution. This deficiency was encountered during design of an engine component. The limitation was overcome through an augmentation of animation into optimization. Optimum solutions obtained were infeasible for aircraft and airbreathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through a set of problems: design of an engine component, synthesis of a subsonic aircraft, operation optimization of a supersonic engine, design of a wave-rotor-topping device, profile optimization of a cantilever beam, and design of a cylindrical shell. This chapter provides a cursory account of the issues; cited references provide detailed discussion of the topics. The design of a structure can also be generated by the traditional method and by the stochastic design concept. Merits and limitations of the three methods (traditional method, optimization method and stochastic concept) are illustrated. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated.
In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions can be produced by all three methods. The variation in the weight calculated by the methods was found to be modest. Some variation was noticed in the designs calculated by the methods; this variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for a most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.
NASA Technical Reports Server (NTRS)
Kiusalaas, J.; Reddy, G. B.
1977-01-01
A finite element program is presented for computer-automated, minimum-weight design of elastic structures with constraints on stresses (including local instability criteria) and displacements. Volume 1 of the report contains the theoretical and user's manuals for the program. Sample problems and the listing of the program are included in Volumes 2 and 3. The element subroutines are organized so as to facilitate additions and changes by the user. As a result, a relatively minor programming effort would be required to make DESAP 1 into a special-purpose program to handle the user's specific design requirements and failure criteria.
Reactive power planning under high penetration of wind energy using Benders decomposition
Xu, Yan; Wei, Yanli; Fang, Xin; ...
2015-11-05
This study addresses the optimal allocation of reactive power volt-ampere reactive (VAR) sources under the paradigm of high penetration of wind energy. Reactive power planning (RPP) in this condition involves a high level of uncertainty because of wind power characteristics. To properly model wind generation uncertainty, a multi-scenario framework optimal power flow is developed that considers the voltage stability constraint under the worst wind scenario and transmission N-1 contingency. The objective of RPP in this study is to minimise the total cost, including the VAR investment cost and the expected generation cost. RPP under this condition is therefore modelled as a two-stage stochastic programming problem: to optimise the VAR location and size in one stage, to minimise the fuel cost in the other stage, and eventually to find the globally optimal RPP results iteratively. Benders decomposition is used to solve this model, with an upper-level problem (master problem) for VAR allocation optimisation and a lower-level problem (sub-problem) for generation cost minimisation. The impact of potential reactive power support from doubly-fed induction generators (DFIG) is also analysed. Lastly, case studies on the IEEE 14-bus and 118-bus systems are provided to verify the proposed method.
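The Benders loop described above (master problem proposing an investment, subproblem duals returning optimality cuts) can be sketched on a toy two-stage problem. The single-variable first stage and all numbers below are invented for illustration, not the paper's RPP model:

```python
import numpy as np
from scipy.optimize import linprog

# First stage: install capacity x at cost c per unit (stands in for VAR siting).
# Second stage, scenario s: buy shortfall max(d_s - x, 0) at price f.
c, f = 4.0, 10.0
d = np.array([2.0, 5.0, 8.0])    # scenario demands
p = np.array([0.3, 0.4, 0.3])    # scenario probabilities

cuts = []                        # (scenario, dual multiplier) pairs
x = 0.0
for it in range(20):
    lam = np.where(d > x + 1e-9, f, 0.0)              # subproblem duals at x
    ub = c * x + p @ (f * np.maximum(d - x, 0.0))     # true cost of current x
    cuts += [(s, lam[s]) for s in range(len(d))]
    # master: min c*x + sum_s p_s*theta_s  s.t.  theta_s >= lam*(d_s - x)
    A, b = [], []
    for s, l in cuts:
        row = np.zeros(1 + len(d))
        row[0], row[1 + s] = -l, -1.0                 # -l*x - theta_s <= -l*d_s
        A.append(row)
        b.append(-l * d[s])
    res = linprog(np.r_[c, p], A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, None)] * (1 + len(d)), method="highs")
    x, lb = res.x[0], res.fun
    if ub - lb < 1e-6:            # upper and lower bounds have met: optimal
        break
```

Each pass tightens the master's lower bound `lb` while the subproblem evaluation gives an upper bound `ub`; convergence of the two is the standard Benders stopping test.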
Economic Impact of Harvesting Corn Stover under Time Constraint: The Case of North Dakota
Maung, Thein A.; Gustafson, Cole R.
2013-01-01
This study examines the impact of stochastic harvest field time on the profit maximizing potential of corn cob/stover collection in North Dakota. Three harvest options are analyzed using mathematical programming models. Our findings show that under the first, grain-only harvest option, farmers are able to complete harvesting corn grain and achieve maximum net income in a fairly short amount of time with existing combine technology. However, under the second, simultaneous corn grain and cob (one-pass) harvest option, farmers generate lower net income compared to the net income of the first option. This is due to the slowdown in combine harvest capacity as a consequence of harvesting corn cobs. Under the third option of separate corn grain and stover (two-pass) harvest, time allocation is the main challenge, and our evidence shows that with limited harvest field time available, farmers find it optimal to allocate most of their time to harvesting grain and then proceed to harvest and bale stover if time permits at the end of the harvest season. The overall findings suggest that it would be more economically efficient to allow a firm that is specialized in collecting biomass feedstock to participate in the cob/stover harvest business.
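The time-allocation finding can be illustrated with a toy linear program (hypothetical per-hour returns and time requirements): when grain returns more per field hour than stover, the optimum fills the grain requirement first and gives stover only the remaining hours:

```python
from scipy.optimize import linprog

# Decision variables: hours spent combining grain, hours baling stover.
rev = [-900.0, -250.0]            # $/field-hour (negated: linprog minimises)
A = [[1.0, 1.0]]                  # hours_grain + hours_stover <= field time
b = [120.0]                       # stochastic field-time budget, one draw
bounds = [(0, 100.0), (0, 60.0)]  # hours needed to finish grain / stover
res = linprog(rev, A_ub=A, b_ub=b, bounds=bounds, method="highs")
# optimum: all 100 grain hours, the remaining 20 hours go to stover
```

Re-solving over many random draws of the field-time budget `b` would reproduce the qualitative result above: stover is baled only when time remains after grain.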
A Hybrid Interval-Robust Optimization Model for Water Quality Management.
Xu, Jieyu; Li, Yongping; Huang, Guohe
2013-05-01
In water quality management problems, uncertainties may exist in many system components and pollution-related processes (i.e., random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval-robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements.
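The interval-programming ingredient of HIRO can be sketched in a few lines: when an objective coefficient is only known as an interval, solving the model at the two interval bounds brackets the attainable system benefit. The numbers below are hypothetical, not from the Tianjin case study:

```python
from scipy.optimize import linprog

# Two industries compete for water and a COD discharge cap; the benefit
# coefficient of industry 2 is only known as the interval [18, 22].
# max 30*x1 + [18, 22]*x2  s.t.  x1 + x2 <= 10 (water), 2*x1 + x2 <= 14 (COD)
def max_benefit(b2):
    res = linprog([-30.0, -b2],                      # negate to maximise
                  A_ub=[[1.0, 1.0], [2.0, 1.0]],
                  b_ub=[10.0, 14.0],
                  bounds=[(0, None)] * 2, method="highs")
    return -res.fun

lo, hi = max_benefit(18.0), max_benefit(22.0)        # benefit interval
```

The pair `[lo, hi]` is the interval solution a decision maker would interpret; the robust-optimization layer of HIRO would additionally penalize deviations across pollution scenarios, which is not reproduced here.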
Trubitsyn, A G
2009-01-01
The age-dependent degradation of all vital processes of an organism can be result of influences of destructive factors (the stochastic mechanism of aging), or effect of realizations of the genetic program (phenoptosis). The stochastic free-radical theory of aging dominating now contradicts the set of empirical data, and the semicentenial attempts to create the means to slow down aging did not give any practical results. It makes obvious that the stochastic mechanism of aging is incorrect. At the same time, the alternative mechanism of the programmed aging is not developed yet but preconditions for it development have already been created. It is shown that the genes controlling process of aging exist (contrary to the customary opinion) and the increase in the level of damaged macromolecules (basic postulate of the free-radical theory) can be explained by programmed attenuation of bio-energetics. As the bio-energetics is a driving force of all vital processes, decrease of its level is capable to cause degradation of all functions of an organism. However to transform this postulate into a basis of the theory of phenoptosis it is necessary to show, that attenuation of bio-energetics predetermines such fundamental processes accompanying aging as decrease of the overall rate of protein biosynthesis, restriction of cellular proliferations (Hayflick limit), loss of telomeres etc. This article is the first step in this direction: the natural mechanism of interaction of overall rate of protein synthesis with a level of cellular bio-energetics is shown. This is built-in into the translation machine and based on dependence of recirculation rate of eukaryotic initiation factor 2 (elF2) from ATP/ADP value that is created by mitochondrial bio-energetic machine.
Educational Policy and Literacy Learning in an ESL Classroom: Constraints and Opportunities
ERIC Educational Resources Information Center
Ricklefs, Mariana Alvayero
2012-01-01
This dissertation was a qualitative case study of an educational program for English Language Learners (ELL) at an elementary school in a small city in the Midwest. This case study investigated how language ideologies influence the constraints and opportunities for the planning and execution of this educational program. The findings evidenced that…
Darmon, Nicole; Ferguson, Elaine L; Briend, André
2002-12-01
Economic constraints may contribute to the unhealthy food choices observed among low socioeconomic groups in industrialized countries. The objective of the present study was to predict the food choices a rational individual would make to reduce his or her food budget, while retaining a diet as close as possible to the average population diet. Isoenergetic diets were modeled by linear programming. To ensure these diets were consistent with habitual food consumption patterns, departure from the average French diet was minimized and constraints that limited portion size and the amount of energy from food groups were introduced into the models. A cost constraint was introduced and progressively strengthened to assess the effect of cost on the selection of foods by the program. Strengthening the cost constraint reduced the proportion of energy contributed by fruits and vegetables, meat and dairy products and increased the proportion from cereals, sweets and added fats, a pattern similar to that observed among low socioeconomic groups. This decreased the nutritional quality of modeled diets, notably the lowest cost linear programming diets had lower vitamin C and beta-carotene densities than the mean French adult diet (i.e., <25% and 10% of the mean density, respectively). These results indicate that a simple cost constraint can decrease the nutrient densities of diets and influence food selection in ways that reproduce the food intake patterns observed among low socioeconomic groups. They suggest that economic measures will be needed to effectively improve the nutritional quality of diets consumed by these populations.
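The linear-programming approach this abstract describes — keep a diet isoenergetic, minimize departure from the average diet, and progressively tighten a cost constraint — can be sketched as below. This is a minimal illustration, not the study's model: the food energy contents, prices, portion limits, and average-diet quantities are invented, and the absolute deviation from the average diet is linearized with auxiliary variables.

```python
# Minimal sketch of a cost-constrained diet LP. All food data (energy,
# price, average diet, portion limits) are illustrative assumptions,
# not values from the study.
import numpy as np
from scipy.optimize import linprog

energy = np.array([2.0, 3.0, 4.0])   # MJ per unit of each food (assumed)
price  = np.array([3.0, 1.0, 2.0])   # cost per unit (assumed)
avg    = np.array([2.0, 1.0, 1.0])   # average population diet, in units
target_energy = energy @ avg          # keep the modeled diet isoenergetic
budget = 7.0                          # cost cap (the average diet costs 9.0)

n = len(energy)
# Decision vector: [x_1..x_n, d_1..d_n], where d_i >= |x_i - avg_i|
# linearizes the departure-from-average objective.
c = np.concatenate([np.zeros(n), np.ones(n)])        # minimize total deviation
A_ub = np.block([
    [price.reshape(1, -1), np.zeros((1, n))],        # cost <= budget
    [np.eye(n),            -np.eye(n)],              # x - d <= avg
    [-np.eye(n),           -np.eye(n)],              # -x - d <= -avg
])
b_ub = np.concatenate([[budget], avg, -avg])
A_eq = np.concatenate([energy, np.zeros(n)]).reshape(1, -1)
b_eq = [target_energy]
bounds = [(0, 5)] * n + [(0, None)] * n              # portion-size limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x = res.x[:n]
print("diet:", np.round(x, 2), "cost:", round(price @ x, 2))
```

Re-solving while lowering `budget` step by step reproduces the qualitative experiment in the abstract: the solution drifts away from the average diet toward the cheapest energy sources.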
Mathematical programming for the efficient allocation of health care resources.
Stinnett, A A; Paltiel, A D
1996-10-01
Previous discussions of methods for the efficient allocation of health care resources subject to a budget constraint have relied on unnecessarily restrictive assumptions. This paper makes use of established optimization techniques to demonstrate that a general mathematical programming framework can accommodate much more complex information regarding returns to scale, partial and complete indivisibility and program interdependence. Methods are also presented for incorporating ethical constraints into the resource allocation process, including explicit identification of the cost of equity.
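One of the complications the abstract highlights — complete indivisibility of programs under a budget constraint — is exactly what makes simple cost-effectiveness ranking fail and a mathematical-programming formulation necessary. A minimal sketch of that case is a 0/1 knapsack over candidate programs; the program names, costs, and benefit figures below are hypothetical.

```python
# Illustrative sketch: selecting indivisible health programs under a budget
# (0/1 knapsack). Program names, costs and benefits are hypothetical.
def allocate(programs, budget):
    """Return (total_benefit, chosen_names) maximizing benefit within budget."""
    best = {0: (0.0, ())}                       # spent -> (benefit, selection)
    for name, cost, benefit in programs:
        for spent, (b, chosen) in list(best.items()):
            s = spent + cost
            if s <= budget and b + benefit > best.get(s, (-1.0,))[0]:
                best[s] = (b + benefit, chosen + (name,))
    return max(best.values())

programs = [("screening",   400, 50.0),
            ("vaccination", 300, 45.0),
            ("surgery",     500, 60.0)]
benefit, chosen = allocate(programs, budget=800)
print(benefit, chosen)
```

Note that ranking by benefit/cost ratio would fund "screening" first (ratio 0.125) and then be unable to afford "surgery", illustrating why indivisibility breaks the simple ranking rule.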
Complexity & Approximability of Quantified & Stochastic Constraint Satisfaction Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, H. B.; Marathe, M. V.; Stearns, R. E.
2001-01-01
Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be arbitrary finite sets of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SATc(S), respectively). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S), etc., for many different such D, C, S, T. These versions/variants include decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. Our unified approach is based on two basic concepts: (i) strongly local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results also provide significant steps towards obtaining dichotomy theorems for a number of the problems above, including MAX-Q-SAT(S) and MAX-S-SAT(S). The discovery of such dichotomy theorems for unquantified formulas has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97].
Learning-based stochastic object models for characterizing anatomical variations
NASA Astrophysics Data System (ADS)
Dolly, Steven R.; Lou, Yang; Anastasio, Mark A.; Li, Hua
2018-03-01
It is widely known that the optimization of imaging systems based on objective, task-based measures of image quality via computer-simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in human anatomy within a specified ensemble of patients remains a challenging task. Previously reported numerical anatomic models lack the ability to accurately model inter-patient and inter-organ variations in human anatomy among a broad patient population, mainly because they are established on image data corresponding to only a few patients and individual anatomic organs. This may introduce phantom-specific bias into computer-simulation studies, where the study result is heavily dependent on which phantom is used. In certain applications, however, databases of high-quality volumetric images and organ contours are available that can facilitate this SOM development. In this work, a novel and tractable methodology for learning a SOM and generating numerical phantoms from a set of volumetric training images is developed. The proposed methodology learns geometric attribute distributions (GAD) of human anatomic organs from a broad patient population, which characterize both centroid relationships between neighboring organs and anatomic shape similarity of individual organs among patients. By randomly sampling the learned centroid and shape GADs with the constraints of the respective principal attribute variations learned from the training data, an ensemble of stochastic objects can be created. The randomness in organ shape and position reflects the learned variability of human anatomy. To demonstrate the methodology, a SOM of an adult male pelvis is computed and examples of corresponding numerical phantoms are created.
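The core pattern — learn the principal modes of shape variation from a training population, then sample new instances constrained to the learned variations — can be sketched in a few lines. This is a toy stand-in, not the paper's method: the training "organ contours" below are synthetic 2-D ellipses, and principal-component analysis via SVD plays the role of the learned geometric attribute distributions.

```python
# Toy sketch of a learned stochastic object model: fit principal shape
# modes from synthetic training contours, then sample new shapes within
# the learned variation. Training data here are assumed, not clinical.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
# Synthetic training set: ellipses with randomly varying radii, flattened
# into vectors [x_1..x_64, y_1..y_64].
shapes = np.stack([
    np.concatenate([(1.0 + 0.2 * rng.standard_normal()) * np.cos(t),
                    (0.7 + 0.2 * rng.standard_normal()) * np.sin(t)])
    for _ in range(50)
])

mean = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
k = 2                                    # keep the leading shape modes
std = s[:k] / np.sqrt(len(shapes) - 1)   # per-mode standard deviations

def sample_shape():
    """Draw a new shape: mode coefficients clipped to +/-3 sigma of training."""
    coeff = np.clip(rng.standard_normal(k), -3, 3) * std
    return mean + coeff @ Vt[:k]

new = sample_shape()
print(new.shape)
```

Each call to `sample_shape` yields a plausible new contour whose deviation from the mean stays within the variation observed in the training ensemble — the same constraint the abstract imposes when sampling its learned GADs.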
Constraint programming based biomarker optimization.
Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng
2015-01-01
Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data has proven to be one of the successful solutions. Most existing feature selection algorithms do not allow interactive input from users during the feature selection optimization process. This study investigates this question by fixing a few user-input features in the final selected feature subset and formulating these user-input features as constraints in a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs similarly to or much better than the existing feature selection algorithms, even with constraints from both the literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet-lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for the interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
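The constraint idea here — user-specified features must appear in the final subset, and the optimizer fills in the rest — can be illustrated with a simple forward selection. This sketch is not the fsCoP algorithm: the data are synthetic, and a least-squares R² stands in for its classification objective.

```python
# Sketch of constrained feature selection: user-fixed features are forced
# into the subset, then greedy forward selection completes it. Data and
# scoring are illustrative, not the fsCoP method itself.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 6))
y = X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(100)

def score(subset):
    """Relevance score: R^2 of a least-squares fit on the chosen columns."""
    A = np.column_stack([X[:, sorted(subset)], np.ones(len(y))])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

def select(n_features, fixed=frozenset()):
    chosen = set(fixed)                 # user-input features act as constraints
    while len(chosen) < n_features:
        best = max(set(range(X.shape[1])) - chosen,
                   key=lambda j: score(chosen | {j}))
        chosen.add(best)
    return sorted(chosen)

print(select(3, fixed={5}))   # feature 5 is forced in by the user
```

Even when the forced feature is uninformative (as feature 5 is here), it survives into the final subset, while the remaining slots still go to the genuinely predictive features.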
Cosmic Ray Propagation through the Magnetic Fields of the Galaxy with Extended Halo
NASA Technical Reports Server (NTRS)
Zhang, Ming
2005-01-01
In this project we perform theoretical studies of 3-dimensional cosmic ray propagation in magnetic field configurations of the Galaxy with an extended halo. We employ our newly developed Markov stochastic process methods to solve the diffusive cosmic ray transport equation. We seek to understand observations of cosmic ray spectra and composition under the constraints of observations of diffuse gamma-ray and radio emission from the Galaxy. The model parameters are directly related to properties of our Galaxy, such as the size of the Galactic halo, particle transport in Galactic magnetic fields, the distribution of interstellar gas, the primary cosmic ray source distribution and their confinement in the Galaxy. The core of this investigation is the development of software for cosmic ray propagation models with the Markov stochastic process approach. Values of important model parameters for the halo diffusion model are examined in comparison with observations of cosmic ray spectra, composition and the diffuse gamma-ray background. This report summarizes our achievements in the grant period at the Florida Institute of Technology. Work at the co-investigator's institution, the University of New Hampshire, under a companion grant, will be covered in detail by a separate report.
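The general technique named here — solving a diffusive transport equation by simulating an equivalent Markov stochastic process — can be shown in one dimension. This is a generic Feynman–Kac sketch, not the project's Galactic propagation code: we estimate the solution of u_t = D u_xx at a point by averaging the initial condition over Brownian walkers, and check it against the exact Gaussian-convolution answer.

```python
# Sketch of the Markov stochastic-process approach to a diffusion equation:
# u_t = D * u_xx with u(x, 0) = exp(-x^2). By Feynman-Kac,
# u(x0, T) = E[u(X_T, 0)] where dX = sqrt(2D) dW, X_0 = x0.
import numpy as np

rng = np.random.default_rng(2)
D, T, x0 = 0.5, 1.0, 0.0
n_steps, n_walkers = 200, 20000
dt = T / n_steps

X = np.full(n_walkers, x0)
for _ in range(n_steps):                       # Euler-Maruyama random walk
    X += np.sqrt(2 * D * dt) * rng.standard_normal(n_walkers)

estimate = np.exp(-X**2).mean()                # Monte Carlo average
# Exact: convolving exp(-x^2) with the heat kernel gives
# u(0, T) = 1 / sqrt(1 + 4*D*T).
exact = 1 / np.sqrt(1 + 4 * D * T)
print(estimate, exact)
```

The same recipe generalizes to the 3-D transport equation with spatially varying diffusion, sources, and losses, which is what makes the stochastic-process formulation attractive for Galactic propagation models.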
Modelling the protocol stack in NCS with deterministic and stochastic petri net
NASA Astrophysics Data System (ADS)
Hui, Chen; Chunjie, Zhou; Weifeng, Zhu
2011-06-01
The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication service and improved system performance. At present, field testing is impractical for determining the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capabilities. Traditionally, separation between the different layers of the OSI model has been common practice. Nevertheless, such a layered modelling analysis framework of the protocol stack precludes global optimisation for protocol reconfiguration. In this article, we propose a general modelling analysis framework for NCS based on the cross-layer concept, which establishes an efficient system scheduling model by abstracting the time constraint, task interrelation, processor and bus sub-models from the upper and lower layers (application, data link and physical layers). Cross-layer design, through information sharing between protocol layers, can help to overcome this lack of global optimisation. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.
Stochastic transport models for mixing in variable-density turbulence
NASA Astrophysics Data System (ADS)
Bakosi, J.; Ristorcelli, J. R.
2011-11-01
In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation, which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.
Optimal causal inference: estimating stored information and approximating causal architecture.
Still, Susanne; Crutchfield, James P; Ellison, Christopher J
2010-09-01
We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.