Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation
NASA Astrophysics Data System (ADS)
Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.
2017-06-01
Existing approaches to resource allocation for today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages online learning advances to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol combines the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
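The learn-and-adapt idea rests on alternating a primal decision with a projected online update of the Lagrange multipliers. As a minimal sketch only (the log utility, uniform channel model, bounds, and step size below are invented for illustration and are not the paper's actual setup), a plain stochastic dual subgradient loop looks like:

```python
import random

def dual_subgradient(T=20000, b=1.0, mu=0.01, seed=0):
    """Stochastic dual subgradient sketch for
        maximize E[log(1 + h*x)]  subject to  E[x] <= b,
    with a random state h revealed each slot. The primal step is the
    closed-form water-filling argmax of the instantaneous Lagrangian;
    the multiplier lam is learned online (all parameters illustrative)."""
    rng = random.Random(seed)
    lam = 0.5                                # initial multiplier guess
    used = 0.0
    for _ in range(T):
        h = rng.uniform(0.5, 2.0)            # random channel/state
        x = min(max(0.0, 1.0 / lam - 1.0 / h), 5.0)  # primal argmax, bounded
        used += x
        lam = max(1e-6, lam + mu * (x - b))  # projected dual ascent
    return lam, used / T

lam, avg_x = dual_subgradient()
```

At stationarity the multiplier settles where the long-run average allocation meets the budget b, which is the mechanism behind the queue-stability guarantee the abstract refers to.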
Political model of social evolution
Acemoglu, Daron; Egorov, Georgy; Sonin, Konstantin
2011-01-01
Almost all democratic societies evolved socially and politically out of authoritarian and nondemocratic regimes. These changes not only altered the allocation of economic resources in society but also the structure of political power. In this paper, we develop a framework for studying the dynamics of political and social change. The society consists of agents that care about current and future social arrangements and economic allocations; allocation of political power determines who has the capacity to implement changes in economic allocations and future allocations of power. The set of available social rules and allocations at any point in time is stochastic. We show that political and social change may happen without any stochastic shocks or as a result of a shock destabilizing an otherwise stable social arrangement. Crucially, the process of social change is contingent (and history-dependent): the timing and sequence of stochastic events determine the long-run equilibrium social arrangements. For example, the extent of democratization may depend on how early uncertainty about the set of feasible reforms in the future is resolved. PMID:22198760
Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.
Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung
2017-04-01
Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, successfully translating these natural phenomena to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
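The classic OCBA rule referenced here can be stated compactly: each non-best design receives budget in proportion to (std/gap)^2, and the best design balances against all the others. A hedged sketch follows, for a minimization problem with sample means and standard deviations standing in for the noisy fitness estimates; the numbers in the usage note are illustrative, and ties in the means are assumed away.

```python
import math

def ocba_ratios(means, stds):
    """Asymptotic OCBA budget fractions for selecting the minimum-mean
    design from noisy samples (assumes distinct means). Non-best designs
    get budget proportional to (std_i / delta_i)^2, where delta_i is the
    gap to the best; the best design balances against all the others."""
    b = min(range(len(means)), key=lambda i: means[i])   # current best
    ratio = [0.0] * len(means)
    for i in range(len(means)):
        if i != b:
            ratio[i] = (stds[i] / (means[i] - means[b])) ** 2
    # N_b = sigma_b * sqrt( sum_{i != b} (N_i / sigma_i)^2 )
    ratio[b] = stds[b] * math.sqrt(sum((ratio[i] / stds[i]) ** 2
                                       for i in range(len(means)) if i != b))
    total = sum(ratio)
    return [r / total for r in ratio]
```

For example, with means [1.0, 1.2, 2.0] and unit standard deviations, the close runner-up at 1.2 receives nearly as much budget as the best design, while the clearly inferior design at 2.0 receives almost none.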
Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation
NASA Astrophysics Data System (ADS)
Bedi, Amrit Singh; Rajawat, Ketan
2018-05-01
Stochastic network optimization problems entail finding resource allocation policies that are optimal on average but must be designed in an online fashion. Such problems are ubiquitous in communication networks, where resources such as energy and bandwidth are divided among nodes to satisfy certain long-term objectives. This paper proposes an asynchronous incremental dual descent resource allocation algorithm that utilizes delayed stochastic gradients for carrying out its updates. The proposed algorithm is well-suited to heterogeneous networks as it allows the computationally-challenged or energy-starved nodes to, at times, postpone the updates. The asymptotic analysis of the proposed algorithm is carried out, establishing dual convergence under both constant and diminishing step sizes. It is also shown that with a constant step size, the proposed resource allocation policy is asymptotically near-optimal. An application involving multi-cell coordinated beamforming is detailed, demonstrating the usefulness of the proposed algorithm.
Stochastic quantization of conformally coupled scalar in AdS
NASA Astrophysics Data System (ADS)
Jatkar, Dileep P.; Oh, Jae-Hyuk
2013-10-01
We explore the relation between stochastic quantization and holographic Wilsonian renormalization group flow further by studying a conformally coupled scalar in AdS_{d+1}. We establish a one-to-one mapping between the radial flow of its double trace deformation and the stochastic 2-point correlation function. This map is shown to be identical, up to a suitable field redefinition of the bulk scalar, to the original proposal in arXiv:1209.2242.
A supplier selection and order allocation problem with stochastic demands
NASA Astrophysics Data System (ADS)
Zhou, Yun; Zhao, Lei; Zhao, Xiaobo; Jiang, Jianhua
2011-08-01
We consider a system comprising a retailer and a set of candidate suppliers that operates within a finite planning horizon of multiple periods. The retailer replenishes its inventory from the suppliers and satisfies stochastic customer demands. At the beginning of each period, the retailer makes decisions on the replenishment quantity, supplier selection and order allocation among the selected suppliers. An optimisation problem is formulated to minimise the total expected system cost, which includes an outer level stochastic dynamic program for the optimal replenishment quantity and an inner level integer program for supplier selection and order allocation with a given replenishment quantity. For the inner level subproblem, we develop a polynomial algorithm to obtain optimal decisions. For the outer level subproblem, we propose an efficient heuristic for the system with integer-valued inventory, based on the structural properties of the system with real-valued inventory. We investigate the efficiency of the proposed solution approach, as well as the impact of parameters on the optimal replenishment decision with numerical experiments.
Karmperis, Athanasios C.; Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios
2012-01-01
The fair division of a surplus is one of the most widely examined problems. This paper focuses on bargaining problems with fixed disagreement payoffs where risk-neutral agents have reached an agreement that is the Nash-bargaining solution (NBS). We consider a stochastic environment, in which the overall return consists of multiple pies with uncertain sizes, and we examine how these pies can be allocated with fairness among agents. Specifically, fairness is based on Aristotle’s maxim: “equals should be treated equally and unequals unequally, in proportion to the relevant inequality”. In this context, fairness is achieved when all the individual stochastic surplus shares allocated to agents are distributed in proportion to the NBS. We introduce a novel algorithm, which can be used to compute the ratio of each pie that should be allocated to each agent, in order to ensure fairness within a symmetric or asymmetric NBS. PMID:23024752
Water resources planning and management : A stochastic dual dynamic programming approach
NASA Astrophysics Data System (ADS)
Goor, Q.; Pinte, D.; Tilmant, A.
2008-12-01
Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be made taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale non-linear optimization problems (NLP), seeking to maximize net benefits from the system operation while meeting operational and/or institutional constraints, and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent to the availability of water, essentially because of the computational difficulties associated with stochastic formulations. This presentation introduces a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while considering hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem has to be a linear program. Recent developments improve the representation of the non-linear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function.
This model is illustrated on a cascade of 14 reservoirs on the Nile river basin.
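The multistage SDDP machinery is beyond a short snippet, but its two-stage building block, the Benders (L-shaped) cut, can be sketched. The toy problem below, min over x of c*x + E[q*(d - x)^+] with invented costs and a uniform demand scenario set, is solved by iteratively adding supporting cuts to an outer approximation of the recourse function; a 1-D grid search stands in for the LP master problem.

```python
def two_stage_cuts(demands, c=1.0, q=4.0, iters=30):
    """L-shaped / Benders cutting-plane sketch for the two-stage problem
        min_x  c*x + Q(x),   Q(x) = E[ q * max(d - x, 0) ]
    over equiprobable demand scenarios d. The master problem
    min c*x + theta s.t. theta >= a + g*x is solved by 1-D grid search."""
    xs = [0.5 * i for i in range(301)]      # candidate first-stage decisions
    cuts = [(0.0, 0.0)]                     # theta >= a + g*x  (trivial cut)
    x = 0.0
    for _ in range(iters):
        x = min(xs, key=lambda v: c * v + max(a + g * v for a, g in cuts))
        # exact recourse value and a subgradient g = -q * P(d > x) at x
        Q = q * sum(max(d - x, 0.0) for d in demands) / len(demands)
        g = -q * sum(1 for d in demands if d > x) / len(demands)
        cuts.append((Q - g * x, g))         # supporting cut through (x, Q)
    return x

# newsvendor-style optimality condition: P(d > x) = c/q = 0.25, i.e. x near 75
x_opt = two_stage_cuts([float(d) for d in range(1, 101)])
```

SDDP applies the same cut-generation idea stage by stage along sampled inflow scenarios, which is why each one-stage problem must remain a linear program.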
Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L
2015-02-01
Uncertainties exist in water resources systems, while traditional two-stage stochastic programming is risk-neutral and compares the random variables (e.g., total benefit) to identify the best decisions. To deal with risk issues, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid methodology of interval-parameter programming, the conditional value-at-risk measure, and a general two-stage stochastic programming framework. The method extends the traditional two-stage stochastic programming method by enabling uncertainties presented as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It can not only provide information on the benefits of the allocation plan to the decision makers but also measure the extreme expected loss on the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that the model could help managers generate feasible and balanced risk-aversion allocation plans and analyze the trade-offs between system stability and economy.
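The conditional value-at-risk measure used here has a convenient sample form via the Rockafellar-Uryasev representation. A small sketch follows; empirical quantile conventions vary, and this is one common choice rather than the paper's exact implementation.

```python
import math

def cvar(losses, alpha=0.95):
    """Sample CVaR (expected shortfall) via the Rockafellar-Uryasev form
        CVaR_alpha = min_t  t + E[(L - t)^+] / (1 - alpha);
    for an equally weighted sample, a minimizer t is the empirical
    alpha-quantile (the VaR)."""
    xs = sorted(losses)
    n = len(xs)
    t = xs[int(math.ceil(alpha * n)) - 1]    # empirical VaR
    tail = sum(max(v - t, 0.0) for v in xs) / n
    return t + tail / (1.0 - alpha)
```

For losses 1..100 and alpha = 0.9 this returns 95.5, the average of the ten worst losses, which is the "extreme expected loss" quantity that risk-averse formulations penalize.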
Using genetic algorithm to solve a new multi-period stochastic optimization model
NASA Astrophysics Data System (ADS)
Zhang, Xin-Li; Zhang, Ke-Cun
2009-09-01
This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi (Ed.), The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], which was called a hybrid model. However, transaction costs were not considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth-loss risk while maximizing the expected utility; (2) typical market imperfections, such as short-sale constraints and proportional transaction costs, are considered simultaneously; and (3) the application of a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
Didactic discussion of stochastic resonance effects and weak signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adair, R.K.
1996-12-01
A simple, paradigmatic model is used to illustrate some general properties of effects subsumed under the label stochastic resonance. In particular, analyses of the transparent model show that (1) a small amount of noise added to a much larger signal can greatly increase the response to the signal, but (2) a weak signal added to much larger noise will not generate a substantial added response. The conclusions drawn from the model illustrate the general result that stochastic resonance effects do not provide an avenue for signals that are much smaller than noise to affect biology. A further analysis demonstrates the effects of small signals in the shifting of biologically important chemical equilibria under conditions where stochastic resonance effects are significant.
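Point (1), noise helping a subthreshold signal through a nonlinearity, can be illustrated with the simplest threshold-crossing caricature. The unit threshold, signal amplitudes, and Gaussian noise model below are illustrative assumptions, not the paper's model.

```python
import random

def crossing_rate(signal, noise_std, threshold=1.0, n=200000, seed=1):
    """Fraction of slots in which signal + Gaussian noise exceeds a fixed
    threshold -- the simplest caricature of a nonlinear detector."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if signal + rng.gauss(0.0, noise_std) > threshold)
    return hits / n
```

With no noise, a 0.8-amplitude signal never crosses the unit threshold; with moderate noise, crossings become far more frequent when the signal is present than when it is absent. Point (2) is the converse regime, where the signal's small shift of an already large noise distribution barely changes the crossing rate.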
Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li
2017-03-01
The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
Using Probabilistic Information in Solving Resource Allocation Problems for a Decentralized Firm
1978-09-01
deterministic equivalent form of HIQ’s problem (5) by an approach similar to the one used in stochastic programming with simple recourse. See Ziemba [38] or, in...(1964). 38. Ziemba, W.T., "Stochastic Programs with Simple Recourse," Technical Report 72-15, Stanford University, Department of Operations Research
Stochastic Optimization For Water Resources Allocation
NASA Astrophysics Data System (ADS)
Yamout, G.; Hatfield, K.
2003-12-01
For more than 40 years, water resources allocation problems have been addressed using deterministic mathematical optimization. When data uncertainties exist, these methods could lead to solutions that are sub-optimal or even infeasible. While optimization models have been proposed for water resources decision-making under uncertainty, no attempts have been made to address the uncertainties in water allocation problems in an integrated approach. This paper presents an Integrated Dynamic, Multi-stage, Feedback-controlled, Linear, Stochastic, and Distributed-parameter optimization approach to solve a problem of water resources allocation. It attempts to capture (1) the conflict caused by competing objectives, (2) environmental degradation produced by resource consumption, and finally (3) the uncertainty and risk generated by the inherently random nature of state and decision parameters involved in such a problem. A theoretical system is defined through its different elements. These elements, consisting mainly of water resource components and end-users, are described in terms of quantity, quality, and present and future associated risks and uncertainties. Models are identified, modified, and interfaced together to constitute an integrated water allocation optimization framework. This effort is a novel approach to confront the water allocation optimization problem while accounting for uncertainties associated with all its elements, thus resulting in a solution that correctly reflects the physical problem at hand.
Simulation-based planning for theater air warfare
NASA Astrophysics Data System (ADS)
Popken, Douglas A.; Cox, Louis A., Jr.
2004-08-01
Planning for Theatre Air Warfare can be represented as a hierarchy of decisions. At the top level, surviving airframes must be assigned to roles (e.g., Air Defense, Counter Air, Close Air Support, and AAF Suppression) in each time period in response to changing enemy air defense capabilities, remaining targets, and roles of opposing aircraft. At the middle level, aircraft are allocated to specific targets to support their assigned roles. At the lowest level, routing and engagement decisions are made for individual missions. The decisions at each level form a set of time-sequenced Courses of Action taken by opposing forces. This paper introduces a set of simulation-based optimization heuristics operating within this planning hierarchy to optimize allocations of aircraft. The algorithms estimate distributions for stochastic outcomes of the pairs of Red/Blue decisions. Rather than using traditional stochastic dynamic programming to determine optimal strategies, we use an innovative combination of heuristics, simulation-optimization, and mathematical programming. Blue decisions are guided by a stochastic hill-climbing search algorithm while Red decisions are found by optimizing over a continuous representation of the decision space. Stochastic outcomes are then provided by fast, Lanchester-type attrition simulations. This paper summarizes preliminary results from top and middle level models.
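The combination of hill-climbing search with simulated, noisy evaluations can be caricatured in a few lines. The quadratic objective, noise level, and sample-averaging scheme below are invented stand-ins for the Lanchester attrition simulations, not the paper's actual models.

```python
import random

def stochastic_hill_climb(f, x0, step=0.5, iters=200, samples=20, seed=2):
    """Hill-climbing on a noisy objective: each candidate is scored by
    averaging `samples` simulated outcomes (true value plus Gaussian
    noise), a stand-in for replicating a fast attrition simulation."""
    rng = random.Random(seed)

    def estimate(x):                         # noisy simulation oracle
        return sum(f(x) + rng.gauss(0.0, 0.5) for _ in range(samples)) / samples

    x, best = x0, estimate(x0)
    for _ in range(iters):
        cand = x + rng.choice([-step, step])
        val = estimate(cand)
        if val > best:                       # accept apparent improvements
            x, best = cand, val
    return x
```

Averaging replications shrinks the estimation noise by the square root of the sample count, which is what lets a simple local search make reliable accept/reject decisions on stochastic outcomes.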
NASA Astrophysics Data System (ADS)
Dai, C.; Qin, X. S.; Chen, Y.; Guo, H. C.
2018-06-01
A Gini-coefficient based stochastic optimization (GBSO) model was developed by integrating the hydrological model, water balance model, Gini coefficient and chance-constrained programming (CCP) into a general multi-objective optimization modeling framework for supporting water resources allocation at a watershed scale. The framework is advantageous in reflecting the conflicting equity and benefit objectives for water allocation, maintaining the water balance of the watershed, and dealing with system uncertainties. GBSO was solved by the non-dominated sorting Genetic Algorithm II (NSGA-II), after the parameter uncertainties of the hydrological model were quantified into the probability distribution of runoff as inputs to the CCP model and the chance constraints were converted to their deterministic equivalents. The proposed model was applied to identify the Pareto optimal water allocation schemes in the Lake Dianchi watershed, China. The Pareto-optimal results reflect the tradeoff between system benefit (αSB) and Gini coefficient (αG) under different significance levels (i.e., q) and different drought scenarios, which reveals the conflicting nature of equity and efficiency in water allocation problems. A lower q generally implies a lower risk of violating the system constraints, and a worse drought intensity scenario corresponds to less available water resources; both would lead to a decreased system benefit and a less equitable water allocation scheme. Thus, the proposed modeling framework could help obtain the Pareto optimal schemes under complexity and ensure that the proposed water allocation solutions are effective for coping with drought conditions, with a proper tradeoff between system benefit and water allocation equity.
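The Gini coefficient serving as the equity objective has a standard closed form on a sorted allocation vector; a small sketch (one of several equivalent sample formulas):

```python
def gini(alloc):
    """Gini coefficient of a nonnegative allocation vector:
        G = sum_i (2i - n - 1) * x_(i) / (n * sum(x)),
    with x sorted ascending and i = 1..n. G = 0 means a perfectly equal
    allocation; G = (n-1)/n means one agent receives everything."""
    xs = sorted(alloc)
    n = len(xs)
    return sum((2 * (i + 1) - n - 1) * v for i, v in enumerate(xs)) / (n * sum(xs))
```

In a multi-objective framework like GBSO, this value is minimized alongside the maximization of system benefit, which is what produces the Pareto front between equity and efficiency.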
Sensory Optimization by Stochastic Tuning
Jurica, Peter; Gepshtein, Sergei; Tyukin, Ivan; van Leeuwen, Cees
2013-01-01
Individually, visual neurons are each selective for several aspects of stimulation, such as stimulus location, frequency content, and speed. Collectively, the neurons implement the visual system’s preferential sensitivity to some stimuli over others, manifested in behavioral sensitivity functions. We ask how the individual neurons are coordinated to optimize visual sensitivity. We model synaptic plasticity in a generic neural circuit, and find that stochastic changes in strengths of synaptic connections entail fluctuations in parameters of neural receptive fields. The fluctuations correlate with uncertainty of sensory measurement in individual neurons: the higher the uncertainty the larger the amplitude of fluctuation. We show that this simple relationship is sufficient for the stochastic fluctuations to steer sensitivities of neurons toward a characteristic distribution, from which follows a sensitivity function observed in human psychophysics, and which is predicted by a theory of optimal allocation of receptive fields. The optimal allocation arises in our simulations without supervision or feedback about system performance and independently of coupling between neurons, making the system highly adaptive and sensitive to prevailing stimulation. PMID:24219849
Strategic planning for disaster recovery with stochastic last mile distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Russell Whitford; Van Hentenryck, Pascal; Coffrin, Carleton
2010-01-01
This paper considers the single commodity allocation problem (SCAP) for disaster recovery, a fundamental problem faced by all populated areas. SCAPs are complex stochastic optimization problems that combine resource allocation, warehouse routing, and parallel fleet routing. Moreover, these problems must be solved under tight runtime constraints to be practical in real-world disaster situations. This paper formalizes the specification of SCAPs and introduces a novel multi-stage hybrid-optimization algorithm that utilizes the strengths of mixed integer programming, constraint programming, and large neighborhood search. The algorithm was validated on hurricane disaster scenarios generated by Los Alamos National Laboratory using state-of-the-art disaster simulation tools and is deployed to aid federal organizations in the US.
Resource Allocation and Outpatient Appointment Scheduling Using Simulation Optimization
Lin, Carrie Ka Yuk; Ling, Teresa Wai Ching; Yeung, Wing Kwan
2017-01-01
This paper studies the real-life problems of outpatient clinics having the multiple objectives of minimizing resource overtime, patient waiting time, and waiting area congestion. In the clinic, there are several patient classes, each of which follows different treatment procedure flow paths through a multiphase and multiserver queuing system with scarce staff and limited space. We incorporate the stochastic factors for the probabilities of the patients being diverted into different flow paths, patient punctuality, arrival times, procedure duration, and the number of accompanied visitors. We present a novel two-stage simulation-based heuristic algorithm to assess various tactical and operational decisions for optimizing the multiple objectives. In stage I, we search for a resource allocation plan, and in stage II, we determine a block appointment schedule by patient class and a service discipline for the daily operational level. We also explore the effects of the separate strategies and their integration to identify the best possible combination. The computational experiments are designed on the basis of data from a study of an ophthalmology clinic in a public hospital. Results show that our approach significantly mitigates the undesirable outcomes by integrating the strategies and increasing the resource flexibility at the bottleneck procedures without adding resources. PMID:29104748
Allocating resources to large wildland fires: a model with stochastic production rates
Romain Mees; David Strauss
1992-01-01
Wildland fires that grow out of the initial attack phase are responsible for most of the damage and burned area. We model the allocation of fire suppression resources (ground crews, engines, bulldozers, and airdrops) to these large fires. The fireline at a given future time is partitioned into homogeneous segments on the basis of fuel type, available resources, risk,...
Enhancements and Algorithms for Avionic Information Processing System Design Methodology.
1982-06-16
programming algorithm is enhanced by incorporating task precedence constraints and hardware failures. Stochastic network methods are used to analyze...allocations in the presence of random fluctuations. Graph theoretic methods are used to analyze hardware designs, and new designs are constructed with...There, spatial dynamic programming (SDP) was used to solve a static, deterministic software allocation problem. Under the current contract the SDP
Resource allocation for wildland fire suppression planning using a stochastic program
Alex Taylor Masarie
2011-01-01
Resource allocation for wildland fire suppression problems, referred to here as Fire-S problems, have been studied for over a century. Not only have the many variants of the base Fire-S problem made it such a durable one to study, but advances in suppression technology and our ever-expanding knowledge of and experience with wildland fire behavior have required almost...
A robust optimisation approach to the problem of supplier selection and allocation in outsourcing
NASA Astrophysics Data System (ADS)
Fu, Yelin; Keung Lai, Kin; Liang, Liang
2016-03-01
We formulate the supplier selection and allocation problem in outsourcing under an uncertain environment as a stochastic programming problem. Both the decision-maker's attitude towards risk and the penalty parameters for demand deviation are considered in the objective function. A service level agreement, upper bound for each selected supplier's allocation and the number of selected suppliers are considered as constraints. A novel robust optimisation approach is employed to solve this problem under different economic situations. Illustrative examples are presented with managerial implications highlighted to support decision-making.
Interactive two-stage stochastic fuzzy programming for water resources management.
Wang, S; Huang, G H
2011-08-01
In this study, an interactive two-stage stochastic fuzzy programming (ITSFP) approach has been developed through incorporating an interactive fuzzy resolution (IFR) method within an inexact two-stage stochastic programming (ITSP) framework. ITSFP can not only tackle dual uncertainties presented as fuzzy boundary intervals that exist in the objective function and the left- and right-hand sides of constraints, but also permit in-depth analyses of various policy scenarios that are associated with different levels of economic penalties when the promised policy targets are violated. A management problem in terms of water resources allocation has been studied to illustrate applicability of the proposed approach. The results indicate that a set of solutions under different feasibility degrees has been generated for planning the water resources allocation. They can help the decision makers (DMs) to conduct in-depth analyses of tradeoffs between economic efficiency and constraint-violation risk, as well as enable them to identify, in an interactive way, a desired compromise between satisfaction degree of the goal and feasibility of the constraints (i.e., risk of constraint violation).
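The two-stage structure underlying approaches like the one above can be illustrated with a deliberately tiny scenario model: a first-stage promised water target, and second-stage economic penalties when realized supply violates it. The numbers and the brute-force enumeration solver below are illustrative assumptions, not the ITSFP method itself.

```python
import random

def expected_net_benefit(target, scenarios, benefit=100.0, penalty=160.0):
    """First stage: promise `target` units of water (earning `benefit` per
    unit promised). Second stage (recourse): pay `penalty` per unit of
    shortfall when the realized supply cannot cover the promised target."""
    total = 0.0
    for supply in scenarios:
        shortfall = max(0.0, target - supply)
        total += benefit * target - penalty * shortfall
    return total / len(scenarios)

def best_target(scenarios, candidates):
    # Enumerate candidate first-stage decisions; keep the best in expectation.
    return max(candidates, key=lambda t: expected_net_benefit(t, scenarios))

random.seed(1)
scenarios = [random.uniform(2.0, 8.0) for _ in range(1000)]  # uncertain supply
targets = [0.5 * i for i in range(17)]                       # 0.0 .. 8.0
x_star = best_target(scenarios, targets)
```

With these toy parameters the optimal promise sits where the chance of a shortfall balances the penalty-to-benefit ratio, i.e. near the 100/160 quantile of the supply distribution.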
Sensory optimization by stochastic tuning.
Jurica, Peter; Gepshtein, Sergei; Tyukin, Ivan; van Leeuwen, Cees
2013-10-01
Individually, visual neurons are each selective for several aspects of stimulation, such as stimulus location, frequency content, and speed. Collectively, the neurons implement the visual system's preferential sensitivity to some stimuli over others, manifested in behavioral sensitivity functions. We ask how the individual neurons are coordinated to optimize visual sensitivity. We model synaptic plasticity in a generic neural circuit and find that stochastic changes in strengths of synaptic connections entail fluctuations in parameters of neural receptive fields. The fluctuations correlate with uncertainty of sensory measurement in individual neurons: The higher the uncertainty the larger the amplitude of fluctuation. We show that this simple relationship is sufficient for the stochastic fluctuations to steer sensitivities of neurons toward a characteristic distribution, from which follows a sensitivity function observed in human psychophysics and which is predicted by a theory of optimal allocation of receptive fields. The optimal allocation arises in our simulations without supervision or feedback about system performance and independently of coupling between neurons, making the system highly adaptive and sensitive to prevailing stimulation.
Stochastic does not equal ad hoc. [theories of lunar origin
NASA Technical Reports Server (NTRS)
Hartmann, W. K.
1984-01-01
Some classes of influential events in solar system history are class-predictable but not event-predictable. Theories of lunar origin should not ignore class-predictable stochastic events. Impacts and close encounters with large objects during planet formation are class-predictable. These stochastic events, such as large impacts that triggered ejection of Earth-mantle material into a circum-Earth cloud, should not be rejected as ad hoc. A way to deal with such events scientifically is to investigate their consequences; if it can be shown that they might produce the Moon, they become viable concepts in theories of lunar origin.
Artificial Neural Network Metamodels of Stochastic Computer Simulations
1994-08-10
Thesis: Artificial Neural Network Metamodels of Stochastic Computer Simulations, by Robert Allen Kilmer.
Schreiber, Sebastian J; Rosenheim, Jay A; Williams, Neal W; Harder, Lawrence D
2015-01-01
Variation in resource availability can select for traits that reduce the negative impacts of this variability on mean fitness. Such selection may be particularly potent for seed production in flowering plants, as they often experience variation in pollen receipt among individuals and among flowers within individuals. Using analytically tractable models, we examine the optimal allocations for producing ovules, attracting pollen, and maturing seeds in deterministic and stochastic pollen environments. In deterministic environments, the optimal strategy attracts sufficient pollen to fertilize every ovule and mature every zygote into a seed. Stochastic environments select for allocations proportional to the risk of seed production being limited by zygotes or seed maturation. When producing an ovule is cheap and maturing a seed is expensive, among-plant variation selects for attracting more pollen at the expense of producing fewer ovules and having fewer resources for seed maturation. Despite this increased allocation, such populations are likely to be pollen limited. In contrast, within-plant variation generally selects for an overproduction of ovules and, to a lesser extent, pollen attraction. Such populations are likely to be resource limited and exhibit low seed-to-ovule ratios. These results highlight the importance of multiscale variation in the evolution and ecology of resource allocations.
Assessing marginal water values in multipurpose multireservoir systems via stochastic programming
NASA Astrophysics Data System (ADS)
Tilmant, A.; Pinte, D.; Goor, Q.
2008-12-01
The International Conference on Water and the Environment held in Dublin in 1992 emphasized the need to consider water as an economic good. Since water markets are usually absent or ineffective, the value of water cannot be directly derived from market activities but must rather be assessed through shadow prices. Economists have developed various valuation techniques to determine the economic value of water, especially to handle allocation issues involving environmental water uses. Most of the nonmarket valuation studies reported in the literature focus on long-run policy problems, such as permanent (re)allocations of water, and assume that the water availability is given. When dealing with short-run allocation problems, water managers are facing complex spatial and temporal trade-offs and must therefore be able to track site and time changes in water values across different hydrologic conditions, especially in arid and semiarid areas where the availability of water is a limiting and stochastic factor. This paper presents a stochastic programming approach for assessing the statistical distribution of marginal water values in multipurpose multireservoir systems where hydropower generation and irrigation crop production are the main economic activities depending on water. In the absence of a water market, the Lagrange multipliers correspond to shadow prices, and the marginal water values are the Lagrange multipliers associated with the mass balance equations of the reservoirs. The methodology is illustrated with a cascade of hydroelectric-irrigation reservoirs in the Euphrates river basin in Turkey and Syria.
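As a hedged illustration of the marginal-value idea (not the authors' stochastic program), the shadow price of water in a toy one-period, two-use allocation can be approximated by a finite difference on the optimal-benefit function; the prices and the irrigation capacity below are invented for the example.

```python
def max_benefit(water, hydro_price=50.0, irr_price=80.0, irr_cap=3.0):
    """Toy single-period allocation: irrigation is more valuable but capped;
    any remaining water goes to hydropower. Returns the maximal benefit."""
    irr = min(water, irr_cap)
    return irr_price * irr + hydro_price * (water - irr)

def shadow_price(water, eps=1e-6):
    # Finite-difference approximation of the marginal water value, i.e.
    # the Lagrange multiplier on the water-availability (mass-balance)
    # constraint at this level of availability.
    return (max_benefit(water + eps) - max_benefit(water)) / eps
```

Below the irrigation capacity the marginal unit earns the irrigation price (80); once irrigation is saturated it earns only the hydropower price (50), which is exactly the site- and state-dependent behavior the Lagrange multipliers capture.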
Uncertainty-accounting environmental policy and management of water systems.
Baresel, Christian; Destouni, Georgia
2007-05-15
Environmental policies for water quality and ecosystem management do not commonly require explicit stochastic accounts of uncertainty and risk associated with the quantification and prediction of waterborne pollutant loads and abatement effects. In this study, we formulate and investigate a possible environmental policy that does require an explicit stochastic uncertainty account. We compare both the environmental and economic resource allocation performance of such an uncertainty-accounting environmental policy with that of deterministic, risk-prone and risk-averse environmental policies under a range of different hypothetical, yet still possible, scenarios. The comparison indicates that a stochastic uncertainty-accounting policy may perform better than deterministic policies over a range of different scenarios. Even in the absence of reliable site-specific data, reported literature values appear to be useful for such a stochastic account of uncertainty.
Enhancing robustness of interdependent network by adding connectivity and dependence links
NASA Astrophysics Data System (ADS)
Cui, Pengshuai; Zhu, Peidong; Wang, Ke; Xun, Peng; Xia, Zhuoqun
2018-05-01
Enhancing the robustness of interdependent networks by adding connectivity links has been researched extensively; however, few studies focus on adding both connectivity and dependence links. In this paper, we study how to allocate limited costs reasonably between adding connectivity links and adding dependence links. First, we divide attackers into stubborn attackers and smart attackers according to whether they change their attack modes as the network structure changes. Then, through simulations, we derive link addition strategies tailored to each attacker type, with which the limited costs can be allocated between connectivity and dependence links to achieve more robustness than adding either type alone. The results show that, compared to adding only connectivity links or only dependence links, allocating the limited resources reasonably across both link types brings more robustness to interdependent networks.
Optimized maritime emergency resource allocation under dynamic demand.
Zhang, Wenfen; Yan, Xinping; Yang, Jiaqi
2017-01-01
Emergency resources are important for evacuating people and rescuing property when accidents occur. Relief efforts can be improved by a reasonable emergency resource allocation schedule prepared in advance. Because the marine environment is complicated and changeable, the place, type, and severity of a maritime accident are uncertain and stochastic, producing dynamic demand for emergency resources. Under dynamic demand, making a reasonable emergency resource allocation schedule is challenging; the key problem is to determine the optimal stock of emergency resources at supplier centers so as to improve relief efforts. This paper models the dynamic demand as a set and presents a maritime emergency resource allocation model with uncertain data. A robust approach is then developed to ensure that the resource allocation schedule performs well under dynamic demand. Finally, a case study shows that the proposed methodology is feasible for maritime emergency resource allocation. The findings can help emergency managers schedule emergency resource allocation more flexibly in response to dynamic demand.
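One minimal reading of the robust approach, choosing a stock level against the worst case over a demand set, can be sketched as follows; the demand values and cost coefficients are made up for illustration and are not taken from the paper.

```python
def worst_case_cost(stock, demand_set, hold=1.0, short=5.0):
    """Cost of a stocking decision under the worst demand in the set:
    a holding cost for unused stock, a higher shortage cost for unmet
    demand (shortages are worse than surplus in emergency response)."""
    return max(hold * max(0, stock - d) + short * max(0, d - stock)
               for d in demand_set)

def robust_stock(demand_set):
    # Min-max decision: the stock level whose worst-case cost is smallest.
    lo, hi = min(demand_set), max(demand_set)
    return min(range(lo, hi + 1),
               key=lambda s: worst_case_cost(s, demand_set))

demand_set = [10, 14, 20]   # dynamic demand modelled as an uncertainty set
s = robust_stock(demand_set)
```

Because shortages cost five times what surpluses do here, the robust stock lands near the top of the demand set rather than at its midpoint.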
Some Stochastic-Duel Models of Combat.
1983-03-01
Master's thesis: Some Stochastic-Duel Models of Combat, by Jum Soo Choe, Naval Postgraduate School, Monterey, California, March 1983.
The Heterogeneous Investment Horizon and Dynamic Strategies for Asset Allocation
NASA Astrophysics Data System (ADS)
Xiong, Heping; Xu, Yiheng; Xiao, Yi
This paper discusses the influence of the portfolio rebalancing strategy on the efficiency of long-term investment portfolios under the assumption of independent stationary distribution of returns. By comparing the efficient sets of the stochastic rebalancing strategy, the simple rebalancing strategy and the buy-and-hold strategy with specific data examples, we find that the stochastic rebalancing strategy is optimal, while the simple rebalancing strategy is of the lowest efficiency. In addition, the simple rebalancing strategy lowers the efficiency of the portfolio instead of improving it.
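A small Monte Carlo sketch of the comparison between buy-and-hold and simple (periodic) rebalancing under iid returns; the two-asset return parameters are invented, and the paper's stochastic rebalancing strategy is not modelled here.

```python
import random

def simulate(weights, returns, rebalance):
    """Grow a two-asset portfolio (initial wealth 1.0) over iid return
    draws. rebalance=True restores the target weights each period (the
    simple rebalancing strategy); False lets holdings drift (buy-and-hold)."""
    hold = list(weights)
    for r0, r1 in returns:
        hold[0] *= 1 + r0
        hold[1] *= 1 + r1
        if rebalance:
            total = hold[0] + hold[1]
            hold = [weights[0] * total, weights[1] * total]
    return hold[0] + hold[1]

random.seed(7)
# iid stationary returns: a volatile asset and a steady one
rets = [(random.gauss(0.05, 0.2), random.gauss(0.02, 0.02)) for _ in range(200)]
w_bh = simulate([0.5, 0.5], rets, rebalance=False)
w_rb = simulate([0.5, 0.5], rets, rebalance=True)
```

Comparing terminal wealths over many such return paths is the kind of efficiency comparison the paper carries out analytically across its three strategies.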
The internalist perspective on inevitable arbitrage in financial markets
NASA Astrophysics Data System (ADS)
Matsuno, Koichiro
2003-06-01
Arbitrage as an inevitable component of financial markets is due to the robust interplay between the continuous and the discontinuous stochastic variables appearing in the underlying dynamics. We present empirical evidence of such an arbitrage through a laboratory experiment on portfolio management in the Japan-United States financial markets over the last several years, under the condition that the asset allocation was updated every day over the entire period. The portfolio management addressing the foreign exchange, the stock, and the bond markets was accomplished by referring to and processing only those empirical data that have been compiled by and made available from the monetary authorities and the relevant financial markets so far. The averaged annual yield of the portfolio counted in the denomination of US currency was slightly greater than the averaged yield of the same physical assets counted in the denomination of Japanese currency, indicating the occurrence of arbitrage pricing in the financial markets. Daily update of asset allocation was conducted by referring to the predictive movement internal to the dynamics such that monetary flow variables, which are discontinuously stochastic upon the act of measurement internal to the markets, generate monetary stock variables that turn out to be both continuously stochastic and robust in the effect.
Tondjo, Kodjo; Brancheriau, Loïc; Sabatier, Sylvie; Kokutse, Adzo Dzifa; Kokou, Kouami; Jaeger, Marc; de Reffye, Philippe; Fourcaud, Thierry
2018-06-08
For a given genotype, the observed variability of tree forms results from the stochasticity of meristem functioning and from changing and heterogeneous environmental factors affecting biomass formation and allocation. In response to climate change, trees adapt their architecture by adjusting growth processes such as pre- and neoformation, as well as polycyclic growth. This is the case for the teak tree. The aim of this work was to adapt the plant model, GreenLab, in order to take into consideration both these processes using existing data on this tree species. This work adopted GreenLab formalism based on source-sink relationships at organ level that drive biomass production and partitioning within the whole plant over time. The stochastic aspect of phytomer production can be modelled by a Bernoulli process. The teak model was designed, parameterized and analysed using the architectural data from 2- to 5-year-old teak trees in open field stands. Growth and development parameters were identified, fitting the observed compound organic series with the theoretical series, using generalized least squares methods. Phytomer distributions of growth units and branching pattern varied depending on their axis category, i.e. their physiological age. These emerging properties were in accordance with the observed growth patterns and biomass allocation dynamics during a growing season marked by a short dry season. Annual growth patterns observed on teak, including shoot pre- and neoformation and polycyclism, were reproduced by the new version of the GreenLab model. However, further updating is discussed in order to ensure better consideration of radial variation in basic specific gravity of wood. Such upgrading of the model will enable teak ideotypes to be defined for improving wood production in terms of both volume and quality.
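The Bernoulli model of phytomer production mentioned above can be sketched in a few lines; the cycle count and success probability are illustrative placeholders, not fitted teak parameters.

```python
import random

def growth_unit_phytomers(n_cycles, p, rng):
    """Stochastic meristem functioning as a Bernoulli process: in each
    growth cycle the meristem produces a phytomer with probability p,
    otherwise it rests. The phytomer count of a growth unit is the
    number of successes (a binomial variable)."""
    return sum(1 for _ in range(n_cycles) if rng.random() < p)

rng = random.Random(42)
counts = [growth_unit_phytomers(20, 0.7, rng) for _ in range(5000)]
mean = sum(counts) / len(counts)   # should be near n * p = 14
```

In the GreenLab setting, p would vary with the physiological age of the axis, which is what produces the different phytomer distributions per axis category reported above.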
Multithreaded Stochastic PDES for Reactions and Diffusions in Neurons.
Lin, Zhongwei; Tropper, Carl; Mcdougal, Robert A; Patoary, Mohammand Nazrul Ishlam; Lytton, William W; Yao, Yiping; Hines, Michael L
2017-07-01
Cells exhibit stochastic behavior when the number of molecules is small. Hence a stochastic reaction-diffusion simulator capable of working at scale can provide a more accurate view of molecular dynamics within the cell. This paper describes a parallel discrete event simulator, Neuron Time Warp-Multi Thread (NTW-MT), developed for the simulation of reaction diffusion models of neurons. To the best of our knowledge, this is the first parallel discrete event simulator oriented towards stochastic simulation of chemical reactions in a neuron. The simulator was developed as part of the NEURON project. NTW-MT is optimistic and thread-based, which attempts to capitalize on multi-core architectures used in high performance machines. It makes use of a multi-level queue for the pending event set and a single roll-back message in place of individual anti-messages to disperse contention and decrease the overhead of processing rollbacks. Global Virtual Time is computed asynchronously both within and among processes to get rid of the overhead for synchronizing threads. Memory usage is managed in order to avoid locking and unlocking when allocating and de-allocating memory and to maximize cache locality. We verified our simulator on a calcium buffer model. We examined its performance on a calcium wave model, comparing it to the performance of a process based optimistic simulator and a threaded simulator which uses a single priority queue for each thread. Our multi-threaded simulator is shown to achieve superior performance to these simulators. Finally, we demonstrated the scalability of our simulator on a larger CICR model and a more detailed CICR model.
NASA Astrophysics Data System (ADS)
Panda, Satyasen
2018-05-01
This paper proposes a modified artificial bee colony optimization (ABC) algorithm based on Lévy flight swarm intelligence, referred to as artificial bee colony Lévy flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve the asset assignment problem based on signal to noise ratio (SNR) optimization in OCDMA networks with quality of service constraints. The proposed optimization using the ABC-LFSW algorithm provides methods for minimizing various noises and interferences, regulating the transmitted power and optimizing the network design for improving the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving the network performance with optimized input parameters. Detailed discussion and simulation results based on transmitted power allocation and power efficiency of OCPs are included. The experimental results demonstrate the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.
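The Lévy-flight ingredient can be sketched with Mantegna's algorithm, a standard way to draw heavy-tailed step lengths; β = 1.5 is a common choice, and nothing here reproduces the paper's full ABC-LFSW procedure.

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for Lévy-flight steps: the ratio of two
    Gaussians (one with a beta-dependent scale) yields a heavy-tailed
    step, letting a few search moves be very long exploratory jumps."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

rng = random.Random(0)
steps = [levy_step(rng) for _ in range(10000)]
# Most steps are small, but the tail holds occasional very large jumps.
```

In an ABC-style loop, such steps would perturb candidate solutions (e.g. power allocations) instead of the usual uniform neighbourhood move.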
NASA Astrophysics Data System (ADS)
Yahyaei, Mohsen; Bashiri, Mahdi
2017-12-01
The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first is to apply sample average approximation (SAA) to approximate the two-stage stochastic problem via sampling. Then, by applying a multi-cut Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
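The sample average approximation step can be illustrated on a toy objective whose true expectation is known in closed form; the quadratic cost and Gaussian scenarios are assumptions for the example, standing in for the exponentially many disruption scenarios of the real model.

```python
import random

def true_expected_cost(x):
    # Closed form of E[(x - S)^2] for S ~ N(3, 1): (x - 3)^2 + Var(S).
    return (x - 3.0) ** 2 + 1.0

def saa_cost(x, samples):
    """Sample average approximation: the expectation over scenarios is
    replaced by an average over a modest random sample of them."""
    return sum((x - s) ** 2 for s in samples) / len(samples)

random.seed(3)
samples = [random.gauss(3.0, 1.0) for _ in range(500)]
grid = [0.1 * i for i in range(61)]                  # candidate decisions
x_saa = min(grid, key=lambda x: saa_cost(x, samples))
x_true = min(grid, key=true_expected_cost)
gap = abs(x_saa - x_true)   # optimality gap; shrinks as the sample grows
```

Repeating this with independent samples gives the statistical optimality-gap estimates that the numerical studies above report.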
Tavakoli, Ali; Nikoo, Mohammad Reza; Kerachian, Reza; Soltani, Maryam
2015-04-01
In this paper, a new fuzzy methodology is developed to optimize water and waste load allocation (WWLA) in rivers under uncertainty. An interactive two-stage stochastic fuzzy programming (ITSFP) method is utilized to handle parameter uncertainties, which are expressed as fuzzy boundary intervals. An iterative linear programming (ILP) is also used for solving the nonlinear optimization model. To accurately consider the impacts of the water and waste load allocation strategies on the river water quality, a calibrated QUAL2Kw model is linked with the WWLA optimization model. The soil, water, atmosphere, and plant (SWAP) simulation model is utilized to determine the quantity and quality of each agricultural return flow. To control pollution loads of agricultural networks, it is assumed that a part of each agricultural return flow can be diverted to an evaporation pond and also another part of it can be stored in a detention pond. In detention ponds, contaminated water is exposed to solar radiation for disinfecting pathogens. Results of applying the proposed methodology to the Dez River system in the southwestern region of Iran illustrate its effectiveness and applicability for water and waste load allocation in rivers. In the planning phase, this methodology can be used for estimating the capacities of return flow diversion system and evaporation and detention ponds.
Finite-time state feedback stabilisation of stochastic high-order nonlinear feedforward systems
NASA Astrophysics Data System (ADS)
Xie, Xue-Jun; Zhang, Xing-Hui; Zhang, Kemei
2016-07-01
This paper studies the finite-time state feedback stabilisation of stochastic high-order nonlinear feedforward systems. Based on the stochastic Lyapunov theorem on finite-time stability, by using the homogeneous domination method, the adding one power integrator and sign function method, constructing a ? Lyapunov function and verifying the existence and uniqueness of solution, a continuous state feedback controller is designed to guarantee the closed-loop system finite-time stable in probability.
Impact of deterministic and stochastic updates on network reciprocity in the prisoner's dilemma game
NASA Astrophysics Data System (ADS)
Tanimoto, Jun
2014-08-01
In 2 × 2 prisoner's dilemma games, network reciprocity is one mechanism for adding social viscosity, which leads to cooperative equilibrium. This study introduced an intriguing framework for the strategy update rule that allows any combination of a purely deterministic method, imitation max (IM), and a purely probabilistic one, pairwise Fermi (Fermi-PW). A series of simulations covering the whole range from IM to Fermi-PW reveals that, as a general tendency, the larger fractions of stochastic updating reduce network reciprocity, so long as the underlying lattice contains no noise in the degree of distribution. However, a small amount of stochastic flavor added to an otherwise perfectly deterministic update rule was actually found to enhance network reciprocity. This occurs because a subtle stochastic effect in the update rule improves the evolutionary trail in games having more stag-hunt-type dilemmas, although the same stochastic effect degenerates evolutionary trails in games having more chicken-type dilemmas. We explain these effects by dividing evolutionary trails into the enduring and expanding periods defined by Shigaki et al. [Phys. Rev. E 86, 031141 (2012), 10.1103/PhysRevE.86.031141].
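The two update rules, and a mixing knob sweeping between them, can be sketched directly from their definitions; the tie-breaking convention and the noise parameter k = 0.1 are illustrative choices, not the paper's exact setup.

```python
import math
import random

def imitation_max(self_payoff, neighbors):
    """Deterministic IM update: copy the strategy of the best-performing
    neighbor; None means keep the current strategy (no neighbor beats us)."""
    best_payoff, best_strategy = self_payoff, None
    for payoff, strategy in neighbors:
        if payoff > best_payoff:
            best_payoff, best_strategy = payoff, strategy
    return best_strategy

def fermi_copy_prob(self_payoff, other_payoff, k=0.1):
    """Pairwise Fermi rule: probability of copying a neighbor rises
    smoothly with the payoff difference (k sets the noise level)."""
    return 1.0 / (1.0 + math.exp((self_payoff - other_payoff) / k))

def mixed_update(w, self_payoff, neighbors, rng, k=0.1):
    """With probability w apply the stochastic Fermi rule to a random
    neighbor, otherwise apply deterministic imitation max; w sweeps the
    whole range from pure IM (w=0) to pure Fermi-PW (w=1)."""
    if rng.random() < w:
        payoff, strategy = rng.choice(neighbors)
        if rng.random() < fermi_copy_prob(self_payoff, payoff, k):
            return strategy
        return None
    return imitation_max(self_payoff, neighbors)
```

Sweeping w from 0 to 1 in a lattice simulation is the experiment whose outcome (mostly declining network reciprocity, with a small stochastic boost near w = 0) is described above.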
Skinner, James E; Meyer, Michael; Nester, Brian A; Geary, Una; Taggart, Pamela; Mangione, Antoinette; Ramalanjaona, George; Terregino, Carol; Dalsey, William C
2009-01-01
Objective: Comparative algorithmic evaluation of heartbeat series in low-to-high risk cardiac patients for the prospective prediction of risk of arrhythmic death (AD). Background: Heartbeat variation reflects cardiac autonomic function and risk of AD. Indices based on linear stochastic models are independent risk factors for AD in post-myocardial infarction (post-MI) cohorts. Indices based on nonlinear deterministic models have superior predictability in retrospective data. Methods: Patients were enrolled (N = 397) in three emergency departments upon presenting with chest pain and were determined to be at low-to-high risk of acute MI (> 7%). Brief ECGs were recorded (15 min) and R-R intervals assessed by three nonlinear algorithms (PD2i, DFA, and ApEn) and four conventional linear-stochastic measures (SDNN, MNN, 1/f-Slope, LF/HF). Out-of-hospital AD was determined by modified Hinkle–Thaler criteria. Results: All-cause mortality at one-year follow-up was 10.3%, with 7.7% adjudicated to be AD. The sensitivity and relative risk for predicting AD were highest at all time-points for the nonlinear PD2i algorithm (p ≤ 0.001). The sensitivity at 30 days was 100%, specificity 58%, and relative risk > 100 (p ≤ 0.001); sensitivity at 360 days was 95%, specificity 58%, and relative risk > 11.4 (p ≤ 0.001). Conclusions: Heartbeat analysis by the time-dependent nonlinear PD2i algorithm is comparatively the superior test. PMID:19707283
USDA-ARS?s Scientific Manuscript database
When Lagrangian stochastic models for turbulent dispersion are applied to complex flows, some type of ad hoc intervention is almost always necessary to eliminate unphysical behavior in the numerical solution. This paper discusses numerical considerations when solving the Langevin-based particle velo...
Stochastic Ocean Eddy Perturbations in a Coupled General Circulation Model.
NASA Astrophysics Data System (ADS)
Howe, N.; Williams, P. D.; Gregory, J. M.; Smith, R. S.
2014-12-01
High-resolution ocean models, which are eddy permitting and resolving, require large computing resources to produce centuries' worth of data. Also, some previous studies have suggested that increasing resolution does not necessarily solve the problem of unresolved scales, because it simply introduces a new set of unresolved scales. Applying stochastic parameterisations to ocean models is one solution that is expected to improve the representation of small-scale (eddy) effects without increasing run-time. Stochastic parameterisation has been shown to have an impact in atmosphere-only models and idealised ocean models, but has not previously been studied in ocean general circulation models. Here we apply simple stochastic perturbations to the ocean temperature and salinity tendencies in the low-resolution coupled climate model, FAMOUS. The stochastic perturbations are implemented according to T(t) = T(t-1) + (ΔT(t) + ξ(t)), where T is temperature or salinity, ΔT is the corresponding deterministic increment in one time step, and ξ(t) is Gaussian noise. We use high-resolution HiGEM data coarse-grained to the FAMOUS grid to provide information about the magnitude and spatio-temporal correlation structure of the noise to be added to the lower resolution model. Here we present results of adding white and red noise, showing the impacts of an additive stochastic perturbation on mean climate state and variability in an AOGCM.
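The perturbed update T(t) = T(t-1) + (ΔT(t) + ξ(t)) is simple to state in code, with white noise and AR(1) ("red") noise as the two ξ options; the magnitudes and the lag-one coefficient below are placeholders rather than the HiGEM-derived values.

```python
import math
import random

def integrate(T0, increments, noise):
    """Apply T(t) = T(t-1) + (dT(t) + xi(t)): the deterministic tendency
    plus an additive stochastic perturbation at every time step."""
    T = [T0]
    for dT, xi in zip(increments, noise):
        T.append(T[-1] + dT + xi)
    return T

def white_noise(n, sigma, rng):
    # Uncorrelated Gaussian perturbations.
    return [rng.gauss(0.0, sigma) for _ in range(n)]

def red_noise(n, sigma, r, rng):
    # AR(1) ("red") noise: correlated in time with lag-one coefficient r,
    # scaled so the stationary standard deviation is still sigma.
    xi, out = 0.0, []
    for _ in range(n):
        xi = r * xi + math.sqrt(1.0 - r * r) * rng.gauss(0.0, sigma)
        out.append(xi)
    return out

rng = random.Random(0)
dT = [0.01] * 100                                        # toy tendency
T_white = integrate(15.0, dT, white_noise(100, 0.05, rng))
T_red = integrate(15.0, dT, red_noise(100, 0.05, 0.9, rng))
```

In the actual model the same update is applied to the temperature and salinity tendencies at every grid point, with the noise statistics taken from the coarse-grained high-resolution data.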
Computer software tool REALM for sustainable water allocation and management.
Perera, B J C; James, B; Kularathna, M D U
2005-12-01
REALM (REsource ALlocation Model) is a generalised computer simulation package that models harvesting and bulk distribution of water resources within a water supply system. It is a modeling tool, which can be applied to develop specific water allocation models. Like other water resource simulation software tools, REALM uses mass-balance accounting at nodes, while the movement of water within carriers is subject to capacity constraints. It uses a fast network linear programming algorithm to optimise the water allocation within the network during each simulation time step, in accordance with user-defined operating rules. This paper describes the main features of REALM and provides potential users with an appreciation of its capabilities. In particular, it describes two case studies covering major urban and rural water supply systems. These case studies illustrate REALM's capabilities in the use of stochastically generated data in water supply planning and management, modelling of environmental flows, and assessing security of supply issues.
26 CFR 1.338-7 - Allocation of redetermined ADSP and AGUB among target assets.
Code of Federal Regulations, 2011 CFR
2011-04-01
... TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Effects on Corporation § 1.338-7 Allocation of... such amount as an increase or decrease would be required under general principles of tax law for the... original allocation to it, the difference is added to or subtracted from the original allocation to the...
NASA Astrophysics Data System (ADS)
Rougé, Charles; Harou, Julien J.; Pulido-Velazquez, Manuel; Matrosov, Evgenii S.
2017-04-01
The marginal opportunity cost of water refers to the benefits forgone by not allocating an additional unit of water to its most economically productive use at a specific location in a river basin at a specific moment in time. Estimating the opportunity cost of water is an important contribution to water management as it can be used for better water allocation or better system operation, and can suggest where future water infrastructure could be most beneficial. Opportunity costs can be estimated using 'shadow values' provided by hydro-economic optimization models. Yet such models' reliance on optimization means they have difficulty accurately representing the impact of operating rules and regulatory and institutional mechanisms on actual water allocation. In this work we use more widely available river basin simulation models to estimate opportunity costs. This has been done before by adding to the model a small quantity of water at the place and time where the opportunity cost should be computed, then running a simulation and comparing the difference in system benefits. The added system benefit per unit of water added to the system then provides an approximation of the opportunity cost. This approximation can then be used to design efficient pricing policies that provide incentives for users to reduce their water consumption. Yet this method requires one simulation run per node and per time step, which is computationally demanding for large-scale systems and short time steps (e.g., a day or a week). Moreover, opportunity cost estimates are supposed to reflect the most productive use of an additional unit of water, yet the simulation rules do not necessarily use water that way. In this work, we propose an alternative approach, which computes the opportunity cost through a double backward induction, first recursively from outlet to headwaters within the river network at each time step, then recursively backwards in time.
Both backward inductions only require linear operations, and the resulting algorithm tracks the maximal benefit that can be obtained by having an additional unit of water at any node in the network and at any date in time. Results 1) can be obtained from the results of a rule-based simulation using a single post-processing run, and 2) are exactly the (gross) benefit forgone by not allocating an additional unit of water to its most productive use. The proposed method is applied to London's water resource system to track the value of storage in the city's water supply reservoirs on the Thames River throughout a weekly 85-year simulation. Results, obtained in 0.4 seconds on a single processor, reflect the environmental cost of water shortage. This fast computation allows visualizing the seasonal variations of the opportunity cost depending on reservoir levels, demonstrating the potential of this approach for exploring water values and their variations using simulation models with multiple runs (e.g. of stochastically generated plausible future river inflows).
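The double backward induction described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the node ordering, the `marginal_benefit` inputs, and the assumption of loss-free storage and routing are all simplifying assumptions.

```python
# Hypothetical sketch of a double backward induction for marginal water
# values: spatial pass (outlet -> headwaters) then temporal pass
# (backwards in time). Inputs and topology are illustrative assumptions.

def marginal_water_value(marginal_benefit, downstream):
    """marginal_benefit[t][n]: gross marginal benefit of one extra unit of
    water used at node n in period t (e.g. from a rule-based simulation).
    downstream[n]: index of the next node toward the outlet (None at the
    outlet); nodes are assumed ordered so that downstream[n] < n.
    Returns value[t][n]: the benefit forgone by not routing that unit to
    its most productive use, now or at any later date."""
    T, N = len(marginal_benefit), len(marginal_benefit[0])
    value = [[0.0] * N for _ in range(T)]
    for t in reversed(range(T)):
        for n in range(N):  # outlet first, then upstream
            best = marginal_benefit[t][n]          # use the unit here
            d = downstream[n]
            if d is not None:
                best = max(best, value[t][d])      # release it downstream
            if t + 1 < T:
                best = max(best, value[t + 1][n])  # store it for later
            value[t][n] = best
    return value
```

Because each node/time cell is visited once with constant work, the whole table is filled in a single linear post-processing pass, consistent with the speed reported above.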
Experimental evidence of a risk-sensitive reproductive allocation in a long-lived mammal.
Bårdsen, Bard-Jørgen; Fauchald, Per; Tveraa, Torkild; Langeland, Knut; Yoccoz, Nigel Gilles; Ims, Rolf Anker
2008-03-01
When reproduction competes with the amount of resources available for survival during an unpredictable nonbreeding season, individuals should adopt a risk-sensitive regulation of their reproductive allocation. We tested this hypothesis on female reindeer (Rangifer tarandus), which face a trade-off between reproduction and acquisition of body reserves during spring and summer, with autumn body mass functioning as insurance against stochastic winter climatic severity. The study was conducted in a population consisting of two herds: one that received supplementary winter feeding for four years while the other utilized natural pastures. The females receiving additional forage allocated more to their calves. Experimental translocation of females between the herds was conducted to simulate two contrasting rapid alterations of winter conditions. When females receiving supplementary feeding were moved to natural pastures, they promptly reduced their reproductive allocation the following summer. However, when winter conditions were improved, females were reluctant to increase their reproductive allocation. This asymmetric response to improved vs. reduced winter conditions is consistent with a risk-averse adjustment in reproductive allocation. The ability of individuals to track their environment and the concordant risk-sensitive adjustment of reproductive allocation may render subarctic reindeer more resilient to climate change than previously supposed.
Analytical pricing formulas for hybrid variance swaps with regime-switching
NASA Astrophysics Data System (ADS)
Roslan, Teh Raihana Nazirah; Cao, Jiling; Zhang, Wenjun
2017-11-01
This paper considers the problem of pricing discretely-sampled variance swaps under stochastic volatility, stochastic interest rate and regime-switching. The Heston stochastic volatility model is extended by adding the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. In addition, the parameters of the model are permitted to transition according to a continuous-time, observable Markov chain. This hybrid model can be used to describe certain macroeconomic conditions, for example the changing phases of business cycles. The outcome of our regime-switching hybrid model is presented in terms of analytical pricing formulas for variance swaps.
Bastian, Nathaniel D; Ekin, Tahir; Kang, Hyojung; Griffin, Paul M; Fulton, Lawrence V; Grannan, Benjamin C
2017-06-01
The management of hospitals within fixed-input health systems such as the U.S. Military Health System (MHS) can be challenging due to the large number of hospitals, as well as the uncertainty in input resources and achievable outputs. This paper introduces a stochastic multi-objective auto-optimization model (SMAOM) for resource allocation decision-making in fixed-input health systems. The model can automatically identify where to re-allocate system input resources at the hospital level in order to optimize overall system performance, while considering uncertainty in the model parameters. The model is applied to 128 hospitals in the three services (Air Force, Army, and Navy) in the MHS using hospital-level data from 2009 - 2013. The results are compared to the traditional input-oriented variable returns-to-scale Data Envelopment Analysis (DEA) model. The application of SMAOM to the MHS increases the expected system-wide technical efficiency by 18 % over the DEA model while also accounting for uncertainty of health system inputs and outputs. The developed method is useful for decision-makers in the Defense Health Agency (DHA), who have a strategic level objective of integrating clinical and business processes through better sharing of resources across the MHS and through system-wide standardization across the services. It is also less sensitive to data outliers or sampling errors than traditional DEA methods.
Continuous-Time Public Good Contribution Under Uncertainty: A Stochastic Control Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrari, Giorgio, E-mail: giorgio.ferrari@uni-bielefeld.de; Riedel, Frank, E-mail: frank.riedel@uni-bielefeld.de; Steg, Jan-Henrik, E-mail: jsteg@uni-bielefeld.de
In this paper we study continuous-time stochastic control problems with both monotone and classical controls motivated by the so-called public good contribution problem. That is the problem of n economic agents aiming to maximize their expected utility allocating initial wealth over a given time period between private consumption and irreversible contributions to increase the level of some public good. We investigate the corresponding social planner problem and the case of strategic interaction between the agents, i.e. the public good contribution game. We show existence and uniqueness of the social planner's optimal policy, we characterize it by necessary and sufficient stochastic Kuhn–Tucker conditions and we provide its expression in terms of the unique optional solution of a stochastic backward equation. Similar stochastic first order conditions prove to be very useful for studying any Nash equilibria of the public good contribution game. In the symmetric case they allow us to prove (qualitative) uniqueness of the Nash equilibrium, which we again construct as the unique optional solution of a stochastic backward equation. We finally also provide a detailed analysis of the so-called free rider effect.
Leveraging human decision making through the optimal management of centralized resources
NASA Astrophysics Data System (ADS)
Hyden, Paul; McGrath, Richard G.
2016-05-01
Combining results from mixed integer optimization, stochastic modeling and queuing theory, we will advance the interdisciplinary problem of efficiently and effectively allocating centrally managed resources. Academia currently fails to address this, as the esoteric demands of each of these large research areas limit work across traditional boundaries. The commercial space does not currently address these challenges due to the absence of a profit metric. By constructing algorithms that explicitly use inputs across boundaries, we are able to incorporate the advantages of using human decision makers. Key improvements in the underlying algorithms are made possible by aligning decision maker goals with the feedback loops introduced between the core optimization step and the modeling of the overall stochastic process of supply and demand. A key observation is that human decision-makers must be explicitly included in the analysis for these approaches to be ultimately successful. Transformative access gives warfighters and mission owners greater understanding of global needs and allows for relationships to guide optimal resource allocation decisions. Mastery of demand processes and optimization bottlenecks reveals long term maximum marginal utility gaps in capabilities.
NASA Astrophysics Data System (ADS)
Cardoso, T.; Oliveira, M. D.; Barbosa-Póvoa, A.; Nickel, S.
2015-05-01
Although the maximization of health is a key objective in health care systems, location-allocation literature has not yet considered this dimension. This study proposes a multi-objective stochastic mathematical programming approach to support the planning of a multi-service network of long-term care (LTC), both in terms of services location and capacity planning. This approach is based on a mixed integer linear programming model with two objectives - the maximization of expected health gains and the minimization of expected costs - with satisficing levels in several dimensions of equity - namely, equity of access, equity of utilization, socioeconomic equity and geographical equity - being imposed as constraints. The augmented ε-constraint method is used to explore the trade-off between these conflicting objectives, with uncertainty in the demand and delivery of care being accounted for. The model is applied to analyze the (re)organization of the LTC network currently operating in the Great Lisbon region in Portugal for the 2014-2016 period. Results show that extending the network of LTC is a cost-effective investment.
NASA Astrophysics Data System (ADS)
Sun, Zhi-Yuan; Gao, Yi-Tian; Yu, Xin; Liu, Ying
2012-12-01
We investigate the dynamics of the bound vector solitons (BVSs) for the coupled nonlinear Schrödinger equations with the nonhomogenously stochastic perturbations added on their dispersion terms. Soliton switching (besides soliton breakup) can be observed between the two components of the BVSs. Rate of the maximum switched energy (absolute values) within the fixed propagation distance (about 10 periods of the BVSs) enhances in the sense of statistics when the amplitudes of stochastic perturbations increase. Additionally, it is revealed that the BVSs with enhanced coherence are more robust against the perturbations with nonhomogenous stochasticity. Diagram describing the approximate borders of the splitting and non-splitting areas is also given. Our results might be helpful in dynamics of the BVSs with stochastic noises in nonlinear optical fibers or with stochastic quantum fluctuations in Bose-Einstein condensates.
Dynamic fair node spectrum allocation for ad hoc networks using random matrices
NASA Astrophysics Data System (ADS)
Rahmes, Mark; Lemieux, George; Chester, Dave; Sonnenberg, Jerry
2015-05-01
Dynamic Spectrum Access (DSA) is widely seen as a solution to the problem of limited spectrum, because of its ability to adapt the operating frequency of a radio. Mobile Ad Hoc Networks (MANETs) can extend high-capacity mobile communications over large areas where fixed and tethered-mobile systems are not available. In one use case with high potential impact, cognitive radio employs spectrum sensing to facilitate the identification of allocated frequencies not currently accessed by their primary users. Primary users own the rights to radiate at a specific frequency and geographic location, while secondary users opportunistically attempt to radiate at a specific frequency when the primary user is not using it. We populate a spatial radio environment map (REM) database with known information that can be leveraged in an ad hoc network to facilitate fair path use of the DSA-discovered links. Utilization of high-resolution geospatial data layers in RF propagation analysis is directly applicable. Random matrix theory (RMT) is useful in simulating network layer usage in nodes by a Wishart adjacency matrix. We use the Dijkstra algorithm for discovering ad hoc network node connection patterns. We present a method for analysts to dynamically allocate node-node path and link resources using fair division. User allocation of limited resources as a function of time must be dynamic and based on system fairness policies. The context of fair means that first available request for an asset is not envied as long as it is not yet allocated or tasked in order to prevent cycling of the system. This solution may also save money by offering a Pareto efficient repeatable process. We use a water fill queue algorithm to include Shapley value marginal contributions for allocation.
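The abstract above names the Dijkstra algorithm for discovering node connection patterns in the ad hoc network. A minimal, generic shortest-path sketch is below; the example graph and link costs are illustrative assumptions, not data from the paper's REM database.

```python
import heapq

# Standard Dijkstra shortest-path search over a weighted digraph,
# as one might use to discover node-node path costs in a MANET.

def dijkstra(graph, source):
    """graph: {node: {neighbor: link_cost}} with non-negative costs.
    Returns a dict of shortest-path cost from source to each reachable node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

For example, with `graph = {'a': {'b': 1.0, 'c': 4.0}, 'b': {'c': 2.0}, 'c': {}}`, the cost from `'a'` to `'c'` comes out as 3.0 via `'b'` rather than 4.0 on the direct link.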
26 CFR 1.861-10 - Special allocations of interest expense.
Code of Federal Regulations, 2010 CFR
2010-04-01
.... In addition, assets which are the subject of qualified nonrecourse indebtedness or integrated... 26 Internal Revenue 9 2010-04-01 2010-04-01 false Special allocations of interest expense. 1.861... § 1.861-10 Special allocations of interest expense. (a)-(d) [Reserved] (e) Treatment of certain...
Stochastic optimisation of water allocation on a global scale
NASA Astrophysics Data System (ADS)
Schmitz, Oliver; Straatsma, Menno; Karssenberg, Derek; Bierkens, Marc F. P.
2014-05-01
Climate change, increasing population and further economic developments are expected to increase water scarcity for many regions of the world. Optimal water management strategies are required to minimise the water gap between water supply and domestic, industrial and agricultural water demand. A crucial aspect of water allocation is the spatial scale of optimisation. Blue water supply peaks at the upstream parts of large catchments, whereas demands are often largest at the industrialised downstream parts. Two extremes exist in water allocation: (i) 'First come, first serve,' which allows the upstream water demands to be fulfilled without considerations of downstream demands, and (ii) 'All for one, one for all' that satisfies water allocation over the whole catchment. In practice, water treaties govern intermediate solutions. The objective of this study is to determine the effect of these two end members on water allocation optimisation with respect to water scarcity. We conduct this study on a global scale with the year 2100 as temporal horizon. Water supply is calculated using the hydrological model PCR-GLOBWB, operating at a 5 arcminutes resolution and a daily time step. PCR-GLOBWB is forced with temperature and precipitation fields from the Hadgem2-ES global circulation model that participated in the latest coupled model intercomparison project (CMIP5). Water demands are calculated for representative concentration pathway 6.0 (RCP 6.0) and shared socio-economic pathway scenario 2 (SSP2). To enable the fast computation of the optimisation, we developed a hydrologically correct network of 1800 basin segments with an average size of 100 000 square kilometres. The maximum number of nodes in a network was 140 for the Amazon Basin. Water demands and supplies are aggregated to cubic kilometres per month per segment. A new open source implementation of the water allocation is developed for the stochastic optimisation of the water allocation. 
We apply a Genetic Algorithm for each segment to estimate the set of parameters that distribute the water supply for each node. We use the Python programming language and a flexible software architecture allowing us to straightforwardly 1) exchange the process description for the nodes such that different water allocation schemes can be tested, 2) exchange the objective function, 3) apply the optimisation either to the whole catchment or to different sub-levels, and 4) use multi-core CPUs concurrently, thereby reducing computation time. We demonstrate the application of the scientific workflow to the model outputs of PCR-GLOBWB and present first results on how water scarcity depends on the choice between the two extremes in water allocation.
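A toy genetic algorithm in the spirit of the per-segment optimisation described above might look as follows: it evolves the fractions by which a segment's supply is split over its demand nodes. The fitness function, supply and demand figures, and all GA settings are illustrative assumptions, not the study's actual setup.

```python
import random

# Illustrative GA: find allocation fractions (summing to 1) that
# minimise total unmet demand for one basin segment.

def evolve_allocation(supply, demands, pop=30, gens=60, seed=1):
    rng = random.Random(seed)
    n = len(demands)

    def normalise(w):
        s = sum(w)
        return [x / s for x in w]

    def fitness(w):
        # negated total unmet demand, so higher is better
        return -sum(max(d - supply * f, 0.0) for d, f in zip(demands, w))

    population = [normalise([rng.random() for _ in range(n)]) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]        # crossover
            i = rng.randrange(n)
            child[i] = max(child[i] + rng.gauss(0, 0.1), 1e-9)  # mutation
            children.append(normalise(child))
        population = parents + children
    return max(population, key=fitness)
```

Because each candidate is renormalised, every individual remains a valid distribution of the segment's supply, which is the property the per-node parameters in the study must also satisfy.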
An Asymptotic Stochastic View of Anticipation in a Noisy Duel (I).
1981-11-01
Iowa Univ., Iowa City, Dept. of Statistics. Royalty, Dan R.; Kegley, J. Colby; David, H. T.; Berger, R. W. Abstract: The noisy duel between two equally accurate duelists, possessing respectively 1 and 2 bullets, is viewed in the …
Digital hardware implementation of a stochastic two-dimensional neuron model.
Grassia, F; Kohno, T; Levi, T
2016-11-01
This study explores the feasibility of stochastic neuron simulation in digital systems (FPGA), which realizes an implementation of a two-dimensional neuron model. The stochasticity is added by a source of current noise in the silicon neuron using an Ornstein-Uhlenbeck process. This approach uses digital computation to emulate individual neuron behavior using fixed point arithmetic operation. The neuron model's computations are performed in arithmetic pipelines. It was designed in VHDL language and simulated prior to mapping in the FPGA. The experimental results confirmed the validity of the developed stochastic FPGA implementation, which makes the implementation of the silicon neuron more biologically plausible for future hybrid experiments.
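The Ornstein-Uhlenbeck noise-current source mentioned above can be sketched in floating point (the FPGA design uses fixed-point pipelines; the rate, mean, and volatility parameters below are illustrative assumptions, not the paper's values).

```python
import math
import random

# Sample an Ornstein-Uhlenbeck path dX = theta*(mu - X) dt + sigma dW
# using the exact one-step transition, so the discretisation is unbiased.

def ou_path(theta, mu, sigma, dt, steps, x0=0.0, seed=0):
    rng = random.Random(seed)
    decay = math.exp(-theta * dt)
    # stationary-consistent standard deviation of the one-step innovation
    sd = sigma * math.sqrt((1.0 - decay * decay) / (2.0 * theta))
    x, path = x0, [x0]
    for _ in range(steps):
        x = mu + (x - mu) * decay + sd * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

With `sigma = 0` the path decays deterministically toward `mu` at rate `theta`, which is a quick sanity check before turning the noise on.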
Parameter-induced stochastic resonance with a periodic signal
NASA Astrophysics Data System (ADS)
Li, Jian-Long; Xu, Bo-Hou
2006-12-01
In this paper conventional stochastic resonance (CSR) is first realized by tuning the noise intensity. We then demonstrate that tuning the system parameters with fixed noise can make the noise play a constructive role and realize parameter-induced stochastic resonance (PSR). PSR can be interpreted as changing the intrinsic characteristic of the dynamical system to yield a cooperative effect between the noise-driven nonlinear system and the external periodic force. It can be realized at any noise intensity, which greatly differs from CSR, which is realized only when the initial noise intensity is not greater than the resonance level. Moreover, it is proved that PSR is different from the optimization of system parameters.
On square-wave-driven stochastic resonance for energy harvesting in a bistable system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Dongxu, E-mail: sudx@iis.u-tokyo.ac.jp; Zheng, Rencheng; Nakano, Kimihiko
Stochastic resonance is a physical phenomenon through which the throughput of energy within an oscillator excited by a stochastic source can be boosted by adding a small modulating excitation. This study investigates the feasibility of implementing square-wave-driven stochastic resonance to enhance energy harvesting. The motivating hypothesis was that such stochastic resonance can be efficiently realized in a bistable mechanism. However, the condition for the occurrence of stochastic resonance is conventionally defined by the Kramers rate. This definition is inadequate because of the necessity and difficulty in estimating white noise density. A bistable mechanism has been designed using an explicit analytical model which implies a new approach for achieving stochastic resonance in the paper. Experimental tests confirm that the addition of a small-scale force to the bistable system excited by a random signal apparently leads to a corresponding amplification of the response that we now term square-wave-driven stochastic resonance. The study therefore indicates that this approach may be a promising way to improve the performance of an energy harvester under certain forms of random excitation.
Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate
NASA Astrophysics Data System (ADS)
Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing
2014-09-01
We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail: the infective class persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infective class disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending it to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. Regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
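The ℛ0 threshold behavior described above can be illustrated in a much simpler setting. The sketch below is a single-group SIR model (the paper treats a multi-group MSIR system); the rates, initial condition, and Euler step are assumed for illustration only.

```python
# Euler-integrate the scalar SIR system s' = -beta*s*i,
# i' = beta*s*i - gamma*i, and record the peak infectious fraction.
# R0 = beta/gamma: below 1 the infection decays monotonically,
# above 1 an epidemic occurs.

def peak_infection(beta, gamma, i0=1e-3, dt=0.01, steps=5000):
    s, i, peak = 1.0 - i0, i0, i0
    for _ in range(steps):
        # evaluate both derivatives at the old state before updating
        s, i = s - beta * s * i * dt, i + (beta * s * i - gamma * i) * dt
        peak = max(peak, i)
    return peak
```

With `beta=0.5, gamma=1.0` (ℛ0 = 0.5) the infectious fraction never exceeds its initial value, while `beta=3.0, gamma=1.0` (ℛ0 = 3) produces a substantial epidemic peak, mirroring the persistence/extinction dichotomy the abstract proves for the multi-group model.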
26 CFR 1.338-7 - Allocation of redetermined ADSP and AGUB among target assets.
Code of Federal Regulations, 2014 CFR
2014-04-01
... TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Effects on Corporation § 1.338-7... tax law for the elements of ADSP or AGUB. This section provides rules for allocating redetermined ADSP... different from the original allocation to it, the difference is added to or subtracted from the original...
26 CFR 1.338-7 - Allocation of redetermined ADSP and AGUB among target assets.
Code of Federal Regulations, 2012 CFR
2012-04-01
... TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (Continued) Effects on Corporation § 1.338-7... tax law for the elements of ADSP or AGUB. This section provides rules for allocating redetermined ADSP... different from the original allocation to it, the difference is added to or subtracted from the original...
26 CFR 1.338-7 - Allocation of redetermined ADSP and AGUB among target assets.
Code of Federal Regulations, 2013 CFR
2013-04-01
... TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Effects on Corporation § 1.338-7... tax law for the elements of ADSP or AGUB. This section provides rules for allocating redetermined ADSP... different from the original allocation to it, the difference is added to or subtracted from the original...
Portfolio choice in retirement: Health risk and the demand for annuities, housing, and risky assets*
Yogo, Motohiro
2016-01-01
In a life-cycle model, a retiree faces stochastic health depreciation and chooses consumption, health expenditure, and the allocation of wealth between bonds, stocks, and housing. The model explains key facts about asset allocation and health expenditure across health status and age. The portfolio share in stocks is low overall and is positively related to health, especially for younger retirees. The portfolio share in housing is negatively related to health for younger retirees and falls significantly in age. Finally, out-of-pocket health expenditure as a share of income is negatively related to health and rises in age. PMID:27766005
Stochastic quantization and holographic Wilsonian renormalization group of free massive fermion
NASA Astrophysics Data System (ADS)
Moon, Sung Pil
2018-06-01
We examine a suggested relation between stochastic quantization and the holographic Wilsonian renormalization group in the massive fermion case on Euclidean AdS space. The original suggestion of the general relation between the two theories was made in arXiv:1209.2242. In previous research, it has already been verified that scalar fields, U(1) gauge fields, and massless fermions are consistent with the relation. In this paper, we examine the relation in the massive fermion case. Contrary to the other cases, in the massive fermion case the action needs particular boundary terms to satisfy the boundary conditions. We finally confirm that the proposed suggestion is also valid in the massive fermion case.
Models for interrupted monitoring of a stochastic process
NASA Technical Reports Server (NTRS)
Palmer, E.
1977-01-01
As computers are added to the cockpit, the pilot's job is changing from one of manually flying the aircraft to one of supervising computers which are doing navigation, guidance and energy management calculations as well as automatically flying the aircraft. In this supervisory role the pilot must divide his attention between monitoring the aircraft's performance and giving commands to the computer. Normative strategies are developed for tasks where the pilot must interrupt his monitoring of a stochastic process in order to attend to other duties. Results are given as to how characteristics of the stochastic process and the other tasks affect the optimal strategies.
On some stochastic formulations and related statistical moments of pharmacokinetic models.
Matis, J H; Wehrly, T E; Metzler, C M
1983-02-01
This paper presents the deterministic and stochastic model for a linear compartment system with constant coefficients, and it develops expressions for the mean residence times (MRT) and the variances of the residence times (VRT) for the stochastic model. The expressions are relatively simple computationally, involving primarily matrix inversion, and they are elegant mathematically, in avoiding eigenvalue analysis and the complex domain. The MRT and VRT provide a set of new meaningful response measures for pharmacokinetic analysis and they give added insight into the system kinetics. The new analysis is illustrated with an example involving the cholesterol turnover in rats.
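For a linear compartment system x' = Ax with constant coefficients, the mean residence times described above are obtained from the matrix inverse: the entries of -A⁻¹ give the expected time spent in each compartment. A minimal 2×2 sketch follows; the rate values in the example are illustrative assumptions, not data from the paper.

```python
# Mean residence times for a 2x2 compartment rate matrix A via M = -A^{-1}.
# M[i][j] is the mean time spent in compartment i for material that
# enters the system in compartment j.

def mean_residence_times(a):
    (p, q), (r, s) = a
    det = p * s - q * r          # determinant of A (nonzero for an open system)
    inv = [[s / det, -q / det],  # closed-form 2x2 inverse of A
           [-r / det, p / det]]
    return [[-x for x in row] for row in inv]
```

As a check, for A = [[-1, 0], [1, -2]] (flow from compartment 1 to 2 at rate 1, elimination from 2 at rate 2), material entering compartment 1 spends a mean time of 1 there and 0.5 in compartment 2, exactly the reciprocals of the exit rates, with no eigenvalue analysis required.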
Harvesting wind energy to detect weak signals using mechanical stochastic resonance.
Breen, Barbara J; Rix, Jillian G; Ross, Samuel J; Yu, Yue; Lindner, John F; Mathewson, Nathan; Wainwright, Elliot R; Wilson, Ian
2016-12-01
Wind is free and ubiquitous and can be harnessed in multiple ways. We demonstrate mechanical stochastic resonance in a tabletop experiment in which wind energy is harvested to amplify weak periodic signals detected via the movement of an inverted pendulum. Unlike earlier mechanical stochastic resonance experiments, where noise was added via electrically driven vibrations, our broad-spectrum noise source is a single flapping flag. The regime of the experiment is readily accessible, with wind speeds ∼20 m/s and signal frequencies ∼1 Hz. We readily obtain signal-to-noise ratios on the order of 10 dB.
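The ∼10 dB signal-to-noise ratios quoted above follow the standard decibel definition from signal and noise powers; this one-line helper makes the computation explicit (the power values in the test are illustrative).

```python
import math

# Signal-to-noise ratio in decibels from (mean) signal and noise powers.

def snr_db(signal_power, noise_power):
    return 10.0 * math.log10(signal_power / noise_power)
```

A signal carrying ten times the noise power is therefore exactly 10 dB, the order of magnitude reported in the experiment.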
Deterministic and stochastic bifurcations in the Hindmarsh-Rose neuronal model
NASA Astrophysics Data System (ADS)
Dtchetgnia Djeundam, S. R.; Yamapi, R.; Kofane, T. C.; Aziz-Alaoui, M. A.
2013-09-01
We analyze the bifurcations occurring in the 3D Hindmarsh-Rose neuronal model with and without a random signal. Under a sufficient stimulus, neuronal activity takes place; we observe various types of bifurcations that lead to chaotic transitions. Besides the equilibrium solutions and their stability, we also investigate the deterministic bifurcations. It appears that the neuronal activity consists of chaotic transitions between two periodic phases called bursting and spiking solutions. The stochastic bifurcation, defined as a sudden change in character of a stochastic attractor when the bifurcation parameter of the system passes through a critical value, or, under certain conditions, as the collision of a stochastic attractor with a stochastic saddle, occurs when a random Gaussian signal is added. Our study reveals two kinds of stochastic bifurcation: the phenomenological bifurcation (P-bifurcation) and the dynamical bifurcation (D-bifurcation). An asymptotic method is used to analyze the phenomenological bifurcation. We find that the neuronal activity of spiking and bursting chaos remains for finite values of the noise intensity.
NASA Astrophysics Data System (ADS)
Kim, Gi Young
The problem we investigate deals with an Image Intelligence (IMINT) sensor allocation schedule for High Altitude Long Endurance UAVs in a dynamic and Anti-Access Area Denial (A2AD) environment. The objective is to maximize the Situational Awareness (SA) of decision makers. The value of SA can be improved in two different ways. First, if a sensor allocated to an Area of Interest (AOI) detects target activity, then the SA value will be increased. Second, the SA value increases if an AOI is monitored for a certain period of time, regardless of target detections. These values are functions of the sensor allocation time, sensor type and mode. Relatively few studies in the archival literature have been devoted to an analytic, detailed explanation of the target detection process and the AOI monitoring value dynamics. These two values are the fundamental criteria used to choose the most judicious sensor allocation schedule. This research presents mathematical expressions for target detection processes, and shows the monitoring value dynamics. Furthermore, the dynamics of target detection is the result of combined processes between belligerent behavior (target activity) and friendly behavior (sensor allocation). We investigate these combined processes and derive mathematical expressions for simplified cases. These closed form mathematical models can be used as Measures of Effectiveness (MOEs), i.e., of target activity detection, to evaluate sensor allocation schedules. We also verify these models with discrete event simulations, which can also be used to describe more complex systems. We introduce several methodologies to achieve a judicious sensor allocation schedule focusing on the AOI monitoring value. The first methodology is a discrete time integer programming model which provides an optimal solution but is impractical for real world scenarios due to its computation time. Thus, it is necessary to trade off the quality of the solution against computation time.
The Myopic Greedy Procedure (MGP) is a heuristic which chooses the largest immediate unit-time return at each decision epoch. This reduces computation time significantly, but the quality of the solution may be only 95% of optimal (for small size problems). Another alternative is a multi-start random constructive Hybrid Myopic Greedy Procedure (H-MGP), which incorporates stochastic variation in choosing an action at each stage, and repeats it a predetermined number of times (roughly 99.3% of optimal with 1000 repetitions). Finally, the One Stage Look Ahead (OSLA) procedure considers all the 'top choices' at each stage for a temporary time horizon and chooses the best action (roughly 98.8% of optimal with no repetition). Using the OSLA procedure, we obtain improved solutions within a reasonable computation time. Other important issues discussed in this research are methodologies for the development of input parameters for real world applications.
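The myopic greedy idea above (pick the largest immediate unit-time return at each epoch) reduces to a few lines in a stylized single-sensor setting. The reward table, the single-sensor restriction, and the function name are illustrative assumptions, not the dissertation's model.

```python
# Toy Myopic Greedy Procedure (MGP): at each decision epoch, allocate the
# sensor to the AOI with the largest immediate reward, ignoring the future.

def myopic_greedy(rewards):
    """rewards[t][a]: immediate return of allocating the sensor to AOI a
    at epoch t. Returns (schedule of chosen AOIs, total immediate reward)."""
    schedule, total = [], 0.0
    for epoch in rewards:
        best = max(range(len(epoch)), key=lambda a: epoch[a])
        schedule.append(best)
        total += epoch[best]
    return schedule, total
```

Because the choice at each epoch never looks ahead, the schedule can be suboptimal whenever an early allocation forecloses a more valuable later one, which is exactly the gap the H-MGP and OSLA procedures above are designed to close.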
Yu Wei; Michael Bevers; Erin Belval; Benjamin Bird
2015-01-01
This research developed a chance-constrained two-stage stochastic programming model to support wildfire initial attack resource acquisition and location on a planning unit for a fire season. Fire growth constraints account for the interaction between fire perimeter growth and construction to prevent overestimation of resource requirements. We used this model to examine...
De Lara, Michel
2006-05-01
In their 1990 paper "Optimal reproductive efforts and the timing of reproduction of annual plants in randomly varying environments", Amir and Cohen considered stochastic environments consisting of i.i.d. sequences in an optimal allocation discrete-time model. We suppose here that the sequence of environmental factors is more generally described by a Markov chain. Moreover, we discuss the connection between the time interval of the discrete-time dynamic model and the ability of the plant to completely rebuild its vegetative body (from reserves). We formulate a stochastic optimization problem covering the so-called linear and logarithmic fitness (corresponding to variation within and between years), which yields optimal strategies. For "linear maximizers", we analyse how optimal strategies depend upon the type of environmental variability: constant, random stationary, random i.i.d., random monotone. We provide general patterns in terms of targets and thresholds, including both determinate and indeterminate growth. We also provide a partial result on the comparison between "linear maximizers" and "log maximizers". Numerical simulations are provided, giving a hint at the effect of the different mathematical assumptions.
Hoomans, Ties; Abrams, Keith R; Ament, Andre J H A; Evers, Silvia M A A; Severens, Johan L
2009-10-01
Decision making about resource allocation for guideline implementation to change clinical practice is inevitably undertaken in a context of uncertainty surrounding the cost-effectiveness of both clinical guidelines and implementation strategies. Adopting a total net benefit approach, a model was recently developed to overcome problems with the use of combined ratio statistics when analyzing decision uncertainty. We demonstrate the stochastic application of the model for informing decision making about the adoption of an audit-and-feedback strategy for implementing a guideline recommending intensive blood glucose control in type 2 diabetes in primary care in the Netherlands. An integrated Bayesian approach to decision modeling and evidence synthesis is adopted, using Markov Chain Monte Carlo simulation in WinBUGS. Data on model parameters are gathered from various sources, with the effectiveness of implementation being estimated using pooled, random-effects meta-analysis. Decision uncertainty is illustrated using cost-effectiveness acceptability curves and the acceptability frontier. Decisions about whether to adopt intensified glycemic control, and whether to adopt audit and feedback, vary with the maximum value that decision makers are willing to pay for health gain. By simultaneously incorporating uncertain economic evidence on both the guideline and the implementation strategy, the cost-effectiveness acceptability curves and frontier show an increase in decision uncertainty concerning guideline implementation. The stochastic application in diabetes care demonstrates that the model provides a simple and useful tool for quantifying and exploring the (combined) uncertainty associated with decision making about adopting guidelines and implementation strategies and, therefore, for informing decisions about efficient resource allocation to change clinical practice.
NASA Astrophysics Data System (ADS)
Grafton, R. Quentin; Chu, Hoang Long; Stewardson, Michael; Kompas, Tom
2011-12-01
A key challenge in managing semiarid basins, such as the Murray-Darling in Australia, is to balance the trade-offs between the net benefits of allocating water for irrigated agriculture and other uses versus the costs of reduced surface flows for the environment. Typically, water planners do not have the tools to optimally and dynamically allocate water among competing uses. We address this problem by developing a general stochastic dynamic programming model with four state variables (the drought status, the current weather, weather correlation, and current storage) and two controls (environmental release and irrigation allocation) to optimally allocate water between extractions and in situ uses. The model is calibrated to Australia's Murray River and generates: (1) a robust qualitative result that "pulse" or artificial flood events are an optimal way to deliver environmental flows over and above conveyance of base flows; (2) a finding that, from 2001 to 2009, a water reallocation giving less to irrigated agriculture and more to environmental flows would have generated between half a billion and over 3 billion U.S. dollars in overall economic benefits; and (3) evidence that water markets increase optimal environmental releases by reducing the losses associated with reduced water diversions.
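The flavor of such a stochastic dynamic program can be illustrated with a deliberately stripped-down sketch: a single storage state, one control (units released), and a discrete random inflow. This is a toy stand-in, not the paper's four-state, two-control model:

```python
def solve_water_dp(cap, inflows, probs, benefit, horizon):
    """Toy stochastic dynamic program for water allocation.  State: units
    in storage (0..cap); control: units released this period; inflow is a
    discrete random variable with outcomes `inflows` and probabilities
    `probs`.  `benefit(r)` is the one-period payoff of releasing r units.
    Returns the value function and the release policy per stage."""
    V = [0.0] * (cap + 1)                    # terminal value
    policy = []
    for _ in range(horizon):
        new_V, stage_policy = [], []
        for s in range(cap + 1):
            best_val, best_r = float("-inf"), 0
            for r in range(s + 1):           # release r units now
                # expected continuation value over stochastic inflows
                ev = sum(p * V[min(cap, s - r + q)]
                         for q, p in zip(inflows, probs))
                val = benefit(r) + ev
                if val > best_val:
                    best_val, best_r = val, r
            new_V.append(best_val)
            stage_policy.append(best_r)
        V = new_V
        policy.append(stage_policy)
    return V, policy
```

With a reservoir capacity constraint in the transition, the optimal policy already exhibits the paper's qualitative flavor: when storage is full, holding everything back wastes inflow, so positive releases become optimal.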
Auctions with Dynamic Populations: Efficiency and Revenue Maximization
NASA Astrophysics Data System (ADS)
Said, Maher
We study a stochastic sequential allocation problem with a dynamic population of privately-informed buyers. We characterize the set of efficient allocation rules and show that a dynamic VCG mechanism is both efficient and periodic ex post incentive compatible; we also show that the revenue-maximizing direct mechanism is a pivot mechanism with a reserve price. We then consider sequential ascending auctions in this setting, both with and without a reserve price. We construct equilibrium bidding strategies in this indirect mechanism where bidders reveal their private information in every period, yielding the same outcomes as the direct mechanisms. Thus, the sequential ascending auction is a natural institution for achieving either efficient or optimal outcomes.
NASA Astrophysics Data System (ADS)
Suo, M. Q.; Li, Y. P.; Huang, G. H.
2011-09-01
In this study, an inventory-theory-based interval-parameter two-stage stochastic programming (IB-ITSP) model is proposed by integrating inventory theory into an interval-parameter two-stage stochastic optimization framework. This method can not only address system uncertainties with complex presentation but also capture the transfer batch (the quantity transferred at one time) and the transfer period (the corresponding cycle time) in decision-making problems. A case study of water allocation in water resources management planning demonstrates the applicability of this method. Under different flow levels, different transfer measures are generated by this method when the promised water cannot be delivered. Moreover, interval solutions associated with different transfer costs are also provided; they can be used for generating decision alternatives and thus help water resources managers identify desired policies. Compared with the ITSP method, the IB-ITSP model provides a positive measure for solving water shortage problems and affords useful information for decision makers under uncertainty.
NASA Astrophysics Data System (ADS)
Bekri, Eleni; Yannopoulos, Panayotis; Disse, Markus
2014-05-01
The Alfeios River plays a vital role for Western Peloponnisos in Greece from natural, ecological, social and economic aspects. The main river and its six tributaries, forming the longest watercourse and the highest streamflow rate in Peloponnisos, represent a significant source of water supply for the region, delivering and satisfying the expected demands of a variety of water users, including irrigation, drinking water supply, hydropower production and recreation. At the previous EGU General Assembly, a fuzzy-boundary-interval linear programming methodology, based on Li et al. (2010) and Bekri et al. (2012), was presented for optimal water allocation under uncertain and vague system conditions in the Alfeios River Basin. Uncertainties associated with the benefit and cost coefficients in the objective function of the main water uses (hydropower production and irrigation) were expressed as probability distributions and fuzzy boundary intervals derived from associated α-cut levels. The uncertainty of the monthly water inflows was not incorporated in that initial application, and the analysis of all other sources of uncertainty was applied to two extreme hydrologic years, represented by a selected wet and a selected dry year. To manage and operate the river system, decision makers should be able to analyze and evaluate the impact of various hydrologic scenarios. In the present work, the critical uncertain parameter of water inflows is analyzed and its incorporation as an additional type of uncertainty in the suggested methodology is investigated, in order to enable the assessment of optimal water allocation for hydrologic and socio-economic scenarios based both on historical data and on projected climate change conditions.
For this purpose, a stochastic simulation analysis for part of the Alfeios river system is undertaken, testing various stochastic models, from simple stationary ones (AR and ARMA) and the Thomas-Fiering and ARIMA models to more sophisticated and complete ones such as CASTALIA. A short description and comparison of their assumptions, the differences between them, and a presentation of the results are included. Li, Y.P., Huang, G.H. and Nie, S.L. (2010), Planning water resources management systems using a fuzzy boundary interval-stochastic programming method, Advances in Water Resources, 33: 1105-1117, doi:10.1016/j.advwatres.2010.06.015. Bekri, E.S., Disse, M. and Yannopoulos, P.C. (2012), Methodological framework for correction of quick river discharge measurements using quality characteristics, Session of Environmental Hydraulics-Hydrodynamics, 2nd Common Conference of the Hellenic Hydrotechnical Association and the Greek Committee for Water Resources Management, pp. 546-557 (in Greek).
Optimal Budget Allocation for Sample Average Approximation
2011-06-01
... an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to ... regime for the optimization algorithm. ... Sample average approximation (SAA) is a frequently used approach to solving stochastic programs ... appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample ...
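The SAA idea these fragments describe can be sketched on a toy stochastic program, here a newsvendor problem (an illustrative choice, not the report's own test problem):

```python
import random

def saa_newsvendor(price, cost, demand_sampler, n_samples, seed=0):
    """Sample average approximation (SAA) sketch: replace the expectation
    in max_q E[price*min(q, D) - cost*q] by an average over n sampled
    demands, then optimize the resulting sample-average problem.  The
    objective is piecewise linear with breakpoints at the sampled demands,
    so it suffices to search over those candidate order quantities."""
    rng = random.Random(seed)
    demands = [demand_sampler(rng) for _ in range(n_samples)]

    def avg_profit(q):
        return sum(price * min(q, d) - cost * q for d in demands) / n_samples

    return max(demands, key=avg_profit)
```

As the sampling budget grows, the SAA optimizer converges to the true optimum (here, the critical-fractile quantile of demand), which is exactly the kind of convergence-rate question the report studies.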
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Haitao; Guo, Xinmeng; Wang, Jiang, E-mail: jiangwang@tju.edu.cn
2014-09-01
The phenomenon of stochastic resonance in Newman-Watts small-world neuronal networks is investigated when the strength of synaptic connections between neurons is adaptively adjusted by spike-time-dependent plasticity (STDP). It is shown that, irrespective of whether the synaptic connectivity is fixed or adaptive, the phenomenon of stochastic resonance occurs. The efficiency of network stochastic resonance can be largely enhanced by STDP in the coupling process. In particular, the resonance for adaptive coupling can reach a much larger value than that for fixed coupling when the noise intensity is small or intermediate. STDP with dominant depression and a small temporal window ratio is more efficient for the transmission of weak external signals in small-world neuronal networks. In addition, we demonstrate that the effect of stochastic resonance can be further improved via fine-tuning of the average coupling strength of the adaptive network. Furthermore, the small-world topology can significantly affect stochastic resonance of excitable neuronal networks. It is found that there exists an optimal probability of adding links at which the noise-induced transmission of weak periodic signals peaks.
Adaptive logical stochastic resonance in time-delayed synthetic genetic networks
NASA Astrophysics Data System (ADS)
Zhang, Lei; Zheng, Wenbin; Song, Aiguo
2018-04-01
In this paper, the concept of logical stochastic resonance is applied to implement logic operation and latch operation in time-delayed synthetic genetic networks derived from a bacteriophage λ. Clear logic and latch operation can be obtained when the network is tuned by a modulated periodic force and time delay. In contrast with previous synthetic genetic networks based on logical stochastic resonance, the proposed system has two advantages. On one hand, adding a modulated periodic force to the background noise can increase the length of the optimal noise plateau over which the desired logic response is obtained, and make the system adapt to varying noise intensity. On the other hand, tuning the time delay can extend the optimal noise plateau to a larger range. The result provides possible help for designing new genetic regulatory network paradigms based on logical stochastic resonance.
77 FR 5791 - Notice of Workshop
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-06
Notice of a workshop in Docket AD12-9-000 concerning the rules governing the allocation of capacity on new merchant transmission projects and new cost-based, participant-funded transmission projects.
A Method for Increasing the Firepower of Virginia Class Cruisers.
1982-04-01
Stochastic Lanchester-type Combat Models I.
1979-10-01
... necessarily hold when the attrition rates become non-linear in b and/or r. ... In this section we briefly describe how other combat models ... (L. Billard, Florida State Univ., Dept. of Statistics; prepared for the Naval Postgraduate School, Monterey, CA; approved for public release, distribution unlimited.)
Impacts of Considering Climate Variability on Investment Decisions in Ethiopia
NASA Astrophysics Data System (ADS)
Strzepek, K.; Block, P.; Rosegrant, M.; Diao, X.
2005-12-01
In Ethiopia, climate extremes such as droughts or floods are not unusual. Monitoring the effects of these extremes, and of climate variability in general, is critical for economic prediction and assessment of the country's future welfare. The focus of this study is on adding climate variability to a deterministic, mean-climate-driven agro-economic model, in an attempt to understand its effects and degree of influence on general economic prediction indicators for Ethiopia. Four simulations are examined: a baseline simulation and three investment strategies, namely irrigation investment, roads investment, and a combined investment in both irrigation and roads. The deterministic model is transformed into a stochastic model by dynamically adding year-to-year climate variability through climate-yield factors. Nine sets of actual, historic, variable climate data are individually assembled and implemented in the 12-year stochastic model simulation, producing an ensemble of economic prediction indicators. This ensemble allows for a probabilistic approach to planning and policy making, allowing decision makers to consider risk. The economic indicators from the deterministic and stochastic approaches, including rates of return to investments, are significantly different. The predictions of the deterministic model appreciably overestimate the future welfare of Ethiopia; the predictions of the stochastic model, utilizing actual climate data, tend to give a better semblance of what may be expected. Inclusion of climate variability is vital for proper analysis of the predictor values from this agro-economic model.
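The qualitative mechanism behind the overestimate, a nonlinear economic response driven by mean climate versus an ensemble of variable climates (Jensen's inequality), can be sketched with invented numbers:

```python
import random

def welfare_comparison(n_years=12, n_runs=500, seed=0):
    """Deterministic vs. stochastic welfare prediction.  The concave
    yield response and the uniform rainfall shocks are invented for
    illustration; the point is Jensen's inequality, by which a model
    driven by mean climate overestimates expected welfare."""
    rng = random.Random(seed)

    def response(rain):          # concave agro-economic response (assumed)
        return rain ** 0.5

    mean_rain = 1.0
    deterministic = sum(response(mean_rain) for _ in range(n_years))
    runs = [sum(response(rng.uniform(0.2, 1.8)) for _ in range(n_years))
            for _ in range(n_runs)]
    stochastic = sum(runs) / n_runs
    return deterministic, stochastic
```

Because the response is concave, the ensemble mean falls below the mean-climate prediction, mirroring the study's finding that the deterministic model overstates future welfare; the ensemble additionally yields a full distribution for risk analysis.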
Performance-based workload assessment: Allocation strategy and added task sensitivity
NASA Technical Reports Server (NTRS)
Vidulich, Michael A.
1990-01-01
The preliminary results of a research program investigating the use of added tasks to evaluate mental workload are reviewed. The focus of the first studies was a reappraisal of the traditional secondary task logic that encouraged the use of low-priority instructions for the added task. It was believed that such low-priority tasks would encourage subjects to split their available resources between the two tasks: the primary task would be assigned all the resources it needed, and any remaining reserve capacity would be assigned to the secondary task. If the model were correct, this approach was expected to combine sensitivity to primary task difficulty with unintrusiveness to primary task performance. The first studies of the current project demonstrated that a high-priority added task, although intrusive, could be more sensitive than the traditional low-priority secondary task. These results suggested that a more appropriate model of the attentional effects associated with added task performance might be based on capacity switching, rather than the traditional optimal allocation model.
Rt-Space: A Real-Time Stochastically-Provisioned Adaptive Container Environment
2017-08-04
This project was directed at component-based soft real-time (SRT) systems implemented on multicore platforms. To facilitate ... upon average-case or near-average-case task execution times. The main intellectual contribution of this project was the development of methods for ... allocating CPU time to components and associated analysis for validating SRT correctness.
NASA Astrophysics Data System (ADS)
Tavakkoli-Moghaddam, Reza; Forouzanfar, Fateme; Ebrahimnejad, Sadoullah
2013-07-01
This paper considers a single-sourcing network design problem for a three-level supply chain. For the first time, a novel mathematical model is presented that simultaneously considers risk-pooling, inventory held at distribution centers (DCs) under demand uncertainty, several alternatives for transporting the product between facilities, and routing of vehicles from distribution centers to customers in a stochastic supply chain system. This problem is formulated as a bi-objective stochastic mixed-integer nonlinear programming model. The aim of this model is to determine the number of distribution centers to open, their locations and capacity levels, and the allocation of customers to distribution centers and of distribution centers to suppliers. It also determines the inventory control decisions on the amount of ordered products and the amount of safety stock at each opened DC, and selects a vehicle type for transportation. Moreover, it determines routing decisions, such as the vehicle routes starting from an opened distribution center to serve its allocated customers and returning to that distribution center. All are done in such a way that the total system cost and the total transportation time are minimized. The LINGO software is used to solve the presented model. The computational results are illustrated in this paper.
Coordinating a Supply Chain with Price and Advertisement Dependent Stochastic Demand
Li, Liying; Wang, Yong; Yan, Xiaoming
2013-01-01
This paper investigates pricing and ordering as well as advertising coordination issues in a single-manufacturer single-retailer supply chain, where the manufacturer sells a newsvendor-type product through the retailer who faces a stochastic demand depending on both retail price and advertising expenditure. Under the assumption that the market demand has a multiplicative functional form, the Stackelberg and cooperative game models are developed, and the closed form solution to each model is provided as well. Comparisons and insights are presented. We show that a properly designed revenue-cost-sharing contract can achieve supply chain coordination and lead to a Pareto improving win-win situation for channel members. We also discuss the allocation of the extra joint profit according to individual supply chain members' risk preferences and negotiating powers. PMID:24453832
Wang, Deli; Xu, Wei; Zhao, Xiangrong
2016-03-01
This paper deals with the stationary responses of a Rayleigh viscoelastic system with zero-barrier impacts under external random excitation. First, the original stochastic viscoelastic system is converted to an equivalent stochastic system without viscoelastic terms by approximately adding the equivalent stiffness and damping. By means of a non-smooth transformation of state variables, this system is then replaced by a new system without an impact term. The stationary probability density functions of the system are then obtained analytically through the stochastic averaging method. By considering the effects of the biquadratic nonlinear damping coefficient and the noise intensity on the system responses, the effectiveness of the theoretical method is tested by comparing the analytical results with those generated from Monte Carlo simulations. Additionally, it is worth noting that some system parameters can induce the occurrence of stochastic P-bifurcation.
An application of information theory to stochastic classical gravitational fields
NASA Astrophysics Data System (ADS)
Angulo, J.; Angulo, J. C.; Angulo, J. M.
2018-06-01
The objective of this study lies in incorporating concepts developed in Information Theory (entropy, complexity, etc.) with the aim of quantifying the variation of the uncertainty associated with a stochastic physical system resident in a spatiotemporal region. As an example of application, a relativistic classical gravitational field has been considered, with a stochastic behavior resulting from the effect induced by one or several external perturbation sources. One of the key concepts of the study is the covariance kernel between two points within the chosen region. Using this concept and the appropriate criteria, a methodology is proposed to evaluate the change of uncertainty at a given spatiotemporal point, based on available information and efficiently applying the diverse methods that Information Theory provides. For illustration, a stochastic version of the Einstein equation with an added Gaussian Langevin term is analyzed.
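For a Gaussian field observed at finitely many points, the covariance kernel determines the (differential) entropy in closed form, which gives a concrete handle on the uncertainty quantification described above. A sketch, with an exponential kernel chosen purely for illustration:

```python
import math

def gaussian_field_entropy(points, kernel):
    """Differential entropy of a Gaussian random field observed at a finite
    set of spatiotemporal points: h = 0.5 * ln((2*pi*e)^n * det(Sigma)),
    where Sigma[i][j] = kernel(points[i], points[j]).  The kernel is an
    assumption of this sketch, not the paper's covariance."""
    n = len(points)
    sigma = [[kernel(p, q) for q in points] for p in points]
    return 0.5 * math.log((2 * math.pi * math.e) ** n * determinant(sigma))

def determinant(matrix):
    # Gaussian elimination; no pivoting needed for positive-definite
    # covariance matrices of modest size.
    m = [row[:] for row in matrix]
    n, det = len(m), 1.0
    for i in range(n):
        det *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return det
```

Stronger correlation between observation points shrinks det(Sigma) and hence the entropy, which is the basic mechanism by which conditioning on available information reduces the field's uncertainty.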
A note on: "A Gaussian-product stochastic Gent-McWilliams parameterization"
NASA Astrophysics Data System (ADS)
Jansen, Malte F.
2017-02-01
This note builds on a recent article by Grooms (2016), which introduces a new stochastic parameterization for eddy buoyancy fluxes. The closure proposed by Grooms accounts for the fact that eddy fluxes arise as the product of two approximately Gaussian variables, which in turn leads to a distinctly non-Gaussian distribution. The directionality of the stochastic eddy fluxes, however, remains somewhat ad hoc and depends on the reference frame of the chosen coordinate system. This note presents a modification of the approach proposed by Grooms which eliminates this shortcoming. Eddy fluxes are computed based on a stochastic mixing length model, which leads to a frame-invariant formulation. As in the original closure proposed by Grooms, eddy fluxes are proportional to the product of two Gaussian variables, and the parameterization reduces to the Gent and McWilliams parameterization for the mean buoyancy fluxes.
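The premise that a product of two Gaussian variables is distinctly non-Gaussian can be checked directly: its kurtosis is 9 rather than the Gaussian value of 3. A Monte Carlo sketch (illustrative only, not the closure itself):

```python
import random

def product_flux_kurtosis(n=200_000, seed=0):
    """Kurtosis of a 'flux' modeled as the product of two independent
    standard Gaussian variables.  The product is heavy-tailed: its
    theoretical kurtosis is E[X^4]E[Y^4] = 3*3 = 9, versus 3 for a
    Gaussian, which is the non-Gaussianity the closure exploits."""
    rng = random.Random(seed)
    flux = [rng.gauss(0.0, 1.0) * rng.gauss(0.0, 1.0) for _ in range(n)]
    m2 = sum(f * f for f in flux) / n
    m4 = sum(f ** 4 for f in flux) / n
    return m4 / (m2 * m2)
```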
Reactive Power Pricing Model Considering the Randomness of Wind Power Output
NASA Astrophysics Data System (ADS)
Dai, Zhong; Wu, Zhou
2018-01-01
With the increase of wind power capacity integrated into the grid, the influence of the randomness of wind power output on the reactive power distribution of the grid is gradually highlighted. Meanwhile, the power market reform puts forward higher requirements for reasonable pricing of reactive power service. On this basis, the article combines an optimal power flow model that considers wind power randomness with an integrated cost allocation method to price reactive power. Considering the advantages and disadvantages of present cost allocation methods and of marginal cost pricing, an integrated cost allocation method based on optimal power flow tracing is proposed. The model realizes the optimal power flow distribution of reactive power with minimal integrated cost under wind power integration, under the premise of guaranteeing the balance of reactive power pricing. Finally, through the analysis of multi-scenario calculation examples and stochastic simulation of wind power outputs, the article compares the results of the model's pricing with marginal cost pricing, showing that the model is accurate and effective.
Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong
2013-09-01
Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10^9 $ was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as not only does it allow local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.
Technical efficiency and resources allocation in university hospitals in Tehran, 2009-2012.
Rezapour, Aziz; Ebadifard Azar, Farbod; Yousef Zadeh, Negar; Roumiani, YarAllah; Bagheri Faradonbeh, Saeed
2015-01-01
Assessment of hospitals' performance in achieving their goals is a basic necessity. Measuring the efficiency of hospitals in order to boost resource productivity in healthcare organizations is extremely important. The aim of this study was to measure technical efficiency and determine the status of resource allocation in some university hospitals in Tehran, Iran. This study was conducted in 2012; the research population consisted of all hospitals affiliated with Iran and Tehran Universities of Medical Sciences. Required data, such as human and capital resource information as well as production variables (hospital outputs), were collected from the data centers of the studied hospitals. Data were analyzed using the data envelopment analysis (DEA) method with DEAP 2.1 software, and the stochastic frontier analysis (SFA) method with Frontier 4.1 software. According to the DEA method, the average technical, management (pure), and scale efficiency of the studied hospitals during the study period were calculated as 0.87, 0.971, and 0.907, respectively. None of the efficiency measures followed a fixed trend over the study period; they were constantly changing. In the stochastic frontier production function analysis, the technical efficiency of the studied hospitals during the study period was estimated to be 0.389. This study identified the hospitals with the highest and lowest efficiency, and reference hospitals (more efficient states) were indicated for the inefficient centers. According to the findings, in the hospitals that do not operate efficiently there is capacity to improve technical efficiency by removing excess inputs without changing the level of outputs. Moreover, through the optimal allocation of resources in most studied hospitals, considerable economies of scale can be achieved.
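In the simplest possible case of one input and one output, the constant-returns (CCR) DEA efficiency reduces to a ratio comparison against the best performer; the sketch below shows only this degenerate case, whereas the study's actual analysis uses multiple inputs and outputs via dedicated DEA software:

```python
def ccr_efficiency(inputs, outputs):
    """Minimal DEA sketch for the single-input, single-output case under
    constant returns to scale: a unit's technical efficiency is its
    output/input ratio relative to the best ratio observed, so frontier
    units score 1.0 and inefficient units score below 1.0."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

A score of 0.5, for example, says the unit could in principle halve its input without reducing output, which is exactly the "removing excess inputs" interpretation drawn in the study's conclusion.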
NASA Astrophysics Data System (ADS)
Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong
2013-09-01
Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful, since uncertainties pertaining to soil erosion control are not well represented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed in China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were derived for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10⁹ was obtained, as well as the corresponding land use allocations in the three planning periods. The resulting soil erosion amount was also found to be reduced and controlled at a tolerable level over the watershed. Results thus confirm that the developed model is a useful tool for implementing land use management: not only does it allow local decision makers to optimize land use allocation, but it can also help answer how to accomplish land use changes.
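The two-stage structure used here (a here-and-now allocation decision, a random outcome, then recourse) can be sketched in miniature. All numbers below are invented for illustration, and plain scenario enumeration stands in for the model's interval/fuzzy machinery:

```python
# Miniature two-stage stochastic program, solved by enumerating a grid
# of first-stage decisions (hectares planted) against yield scenarios.
scenarios = [(0.3, 2.0), (0.5, 3.0), (0.2, 4.0)]  # (probability, yield/ha)
price, planting_cost, penalty = 10.0, 8.0, 15.0   # illustrative values
demand = 30.0

def expected_profit(hectares):
    """Stage 1: commit `hectares`. Stage 2 (recourse): sell the realized
    harvest up to demand; cover any shortfall at a penalty price."""
    total = -planting_cost * hectares
    for p, y in scenarios:
        harvest = y * hectares
        total += p * (price * min(harvest, demand)
                      - penalty * max(0.0, demand - harvest))
    return total

best = max(range(41), key=expected_profit)   # optimal first-stage decision
```

The optimum hedges against the worst scenario: 15 ha guarantees the demand of 30 even at the lowest yield of 2.0/ha, and planting beyond that only adds cost.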
Simulation of anaerobic digestion processes using stochastic algorithm.
Palanichamy, Jegathambal; Palani, Sundarambal
2014-01-01
The Anaerobic Digestion (AD) process involves numerous complex biological and chemical reactions occurring simultaneously, so appropriate and efficient models must be developed for simulating anaerobic digestion systems. Although several models have been developed, most suffer from poorly known constants, complexity, and weak generalization. The basis of the deterministic approach for modelling the physicochemical and biochemical reactions occurring in the AD system is the law of mass action, which gives a simple relationship between reaction rates and species concentrations. The assumptions made in deterministic models do not hold true for reactions involving chemical species at low concentration. The stochastic behaviour of the physicochemical processes can be modeled at the mesoscopic level by applying stochastic algorithms. In this paper, a stochastic algorithm (the Gillespie tau-leap method) implemented in MATLAB was applied to predict the concentrations of glucose, acids, and methane at different time intervals; this allows the performance of the digester system to be monitored and controlled. The processes given by ADM1 (Anaerobic Digestion Model 1) were taken for verification of the model. The proposed model was verified by comparing the results of Gillespie's algorithm with the deterministic solution for conversion of glucose into methane through degraders. At higher values of the time step τ, the computational time required to reach the steady state is longer, since fewer reactions are chosen per step. When the simulation time step is reduced, the results approach those of an ODE solver. It was concluded that the stochastic algorithm is a suitable approach for the simulation of complex anaerobic digestion processes, with the accuracy of the results depending on the optimal selection of the τ value.
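A minimal version of the tau-leap update can be sketched for a two-step surrogate of the digestion chain (glucose → acids → methane). The rate constants below are invented, and the real ADM1 network is far richer:

```python
import numpy as np

K1, K2 = 0.05, 0.02   # assumed first-order rate constants (illustrative)

def tau_leap(n_glucose, tau=0.1, t_end=200.0, seed=0):
    """Gillespie tau-leaping for glucose --K1--> acid --K2--> methane.
    Each leap fires Poisson(propensity * tau) events per reaction
    channel, clipped so populations never go negative."""
    rng = np.random.default_rng(seed)
    g, a, m = n_glucose, 0, 0
    t = 0.0
    while t < t_end:
        r1 = min(g, rng.poisson(K1 * g * tau))   # glucose -> acid
        r2 = min(a, rng.poisson(K2 * a * tau))   # acid -> methane
        g, a, m = g - r1, a + r1 - r2, m + r2
        t += tau
    return g, a, m

g, a, m = tau_leap(1000)
```

A larger τ means fewer, coarser leaps, which is exactly the accuracy/cost trade-off the abstract describes.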
Sustainable infrastructure system modeling under uncertainties and dynamics
NASA Astrophysics Data System (ADS)
Huang, Yongxi
Infrastructure systems support human activities in transportation, communication, water use, and energy supply. The dissertation research focuses on critical transportation infrastructure and renewable energy infrastructure systems. The goal of the research efforts is to improve the sustainability of the infrastructure systems, with an emphasis on economic viability, system reliability and robustness, and environmental impacts. The research efforts in critical transportation infrastructure concern the development of strategic robust resource allocation strategies in an uncertain decision-making environment, considering both uncertain service availability and accessibility. The study explores the performance of different modeling approaches (i.e., deterministic, stochastic programming, and robust optimization) to reflect various risk preferences. The models are evaluated in a case study of Singapore, and results demonstrate that stochastic modeling methods in general offer more robust allocation strategies than deterministic approaches, achieving high coverage of critical infrastructures under risk. This general modeling framework can be applied to other emergency service applications, such as locating medical emergency services. The renewable energy infrastructure research aims to answer the following key questions: (1) is renewable energy an economically viable solution? (2) what energy distribution and infrastructure system requirements are needed to support such energy supply systems while hedging against potential risks? (3) how does the energy system adapt to the dynamics of evolving technology and societal needs in the transition to a renewable-energy-based society? The study of Renewable Energy System Planning with Risk Management incorporates risk management into the strategic planning of the supply chains. The physical design and operational management are integrated as a whole in seeking mitigations against the potential risks caused by feedstock seasonality and demand uncertainty. Facility spatiality, time variation of feedstock yields, and demand uncertainty are integrated into a two-stage stochastic programming (SP) framework. In the study of Transitional Energy System Modeling under Uncertainty, a multistage stochastic dynamic programming model is established to optimize the process of building and operating fuel production facilities during the transition. Dynamics due to evolving technologies and societal changes, and uncertainty due to demand fluctuations, are the major issues addressed.
NASA Astrophysics Data System (ADS)
Silva, A. Christian; Prange, Richard E.
2007-03-01
We introduce the concept of virtual volatility. This simple but new measure shows how to quantify the uncertainty in the forecast of the drift component of a random walk. The virtual volatility is also a useful tool for understanding the stochastic process of a given portfolio; in particular, and as an example, we were able to identify a mean-reversion effect in our portfolio. Finally, we briefly discuss the potential practical effect of the virtual volatility on an investor's asset allocation strategy.
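The core point, that the drift of a random walk carries forecast uncertainty of its own, can be illustrated with the standard error of the mean return. This is a simplified stand-in for the paper's virtual volatility, using simulated figures rather than market data:

```python
import numpy as np

def drift_and_uncertainty(returns):
    """Point estimate of the drift and the uncertainty of that estimate
    (standard error of the mean), which shrinks only as 1/sqrt(N)."""
    mu_hat = returns.mean()
    se = returns.std(ddof=1) / np.sqrt(len(returns))
    return mu_hat, se

rng = np.random.default_rng(1)
true_mu, sigma, n = 0.0005, 0.02, 2500     # ~10 years of daily returns
mu_hat, se = drift_and_uncertainty(rng.normal(true_mu, sigma, n))
# se ~ 0.02 / 50 = 4e-4, the same order as the drift itself: even with a
# decade of data, the drift is far harder to pin down than the volatility.
```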
van der Groen, Onno; Wenderoth, Nicole
2016-05-11
Random noise enhances the detectability of weak signals in nonlinear systems, a phenomenon known as stochastic resonance (SR). Though counterintuitive at first, SR has been demonstrated in a variety of naturally occurring processes, including human perception, where it has been shown that adding noise directly to weak visual, tactile, or auditory stimuli enhances detection performance. These results indicate that random noise can push subthreshold receptor potentials across the transfer threshold, causing action potentials in an otherwise silent afference. Despite the wealth of evidence demonstrating SR for noise added to a stimulus, relatively few studies have explored whether noise added directly to cortical networks enhances sensory detection. Here we administered transcranial random noise stimulation (tRNS; 100-640 Hz zero-mean Gaussian white noise) to the occipital region of human participants. For increasing tRNS intensities (ranging from 0 to 1.5 mA), the detection accuracy of a visual stimulus changed according to an inverted-U-shaped function, typical of the SR phenomenon. When the optimal level of noise was added to visual cortex, detection performance improved significantly relative to a zero noise condition (9.7 ± 4.6%) and to a similar extent as optimal noise added to the visual stimuli (11.2 ± 4.7%). Our results demonstrate that adding noise to cortical networks can improve human behavior and that tRNS is an appropriate tool to exploit this mechanism. Our findings suggest that neural processing at the network level exhibits nonlinear system properties that are sensitive to the stochastic resonance phenomenon and highlight the usefulness of tRNS as a tool to modulate human behavior.
Since tRNS can be applied to all cortical areas, exploiting the SR phenomenon is not restricted to the perceptual domain, but can be used for other functions that depend on nonlinear neural dynamics (e.g., decision making, task switching, response inhibition, and many other processes). This will open new avenues for using tRNS to investigate brain function and enhance the behavior of healthy individuals or patients. Copyright © 2016 the authors 0270-6474/16/365289-10$15.00/0.
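The inverted-U dependence of accuracy on noise intensity can be reproduced with a toy threshold detector. The numbers are arbitrary; this is a schematic of the SR phenomenon itself, not a model of tRNS or visual cortex:

```python
import numpy as np

def accuracy(noise_sd, signal=0.8, threshold=1.0, trials=20000, seed=0):
    """Yes/no detection of a subthreshold signal (signal < threshold)
    with zero-mean Gaussian noise added: accuracy averages the hit rate
    (signal present) and the correct-rejection rate (signal absent)."""
    rng = np.random.default_rng(seed)
    hits = np.mean(signal + rng.normal(0, noise_sd, trials) > threshold)
    rejections = np.mean(rng.normal(0, noise_sd, trials) <= threshold)
    return 0.5 * (hits + rejections)

low, optimal, high = accuracy(0.05), accuracy(0.6), accuracy(5.0)
# Accuracy peaks at a moderate noise level -- the inverted U: too little
# noise never lifts the signal over threshold, too much drowns it out.
```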
Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak
2016-05-01
Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive-formulation mixed-integer programs, problem instances can exceed available memory on a single workstation. In order to overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating Branch and Bound (B&B) methods with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.
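The branch-and-bound core that PIPS-SBB parallelizes can be sketched on a serial toy problem. Below is a generic 0/1 knapsack B&B with an LP-relaxation bound; it is purely illustrative and unrelated to the PIPS-SBB codebase:

```python
def branch_and_bound(values, weights, capacity):
    """Depth-first 0/1 knapsack B&B. Items are assumed pre-sorted by
    value/weight so the greedy fractional completion is a valid bound."""
    n = len(values)
    best = [0]

    def bound(i, value, room):
        # LP relaxation: fill greedily, allowing one fractional item.
        for j in range(i, n):
            if weights[j] <= room:
                room -= weights[j]
                value += values[j]
            else:
                return value + values[j] * room / weights[j]
        return value

    def dfs(i, value, room):
        if value > best[0]:
            best[0] = value                  # incumbent update
        if i == n or bound(i, value, room) <= best[0]:
            return                           # prune: bound can't beat incumbent
        if weights[i] <= room:
            dfs(i + 1, value + values[i], room - weights[i])  # take item i
        dfs(i + 1, value, room)              # skip item i
    dfs(0, 0, capacity)
    return best[0]

best_value = branch_and_bound([60, 100, 120], [10, 20, 30], 50)
```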
Song, Zhibao; Zhai, Junyong
2018-04-01
This paper addresses the problem of adaptive output-feedback control for a class of switched stochastic time-delay nonlinear systems with an uncertain output function, where both the control coefficients and the time-varying delay are unknown. The drift and diffusion terms are subject to an unknown homogeneous growth condition. By virtue of the adding-a-power-integrator technique, an adaptive output-feedback controller is designed to ensure that the closed-loop system is bounded in probability and that the state of the switched stochastic nonlinear system is globally regulated to the origin almost surely. A numerical example is provided to demonstrate the validity of the proposed control method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Optimal water resource allocation modelling in the Lowveld of Zimbabwe
NASA Astrophysics Data System (ADS)
Mhiribidi, Delight; Nobert, Joel; Gumindoga, Webster; Rwasoka, Donald T.
2018-05-01
The management and allocation of water from multi-reservoir systems is complex and thus requires dynamic modelling systems to achieve optimality. A multi-reservoir system in the Southern Lowveld of Zimbabwe is used for irrigation of sugarcane estates that produce sugar for both local consumption and export. The system is burdened with water allocation problems, made worse by the decommissioning of dams. The aim of this research was therefore to develop an operating policy model for the Lowveld multi-reservoir system. The Mann-Kendall trend and Wilcoxon signed-rank tests were used to assess the variability of historic monthly rainfall and dam inflows for the period 1899-2015. The WEAP model was set up to evaluate the water allocation system of the catchment and come up with a reference scenario for the 2015/2016 hydrologic year. A stochastic dynamic programming approach was used for optimisation of the multi-reservoir releases. Results showed no significant trend in the rainfall but a significantly decreasing trend in inflows (p < 0.05). The water allocation model (WEAP) showed significant deficits (~40%) in irrigation water allocation in the reference scenario. The optimal rule curves for all twelve months for each reservoir were obtained and are considered a proper guideline for solving multi-reservoir management problems within the catchment. The rule curves are effective tools for guiding decision makers in releasing water without emptying the reservoirs, while still satisfying demands based on the inflow, initial storage, and end-of-month storage.
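The optimisation step, stochastic dynamic programming over reservoir states, can be sketched for a toy single reservoir. The discretization, inflow scenarios, penalty, and discount factor below are all invented for illustration; the Lowveld system is multi-reservoir and far larger:

```python
STORAGE = range(5)                 # discretized storage states
INFLOWS = [(0.5, 1), (0.5, 2)]     # (probability, inflow) each month
DEMAND, CAPACITY, DISCOUNT = 2, 4, 0.9

def rule_curves(months=12):
    """Backward induction: for each month and storage state, pick the
    release maximizing immediate benefit (shortfall and over-release
    penalized via |DEMAND - release|) plus discounted expected
    value-to-go. Excess water above CAPACITY spills."""
    value = {s: 0.0 for s in STORAGE}
    policy = []
    for _ in range(months):
        new_value, rule = {}, {}
        for s in STORAGE:
            best = None
            for release in range(s + max(i for _, i in INFLOWS) + 1):
                if any(release > s + i for _, i in INFLOWS):
                    continue       # infeasible in at least one scenario
                exp = sum(p * (-abs(DEMAND - release)
                               + DISCOUNT * value[min(s + i - release, CAPACITY)])
                          for p, i in INFLOWS)
                if best is None or exp > best[0]:
                    best = (exp, release)
            new_value[s], rule[s] = best
        value = new_value
        policy.append(rule)
    policy.reverse()               # policy[0] is the first month's rule curve
    return policy

policy = rule_curves()
```

Each monthly dict maps a storage state to its optimal release, which is exactly the "rule curve" form the abstract describes: a full reservoir releases the demand; a near-empty one rations.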
Removing Barriers for Effective Deployment of Intermittent Renewable Generation
NASA Astrophysics Data System (ADS)
Arabali, Amirsaman
The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with a hybrid renewable generation (including wind and PV) and energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system, including the cost of renewable generation and storage, subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection, for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint, indicating the degree to which the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and an energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load-shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated through different scenarios. Using a compromise-solution method, the best compromise between reliability and cost is realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine the storage application and its optimal placement for the social cost and transmission congestion relief of wind integration.
Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. In order to mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment and reliability of the system as the objective functions. A Non-dominated Sorting Genetic Algorithm (NSGA II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Tests System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.
2012-01-01
Background Alzheimer's disease (AD) is the most frequently diagnosed neurodegenerative disorder affecting humans, with advanced age being the most prominent risk factor for developing AD. Despite intense research efforts aimed at elucidating the precise molecular underpinnings of AD, a definitive answer is still lacking. In recent years, consensus has grown that dimerisation of the polypeptide amyloid-beta (Aβ), particularly Aβ42, plays a crucial role in the neuropathology that characterises AD-affected post-mortem brains, including the large-scale accumulation of fibrils, also referred to as senile plaques. This has led to the realistic hope that targeting Aβ42 immunotherapeutically could drastically reduce plaque burden in the ageing brain, thus delaying AD onset or symptom progression. Stochastic modelling is a useful tool for increasing understanding of the processes underlying disorders of complex systems such as AD, providing a rapid and inexpensive strategy for testing putative new therapies. In light of the tool's utility, we developed computer simulation models to examine Aβ42 turnover and its aggregation in detail and to test the effect of immunization against Aβ dimers. Results Our model demonstrates for the first time that even a slight decrease in the clearance rate of Aβ42 monomers is sufficient to increase the chance of dimers forming, which could act as instigators of protofibril and fibril formation, resulting in increased plaque levels. As the process is slow and levels of Aβ are normally low, stochastic effects are important. Our model predicts that reducing the rate of dimerisation leads to a significant reduction in plaque levels and delays onset of plaque formation. The model was used to test the effect of an antibody-mediated immunological response. Our results showed that plaque levels were reduced compared to conditions where antibodies are not present.
Conclusion Our model supports the current thinking that levels of dimers are important in initiating the aggregation process. Although substantial knowledge exists regarding the process, no therapeutic intervention is on offer that reliably decreases disease burden in AD patients. Computer modelling could serve as one of a number of tools to examine both the validity of reliable biomarkers and aid the discovery of successful intervention strategies. PMID:22748062
NASA Astrophysics Data System (ADS)
Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit
2008-12-01
Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for a robust streaming of images and videos over MANET. The packet-based proposed scheme has a low complexity and is compliant to JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application; and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
Che-Castaldo, Christian; Jenouvrier, Stephanie; Youngflesh, Casey; Shoemaker, Kevin T; Humphries, Grant; McDowall, Philip; Landrum, Laura; Holland, Marika M; Li, Yun; Ji, Rubao; Lynch, Heather J
2017-10-10
Colonially-breeding seabirds have long served as indicator species for the health of the oceans on which they depend. Abundance and breeding data are repeatedly collected at fixed study sites in the hopes that changes in abundance and productivity may be useful for adaptive management of marine resources, but their suitability for this purpose is often unknown. To address this, we fit a Bayesian population dynamics model that includes process and observation error to all known Adélie penguin abundance data (1982-2015) in the Antarctic, covering >95% of their population globally. We find that process error exceeds observation error in this system, and that continent-wide "year effects" strongly influence population growth rates. Our findings have important implications for the use of Adélie penguins in Southern Ocean feedback management, and suggest that aggregating abundance across space provides the fastest reliable signal of true population change for species whose dynamics are driven by stochastic processes.Adélie penguins are a key Antarctic indicator species, but data patchiness has challenged efforts to link population dynamics to key drivers. Che-Castaldo et al. resolve this issue using a pan-Antarctic Bayesian model to infer missing data, and show that spatial aggregation leads to more robust inference regarding dynamics.
NASA Astrophysics Data System (ADS)
Lu, Shasha; Guan, Xingliang; Zhou, Min; Wang, Yang
2014-05-01
A large number of mathematical models have been developed to support land resource allocation decisions and land management needs; however, few of them can address various uncertainties that exist in relation to many factors presented in such decisions (e.g., land resource availabilities, land demands, land-use patterns, and social demands, as well as ecological requirements). In this study, a multi-objective interval-stochastic land resource allocation model (MOISLAM) was developed for tackling uncertainty that presents as discrete intervals and/or probability distributions. The developed model improves upon the existing multi-objective programming and inexact optimization approaches. The MOISLAM not only considers economic factors, but also involves food security and eco-environmental constraints; it can, therefore, effectively reflect various interrelations among different aspects in a land resource management system. Moreover, the model can also help examine the reliability of satisfying (or the risk of violating) system constraints under uncertainty. In this study, the MOISLAM was applied to a real case of long-term urban land resource allocation planning in Suzhou, in the Yangtze River Delta of China. Interval solutions associated with different risk levels of constraint violation were obtained. The results are considered useful for generating a range of decision alternatives under various system conditions, and thus helping decision makers to identify a desirable land resource allocation strategy under uncertainty.
Mass sensing based on deterministic and stochastic responses of elastically coupled nanocantilevers.
Gil-Santos, Eduardo; Ramos, Daniel; Jana, Anirban; Calleja, Montserrat; Raman, Arvind; Tamayo, Javier
2009-12-01
Coupled nanomechanical systems and their entangled eigenstates offer unique opportunities for the detection of ultrasmall masses. In this paper we show theoretically and experimentally that the stochastic and deterministic responses of a pair of coupled nanocantilevers provide different and complementary information about the added mass of an analyte and its location. This method allows the sensitive detection of minute quantities of mass even in the presence of large initial differences in the active masses of the two cantilevers. Finally, we show the fundamental limits in mass detection of this sensing paradigm.
Fujiwara, Masami
2007-09-01
Viability status of populations is a commonly used measure for decision-making in the management of populations. One of the challenges faced by managers is the need to consistently allocate management effort among populations. This allocation should in part be based on comparison of extinction risks among populations. Unfortunately, common criteria that use minimum viable population size or count-based population viability analysis (PVA) often do not provide results that are comparable among populations, primarily because they lack consistency in determining population size measures and threshold levels of population size (e.g., minimum viable population size and quasi-extinction threshold). Here I introduce a new index called the "extinction-effective population index," which accounts for differential effects of demographic stochasticity among organisms with different life-history strategies and among individuals in different life stages. This index is expected to become a new way of determining minimum viable population size criteria and also complement the count-based PVA. The index accounts for the difference in life-history strategies of organisms, which are modeled using matrix population models. The extinction-effective population index, sensitivity, and elasticity are demonstrated in three species of Pacific salmonids. The interpretation of the index is also provided by comparing them with existing demographic indices. Finally, a measure of life-history-specific effect of demographic stochasticity is derived.
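The matrix-model machinery behind such indices (asymptotic growth rate, sensitivity, elasticity) can be sketched with an invented 3-stage Leslie matrix; these are illustrative numbers, not salmonid data:

```python
import numpy as np

# Invented 3-stage Leslie matrix: top row holds fecundities, the
# sub-diagonal holds stage-to-stage survival probabilities.
A = np.array([[0.0, 1.5, 2.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.4, 0.0]])

def growth_and_elasticity(A):
    """Dominant eigenvalue (asymptotic growth rate lambda) and the
    elasticity matrix e_ij = (a_ij / lambda) * v_i w_j / <v, w>."""
    evals, right = np.linalg.eig(A)
    k = np.argmax(evals.real)
    lam = evals.real[k]
    w = np.abs(right[:, k].real)            # stable stage distribution
    evalsT, left = np.linalg.eig(A.T)
    kT = np.argmax(evalsT.real)
    v = np.abs(left[:, kT].real)            # reproductive values
    sens = np.outer(v, w) / (v @ w)         # sensitivities d(lambda)/d(a_ij)
    return lam, A * sens / lam

lam, elas = growth_and_elasticity(A)
# Elasticities sum to 1, so they partition lambda's dependence on the
# vital rates into directly comparable proportions across life stages.
```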
Zeng, X T; Huang, G H; Li, Y P; Zhang, J L; Cai, Y P; Liu, Z P; Liu, L R
2016-12-01
This study developed a fuzzy-stochastic programming with Green Z-score criterion (FSGZ) method for water resources allocation and water quality management with a trading-mechanism (WAQT) under uncertainties. FSGZ can handle uncertainties expressed as probability distributions, and it can also quantify objective/subjective fuzziness in the decision-making process. Risk-averse attitudes and robustness coefficient are joined to express the relationship between the expected target and outcome under various risk preferences of decision makers and systemic robustness. The developed method is applied to a real-world case of WAQT in the Kaidu-Kongque River Basin in northwest China, where an effective mechanism (e.g., market trading) to simultaneously confront severely diminished water availability and degraded water quality is required. Results of water transaction amounts, water allocation patterns, pollution mitigation schemes, and system benefits under various scenarios are analyzed, which indicate that a trading-mechanism is a more sustainable method to manage water-environment crisis in the study region. Additionally, consideration of anthropogenic (e.g., a risk-averse attitude) and systemic factors (e.g., the robustness coefficient) can support the generation of a robust plan associated with risk control for WAQT when uncertainty is present. These findings assist local policy and decision makers to gain insights into water-environment capacity planning to balance the basin's social and economic growth with protecting the region's ecosystems.
45 CFR 402.31 - Determination of allocations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... included in the computation of its allocation for a fiscal year by adding to the sum of SLIAG-related costs..., pursuant to § 402.41(c) (1) and (2). For fiscal years 1993 and 1994, the Department will add to the amount... the event that a State has not submitted an approved report for a fiscal year, the Department will...
First-passage times for pattern formation in nonlocal partial differential equations
NASA Astrophysics Data System (ADS)
Cáceres, Manuel O.; Fuentes, Miguel A.
2015-10-01
We describe the lifetimes associated with the stochastic evolution from an unstable uniform state to a patterned one when the time evolution of the field is controlled by a nonlocal Fisher equation. A small noise is added to the evolution equation to define the lifetimes and to calculate the mean first-passage time of the stochastic field through a given threshold value, before the patterned steady state is reached. In order to obtain analytical results we introduce a stochastic multiscale perturbation expansion. This multiscale expansion can also be used to tackle multiplicative stochastic partial differential equations. A critical slowing down is predicted for the marginal case when the Fourier phase of the unstable initial condition is null. We carry out Monte Carlo simulations to show the agreement with our theoretical predictions. Analytic results for the bifurcation point and asymptotic analysis of traveling wave-front solutions are included to get insight into the noise-induced transition phenomena mediated by invading fronts.
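The mean first-passage time construction (add small noise, start at the unstable state, time the crossing of a threshold) can be sketched for a single unstable mode, a scalar surrogate for the full nonlocal field equation with arbitrary parameters:

```python
import numpy as np

def mean_first_passage(eps, a=1.0, threshold=1.0, dt=1e-3,
                       n_paths=100, seed=0):
    """Euler-Maruyama for dX = a X dt + eps dW started at the unstable
    state X(0) = 0: average time for |X| to first reach `threshold`."""
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(n_paths):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += a * x * dt + eps * np.sqrt(dt) * rng.standard_normal()
            t += dt
        times.append(t)
    return float(np.mean(times))

# Weaker noise takes (logarithmically) longer to trigger the instability.
slow, fast = mean_first_passage(1e-3), mean_first_passage(1e-1)
```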
Adaptive Neural Tracking Control for Switched High-Order Stochastic Nonlinear Systems.
Zhao, Xudong; Wang, Xinyong; Zong, Guangdeng; Zheng, Xiaolong
2017-10-01
This paper deals with adaptive neural tracking control design for a class of switched high-order stochastic nonlinear systems with unknown uncertainties and arbitrary deterministic switching. The considered issues are: 1) completely unknown uncertainties; 2) stochastic disturbances; and 3) a high-order nonstrict-feedback system structure. The considered mathematical models can represent many practical systems in actual engineering. By adopting the approximation ability of neural networks, the common stochastic Lyapunov function method, and an improved adding-a-power-integrator technique, an adaptive state feedback controller with multiple adaptive laws is systematically designed for the systems. Subsequently, a controller with only two adaptive laws is proposed to solve the problem of over-parameterization. Under the designed controllers, all the signals in the closed-loop system are bounded-input bounded-output stable in probability, and the system output can almost surely track the target trajectory within a specified bounded error. Finally, simulation results are presented to show the effectiveness of the proposed approaches.
Mathematical issues in eternal inflation
NASA Astrophysics Data System (ADS)
Singh Kohli, Ikjyot; Haslam, Michael C.
2015-04-01
In this paper, we consider the problem of the existence and uniqueness of solutions to the Einstein field equations for a spatially flat Friedmann-Lemaître-Robertson-Walker universe in the context of stochastic eternal inflation, where the stochastic mechanism is modelled by adding a stochastic forcing term representing Gaussian white noise to the Klein-Gordon equation. We show that under these considerations, the Klein-Gordon equation actually becomes a stochastic differential equation. Therefore, the existence and uniqueness of solutions to Einstein's equations depend on whether the coefficients of this stochastic differential equation obey Lipschitz continuity conditions. We show that for any choice of V(φ), the Einstein field equations are not globally well-posed; hence, any solution found to these equations is not guaranteed to be unique. Instead, the coefficients are at best locally Lipschitz continuous in the physical state space of the dynamical variables, so solutions exist only up to a finite explosion time. We further perform Feller's explosion test for an arbitrary power-law inflaton potential and prove that all solutions to the Einstein field equations explode in a finite time with probability one. This implies that the mechanism of stochastic inflation thus considered cannot be described as eternal, since the very concept of eternal inflation implies that the process continues indefinitely. We therefore argue that stochastic inflation based on a stochastic forcing term would not produce an infinite number of universes in some multiverse ensemble. In general, since the Einstein field equations in both situations are not well-posed, we further conclude that the existence of a multiverse via the stochastic eternal inflation mechanism considered in this paper is still very much an open question that will require much deeper investigation.
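The finite-time explosion established by the Feller test can be seen numerically for a toy SDE with a non-globally-Lipschitz drift. This illustrates the mathematical phenomenon only; it is not a simulation of the inflaton equations:

```python
import numpy as np

def explosion_time(x0=1.0, sigma=0.1, dt=1e-4, t_max=5.0, cap=1e6, seed=0):
    """Euler-Maruyama for dX = X^3 dt + sigma dW. The cubic drift is
    only locally Lipschitz; the first time |X| exceeds `cap` serves as
    a numerical proxy for the explosion time (None if no blow-up)."""
    rng = np.random.default_rng(seed)
    x, t = x0, 0.0
    while t < t_max:
        if abs(x) > cap:
            return t
        x += x**3 * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return None

t_star = explosion_time()
# The noiseless ODE x' = x^3 from x0 = 1 blows up at t = 0.5; the noisy
# path explodes at a nearby random time rather than existing for all t.
```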
26 CFR 1.409(p)-1 - Prohibited allocation of securities in an S corporation.
Code of Federal Regulations, 2013 CFR
2013-04-01
...) of this chapter) that, for the nonallocation year, would have been added to the account of the...)(2), is treated as distributed on the date of the prohibited allocation. Thus, the fair market value... the 1,000 outstanding shares of stock of Corporation M, with a fair market value of $30 per share, are...
26 CFR 1.409(p)-1 - Prohibited allocation of securities in an S corporation.
Code of Federal Regulations, 2011 CFR
2011-04-01
...) of this chapter) that, for the nonallocation year, would have been added to the account of the...)(2), is treated as distributed on the date of the prohibited allocation. Thus, the fair market value... the 1,000 outstanding shares of stock of Corporation M, with a fair market value of $30 per share, are...
Fleet Assignment Using Collective Intelligence
NASA Technical Reports Server (NTRS)
Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.
2004-01-01
Product distribution theory is a new collective intelligence-based framework for analyzing and controlling distributed systems. Its usefulness in distributed stochastic optimization is illustrated here through an airline fleet assignment problem. This problem involves the allocation of aircraft to a set of flight legs in order to meet passenger demand, while satisfying a variety of linear and non-linear constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of this new stochastic optimization algorithm to a non-linear objective cold-start fleet assignment problem. Results show that the optimizer can successfully solve such highly-constrained problems (130 variables, 184 constraints).
SLFP: a stochastic linear fractional programming approach for sustainable waste management.
Zhu, H; Huang, G H
2011-12-01
A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk. Copyright © 2011 Elsevier Ltd. All rights reserved.
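A minimal illustration of the chance-constrained ratio optimization underlying SLFP, assuming a two-variable problem with a single normally distributed right-hand side, solved by grid search; a real application would use a fractional-programming solver (e.g. via the Charnes-Cooper transformation) rather than enumeration. All coefficients are hypothetical.

```python
from statistics import NormalDist

def slfp_grid_search(c, d, a, b_mean, b_std, alpha, step=0.05, x_max=10.0):
    """Maximize (c.x)/(d.x) subject to P(a.x <= b) >= alpha and x >= 0,
    where b ~ Normal(b_mean, b_std). The chance constraint has the
    deterministic equivalent a.x <= b_mean + b_std * Phi^{-1}(1 - alpha)."""
    z = NormalDist().inv_cdf(1.0 - alpha)
    rhs = b_mean + b_std * z           # constraint-violation risk = 1 - alpha
    best_x, best_ratio = None, float("-inf")
    n = round(x_max / step)
    for i in range(n + 1):
        for j in range(n + 1):
            x = (i * step, j * step)
            num = c[0] * x[0] + c[1] * x[1]
            den = d[0] * x[0] + d[1] * x[1]
            if den <= 0 or a[0] * x[0] + a[1] * x[1] > rhs:
                continue
            if num / den > best_ratio:
                best_ratio, best_x = num / den, x
    return best_x, best_ratio

x_best, ratio = slfp_grid_search(c=(3, 1), d=(1, 2), a=(1, 1),
                                 b_mean=8.0, b_std=1.0, alpha=0.95)
```

Tightening alpha shrinks the feasible region, which is how the method exposes the trade-off between system efficiency and system-failure risk.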
Access Protocol For An Industrial Optical Fibre LAN
NASA Astrophysics Data System (ADS)
Senior, John M.; Walker, William M.; Ryley, Alan
1987-09-01
A structure for OSI levels 1 and 2 of a local area network suitable for use in a variety of industrial environments is reported. It is intended that the LAN will utilise optical fibre technology at the physical level and a hybrid of dynamically optimisable token passing and CSMA/CD techniques at the data link (IEEE 802 medium access control - logical link control) level. An intelligent token passing algorithm is employed which dynamically allocates tokens according to the known upper limits on the requirements of each device. In addition, a system of stochastic tokens is used to increase efficiency when the stochastic traffic is significant. The protocol also allows user-defined priority systems to be employed and is suitable for distributed or centralised implementation. Computer-simulated performance characteristics for the protocol using a star-ring topology are reported, demonstrating its ability to perform efficiently under the device and traffic loads anticipated in an industrial environment.
NASA Technical Reports Server (NTRS)
Dahl, Roy W.; Keating, Karen; Salamone, Daryl J.; Levy, Laurence; Nag, Barindra; Sanborn, Joan A.
1987-01-01
This paper presents an algorithm (WHAMII) designed to solve the Artificial Intelligence Design Challenge at the 1987 AIAA Guidance, Navigation and Control Conference. The problem under consideration is a stochastic generalization of the traveling salesman problem in which travel costs can incur a penalty with a given probability. The variability in travel costs leads to a probability constraint with respect to violating the budget allocation. Given the small size of the problem (eleven cities), an approach is considered that combines partial tour enumeration with a heuristic city insertion procedure. For computational efficiency during both the enumeration and insertion procedures, precalculated binomial probabilities are used to determine an upper bound on the actual probability of violating the budget constraint for each tour. The actual probability is calculated for the final best tour, and additional insertions are attempted until the actual probability exceeds the bound.
A Framework for Optimal Control Allocation with Structural Load Constraints
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Taylor, Brian R.; Jutte, Christine V.; Burken, John J.; Trinh, Khanh V.; Bodson, Marc
2010-01-01
Conventional aircraft generally employ mixing algorithms or lookup tables to determine control surface deflections needed to achieve moments commanded by the flight control system. Control allocation is the problem of converting desired moments into control effector commands. Next generation aircraft may have many multipurpose, redundant control surfaces, adding considerable complexity to the control allocation problem. These issues can be addressed with optimal control allocation. Most optimal control allocation algorithms have control surface position and rate constraints. However, these constraints are insufficient to ensure that the aircraft's structural load limits will not be exceeded by commanded surface deflections. In this paper, a framework is proposed to enable a flight control system with optimal control allocation to incorporate real-time structural load feedback and structural load constraints. A proof of concept simulation that demonstrates the framework in a simulation of a generic transport aircraft is presented.
NASA Astrophysics Data System (ADS)
Rowley, C. D.; Hogan, P. J.; Martin, P.; Thoppil, P.; Wei, M.
2017-12-01
An extended range ensemble forecast system is being developed in the US Navy Earth System Prediction Capability (ESPC), and a global ocean ensemble generation capability to represent uncertainty in the ocean initial conditions has been developed. At extended forecast times, the uncertainty due to the model error overtakes the initial condition as the primary source of forecast uncertainty. Recently, stochastic parameterization or stochastic forcing techniques have been applied to represent the model error in research and operational atmospheric, ocean, and coupled ensemble forecasts. A simple stochastic forcing technique has been developed for application to US Navy high resolution regional and global ocean models, for use in ocean-only and coupled atmosphere-ocean-ice-wave ensemble forecast systems. Perturbation forcing is added to the tendency equations for state variables, with the forcing defined by random 3- or 4-dimensional fields with horizontal, vertical, and temporal correlations specified to characterize different possible kinds of error. Here, we demonstrate the stochastic forcing in regional and global ensemble forecasts with varying perturbation amplitudes and length and time scales, and assess the change in ensemble skill measured by a range of deterministic and probabilistic metrics.
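A one-dimensional sketch of the kind of perturbation forcing described above: an AR(1) process in time over a horizontally smoothed white-noise field, which yields random perturbations with specified spatial and temporal correlations. The grid size, decorrelation time, and amplitude are placeholder values, not those of the Navy system.

```python
import random

def smooth(field, k=2):
    """Moving-average smoothing to impose horizontal correlation (~k cells)."""
    out = []
    for i in range(len(field)):
        window = field[max(0, i - k): i + k + 1]
        out.append(sum(window) / len(window))
    return out

def stochastic_forcing(n_cells=50, n_steps=100, tau=10.0, amp=0.1, seed=1):
    """1-D sketch of stochastic tendency forcing: at each step a smoothed
    white-noise field is blended into an AR(1) process with decorrelation
    time tau (in steps), then scaled by amp before being added to the
    model's tendency equations."""
    rng = random.Random(seed)
    alpha = 1.0 - 1.0 / tau                      # AR(1) memory coefficient
    f = [0.0] * n_cells
    history = []
    for _ in range(n_steps):
        w = smooth([rng.gauss(0.0, 1.0) for _ in range(n_cells)])
        f = [alpha * fi + (1.0 - alpha ** 2) ** 0.5 * wi
             for fi, wi in zip(f, w)]            # variance-preserving update
        history.append([amp * fi for fi in f])
    return history

forcing = stochastic_forcing()
```

Varying `tau`, `k`, and `amp` corresponds to the perturbation length and time scales and amplitudes swept in the reported experiments.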
Optimal Decisions for Organ Exchanges in a Kidney Paired Donation Program.
Li, Yijiang; Song, Peter X-K; Zhou, Yan; Leichtman, Alan B; Rees, Michael A; Kalbfleisch, John D
2014-05-01
The traditional concept of barter exchange in economics has been extended in the modern era to the area of living-donor kidney transplantation, where one incompatible donor-candidate pair is matched to another pair with a complementary incompatibility, such that the donor from one pair gives an organ to a compatible candidate in the other pair and vice versa. Kidney paired donation (KPD) programs provide a unique and important platform for living incompatible donor-candidate pairs to exchange organs in order to achieve mutual benefit. In this paper, we propose novel organ allocation strategies to arrange kidney exchanges under uncertainty, with advantages including (i) allowance for a general utility-based evaluation of potential kidney transplants and an explicit consideration of stochastic features inherent in a KPD program; and (ii) exploitation of possible alternative exchanges when the originally planned allocation cannot be fully executed. This allocation strategy is implemented using an integer programming (IP) formulation, and its implication is assessed via a data-based simulation system by tracking an evolving KPD program over a series of match runs. Extensive simulation studies are provided to illustrate our proposed approach.
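The expected-utility matching at the core of such a KPD allocation can be sketched for two-way exchanges by brute force; real programs use integer programming over cycles and chains, and the utilities and success probabilities below are hypothetical.

```python
from itertools import combinations

def best_two_way_exchanges(n_pairs, utility, p_success):
    """Choose a disjoint set of two-way exchanges (donor i to candidate j and
    donor j to candidate i) maximizing total expected utility, by brute force.
    utility[i][j] > 0 means donor i is compatible with candidate j;
    p_success[i][j] is the probability that transplant can proceed."""
    edges = []
    for i, j in combinations(range(n_pairs), 2):
        if utility[i][j] > 0 and utility[j][i] > 0:
            ev = p_success[i][j] * p_success[j][i] * (utility[i][j] + utility[j][i])
            edges.append((ev, i, j))
    best, best_val = [], 0.0

    def extend(chosen, used, val, remaining):
        nonlocal best, best_val
        if val > best_val:
            best, best_val = list(chosen), val
        for k, (ev, i, j) in enumerate(remaining):
            if i not in used and j not in used:
                extend(chosen + [(i, j)], used | {i, j}, val + ev, remaining[k + 1:])

    extend([], set(), 0.0, edges)
    return best, best_val

# Four hypothetical pairs: 0-1, 0-2, and 2-3 are mutually compatible.
util = [[0, 1, 1, 0], [1, 0, 0, 0], [1, 0, 0, 1], [0, 0, 1, 0]]
prob = [[1.0] * 4 for _ in range(4)]
chosen, expected_value = best_two_way_exchanges(4, util, prob)
```

Downweighting an exchange by its joint success probability is one simple way to encode the stochastic features mentioned in the abstract; the paper's IP formulation generalizes this to alternative fallback exchanges.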
Chen, Xiujuan; Huang, Guohe; Zhao, Shan; Cheng, Guanhui; Wu, Yinghui; Zhu, Hua
2017-11-01
In this study, a stochastic fractional inventory-theory-based waste management planning (SFIWP) model was developed and applied for supporting long-term planning of the municipal solid waste (MSW) management in Xiamen City, the special economic zone of Fujian Province, China. In the SFIWP model, the techniques of inventory model, stochastic linear fractional programming, and mixed-integer linear programming were integrated in a framework. Issues of waste inventory in MSW management system were solved, and the system efficiency was maximized through considering maximum net-diverted wastes under various constraint-violation risks. Decision alternatives for waste allocation and capacity expansion were also provided for MSW management planning in Xiamen. The obtained results showed that about 4.24 × 10^6 t of waste would be diverted from landfills when p_i is 0.01, which accounted for 93% of waste in Xiamen City, and the waste diversion per unit of cost would be 26.327 × 10^3 t per $10^6. The capacities of MSW management facilities including incinerators, composting facility, and landfills would be expanded due to increasing waste generation rate.
A model of interaction between anticorruption authority and corruption groups
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neverova, Elena G.; Malafeyef, Oleg A.
The paper provides a model of interaction between an anticorruption unit and corruption groups. The main policy functions of the anticorruption unit involve reducing corrupt practices in some entities through an optimal approach to resource allocation and an effective anticorruption policy. We develop a model based on a Markov decision-making process and use Howard’s policy-improvement algorithm to solve for an optimal decision strategy. We examine the assumption that corruption groups retaliate against the anticorruption authority to protect themselves. This model was implemented through a stochastic game.
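Howard's policy-improvement algorithm, on which the model relies, alternates policy evaluation and greedy improvement until the policy is stable. A generic sketch for a small discounted MDP follows; the two-state transition and reward structures are illustrative, not the paper's corruption model.

```python
def policy_iteration(P, R, gamma=0.9, tol=1e-9):
    """Howard's policy-improvement algorithm for a small discounted MDP.
    P[a][s][t] is the probability of moving s -> t under action a;
    R[a][s] is the expected immediate reward for taking action a in s."""
    n_states, n_actions = len(P[0]), len(P)

    def q(a, s, V):
        return R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in range(n_states))

    policy = [0] * n_states
    while True:
        # Policy evaluation: iterate the Bellman operator to convergence.
        V = [0.0] * n_states
        delta = tol + 1.0
        while delta >= tol:
            delta = 0.0
            for s in range(n_states):
                v = q(policy[s], s, V)
                delta = max(delta, abs(v - V[s]))
                V[s] = v
        # Policy improvement: act greedily with respect to V.
        improved = [max(range(n_actions), key=lambda a: q(a, s, V))
                    for s in range(n_states)]
        if improved == policy:
            return policy, V
        policy = improved

# Example: two states; action 0 = "stay", action 1 = "switch state".
# Staying in state 1 pays 1 per step, so the optimal policy is to switch
# out of state 0 (small reward 0.5) and then stay in state 1 forever.
P = [[[1.0, 0.0], [0.0, 1.0]],
     [[0.0, 1.0], [1.0, 0.0]]]
R = [[0.0, 1.0], [0.5, 0.0]]
policy, V = policy_iteration(P, R)
```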
Efficiency and Productivity Analysis of Multidivisional Firms
NASA Astrophysics Data System (ADS)
Gong, Binlei
Multidivisional firms operate in multiple segments and hence use multiple technologies to convert inputs to outputs, which makes it difficult to estimate the resource allocations, aggregated production functions, and technical efficiencies of such companies. This dissertation aims to explore and reveal such unobserved information through several parametric and semiparametric stochastic frontier analyses and other structural models. In the empirical study, this dissertation analyzes the productivity and efficiency of firms in the global oilfield market.
Optimal Vaccination in a Stochastic Epidemic Model of Two Non-Interacting Populations
2015-02-17
of diminishing returns from vaccination will generally take place at smaller vaccine allocations V compared to the deterministic model. Optimal...take place and small r0 values where it does not is illustrated in Fig. 4C. As r0 is decreased, the region between the two instances of switching...approximately distribute vaccine in proportion to population size. For large r0 (r0 ≳ 2.9), two switches take place. In the deterministic optimal solution, a
Selection of Noisy Sensors and Actuators for Regulation of Linear Systems.
1983-08-01
and the inability of (5.8) to account for the possibility of the loss of controllability or stabilizability of the system. If a particular actuator is...design by performing the checks. The condition q4 can result only when a stabilizable, detectable system is not output controllable and one of the...M.R., and Installe, M.J., "Optimal sensors' allocation strategies for a class of stochastic distributed systems," Int. J. Control, 1975, Vol. 22, No. 2
Restoring the encoding properties of a stochastic neuron model by an exogenous noise
Paffi, Alessandra; Camera, Francesca; Apollonio, Francesca; d'Inzeo, Guglielmo; Liberti, Micaela
2015-01-01
Here we evaluate the possibility of improving the encoding properties of an impaired neuronal system by superimposing an exogenous noise on an external electric stimulation signal. The approach is based on the use of mathematical neuron models consisting of a stochastic HH-like circuit, where the impairment of the endogenous presynaptic inputs is described as a subthreshold injected current and the exogenous stimulation signal is a sinusoidal voltage perturbation across the membrane. Our results indicate that a correlated Gaussian noise, added to the sinusoidal signal, can significantly improve the encoding properties of the impaired system through the Stochastic Resonance (SR) phenomenon. These results suggest that an exogenous noise, suitably tailored, could improve the efficacy of those stimulation techniques used in neuronal systems where the presynaptic sensory neurons are impaired and have to be artificially bypassed. PMID:25999845
Ower, Alison K.; de Wolf, Frank; Anderson, Roy M.
2018-01-01
Alzheimer’s disease (AD) is a neurodegenerative disorder characterised by a slow progressive deterioration of cognitive capacity. Drugs are urgently needed for the treatment of AD and unfortunately almost all clinical trials of AD drug candidates have failed or been discontinued to date. Mathematical, computational and statistical tools can be employed in the construction of clinical trial simulators to assist in the improvement of trial design and enhance the chances of success of potential new therapies. Based on the analysis of a set of clinical data provided by the Alzheimer's Disease Neuroimaging Initiative (ADNI) we developed a simple stochastic mathematical model to simulate the development and progression of Alzheimer’s in a longitudinal cohort study. We show how this modelling framework could be used to assess the effect and the chances of success of hypothetical treatments that are administered at different stages and delay disease development. We demonstrate that the detection of the true efficacy of an AD treatment can be very challenging, even if the treatment is highly effective. An important reason behind the inability to detect signals of efficacy in a clinical trial in this therapy area could be the high between- and within-individual variability in the measurement of diagnostic markers and endpoints, which consequently results in the misdiagnosis of an individual’s disease state. PMID:29377891
Stochastic volatility models and Kelvin waves
NASA Astrophysics Data System (ADS)
Lipton, Alex; Sepp, Artur
2008-08-01
We use stochastic volatility models to describe the evolution of an asset price, its instantaneous volatility and its realized volatility. In particular, we concentrate on the Stein and Stein model (SSM) (1991) for the stochastic asset volatility and the Heston model (HM) (1993) for the stochastic asset variance. By construction, the volatility is not sign definite in SSM and is non-negative in HM. It is well known that both models produce closed-form expressions for the prices of vanilla options via the Lewis-Lipton formula. However, the numerical pricing of exotic options by means of the finite difference and Monte Carlo methods is much more complex for HM than for SSM. Until now, this complexity was considered to be an acceptable price to pay for ensuring that the asset volatility is non-negative. We argue that having negative stochastic volatility is a psychological rather than financial or mathematical problem, and advocate using SSM rather than HM in most applications. We extend SSM by adding volatility jumps and obtain a closed-form expression for the density of the asset price and its realized volatility. We also show that the current method of choice for solving pricing problems with stochastic volatility (via the affine ansatz for the Fourier-transformed density function) can be traced back to the Kelvin method designed in the 19th century for studying wave motion problems arising in fluid dynamics.
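A sketch of the Monte Carlo pricing the abstract refers to, under HM with a log-Euler step for the asset and full truncation for the variance (one standard way to handle the variance going negative in discretization, which is exactly the complexity being discussed). Zero interest rate and all parameter values are illustrative assumptions.

```python
import math, random

def heston_terminal_prices(s0=100.0, v0=0.04, kappa=1.5, theta=0.04, xi=0.5,
                           rho=-0.7, T=1.0, n_steps=250, n_paths=500, seed=7):
    """Euler scheme for the Heston model under a zero-rate measure:
    dS = sqrt(v) S dW1,  dv = kappa (theta - v) dt + xi sqrt(v) dW2,
    corr(dW1, dW2) = rho. Returns simulated terminal asset prices."""
    rng = random.Random(seed)
    dt = T / n_steps
    out = []
    for _ in range(n_paths):
        s, v = s0, v0
        for _ in range(n_steps):
            z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
            w1 = z1
            w2 = rho * z1 + math.sqrt(1.0 - rho * rho) * z2   # correlated driver
            vp = max(v, 0.0)                                  # full truncation
            s *= math.exp(-0.5 * vp * dt + math.sqrt(vp * dt) * w1)
            v += kappa * (theta - vp) * dt + xi * math.sqrt(vp * dt) * w2
        out.append(s)
    return out

terminal = heston_terminal_prices()
call_price = sum(max(s - 100.0, 0.0) for s in terminal) / len(terminal)
```

In SSM the analogous simulation needs no truncation because the volatility may legitimately change sign, which is the practical simplification the authors advocate.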
NASA Astrophysics Data System (ADS)
Jeanmairet, Guillaume; Sharma, Sandeep; Alavi, Ali
2017-01-01
In this article we report a stochastic evaluation of the recently proposed multireference linearized coupled cluster theory [S. Sharma and A. Alavi, J. Chem. Phys. 143, 102815 (2015)]. In this method, both the zeroth-order and first-order wavefunctions are sampled stochastically by propagating simultaneously two populations of signed walkers. The sampling of the zeroth-order wavefunction follows a set of stochastic processes identical to the one used in the full configuration interaction quantum Monte Carlo (FCIQMC) method. To sample the first-order wavefunction, the usual FCIQMC algorithm is augmented with a source term that spawns walkers in the sampled first-order wavefunction from the zeroth-order wavefunction. The second-order energy is also computed stochastically but requires no additional overhead outside of the added cost of sampling the first-order wavefunction. This fully stochastic method opens up the possibility of simultaneously treating large active spaces to account for static correlation and recovering the dynamical correlation using perturbation theory. The method is used to study a few benchmark systems including the carbon dimer and aromatic molecules. We have computed the singlet-triplet gaps of benzene and m-xylylene. For m-xylylene, which has proved difficult for standard complete active space self-consistent field theory with perturbative correction, we find the singlet-triplet gap to be in good agreement with the experimental values.
76 FR 71919 - Corporate Reorganizations; Allocation of Basis in “All Cash D” Reorganizations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-21
... follows: Authority: 26 U.S.C. 7805 * * * Section 1.358-2 also issued under 26 U.S.C. 358(b)(1). Par. 2. Section 1.358-2 is amended by: 1. Revising paragraph (a)(2)(iii). 2. Adding a new Example 15 and Example....358-2 Allocation of basis among nonrecognition property. (a) * * * (2) * * * (iii) [The text of this...
Effluent trading in river systems through stochastic decision-making process: a case study.
Zolfagharipoor, Mohammad Amin; Ahmadi, Azadeh
2017-09-01
The objective of this paper is to provide an efficient framework for effluent trading in river systems. The proposed framework consists of two decision-making models, one pessimistic and one optimistic, to increase the executability of river water quality trading programs. The models used for this purpose are (1) stochastic fallback bargaining (SFB) to reach an agreement among wastewater dischargers and (2) stochastic multi-criteria decision-making (SMCDM) to determine the optimal treatment strategy. The Monte-Carlo simulation method is used to incorporate uncertainty into the analysis. This uncertainty arises from the stochastic nature of the system and from errors in the calculation of wastewater treatment costs. The results of a river water quality simulation model are used as inputs to the models. The proposed models are used in a case study on the Zarjoub River in northern Iran to determine the best solution for the pollution load allocation. The best treatment alternatives selected by each model are imported, as the initial pollution discharge permits, into an optimization model developed for trading of pollution discharge permits among pollutant sources. The results show that the SFB-based water pollution trading approach reduces the costs by US$ 14,834 while providing a relative consensus among pollutant sources. Meanwhile, the SMCDM-based water pollution trading approach reduces the costs by US$ 218,852, but it is less acceptable by pollutant sources. Therefore, it appears that giving due attention to stability, or in other words acceptability of pollution trading programs for all pollutant sources, is an essential element of their success.
NASA Astrophysics Data System (ADS)
Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter
2014-05-01
Optimal management of conjunctive use of surface water and groundwater has been attempted with different algorithms in the literature. In this study, a hydro-economic modelling approach to optimize conjunctive use of scarce surface water and groundwater resources under uncertainty is presented. A stochastic dynamic programming (SDP) approach is used to minimize the basin-wide total costs arising from water allocations and water curtailments. Dynamic allocation problems with inclusion of groundwater resources proved to be more complex to solve with SDP than pure surface water allocation problems due to head-dependent pumping costs. These dynamic pumping costs strongly affect the total costs and can lead to non-convexity of the future cost function. The water user groups (agriculture, industry, domestic) are characterized by inelastic demands and fixed water allocation and water supply curtailment costs. As in traditional SDP approaches, one-step-ahead sub-problems are solved to find the optimal management at any time knowing the inflow scenario and reservoir/aquifer storage levels. These non-linear sub-problems are solved using a genetic algorithm (GA) that minimizes the sum of the immediate and future costs for given surface water reservoir and groundwater aquifer end storages. The immediate cost is found by solving a simple linear allocation sub-problem, and the future costs are assessed by interpolation in the total cost matrix from the following time step. Total costs for all stages, reservoir states, and inflow scenarios are used as future costs to drive a forward moving simulation under uncertain water availability. The use of a GA to solve the sub-problems is computationally more costly than a traditional SDP approach with linearly interpolated future costs. However, in a two-reservoir system the future cost function would have to be represented by a set of planes, and strict convexity in both the surface water and groundwater dimension cannot be maintained.
The optimization framework based on the GA is still computationally feasible and represents a clean and customizable method. The method has been applied to the Ziya River basin, China. The basin is located on the North China Plain and is subject to severe water scarcity, which includes surface water droughts and groundwater over-pumping. The head-dependent groundwater pumping costs will enable assessment of the long-term effects of increased electricity prices on the groundwater pumping. The coupled optimization framework is used to assess realistic alternative development scenarios for the basin. In particular the potential for using electricity pricing policies to reach sustainable groundwater pumping is investigated.
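The backward SDP recursion described above can be sketched for a toy single-reservoir problem with discrete storage states and inflow scenarios. The GA inner solver, groundwater coupling, and head-dependent pumping costs of the actual study are replaced here by exhaustive search over a small allocation set, and all quantities are illustrative.

```python
def sdp_reservoir(n_stages=12, storages=range(5), inflows=((1, 0.5), (3, 0.5)),
                  demand=2, curtailment_cost=10.0, s_max=4):
    """Backward stochastic dynamic programming for a toy single-reservoir
    allocation problem: states are discrete storage levels, inflows are
    (value, probability) scenarios, and the immediate cost penalizes
    curtailment of an inelastic demand."""
    min_q = min(q for q, _ in inflows)
    future = {s: 0.0 for s in storages}              # terminal future cost
    policies = []
    for _ in range(n_stages):
        value, policy = {}, {}
        for s in storages:
            best_cost, best_a = float("inf"), 0
            # Allocations must stay feasible even in the driest scenario.
            for a in range(min(demand, s + min_q) + 1):
                exp_cost = 0.0
                for q, p in inflows:
                    s_next = min(s + q - a, s_max)   # excess water spills
                    exp_cost += p * (curtailment_cost * (demand - a)
                                     + future[s_next])
                if exp_cost < best_cost:
                    best_cost, best_a = exp_cost, a
            value[s], policy[s] = best_cost, best_a
        future = value
        policies.append(policy)
    policies.reverse()                               # policies[0] = first stage
    return policies, future

policies, stage_costs = sdp_reservoir()
```

The nested loop over allocations plays the role of the one-step-ahead sub-problem; the paper swaps it for a GA precisely because the groundwater dimension makes that inner problem non-linear and non-convex.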
SOME EFFECTS OF ADVERTISING AND PRICES ON OPTIMAL INVENTORY POLICY.
An inventory model which includes the possibility of advertising (called the basic model) is investigated. This model is a stochastic inventory...generalizations of the basic model are then considered. One generalization considers the situation where the added demand due to advertising is not
NASA Astrophysics Data System (ADS)
Xiang, Suyun; Wang, Wei; Xiang, Bingren; Deng, Haishan; Xie, Shaofei
2007-05-01
The periodic modulation-based stochastic resonance algorithm (PSRA) was used to amplify and detect the weak liquid chromatography-mass spectrometry (LC-MS) signal of granisetron in plasma. In the algorithm, the stochastic resonance (SR) was achieved by introducing an external periodic force to the nonlinear system. The optimization of parameters was carried out in two steps to give attention to both the signal-to-noise ratio (S/N) and the peak shape of the output signal. By applying PSRA with the optimized parameters, the signal-to-noise ratio of the LC-MS peak was enhanced significantly, and the distorted peak shape that often appeared in the traditional stochastic resonance algorithm was corrected by the added periodic force. Using the signals enhanced by PSRA, this method lowered the limit of detection (LOD) and limit of quantification (LOQ) of granisetron in plasma from 0.05 and 0.2 ng/mL, respectively, to 0.01 and 0.02 ng/mL, and exhibited good linearity, accuracy and precision, which ensure accurate determination of the target analyte.
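The mechanism behind such algorithms, a weak periodic force plus noise driving a bistable nonlinear system, can be sketched as follows. This is a generic stochastic-resonance illustration with assumed dynamics and parameters, not the PSRA algorithm itself.

```python
import math, random

def sr_output(signal_amp=0.3, noise_sigma=0.6, omega=0.05, dt=0.1,
              n_steps=20000, seed=3):
    """Overdamped bistable dynamics x' = x - x**3 + A*sin(omega*t) + noise.
    Returns the average correlation of x(t) with the weak periodic force,
    a simple proxy for how well the signal is encoded in the output."""
    rng = random.Random(seed)
    x, corr = -1.0, 0.0
    for k in range(n_steps):
        drive = math.sin(omega * k * dt)
        x += (x - x**3 + signal_amp * drive) * dt \
             + noise_sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x = max(-3.0, min(3.0, x))   # guard the explicit scheme against overshoot
        corr += x * drive
    return corr / n_steps

# Sweeping noise_sigma would trace out the characteristic SR curve: weak
# response at very low and very high noise, with a peak in between.
response = sr_output()
```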
Stochastic resonance effects reveal the neural mechanisms of transcranial magnetic stimulation
Schwarzkopf, Dietrich Samuel; Silvanto, Juha; Rees, Geraint
2011-01-01
Transcranial magnetic stimulation (TMS) is a popular method for studying causal relationships between neural activity and behavior. However, its mode of action remains controversial, and so far there is no framework to explain its wide range of facilitatory and inhibitory behavioral effects. While some theoretical accounts suggest that TMS suppresses neuronal processing, other competing accounts propose that the effects of TMS result from the addition of noise to neuronal processing. Here we exploited the stochastic resonance phenomenon to distinguish these theoretical accounts and determine how TMS affects neuronal processing. Specifically, we showed that online TMS can induce stochastic resonance in the human brain. At low intensity, TMS facilitated the detection of weak motion signals, but with higher TMS intensities and stronger motion signals we found only impairment in detection. These findings suggest that TMS acts by adding noise to neuronal processing, at least in an online TMS protocol. Importantly, such stochastic resonance effects may also explain why TMS parameters that under normal circumstances impair behavior can induce behavioral facilitations when the stimulated area is in an adapted or suppressed state. PMID:21368025
NASA Astrophysics Data System (ADS)
Yelkenci Köse, Simge; Demir, Leyla; Tunalı, Semra; Türsel Eliiyi, Deniz
2015-02-01
In manufacturing systems, optimal buffer allocation has a considerable impact on capacity improvement. This study presents a simulation optimization procedure to solve the buffer allocation problem in a heat exchanger production plant so as to improve the capacity of the system. For optimization, three metaheuristic-based search algorithms, i.e. a binary-genetic algorithm (B-GA), a binary-simulated annealing algorithm (B-SA) and a binary-tabu search algorithm (B-TS), are proposed. These algorithms are integrated with the simulation model of the production line. The simulation model, which captures the stochastic and dynamic nature of the production line, is used as an evaluation function for the proposed metaheuristics. The experimental study with benchmark problem instances from the literature and the real-life problem shows that the proposed B-TS algorithm outperforms B-GA and B-SA in terms of solution quality.
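A generic sketch of simulation-driven annealing over buffer allocations of a fixed total, with a stand-in noisy objective where the discrete-event simulation would go. The neighborhood move, cooling schedule, and objective are illustrative choices, not the B-SA variant of the paper.

```python
import math, random

def throughput(buffers):
    """Stand-in for the simulation model: a concave, noisy reward favouring
    balanced allocations (a real study would run the discrete-event line
    simulation here and return estimated throughput)."""
    return sum(math.sqrt(b) for b in buffers) + random.gauss(0.0, 0.01)

def anneal_buffers(n_stations=5, total=20, t0=1.0, cooling=0.995,
                   n_iter=2000, seed=5):
    """Simulated annealing over buffer allocations summing to a fixed total;
    each move shifts one buffer slot from one station to another."""
    random.seed(seed)
    x = [total // n_stations] * n_stations
    x[0] += total - sum(x)                    # absorb any rounding remainder
    cur_val = throughput(x)
    best, best_val, t = list(x), cur_val, t0
    for _ in range(n_iter):
        i, j = random.sample(range(n_stations), 2)
        if x[i] == 0:
            continue
        x[i] -= 1; x[j] += 1                  # candidate move
        val = throughput(x)
        if val >= cur_val or random.random() < math.exp((val - cur_val) / t):
            cur_val = val                     # accept (always if improving)
            if val > best_val:
                best, best_val = list(x), val
        else:
            x[i] += 1; x[j] -= 1              # reject, undo the move
        t *= cooling
    return best, best_val

allocation, score = anneal_buffers()
```

Because the objective is itself a noisy simulation estimate, acceptance decisions are noisy too; the paper's comparison of B-GA, B-SA, and B-TS is essentially about which search strategy copes best with that.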
Optimizing Real-Time Vaccine Allocation in a Stochastic SIR Model
Nguyen, Chantal; Carlson, Jean M.
2016-01-01
Real-time vaccination following an outbreak can effectively mitigate the damage caused by an infectious disease. However, in many cases, available resources are insufficient to vaccinate the entire at-risk population, logistics result in delayed vaccine deployment, and the interaction between members of different cities facilitates a wide spatial spread of infection. Limited vaccine, time delays, and interaction (or coupling) of cities lead to tradeoffs that impact the overall magnitude of the epidemic. These tradeoffs mandate investigation of optimal strategies that minimize the severity of the epidemic by prioritizing allocation of vaccine to specific subpopulations. We use an SIR model to describe the disease dynamics of an epidemic which breaks out in one city and spreads to another. We solve a master equation to determine the resulting probability distribution of the final epidemic size. We then identify tradeoffs between vaccine, time delay, and coupling, and we determine the optimal vaccination protocols resulting from these tradeoffs. PMID:27043931
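The stochastic SIR dynamics underlying such analyses can be sketched with a Gillespie-style jump-chain simulation for a single city; since only the final epidemic size is tracked here, exponential holding times can be omitted. The rates, population, and vaccination level are illustrative, and the paper's master-equation treatment of two coupled cities is not reproduced.

```python
import random

def final_size(s, i, beta=0.3, gamma=0.1, vaccinated=0, seed=11):
    """Jump-chain (Gillespie) simulation of a stochastic SIR outbreak after
    moving `vaccinated` susceptibles straight to the removed class.
    Only the embedded jump chain is simulated, which suffices for the
    final epidemic size. Returns the total number ever infected."""
    rng = random.Random(seed)
    n = s + i                         # total population, vaccinated included
    s = max(s - vaccinated, 0)
    total_infected = i
    while i > 0:
        rate_inf = beta * s * i / n   # next event: infection vs removal
        rate_rec = gamma * i
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            s -= 1; i += 1; total_infected += 1
        else:
            i -= 1                    # recovery/removal
    return total_infected

# Final sizes across independent outbreaks with 300 of 990 susceptibles vaccinated.
sizes = [final_size(990, 10, vaccinated=300, seed=k) for k in range(50)]
```

Repeating such runs over a grid of vaccine allocations is one way to approximate the final-size distribution that the paper obtains exactly from the master equation.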
Dynamic Resource Allocation in Disaster Response: Tradeoffs in Wildfire Suppression
Petrovic, Nada; Alderson, David L.; Carlson, Jean M.
2012-01-01
Challenges associated with the allocation of limited resources to mitigate the impact of natural disasters inspire fundamentally new theoretical questions for dynamic decision making in coupled human and natural systems. Wildfires are one of several types of disaster phenomena, including oil spills and disease epidemics, where (1) the disaster evolves on the same timescale as the response effort, and (2) delays in response can lead to increased disaster severity and thus greater demand for resources. We introduce a minimal stochastic process to represent wildfire progression that nonetheless accurately captures the heavy tailed statistical distribution of fire sizes observed in nature. We then couple this model for fire spread to a series of response models that isolate fundamental tradeoffs both in the strength and timing of response and also in division of limited resources across multiple competing suppression efforts. Using this framework, we compute optimal strategies for decision making scenarios that arise in fire response policy. PMID:22514605
Li, Yongping; Huang, Guohe
2009-03-01
In this study, a dynamic analysis approach based on an inexact multistage integer programming (IMIP) model is developed for supporting municipal solid waste (MSW) management under uncertainty. Techniques of interval-parameter programming and multistage stochastic programming are incorporated within an integer-programming framework. The developed IMIP can deal with uncertainties expressed as probability distributions and interval numbers, and can reflect the dynamics in terms of decisions for waste-flow allocation and facility-capacity expansion over a multistage context. Moreover, the IMIP can be used for analyzing various policy scenarios that are associated with different levels of economic consequences. The developed method is applied to a case study of long-term waste-management planning. The results indicate that reasonable solutions have been generated for binary and continuous variables. They can help generate desired decisions of system-capacity expansion and waste-flow allocation with a minimized system cost and maximized system reliability.
77 FR 50447 - Federal Management Regulation; Donation of Surplus Personal Property
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-21
...: Authority: 40 U.S.C. 549 and 121(c). 2. Amend Sec. 102-37.25 by alphabetically adding the definition ``Allocation'' to read as follows: Sec. 102-37.25 What definitions apply to this part? The following... removing the words ``being notified that the property is available for pickup'' and adding the words ``GSA...
Noise Enhances Action Potential Generation in Mouse Sensory Neurons via Stochastic Resonance.
Onorato, Irene; D'Alessandro, Giuseppina; Di Castro, Maria Amalia; Renzi, Massimiliano; Dobrowolny, Gabriella; Musarò, Antonio; Salvetti, Marco; Limatola, Cristina; Crisanti, Andrea; Grassi, Francesca
2016-01-01
Noise can enhance perception of tactile and proprioceptive stimuli by stochastic resonance processes. However, the mechanisms underlying this general phenomenon remain to be characterized. Here we studied how externally applied noise influences action potential firing in mouse primary sensory neurons of dorsal root ganglia, modelling a basic process in sensory perception. Since noisy mechanical stimuli may cause stochastic fluctuations in receptor potential, we examined the effects of sub-threshold depolarizing current steps with superimposed random fluctuations. We performed whole cell patch clamp recordings in cultured neurons of mouse dorsal root ganglia. Noise was added either before and during the step, or during the depolarizing step only, to focus on the specific effects of external noise on action potential generation. In both cases, step + noise stimuli triggered significantly more action potentials than steps alone. The normalized power norm had a clear peak at intermediate noise levels, demonstrating that the phenomenon is driven by stochastic resonance. Spikes evoked in step + noise trials occur earlier and show faster rise time as compared to the occasional ones elicited by steps alone. These data suggest that external noise enhances, via stochastic resonance, the recruitment of transient voltage-gated Na channels, responsible for action potential firing in response to rapid step-wise depolarizing currents. PMID:27525414
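The stochastic-resonance signature, a response that peaks at intermediate noise rather than growing or shrinking monotonically, can be reproduced with a minimal threshold detector. The sinusoidal sub-threshold drive and all parameter values below are assumptions for illustration, not the experimental protocol.

```python
import random, math

def phase_locking(noise_sd, period=100, cycles=50, threshold=1.0, amp=0.9, rng=None):
    """Drive a threshold detector with a sub-threshold sinusoid (amp < threshold)
    plus Gaussian noise, and measure how strongly spikes lock to the positive
    half-cycle (in-phase minus out-of-phase spikes, per step)."""
    rng = rng or random.Random()
    in_phase = out_phase = 0
    for t in range(period * cycles):
        s = amp * math.sin(2 * math.pi * t / period)
        if s + rng.gauss(0.0, noise_sd) >= threshold:
            if s > 0:
                in_phase += 1
            else:
                out_phase += 1
    return (in_phase - out_phase) / (period * cycles)

rng = random.Random(2)
low, mid, high = (phase_locking(sd, rng=rng) for sd in (0.02, 0.5, 5.0))
```

Too little noise produces almost no spikes and too much noise produces spikes uncorrelated with the signal; the locking measure peaks in between, the hallmark of stochastic resonance.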
Sparse Learning with Stochastic Composite Optimization.
Zhang, Weizhong; Zhang, Lijun; Jin, Zhongming; Jin, Rong; Cai, Deng; Li, Xuelong; Liang, Ronghua; He, Xiaofei
2017-06-01
In this paper, we study Stochastic Composite Optimization (SCO) for sparse learning that aims to learn a sparse solution from a composite function. Most of the recent SCO algorithms have already reached the optimal expected convergence rate O(1/(λT)), but they often fail to deliver sparse solutions at the end either due to the limited sparsity regularization during stochastic optimization (SO) or due to the limitation in online-to-batch conversion. Even when the objective function is strongly convex, their high-probability bounds can only attain O(√(log(1/δ)/T)), where δ is the failure probability, which is much worse than the expected convergence rate. To address these limitations, we propose a simple yet effective two-phase Stochastic Composite Optimization scheme that adds a novel, powerful sparse online-to-batch conversion to general Stochastic Optimization algorithms. We further develop three concrete algorithms, OptimalSL, LastSL and AverageSL, directly under our scheme to prove the effectiveness of the proposed scheme. Both the theoretical analysis and the experimental results show that our methods outperform existing methods in their ability to deliver sparse solutions, while improving the high-probability bound to approximately O(log(log(T)/δ)/(λT)).
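A generic way to obtain exact zeros from stochastic optimization, in the spirit the abstract describes, is to apply the ℓ1 proximal operator (soft-thresholding) after each stochastic gradient step. The sketch below is a standard baseline, not the paper's OptimalSL/LastSL/AverageSL algorithms, and the toy data and step sizes are assumptions.

```python
import random

def soft_threshold(w, t):
    """Prox operator of t * ||w||_1: shrinks coordinates toward zero and
    sets small ones exactly to zero, which is the source of sparsity."""
    return [max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0) for v in w]

def prox_sgd(data, lam=0.1, steps=500, lr0=0.2, rng=None):
    """Stochastic proximal gradient for min_w E[(w.x - y)^2] + lam * ||w||_1."""
    rng = rng or random.Random()
    w = [0.0] * len(data[0][0])
    for k in range(1, steps + 1):
        x, y = data[rng.randrange(len(data))]
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        lr = lr0 / k ** 0.5
        w = [wi - lr * 2.0 * err * xi for wi, xi in zip(w, x)]
        w = soft_threshold(w, lr * lam)   # l1 prox after each stochastic step
    return w

rng = random.Random(3)
xs = [[rng.gauss(0.0, 1.0) for _ in range(3)] for _ in range(200)]
data = [(x, 2.0 * x[0]) for x in xs]     # y depends only on feature 0
w = prox_sgd(data, rng=rng)
```

Because y depends only on the first feature, the prox step should drive the remaining coordinates to (near) zero while the first approaches its penalized optimum.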
A Stochastic Model of Eye Lens Growth
Šikić, Hrvoje; Shi, Yanrong; Lubura, Snježana; Bassnett, Steven
2015-01-01
The size and shape of the ocular lens must be controlled with precision if light is to be focused sharply on the retina. The lifelong growth of the lens depends on the production of cells in the anterior epithelium. At the lens equator, epithelial cells differentiate into fiber cells, which are added to the surface of the existing fiber cell mass, increasing its volume and area. We developed a stochastic model relating the rates of cell proliferation and death in various regions of the lens epithelium to deposition of fiber cells and lens growth. Epithelial population dynamics were modeled as a branching process with emigration and immigration between various proliferative zones. Numerical simulations were in agreement with empirical measurements and demonstrated that, operating within the strict confines of lens geometry, a stochastic growth engine can produce the smooth and precise growth necessary for lens function. PMID:25816743
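A minimal version of such a growth engine: epithelial cells independently divide, die, or migrate to the equator, where migrants convert into fiber cells, so the fiber mass grows monotonically. The single proliferative zone and the rate values are hypothetical simplifications of the paper's multi-zone branching model.

```python
import random

def simulate_lens(days=200, rng=None):
    """Branching sketch: each epithelial cell divides with prob p, dies with
    prob d, or migrates with prob m to the equator, where it becomes a fiber
    cell; returns final epithelial count, fiber count, and fiber history."""
    rng = rng or random.Random()
    p, d, m = 0.05, 0.01, 0.03      # hypothetical daily rates
    epithelial, fibers = 1000, 0
    history = []
    for _ in range(days):
        births = deaths = migrants = 0
        for _ in range(epithelial):
            u = rng.random()
            if u < p:
                births += 1
            elif u < p + d:
                deaths += 1
            elif u < p + d + m:
                migrants += 1
        epithelial += births - deaths - migrants
        fibers += migrants
        history.append(fibers)
    return epithelial, fibers, history

rng = random.Random(4)
epi, fib, hist = simulate_lens(rng=rng)
```

Although every event is stochastic, the cumulative fiber count is smooth and strictly non-decreasing, mirroring the paper's point that a stochastic engine can deliver precise growth.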
Marrero-Ponce, Yovani; Martínez-Albelo, Eugenio R; Casañola-Martín, Gerardo M; Castillo-Garit, Juan A; Echevería-Díaz, Yunaimy; Zaldivar, Vicente Romero; Tygat, Jan; Borges, José E Rodriguez; García-Domenech, Ramón; Torrens, Francisco; Pérez-Giménez, Facundo
2010-11-01
Novel bond-level molecular descriptors are proposed, based on linear maps similar to the ones defined in algebra theory. The kth edge-adjacency matrix (E(k)) denotes the matrix of bond linear indices (non-stochastic) with regard to the canonical basis set. The kth stochastic edge-adjacency matrix, ES(k), is here proposed as a new molecular representation easily calculated from E(k). Then, the kth stochastic bond linear indices are calculated using ES(k) as operators of linear transformations. In both cases, the bond-type formalism is developed. The kth non-stochastic and stochastic total linear indices are calculated by adding the kth non-stochastic and stochastic bond linear indices, respectively, of all bonds in the molecule. First, the new bond-based molecular descriptors (MDs) are tested for suitability for QSPRs by analyzing regressions of the novel indices against selected physicochemical properties of octane isomers (first round). The general performance of the new descriptors in these QSPR studies is evaluated with regard to the well-known sets of 2D/3D MDs. From the analysis, we can conclude that the non-stochastic and stochastic bond-based linear indices have an overall good modeling capability, proving their usefulness in QSPR studies. Later, the novel bond-level MDs are also used for the description and prediction of the boiling point of 28 alkyl-alcohols (second round), and for the modeling of the specific rate constant (log k), partition coefficient (log P), as well as the antibacterial activity of 34 derivatives of 2-furylethylenes (third round). The comparison with other approaches (edge- and vertex-based connectivity indices, total and local spectral moments, and quantum chemical descriptors as well as E-state/biomolecular encounter parameters) shows the good behavior of our method in these QSPR studies. Finally, the approach described in this study appears to be a very promising structural invariant, useful not only for QSPR studies but also for similarity/diversity analysis and drug discovery protocols.
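The two core constructions, row-normalizing the edge-adjacency matrix to obtain the stochastic ES(k) and summing bond contributions into total linear indices, can be sketched as follows. The n-butane example and the unit bond weights are illustrative assumptions, not the authors' weighting scheme.

```python
def row_stochastic(E):
    """ES: divide each row of the edge-adjacency matrix by its row sum."""
    out = []
    for row in E:
        s = sum(row)
        out.append([v / s if s else 0.0 for v in row])
    return out

def mat_vec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def bond_linear_indices(E, weights, k, stochastic=False):
    """kth (non-)stochastic bond linear indices: apply E (or ES) k times to a
    vector of bond weights; the total index is the sum over all bonds."""
    M = row_stochastic(E) if stochastic else E
    x = list(weights)
    for _ in range(k):
        x = mat_vec(M, x)
    return x, sum(x)

# n-butane has 3 C-C bonds in a path: bond 2 is adjacent to bonds 1 and 3
E = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
w = [1.0, 1.0, 1.0]                       # unit bond weights (illustrative)
ns, ns_total = bond_linear_indices(E, w, 2)
st, st_total = bond_linear_indices(E, w, 2, stochastic=True)
```

Row-normalization makes each bond's contribution a weighted average over its neighbours, so the stochastic indices stay on the scale of the weights while the non-stochastic ones grow with connectivity.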
Clarkson, Pamela M; Beachy, Christopher K
2015-12-01
We tested the hypothesis that salamanders growing at different rates would show allocation patterns that differ between males and females and between metamorphic and larval individuals. We raised individual axolotls, Ambystoma mexicanum, on four food regimes: constant high growth (throughout the experiment), constant low growth (restricted throughout the experiment), high growth switched to low growth (ad libitum switched after 140 d to restricted), and low growth switched to high growth (restricted switched after 140 d to ad libitum). Because axolotls are obligate paedomorphs, we exposed half of the salamanders to thyroid hormone to induce metamorphosis. We assayed growth and dissected and weighed gonads and fat bodies. Salamanders that were switched from the restricted to the ad libitum food regime delayed metamorphosis. In all treatment groups, females had larger gonads than males, and males had larger fat bodies than females. The association between storage and reproduction differed between larvae and metamorphs and depended on sex.
Joint optimization of regional water-power systems
NASA Astrophysics Data System (ADS)
Pereira-Cardenal, Silvio J.; Mo, Birger; Gjelsvik, Anders; Riegels, Niels D.; Arnbjerg-Nielsen, Karsten; Bauer-Gottwein, Peter
2016-06-01
Energy and water resources systems are tightly coupled; energy is needed to deliver water and water is needed to extract or produce energy. Growing pressure on these resources has raised concerns about their long-term management and highlights the need to develop integrated solutions. A method for joint optimization of water and electric power systems was developed to assess the broader interactions between water and energy systems. The proposed method includes water users and power producers in an economic optimization problem that minimizes the cost of power production and maximizes the benefits of water allocation, subject to constraints from the power and hydrological systems. The method was tested on the Iberian Peninsula using simplified models of the seven major river basins and the power market. The optimization problem was successfully solved using stochastic dual dynamic programming. The results showed that current water allocation to hydropower producers in basins with high irrigation productivity, and to irrigation users in basins with high hydropower productivity, was sub-optimal. Optimal allocation was achieved by managing reservoirs in very distinct ways, according to the local inflow, storage capacity, hydropower productivity, and irrigation demand and productivity. This highlights the importance of appropriately representing the water users' spatial distribution and marginal benefits and costs when allocating water resources optimally. The method can handle further spatial disaggregation and can be extended to include other aspects of the water-energy nexus.
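Full stochastic dual dynamic programming is beyond a short example, but the stage-wise tradeoff it resolves (release water now for hydropower or irrigation versus store it against uncertain inflow) can be sketched with a tiny discrete stochastic dynamic program. The reservoir size, benefit values, irrigation demand, and inflow distribution below are all hypothetical.

```python
def solve_reservoir(stages=12, smax=10, benefit_power=1.0, benefit_irr=1.5):
    """Discrete stochastic DP sketch (not full SDDP): each stage a random
    inflow arrives and the operator chooses a total release, served first to
    irrigation (demand 2 units, higher marginal benefit) then to hydropower.
    Backward induction over discretized storage yields value and policy."""
    inflows = [(0, 0.3), (1, 0.4), (2, 0.3)]   # (amount, probability)
    V = [0.0] * (smax + 1)                     # terminal value function
    policy = []
    for _ in range(stages):
        newV = [0.0] * (smax + 1)
        stage_policy = {}
        for s in range(smax + 1):
            best, best_r = float("-inf"), 0
            for r in range(s + 1):             # candidate total release
                irr = min(r, 2)
                power = r - irr
                reward = benefit_irr * irr + benefit_power * power
                ev = sum(p * V[min(s - r + q, smax)] for q, p in inflows)
                if reward + ev > best:
                    best, best_r = reward + ev, r
            newV[s] = best
            stage_policy[s] = best_r
        V = newV
        policy.append(stage_policy)
    return V, policy

V, policy = solve_reservoir()
```

SDDP replaces this exhaustive state enumeration with sampled forward passes and cutting-plane approximations of V, which is what makes multi-reservoir problems like the Iberian case tractable.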
Hasan, Md. Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md. Azizul
2012-01-01
The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in a time-varying environment, whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in a time-invariant situation. PMID:22629352
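The Cobb-Douglas stochastic frontier underlying this kind of study has the form ln y = β0 + βK ln K + βL ln L + v − u, where v is symmetric noise and u ≥ 0 is inefficiency, so technical efficiency is TE = exp(−u) ∈ (0, 1]. The simulation sketch below uses hypothetical coefficients and covers only the half-normal case, not the estimation procedure itself.

```python
import random, math

def simulate_frontier(n=1000, rng=None):
    """Simulate observations from a Cobb-Douglas stochastic frontier with
    half-normal inefficiency; returns observed ln(output) and the true
    technical efficiency TE = exp(-u) of each firm."""
    rng = rng or random.Random()
    b0, bk, bl = 1.0, 0.4, 0.6            # hypothetical elasticities
    obs, te = [], []
    for _ in range(n):
        lnK, lnL = rng.gauss(2.0, 0.5), rng.gauss(1.0, 0.5)
        v = rng.gauss(0.0, 0.1)           # symmetric noise
        u = abs(rng.gauss(0.0, 0.3))      # half-normal inefficiency, u >= 0
        obs.append(b0 + bk * lnK + bl * lnL + v - u)
        te.append(math.exp(-u))
    return obs, te

rng = random.Random(5)
obs, te = simulate_frontier(rng=rng)
mean_te = sum(te) / len(te)
```

In real applications u is not observed; frontier estimators recover its distribution from the skewness of the composed error v − u, which is why the half-normal versus truncated-normal assumption matters.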
Burger, Christian; Schade, Volker; Lindner, Christina; Radlinger, Lorenz; Elfering, Achim
2012-01-01
This study examined the effects of stochastic resonance whole-body vibration training on work-related musculoskeletal symptoms and accidents. Participants were white and blue-collar employees of a Swiss metal manufacturer (N=38), and participation was voluntary. The study was designed as a switching-replications longitudinal trial with randomized group allocation. The randomized controlled cross-over design consisted of two groups each given four weeks of exercise and no intervention during a second four-week period. Outcome was measured on a daily basis with questionnaires. Three components constituted musculoskeletal symptoms: musculoskeletal pain, related function limitations and musculoskeletal well-being. Accidents were assessed by ratings for balance and daily near-accidents. For statistical analysis, a mixed model was calculated. At the end of the training period musculoskeletal pain and related function limitation were significantly reduced, whereas musculoskeletal well-being had significantly increased. For function limitation and musculoskeletal well-being, change over time was linear. There was no effect on balance or near-accidents. Stochastic resonance whole-body vibration was found to be effective in the prevention of work-related musculoskeletal symptoms. It is well suited for the use in a work environment since it requires very little effort in terms of infrastructure, time and investment from participants.
Stochasticity, succession, and environmental perturbations in a fluidic ecosystem.
Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A; Hazen, Terry C; Tiedje, James M; Arkin, Adam P
2014-03-04
Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that groundwater microbial community diverged substantially away from the initial community after EVO amendment and eventually converged to a new community state, which was closely clustered with its initial state. However, their composition and structure were significantly different from each other. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contradictory to conventional wisdom, the community succession responding to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the roles of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, there are limited successional studies available to support different cases in the conceptual framework, but further well-replicated explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession.
NASA Astrophysics Data System (ADS)
Bekri, Eleni; Yannopoulos, Panayotis; Disse, Markus
2013-04-01
In the present study, a combined linear programming methodology, based on Li et al. (2010) and Bekri et al. (2012), is employed for optimizing water allocation under uncertain system conditions in the Alfeios River Basin, in Greece. The Alfeios River is a water resources system of great natural, ecological, social and economic importance for Western Greece, since it is the longest watercourse with the highest flow rate in the Peloponnisos region. Moreover, the river basin was exposed in recent decades to a plethora of environmental stresses (e.g. hydrogeological alterations, intensively irrigated agriculture, surface and groundwater overexploitation and infrastructure developments), resulting in the degradation of its quantitative and qualitative characteristics. As in most Mediterranean countries, water resource management in the Alfeios River Basin has so far focused on an essentially supply-driven approach. It is still characterized by a lack of effective operational strategies. Authority responsibility relationships are fragmented, and law enforcement and policy implementation are weak. The present regulated water allocation puzzle entails a mixture of hydropower generation, irrigation, drinking water supply and recreational activities. Under these conditions, water resources management is characterised by high uncertainty and by vague and imprecise data. The considered methodology has been developed to deal with uncertainties expressed as probability distributions or/and fuzzy boundary intervals, derived from associated α-cut levels. In this framework, a set of deterministic submodels is studied through linear programming. The ad hoc water resources management and alternative management patterns in an Alfeios subbasin are analyzed and evaluated under various scenarios using the above-mentioned methodology, aiming to promote sustainable and equitable water management. References: Li, Y.P., Huang, G.H. and Nie, S.L. (2010), Planning water resources management systems using a fuzzy-boundary interval-stochastic programming method, Advances in Water Resources, 33: 1105-1117, doi:10.1016/j.advwatres.2010.06.015. Bekri, E.S., Disse, M. and Yannopoulos, P.C. (2012), Methodological framework for correction of quick river discharge measurements using quality characteristics, Session of Environmental Hydraulics-Hydrodynamics, 2nd Common Conference of Hellenic Hydrotechnical Association and Greek Committee for Water Resources Management, 546-557 (in Greek).
Król, Magdalena Ewa
2018-01-01
We investigated the effect of auditory noise added to speech on patterns of looking at faces in 40 toddlers. We hypothesised that noise would increase the difficulty of processing speech, making children allocate more attention to the mouth of the speaker to gain visual speech cues from mouth movements. We also hypothesised that this shift would cause a decrease in fixation time to the eyes, potentially decreasing the ability to monitor gaze. We found that adding noise increased the number of fixations to the mouth area, at the price of a decreased number of fixations to the eyes. Thus, to our knowledge, this is the first study demonstrating a mouth-eyes trade-off between attention allocated to social cues coming from the eyes and linguistic cues coming from the mouth. We also found that children with higher word recognition proficiency and higher average pupil response had an increased likelihood of fixating the mouth, compared to the eyes and the rest of the screen, indicating stronger motivation to decode the speech. PMID:29558514
Online Appointment Scheduling for a Nuclear Medicine Department in a Chinese Hospital
Feng, Ya-bing
2018-01-01
Nuclear medicine, a subspecialty of radiology, plays an important role in proper diagnosis and timely treatment. Multiple resources, especially short-lived radiopharmaceuticals involved in the process of nuclear medical examination, constitute a unique problem in appointment scheduling. Aiming at achieving scientific and reasonable appointment scheduling in the West China Hospital (WCH), a typical class A tertiary hospital in China, we developed an online appointment scheduling algorithm based on an offline nonlinear integer programming model which considers multiresource allocation, the time window constraints imposed by short-lived radiopharmaceuticals, and the stochastic nature of the patient requests when scheduling patients. A series of experiments are conducted to show the effectiveness of the proposed strategy based on data provided by the WCH. The results show that the number of examinations increases by 29.76% relative to the current schedule, with significant gains in resource utilization and on-time rate. The schedule is also robust to stochastic variation in patient requests and is convenient and economical to operate. PMID:29849748
Zolfaghari, Mohammad R; Peyghaleh, Elnaz
2015-03-01
This article presents a new methodology to implement the concept of equity in regional earthquake risk mitigation programs using an optimization framework. It presents a framework that could be used by decisionmakers (government and authorities) to structure budget allocation strategy toward different seismic risk mitigation measures, i.e., structural retrofitting for different building structural types in different locations and planning horizons. A two-stage stochastic model is developed here to seek optimal mitigation measures based on minimizing mitigation expenditures, reconstruction expenditures, and especially large losses in highly seismically active countries. To consider fairness in the distribution of financial resources among different groups of people, the equity concept is incorporated using constraints in model formulation. These constraints limit inequity to the user-defined level to achieve the equity-efficiency tradeoff in the decision-making process. To present practical application of the proposed model, it is applied to a pilot area in Tehran, the capital city of Iran. Building stocks, structural vulnerability functions, and regional seismic hazard characteristics are incorporated to compile a probabilistic seismic risk model for the pilot area. Results illustrate the variation of mitigation expenditures by location and structural type for buildings. These expenditures are sensitive to the amount of available budget and equity consideration for the constant risk aversion. Most significantly, equity is more easily achieved if the budget is unlimited. Conversely, increasing equity where the budget is limited decreases the efficiency. The risk-return tradeoff, equity-reconstruction expenditures tradeoff, and variation of per-capita expected earthquake loss in different income classes are also presented. © 2015 Society for Risk Analysis.
Modeling stochastic frontier based on vine copulas
NASA Astrophysics Data System (ADS)
Constantino, Michel; Candido, Osvaldo; Tabak, Benjamin M.; da Costa, Reginaldo Brito
2017-11-01
This article models a production function and analyzes the technical efficiency of listed companies in the United States, Germany and England between 2005 and 2012 based on the vine copula approach. Traditional estimates of the stochastic frontier assume that data is multivariate normally distributed and there is no source of asymmetry. The proposed method based on vine copulas allows us to explore different types of asymmetry and multivariate distribution. Using data on product, capital and labor, we measure the relative efficiency of the vine production function and estimate the coefficient used in the stochastic frontier literature for comparison purposes. This production vine copula predicts the value added by firms with given capital and labor in a probabilistic way. It thereby stands in sharp contrast to the production function, where the output of firms is completely deterministic. The results show that, on average, S&P500 companies are more efficient than companies listed in England and Germany, which presented similar average efficiency coefficients. For comparative purposes, the traditional stochastic frontier was estimated and the results showed discrepancies between the coefficients obtained by the application of the two methods, traditional and frontier-vine, opening new paths for non-linear research.
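Vine copulas are built by stitching together bivariate pair copulas along a tree structure. As a minimal ingredient, the sketch below samples from a single Gaussian pair copula; the correlation value is an arbitrary illustration, and a real vine would combine many such pairs conditionally rather than use one in isolation.

```python
import random, math

def gaussian_copula_pair(rho, n, rng=None):
    """Sample n points (u, v) from a bivariate Gaussian copula with
    correlation rho: draw correlated normals, then map each margin to
    (0, 1) through the standard normal CDF."""
    rng = rng or random.Random()

    def phi(z):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        out.append((phi(z1), phi(z2)))
    return out

rng = random.Random(6)
uv = gaussian_copula_pair(0.8, 5000, rng=rng)
# uniform margins with positive dependence: empirical covariance of (U, V)
cov = sum((u - 0.5) * (v - 0.5) for u, v in uv) / len(uv)
```

The margins are uniform by construction, so all the dependence lives in the copula; swapping in non-Gaussian pair copulas is what lets the vine capture the asymmetries the abstract mentions.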
Stochastic simulation by image quilting of process-based geological models
NASA Astrophysics Data System (ADS)
Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef
2017-09-01
Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.
Mejlholm, Ole; Bøknæs, Niels; Dalgaard, Paw
2015-02-01
A new stochastic model for the simultaneous growth of Listeria monocytogenes and lactic acid bacteria (LAB) was developed and validated on data from naturally contaminated samples of cold-smoked Greenland halibut (CSGH) and cold-smoked salmon (CSS). During industrial processing these samples were added acetic and/or lactic acids. The stochastic model was developed from an existing deterministic model including the effect of 12 environmental parameters and microbial interaction (O. Mejlholm and P. Dalgaard, Food Microbiology, submitted for publication). Observed maximum population density (MPD) values of L. monocytogenes in naturally contaminated samples of CSGH and CSS were accurately predicted by the stochastic model based on measured variability in product characteristics and storage conditions. Results comparable to those from the stochastic model were obtained, when product characteristics of the least and most preserved sample of CSGH and CSS were used as input for the existing deterministic model. For both modelling approaches, it was shown that lag time and the effect of microbial interaction needs to be included to accurately predict MPD values of L. monocytogenes. Addition of organic acids to CSGH and CSS was confirmed as a suitable mitigation strategy against the risk of growth by L. monocytogenes as both types of products were in compliance with the EU regulation on ready-to-eat foods. Copyright © 2014 Elsevier Ltd. All rights reserved.
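The stochastic element in such models is the propagation of measured variability in product characteristics through a growth model. Below is a toy Monte Carlo version: simple log-linear growth, a shared ceiling standing in for the Jameson-type interaction in which dominant LAB halt L. monocytogenes, and all rates hypothetical rather than the authors' 12-parameter model.

```python
import random

def lm_mpd(rng=None, days=30, dt=0.1):
    """One stochastic run: log10 counts of LAB and L. monocytogenes grow at
    lot-specific rates until the LAB flora hits its maximum density, after
    which L. monocytogenes growth halts (Jameson-effect interaction).
    Returns the maximum population density (MPD) reached by L. monocytogenes."""
    rng = rng or random.Random()
    mu_lab = max(0.0, rng.gauss(0.5, 0.1))   # log10 units/day, per-lot draw
    mu_lm = max(0.0, rng.gauss(0.2, 0.05))
    lab, lm, nmax = 4.0, 1.0, 8.5            # initial log10 counts, ceiling
    t = 0.0
    while t < days and lab < nmax:
        lab += mu_lab * dt
        lm += mu_lm * dt
        t += dt
    return min(lm, nmax)

rng = random.Random(8)
mpds = [lm_mpd(rng=rng) for _ in range(1000)]
mean_mpd = sum(mpds) / len(mpds)
```

Repeating the run with parameters drawn from the measured variability yields a distribution of MPD values, which is what the stochastic model compares against naturally contaminated samples.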
Effects of intrinsic stochasticity on delayed reaction-diffusion patterning systems.
Woolley, Thomas E; Baker, Ruth E; Gaffney, Eamonn A; Maini, Philip K; Seirin-Lee, Sungrim
2012-05-01
Cellular gene expression is a complex process involving many steps, including the transcription of DNA and translation of mRNA; hence the synthesis of proteins requires a considerable amount of time, from ten minutes to several hours. Since diffusion-driven instability has been observed to be sensitive to perturbations in kinetic delays, the application of Turing patterning mechanisms to the problem of producing spatially heterogeneous differential gene expression has been questioned. In deterministic systems a small delay in the reactions can cause a large increase in the time it takes a system to pattern. Recently, it has been observed that in undelayed systems intrinsic stochasticity can cause pattern initiation to occur earlier than in the analogous deterministic simulations. Here we are interested in adding both stochasticity and delays to Turing systems in order to assess whether stochasticity can reduce the patterning time scale in delayed Turing systems. As analytical insights to this problem are difficult to attain and often limited in their use, we focus on stochastically simulating delayed systems. We consider four different Turing systems and two different forms of delay. Our results are mixed and lead to the conclusion that, although the sensitivity to delays in the Turing mechanism is not completely removed by the addition of intrinsic noise, the effects of the delays are clearly ameliorated in certain specific cases.
Robustness of the non-Markovian Alzheimer walk under stochastic perturbation
NASA Astrophysics Data System (ADS)
Cressoni, J. C.; da Silva, L. R.; Viswanathan, G. M.; da Silva, M. A. A.
2012-12-01
The elephant walk model originally proposed by Schütz and Trimper to investigate non-Markovian processes led to the investigation of a series of other random-walk models. Of these, the best known is the Alzheimer walk model, because it was the first model shown to have amnestically induced persistence, i.e., superdiffusion caused by loss of memory. Here we study the robustness of the Alzheimer walk by adding a memoryless stochastic perturbation. Surprisingly, the solution of the perturbed model can be formally reduced to the solutions of the unperturbed model. Specifically, we give an exact solution of the perturbed model by finding a surjective mapping to the unperturbed model.
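The memory-limited walk dynamics described above can be sketched in a few lines. The toy implementation below is illustrative only: the parameters `p` and `f` and the "remember only the earliest fraction of history" rule are simplifying assumptions, not the authors' exact model.

```python
import random

def alzheimer_walk(n_steps, p=0.5, f=0.3, seed=0):
    """Elephant-type walk that only remembers the first fraction f of its
    own history (an 'Alzheimer'-style truncation).  With probability p the
    walker repeats a uniformly chosen remembered step, otherwise it
    reverses it."""
    rng = random.Random(seed)
    steps = [1 if rng.random() < 0.5 else -1]   # first step is random
    pos = [steps[0]]
    for t in range(1, n_steps):
        memory = steps[:max(1, int(f * t))]      # only the early history
        recalled = rng.choice(memory)
        step = recalled if rng.random() < p else -recalled
        steps.append(step)
        pos.append(pos[-1] + step)
    return pos
```

Varying `f` toward small values is what drives the loss-of-memory (amnestic) regime in such models.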
Chauvenet, Aliénor L M; Baxter, Peter W J; McDonald-Madden, Eve; Possingham, Hugh P
2010-04-01
Money is often a limiting factor in conservation, and attempting to conserve endangered species can be costly. Consequently, a framework for optimizing fiscally constrained conservation decisions for a single species is needed. In this paper we find the optimal budget allocation among isolated subpopulations of a threatened species to minimize local extinction probability. We solve the problem using stochastic dynamic programming, derive a useful and simple alternative guideline for allocating funds, and test its performance using forward simulation. The model considers subpopulations that persist in habitat patches of differing quality, which in our model is reflected in different relationships between money invested and extinction risk. We discover that, in most cases, subpopulations that are less efficient to manage should receive more money than those that are more efficient to manage, due to higher investment needed to reduce extinction risk. Our simple investment guideline performs almost as well as the exact optimal strategy. We illustrate our approach with a case study of the management of the Sumatran tiger, Panthera tigris sumatrae, in Kerinci Seblat National Park (KSNP), Indonesia. We find that different budgets should be allocated to the separate tiger subpopulations in KSNP. The subpopulation that is not at risk of extinction does not require any management investment. Based on the combination of risks of extinction and habitat quality, the optimal allocation for these particular tiger subpopulations is an unusual case: subpopulations that occur in higher-quality habitat (more efficient to manage) should receive more funds than the remaining subpopulation that is in lower-quality habitat. Because the yearly budget allocated to the KSNP for tiger conservation is small, to guarantee the persistence of all the subpopulations that are currently under threat we need to prioritize those that are easier to save. 
When allocating resources among subpopulations of a threatened species, the combined effects of differences in habitat quality, cost of action, and current subpopulation probability of extinction need to be integrated. We provide a useful guideline for allocating resources among isolated subpopulations of any threatened species.
Optimal route discovery for soft QOS provisioning in mobile ad hoc multimedia networks
NASA Astrophysics Data System (ADS)
Huang, Lei; Pan, Feng
2007-09-01
In this paper, we propose an optimal route discovery algorithm for ad hoc multimedia networks whose resources keep changing. First, we use stochastic models to measure the network resource availability, based on information about the location and moving pattern of the nodes, as well as the link conditions between neighboring nodes. Then, for a multimedia packet flow to be transmitted from a source to a destination, we formulate the optimal soft-QoS provisioning problem as finding the best route that maximizes the probability of satisfying the desired QoS requirements in terms of maximum delay constraints. Based on the stochastic network resource model, we develop three approaches to solve the formulated problem: a centralized approach serving as the theoretical reference, a distributed approach that is more suitable for practical real-time deployment, and a distributed dynamic approach that utilizes updated time information to optimize the routing for each individual packet. Numerical results demonstrate that, using routes discovered by our distributed algorithm in a changing network environment, multimedia applications can achieve statistically better QoS.
Stochasticity, succession, and environmental perturbations in a fluidic ecosystem
Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D.; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A.; Hazen, Terry C.; Tiedje, James M.; Arkin, Adam P.
2014-01-01
Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that the groundwater microbial community diverged substantially from the initial community after EVO amendment and eventually converged to a new community state that clustered closely with the initial state, although their composition and structure were significantly different from each other. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contrary to conventional wisdom, the community succession in response to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the role of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, few successional studies are available to support the different cases in the conceptual framework, and further well-replicated explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession. PMID:24550501
Modeling the lake eutrophication stochastic ecosystem and the research of its stability.
Wang, Bo; Qi, Qianqian
2018-06-01
In reality, the lake system will be disturbed by stochastic factors, both external and internal. By adding additive noise and multiplicative noise to the right-hand side of the model equation, an additive stochastic model and a multiplicative stochastic model are established, respectively, in order to reduce model errors induced by the absence of some physical processes. For both kinds of stochastic ecosystems, the authors studied the bifurcation characteristics with the FPK equation and the Lyapunov exponent method based on the Stratonovich-Khasminskii stochastic averaging principle. Results show that, for the additive stochastic model, when the control parameter (i.e., nutrient loading rate) falls into the interval [0.388644, 0.66003825], the ecosystem is bistable and the additive noise intensity cannot make the bifurcation point drift. In the region of bistability, the external stochastic disturbance, one of the main triggers of lake eutrophication, may make the ecosystem unstable and induce a transition. When the control parameter falls into the intervals (0, 0.388644) and (0.66003825, 1.0), there exists only a single stable equilibrium state, and the additive noise intensity cannot change it. For the multiplicative stochastic model, the bifurcation behavior is more complex and the ecosystem can be disrupted by the multiplicative noise. The multiplicative noise also reduces the extent of the bistable region; ultimately, the bistable region vanishes for sufficiently large noise. Moreover, both the nutrient loading rate and the multiplicative noise can make the ecosystem undergo a regime shift. For both kinds of stochastic ecosystems, the authors also discuss the evolution of the ecological variable in detail using a four-stage Runge-Kutta method of strong order γ=1.5.
The numerical method was found to effectively illustrate regime-shift theory and agreed with the theoretical analysis. These conclusions also confirm the two paths by which the system can move from one stable state to another proposed by Beisner et al. [3], which may help in understanding the mechanism of lake eutrophication from the viewpoint of stochastic modeling and mathematical analysis. Copyright © 2018 Elsevier Inc. All rights reserved.
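As an illustration of the kind of system being analyzed, the following is a minimal Euler-Maruyama sketch of a generic bistable lake-nutrient equation with additive noise. The drift function and all parameter values are illustrative assumptions (a textbook-style nutrient-recycling model), not the equations or the strong order 1.5 scheme of the paper.

```python
import math
import random

def simulate_lake(a, b=1.0, r=1.0, m=1.0, q=8, sigma=0.05,
                  p0=0.1, dt=0.01, n=20000, seed=1):
    """Euler-Maruyama integration of a bistable lake-nutrient model
    dP = (a - b*P + r*P**q / (m**q + P**q)) dt + sigma dW  (additive noise).
    'a' plays the role of the nutrient loading rate."""
    rng = random.Random(seed)
    p = p0
    traj = [p]
    for _ in range(n):
        drift = a - b * p + r * p**q / (m**q + p**q)
        p += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        p = max(p, 0.0)          # nutrient mass cannot go negative
        traj.append(p)
    return traj
```

With `sigma = 0` the trajectory relaxes to a deterministic equilibrium; raising `sigma` lets the additive noise kick the state between basins, which is the transition mechanism the abstract discusses.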
Predator-prey model for the self-organization of stochastic oscillators in dual populations
NASA Astrophysics Data System (ADS)
Moradi, Sara; Anderson, Johan; Gürcan, Ozgür D.
2015-12-01
A predator-prey model of dual populations with stochastic oscillators is presented. A linear cross-coupling between the two populations is introduced following the coupling between the motions of a Wilberforce pendulum in two dimensions: one in the longitudinal and the other in the torsional plane. Within each population a Kuramoto-type competition between the phases is assumed. Thus, the synchronization state of the whole system is controlled by these two types of competition. The results of the numerical simulations show that adding the linear cross-coupling interactions produces predator-prey oscillations between the two populations, which results in self-regulation of the system by a transfer of synchrony between the two populations. The model represents several important features of the dynamical interplay between drift wave and zonal flow turbulence in magnetically confined plasmas, and a novel interpretation of the coupled dynamics of drift wave-zonal flow turbulence using synchronization of stochastic oscillators is discussed.
Allocations for HANDI 2000 business management system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, D.
The Data Integration 2000 Project will result in an integrated and comprehensive set of functional applications containing core information necessary to support the Project Hanford Management Contract. It is based on a Commercial-Off-The-Shelf (COTS) product solution with commercially proven business processes. The COTS product solution set, of PassPort and PeopleSoft software, supports finance, supply, chemical management/Material Safety Data Sheets, and human resources. Allocations at Fluor Daniel Hanford are burdens added to base costs using a predetermined rate.
Recent Developments in the Code RITRACKS (Relativistic Ion Tracks)
NASA Technical Reports Server (NTRS)
Plante, Ianik; Ponomarev, Artem L.; Blattnig, Steve R.
2018-01-01
The code RITRACKS (Relativistic Ion Tracks) was developed to simulate detailed stochastic radiation track structures of ions of different types and energies. Many new capabilities were added to the code in recent years. Several options were added to specify the times at which the tracks appear in the irradiated volume, allowing the simulation of dose-rate effects. The code has been used to simulate energy deposition in several targets: spherical, ellipsoidal and cylindrical. More recently, density changes as well as a spherical shell were implemented for spherical targets, in order to simulate energy deposition in walled tissue equivalent proportional counters. RITRACKS is used as a part of the new program BDSTracks (Biological Damage by Stochastic Tracks) to simulate several types of chromosome aberrations in various irradiation conditions. The simulation of damage to various DNA structures (linear and chromatin fiber) by direct and indirect effects has been improved and is ongoing. Many improvements were also made to the graphical user interface (GUI), including the addition of several labels allowing changes of units. A new GUI has been added to display the electron ejection vectors. The parallel calculation capabilities, notably the pre- and post-simulation processing on Windows and Linux machines, have been reviewed to make them more portable between different systems. The calculation part is currently maintained in an Atlassian Stash® repository for code tracking and possibly future collaboration.
Inverse Statistics and Asset Allocation Efficiency
NASA Astrophysics Data System (ADS)
Bolgorian, Meysam
In this paper, using inverse statistics analysis, the effect of investment horizon on the efficiency of portfolio selection is examined. Inverse statistics analysis is a general tool, also known as the probability distribution of exit time, that is used for detecting the distribution of the time at which a stochastic process exits from a zone. This analysis was used in Refs. 1 and 2 for studying financial return time series. The distribution provides an optimal investment horizon, which determines the most likely horizon for gaining a specific return. Using samples of stocks from the Tehran Stock Exchange (TSE) as an emerging market and the S&P 500 as a developed market, the effect of the optimal investment horizon on asset allocation is assessed. It is found that taking the optimal investment horizon into account in the TSE leads to greater efficiency for large portfolios, while for stocks selected from the S&P 500, regardless of portfolio size, this strategy fails to produce more efficient portfolios; instead, longer investment horizons provide greater efficiency.
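The exit-time ("inverse statistics") distribution described above can be estimated from a log-return series with a simple brute-force pass: for every starting time, count the steps until the cumulative return first crosses the target level. This is a generic sketch, not the authors' code.

```python
def exit_times(log_returns, rho):
    """First-passage ('inverse statistics') times: for every starting index i,
    the number of steps until the cumulative log-return first reaches rho.
    Starts that never reach rho contribute nothing."""
    times = []
    n = len(log_returns)
    for i in range(n):
        total = 0.0
        for j in range(i, n):
            total += log_returns[j]
            if total >= rho:
                times.append(j - i + 1)
                break
    return times
```

A histogram of the returned times approximates the exit-time distribution; the location of its mode is the optimal investment horizon for return level `rho`.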
A Hierarchical Auction-Based Mechanism for Real-Time Resource Allocation in Cloud Robotic Systems.
Wang, Lujia; Liu, Ming; Meng, Max Q-H
2017-02-01
Cloud computing enables users to share computing resources on-demand. The cloud computing framework cannot be directly mapped to cloud robotic systems with ad hoc networks since cloud robotic systems have additional constraints such as limited bandwidth and dynamic structure. However, most multirobotic applications with cooperative control adopt this decentralized approach to avoid a single point of failure. Robots need to continuously update intensive data to execute tasks in a coordinated manner, which implies real-time requirements. Thus, a resource allocation strategy is required, especially in such resource-constrained environments. This paper proposes a hierarchical auction-based mechanism, namely link quality matrix (LQM) auction, which is suitable for ad hoc networks by introducing a link quality indicator. The proposed algorithm produces a fast and robust method that is accurate and scalable. It reduces both global communication and unnecessary repeated computation. The proposed method is designed for firm real-time resource retrieval for physical multirobot systems. A joint surveillance scenario empirically validates the proposed mechanism by assessing several practical metrics. The results show that the proposed LQM auction outperforms state-of-the-art algorithms for resource allocation.
Pseudo-random dynamic address configuration (PRDAC) algorithm for mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Wu, Shaochuan; Tan, Xuezhi
2007-11-01
By analyzing various address configuration algorithms, this paper provides a new pseudo-random dynamic address configuration (PRDAC) algorithm for mobile ad hoc networks. In PRDAC, the first node that initializes the network randomly chooses a nonlinear shift register that generates an m-sequence. When another node joins the network, the initial node acts as an IP address configuration server: it computes an IP address from this nonlinear shift register, allocates the address, and tells the new node the generator polynomial of the shift register. By this means, any node that has obtained an IP address can act as a server and allocate addresses to subsequently joining nodes. PRDAC can also efficiently avoid IP conflicts and handle network partition and merging, as prophet address (PA) allocation and the dynamic configuration and distribution protocol (DCDP) do. Furthermore, PRDAC has lower algorithmic and computational complexity and relies on milder assumptions than PA. In addition, PRDAC radically avoids address conflicts and maximizes the utilization rate of IP addresses. Analysis and simulation results show that PRDAC has rapid convergence, low overhead, and is insensitive to the topological structure.
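To illustrate the m-sequence machinery underlying such address generation, the following is a hedged sketch of a 4-bit Fibonacci LFSR for the primitive polynomial x^4 + x^3 + 1. It cycles through all 2^4 - 1 nonzero states exactly once, so mapping each state to a host address (a hypothetical step, not the paper's exact protocol) yields conflict-free assignments until the sequence wraps.

```python
def lfsr_states(poly_taps=(4, 3), seed=0b1000):
    """Fibonacci LFSR over GF(2) for the primitive polynomial x^4 + x^3 + 1
    (taps 4 and 3).  Returns the full cycle of nonzero 4-bit states, in
    order; seed must be nonzero.  An address server could map each state
    to the host part of an IP address (illustrative usage only)."""
    width = max(poly_taps)
    mask = (1 << width) - 1
    state = seed & mask
    states = []
    for _ in range(mask):                      # m-sequence period: 2^n - 1
        states.append(state)
        bit = 0
        for t in poly_taps:                    # feedback = XOR of tap bits
            bit ^= (state >> (t - 1)) & 1
        state = ((state << 1) | bit) & mask
    return states
```

For a real deployment a much wider register would be used; the 4-bit case just makes the full period (15 distinct states) easy to inspect.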
Stochastic resonance therapy induces increased movement related caudate nucleus activity.
Kaut, Oliver; Becker, Benjamin; Schneider, Christine; Zhou, Feng; Fliessbach, Klaus; Hurlemann, René; Wüllner, Ullrich
2016-10-12
Whole-body vibration can be used to supplement canonical physical treatment. It is performed while probands stand on a vibrating platform. Therapeutic vibration can be generated as a stochastic vibratory pattern, referred to as stochastic resonance whole-body vibration (SR-WBV). Despite the widespread use of SR-WBV, its neurophysiological mechanism is unclear. A randomized sham-controlled double-blinded trial was performed as a pilot study. The experimental group received 6 cycles of SR-WBV at a frequency of 7 Hz with the SR-Zeptor device, and the sham group received the same treatment at a frequency of 1 Hz. At baseline, 1.5 T functional magnetic resonance imaging (fMRI) was performed in the resting state, together with a finger/foot tapping test. A second fMRI was carried out after SR-WBV as sham treatment in both groups. Subsequently, a second cycle of SR-WBV was performed as sham or verum with consecutive fMRI, followed by a final fMRI on day 2. Nineteen healthy volunteers were allocated to either the experimental or the sham group. Analyses of specific effects revealed a significant treatment × time interaction effect (p < 0.05, small-volume corrected (SVC FWE-corrected)) in the left caudate nucleus during intermediate difficulty when comparing pre- vs post-SR-WBV treatment in the verum group. This proof-of-concept study suggests the existence of cerebral effects of SR-WBV.
Space-Time Processing for Tactical Mobile Ad Hoc Networks
2007-08-01
rates in mobile ad hoc networks. In addition, he has considered the design of a cross-layer multi-user resource allocation framework using a... framework for many-to-one communication. In this context, multiple nodes cooperate to transmit their packets simultaneously to a single node using multi...spatially multiplexed signals transmitted from multiple nodes. Our goal is to form a framework that activates different sets of communication links
Contrarian behavior in a complex adaptive system
NASA Astrophysics Data System (ADS)
Liang, Y.; An, K. N.; Yang, G.; Huang, J. P.
2013-01-01
Contrarian behavior is a kind of self-organization in complex adaptive systems (CASs). Here we report the existence of a transition point in a model resource-allocation CAS with contrarian behavior by using human experiments, computer simulations, and theoretical analysis. The resource ratio and system predictability serve as the tuning parameter and order parameter, respectively. The transition point helps to reveal the positive or negative role of contrarian behavior. This finding is in contrast to the common belief that contrarian behavior always plays a positive role in resource allocation, e.g., stabilizing resource allocation by reducing the oversupply or shortage of resources. It is further shown that resource allocation can be optimized at the transition point by adding an appropriate number of contrarians. This work is also expected to be of value to other fields ranging from management and social science to ecology and evolution.
An 'adding' algorithm for the Markov chain formalism for radiation transfer
NASA Technical Reports Server (NTRS)
Esposito, L. W.
1979-01-01
An adding algorithm is presented, that extends the Markov chain method and considers a preceding calculation as a single state of a new Markov chain. This method takes advantage of the description of the radiation transport as a stochastic process. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. It is determined that the time required for the algorithm is comparable to that for a doubling calculation for homogeneous atmospheres. For an inhomogeneous atmosphere the new method is considerably faster than the standard adding routine. It is concluded that the algorithm is efficient, accurate, and suitable for smaller computers in calculating the diffuse intensity scattered by an inhomogeneous planetary atmosphere.
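The scalar version of such adding calculations can be illustrated as follows. This sketch uses the standard reflectance/transmittance adding formulas for symmetric, homogeneous layers (summing the geometric series of inter-layer bounces), not the Markov chain formulation of the paper; it shows why "doubling" reaches any optical depth without growing the linear system.

```python
def add_layers(r1, t1, r2, t2):
    """Scalar 'adding' equations: combine two layers with reflectances r
    and transmittances t.  The factor 1/(1 - r1*r2) sums the infinite
    series of multiple reflections trapped between the layers."""
    denom = 1.0 - r1 * r2
    r = r1 + t1 * t1 * r2 / denom    # assumes symmetric layers (t same both ways)
    t = t1 * t2 / denom
    return r, t

def double_layer(r, t, n_doublings):
    """Repeatedly combine a layer with itself: optical depth doubles each
    step, so n_doublings steps cover a 2**n_doublings-fold thicker slab."""
    for _ in range(n_doublings):
        r, t = add_layers(r, t, r, t)
    return r, t
```

For lossless layers (r + t = 1) the combination conserves energy exactly, which is a quick sanity check on the formulas.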
Linear Quadratic Tracking Design for a Generic Transport Aircraft with Structural Load Constraints
NASA Technical Reports Server (NTRS)
Burken, John J.; Frost, Susan A.; Taylor, Brian R.
2011-01-01
When designing control laws for systems with constraints added to the tracking performance, control allocation methods can be utilized. Control allocation methods are used when there are more command inputs than controlled variables. Constraints that require allocators include surface saturation limits, structural load limits, drag-reduction constraints, and actuator failures. Most transport aircraft have many actuated surfaces compared to the three controlled variables (such as angle of attack, roll rate, and sideslip angle). To distribute the control effort among the redundant set of actuators, either a fixed-mixer approach or online control allocation techniques can be utilized. The benefit of an online allocator is that constraints can be considered in the design, whereas the fixed mixer cannot. However, an online control allocator has the disadvantage of not guaranteeing a surface schedule, which can then produce ill-defined loads on the aircraft. The load uncertainty and complexity have prevented some controller designs from using advanced allocation techniques. This paper considers actuator redundancy management for a class of over-actuated systems with real-time structural load limits, using linear quadratic tracking applied to the generic transport model. A roll-maneuver example with an artificial load-limit constraint is shown and compared to the same maneuver without the load limitation.
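A common baseline for online control allocation, distinct from the linear quadratic tracking scheme of the paper, is the redistributed pseudo-inverse: solve for the surface commands in a least-squares sense, clip any that saturate, and re-solve with the remaining surfaces. A hedged sketch follows; the effectiveness matrix `B`, the limits, and the iteration count are all illustrative.

```python
import numpy as np

def allocate(B, v, u_min, u_max, n_iter=20):
    """Redistributed pseudo-inverse allocation: find surface commands u
    with B @ u ≈ v (desired moments), respecting box limits [u_min, u_max].
    Saturated surfaces are frozen at their limit and the residual demand is
    redistributed over the remaining free surfaces."""
    m = B.shape[1]
    free = np.ones(m, dtype=bool)
    u = np.zeros(m)
    for _ in range(n_iter):
        resid = v - B[:, ~free] @ u[~free]       # demand left for free surfaces
        u[free] = np.linalg.pinv(B[:, free]) @ resid
        sat = (u < u_min) | (u > u_max)
        u = np.clip(u, u_min, u_max)
        newly = sat & free
        if not newly.any():
            break
        free &= ~sat                             # freeze saturated surfaces
    return u
```

When the demand is feasible, the pseudo-inverse spreads it evenly over redundant surfaces; when it is not, all surfaces end up at their limits and the achieved moment is the best available.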
NASA Astrophysics Data System (ADS)
Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Engelund Holm, Peter; Trapp, Stefan; Rosbjerg, Dan; Bauer-Gottwein, Peter
2015-04-01
Few studies address water quality in hydro-economic models, which often focus primarily on optimal allocation of water quantities. Water quality and water quantity are closely coupled, and optimal management with focus solely on either quantity or quality may cause large costs in terms of the other component. In this study, we couple water quality and water quantity in a joint hydro-economic catchment-scale optimization problem. Stochastic dynamic programming (SDP) is used to minimize the basin-wide total costs arising from water allocation, water curtailment and water treatment. The simple water quality module can handle conservative pollutants, first order depletion and non-linear reactions. For demonstration purposes, we model pollutant releases as biochemical oxygen demand (BOD) and use the Streeter-Phelps equation for oxygen deficit to compute the resulting minimum dissolved oxygen concentrations. Inelastic water demands, fixed water allocation curtailment costs and fixed wastewater treatment costs (before and after use) are estimated for the water users (agriculture, industry and domestic). If the BOD concentration exceeds a given user pollution threshold, the user will need to pay for pre-treatment of the water before use. Similarly, treatment of the return flow can reduce the BOD load to the river. A traditional SDP approach is used to solve one-step-ahead sub-problems for all combinations of discrete reservoir storage, Markov Chain inflow classes and monthly time steps. Pollution concentration nodes are introduced for each user group, and untreated return flow from the users contributes to increased BOD concentrations in the river. The pollutant concentrations in each node depend on multiple decision variables (allocation and wastewater treatment), rendering the objective function non-linear.
Therefore, the pollution concentration decisions are outsourced to a genetic algorithm, which calls a linear program to determine the remainder of the decision variables. This hybrid formulation keeps the optimization problem computationally feasible and represents a flexible and customizable method. The method has been applied to the Ziya River basin, an economic hotspot located on the North China Plain in Northern China. The basin is subject to severe water scarcity, and the rivers are heavily polluted with wastewater and nutrients from diffuse sources. The coupled hydro-economic optimization model can be used to assess the costs of meeting additional constraints such as minimum water quality, or to prioritize investments in wastewater treatment facilities based on economic criteria.
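The Streeter-Phelps oxygen-deficit relation used in such water-quality modules has the classical closed form sketched below. Symbols follow the usual textbook convention (initial BOD `L0`, initial deficit `D0`, deoxygenation rate `kd`, reaeration rate `kr`); the parameter values in the usage note are illustrative, not the basin's.

```python
import math

def streeter_phelps(L0, D0, kd, kr, t):
    """Streeter-Phelps oxygen-sag curve: dissolved-oxygen deficit D(t)
    downstream of a BOD load, D(t) = kd*L0/(kr-kd) * (e^{-kd t} - e^{-kr t})
                                       + D0 * e^{-kr t},  for kr != kd."""
    return (kd * L0 / (kr - kd)) * (math.exp(-kd * t) - math.exp(-kr * t)) \
           + D0 * math.exp(-kr * t)
```

For example, with `L0=10`, `D0=1`, `kd=0.3`, `kr=0.6` the deficit first grows above its initial value (the "sag") and then decays back toward zero as reaeration wins; the minimum dissolved-oxygen concentration is the saturation value minus the peak deficit.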
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yuyang; Zhang, Qichun; Wang, Hong
To enhance the tracking performance, this paper presents a novel control algorithm for a class of linear dynamic stochastic systems with unmeasurable states, where the performance enhancement loop is established based on a Kalman filter. Without changing the existing closed loop with the PI controller, the compensative controller is designed to minimize the variances of the tracking errors using the estimated states and the propagation of state variances. Moreover, the stability of the closed-loop system has been analyzed in the mean-square sense. A simulated example is included to show the effectiveness of the presented control algorithm, where encouraging results have been obtained.
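As background for the state-estimation step in such a loop, a minimal scalar Kalman filter for a random-walk state observed in noise can be sketched as follows. This generic filter is illustrative only, not the paper's multivariable design; the noise variances `q` and `r` are assumed parameters.

```python
def kalman_1d(measurements, x0=0.0, p0=1.0, q=0.01, r=1.0):
    """Scalar Kalman filter for the model
    x_k = x_{k-1} + w_k (process noise var q), z_k = x_k + v_k (var r).
    Returns the state estimate after each measurement."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                      # predict: variance grows by q
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the innovation z - x
        p = (1.0 - k) * p              # posterior variance shrinks
        estimates.append(x)
    return estimates
```

Feeding the filtered estimate (rather than the raw noisy measurement) to a compensating controller is the basic idea behind estimation-based performance-enhancement loops.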
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) Bayesian model averaging or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions; the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on elliptic stochastic partial differential equations in 1D, 14D and 40D random spaces.
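The basic gPC regression step, recovering coefficients of probabilists' Hermite polynomials from random samples, can be sketched as follows. This uses plain least squares, without the Bayesian model-uncertainty and sparsity machinery of the paper; it is exactly the setting where over-fitting appears once the number of bases exceeds the number of samples.

```python
import numpy as np

def fit_gpc(samples, values, order):
    """Least-squares recovery of probabilists'-Hermite gPC coefficients:
    f(xi) ≈ sum_k c_k He_k(xi) for standard-normal input xi."""
    V = np.polynomial.hermite_e.hermevander(samples, order)  # columns He_0..He_order
    coeffs, *_ = np.linalg.lstsq(V, values, rcond=None)
    return coeffs

# xi^2 = He_2(xi) + He_0(xi) exactly, so the fit recovers [1, 0, 1, 0]
rng = np.random.default_rng(0)
xi = rng.standard_normal(200)
c = fit_gpc(xi, xi**2, order=3)
```

Because the target lies exactly in the span of the basis and there are far more samples than coefficients, the least-squares fit is exact here; the Bayesian machinery of the paper addresses the opposite, under-sampled regime.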
Data Analysis Approaches for the Risk-Informed Safety Margins Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Alfonsi, Andrea; Maljovec, Daniel P.
2016-09-01
In the past decades, several numerical simulation codes have been employed to simulate accident dynamics (e.g., RELAP5-3D, RELAP-7, MELCOR, MAAP). In order to evaluate the impact of uncertainties on accident dynamics, several stochastic methodologies have been coupled with these codes. These stochastic methods range from classical Monte-Carlo and Latin Hypercube sampling to stochastic polynomial methods. Similar approaches have been introduced into the risk and safety community, where stochastic methods (such as RAVEN, ADAPT, MCDET, ADS) have been coupled with safety analysis codes in order to evaluate the safety impact of the timing and sequencing of events. These approaches are usually called Dynamic PRA or simulation-based PRA methods. These uncertainty and safety methods usually generate a large number of simulation runs (database storage may be on the order of gigabytes or higher). The scope of this paper is to present a broad overview of methods and algorithms that can be used to analyze and extract information from large data sets containing time-dependent data. In this context, “extracting information” means constructing input-output correlations, finding commonalities, and identifying outliers. Some of the algorithms presented here have been developed or are under development within the RAVEN statistical framework.
Analysis of the Space Propulsion System Problem Using RAVEN
DOE Office of Scientific and Technical Information (OSTI.GOV)
diego mandelli; curtis smith; cristian rabiti
This paper presents the solution of the space propulsion problem using a PRA code currently under development at Idaho National Laboratory (INL). RAVEN (Reactor Analysis and Virtual control ENvironment) is a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities. It is designed to derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures) and to perform both Monte-Carlo sampling of randomly distributed events and Event Tree based analysis. In order to facilitate input/output handling, a Graphical User Interface (GUI) and a post-processing data-mining module are available. RAVEN can also interface with several numerical codes such as RELAP5 and RELAP-7, as well as ad-hoc system simulators. For the space propulsion system problem, an ad-hoc simulator was developed in the Python language and interfaced to RAVEN. This simulator fully models both deterministic behavior (e.g., system dynamics and interactions between system components) and stochastic behavior (i.e., failures of components/systems such as distribution lines and thrusters). Stochastic analysis is performed using random-sampling-based methodologies (i.e., Monte-Carlo). This analysis is used to determine the reliability of the space propulsion system and to propagate the uncertainties associated with a specific set of parameters. As indicated in the scope of the benchmark problem, the results generated by the stochastic analysis are used to produce risk-informed insights such as the conditions under which different strategies can be followed.
NASA Astrophysics Data System (ADS)
Razurel, Pierre; Niayifar, Amin; Perona, Paolo
2017-04-01
Hydropower plays an important role in supplying worldwide energy demand, contributing approximately 16% of global electricity production. Although hydropower, as an emission-free renewable energy, is a reliable source of energy to mitigate climate change, its development will increase river exploitation. The environmental impacts associated with both small hydropower plants (SHP) and traditional dammed systems have been found to be a consequence of replacing the natural flow regime with other release policies, e.g., a minimal flow. Nowadays, in some countries, proportional allocation rules are also applied, aiming to mimic natural flow variability. For example, these dynamic rules are part of the environmental guidance in the United Kingdom and constitute an improvement over static rules. In a context in which the full hydropower potential might be reached in the near future, a solution to optimize water allocation seems essential. In this work, we present a model that simulates a wide range of water allocation rules (static and dynamic) for a specific hydropower plant and evaluates their associated economic and ecological benefits. It is developed in the form of a graphical user interface (GUI) where, depending on the specific type of hydropower plant (i.e., SHP or traditional dammed system), the user can specify the characteristics (e.g., hydrological data and turbine characteristics) of the studied system. As an alternative to commonly used policies, a new class of dynamic allocation functions (non-proportional repartition rules) is introduced (e.g., Razurel et al., 2016). The efficiency plot resulting from the simulations shows the environmental indicator and the energy produced for each allocation policy.
The optimal water distribution rules can be identified on the Pareto frontier, which is obtained by stochastic optimization in the case of storage systems (e.g., Niayifar and Perona, submitted) and by direct simulation for small hydropower plants (Razurel et al., 2016). Compared to proportional and constant minimal flows, economic and ecological efficiencies are found to be substantially improved when non-proportional water allocation rules are used for both SHP and traditional systems.
Validation of ACG Case-mix for equitable resource allocation in Swedish primary health care.
Zielinski, Andrzej; Kronogård, Maria; Lenhoff, Håkan; Halling, Anders
2009-09-18
Adequate resource allocation is an important factor in ensuring equity in health care. Previous reimbursement models have been based on age, gender and socioeconomic factors. An explanatory model based on individual need of primary health care (PHC) has not yet been used in Sweden to allocate resources. The aim of this study was to examine to what extent the ACG case-mix system could explain concurrent costs in Swedish PHC. Diagnoses were obtained from electronic PHC records of inhabitants in Blekinge County (approx. 150,000) listed with public PHC (approx. 120,000) for three consecutive years, 2004-2006. The inhabitants were then classified into six different resource utilization bands (RUB) using the ACG case-mix system. The mean costs for primary health care were calculated for each RUB and year. Using linear regression models with log-cost as the dependent variable, the adjusted R2 was calculated in the unadjusted model (gender) and in consecutive models where age, listing with a specific PHC and RUB were added. In an additional model the ACG groups were added. Gender, age and listing with a specific PHC explained 14.48-14.88% of the variance in individual costs for PHC. Adding information on the level of co-morbidity, as measured by the ACG case-mix system, increased the adjusted R2 to 60.89-63.41%. The ACG case-mix system explains patient costs in primary care to a high degree. Age and gender are important explanatory factors, but most of the variance in concurrent patient costs was explained by the ACG case-mix system.
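The nested-model comparison described above (demographics first, then adding a case-mix grouping) can be illustrated with a small sketch. The data, group definitions and effect sizes below are entirely invented; the point is only the mechanics of comparing adjusted R2 between a model with demographics alone and one that also carries a morbidity grouping:

```python
# Illustrative sketch (synthetic data, not the study's records): adding a
# case-mix grouping to a regression on log-costs raises adjusted R-squared.
import random

def r2_adjusted(y, yhat, n_params):
    # Adjusted R2 = 1 - (1 - R2) * (n - 1) / (n - p - 1).
    n = len(y)
    ybar = sum(y) / n
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_params - 1)

def group_mean_fit(y, keys):
    # Predict each observation by its group mean (equivalent to a
    # regression on group indicator variables); p = #groups - 1.
    from collections import defaultdict
    sums, counts = defaultdict(float), defaultdict(int)
    for k, v in zip(keys, y):
        sums[k] += v
        counts[k] += 1
    means = {k: sums[k] / counts[k] for k in sums}
    return [means[k] for k in keys], len(means) - 1

random.seed(1)
# Hypothetical patients: a resource utilization band (RUB) drives log-cost
# much more strongly than age does.
rub = [random.randint(0, 5) for _ in range(500)]
age = [random.randint(0, 90) for _ in range(500)]
logcost = [0.02 * a + 1.5 * r + random.gauss(0, 1) for a, r in zip(age, rub)]

yhat1, p1 = group_mean_fit(logcost, [a // 10 for a in age])                 # age only
yhat2, p2 = group_mean_fit(logcost, [(a // 10, r) for a, r in zip(age, rub)])  # age + RUB
```

On synthetic data of this shape, the age-only model explains little of the variance while the grouped model recovers most of it, mirroring the jump reported in the abstract.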
1978-04-15
12. (Part 2 of 2) 70 B1 Calculate Revised Allocation Error Estimates For Each Attribute Category. Skip Change of Multipliers - No - Do For All A...passing on to the next target, the current value of the target weight is revised. After every two to four targets, the Lagrange multipliers are...delete a weapon, a new set of variables is delivered by WADOUT, and STALL uses this revised information to decide whether more weapons should be added
Optimal Financial Knowledge and Wealth Inequality*
Lusardi, Annamaria; Michaud, Pierre-Carl; Mitchell, Olivia S.
2017-01-01
We show that financial knowledge is a key determinant of wealth inequality in a stochastic lifecycle model with endogenous financial knowledge accumulation, where financial knowledge enables individuals to better allocate lifetime resources in a world of uncertainty and imperfect insurance. Moreover, because of how the U.S. social insurance system works, better-educated individuals have most to gain from investing in financial knowledge. Our parsimonious specification generates substantial wealth inequality relative to a one-asset saving model and one where returns on wealth depend on portfolio composition alone. We estimate that 30–40 percent of retirement wealth inequality is accounted for by financial knowledge. PMID:28555088
White, K.P.; Langley, J.A.; Cahoon, D.R.; Megonigal, J.P.
2012-01-01
Plants alter biomass allocation to optimize resource capture. Plant strategy for resource capture may have important implications in intertidal marshes, where soil nitrogen (N) levels and atmospheric carbon dioxide (CO2) are changing. We conducted a factorial manipulation of atmospheric CO2 (ambient and ambient + 340 ppm) and soil N (ambient and ambient + 25 g m-2 year-1) in an intertidal marsh composed of common North Atlantic C3 and C4 species. Estimation of C3 stem turnover was used to adjust aboveground C3 productivity, and fine root productivity was partitioned into C3-C4 functional groups by isotopic analysis. The results suggest that the plants follow resource capture theory. The C3 species increased aboveground productivity under the combined added N and elevated CO2 treatment beyond the response to elevated CO2 alone. C3 fine root production decreased with added N but increased under elevated CO2 (P = 0.0481). The C4 species increased growth under high N availability both above- and belowground, but that stimulation was diminished under elevated CO2. The results suggest that the marsh vegetation allocates biomass according to resource capture at the individual plant level rather than for optimal ecosystem viability with regard to biomass influence over the processes that maintain soil surface elevation in equilibrium with sea level.
multiUQ: An intrusive uncertainty quantification tool for gas-liquid multiphase flows
NASA Astrophysics Data System (ADS)
Turnquist, Brian; Owkes, Mark
2017-11-01
Uncertainty quantification (UQ) can improve our understanding of the sensitivity of gas-liquid multiphase flows to variability about inflow conditions and fluid properties, creating a valuable tool for engineers. While non-intrusive UQ methods (e.g., Monte Carlo) are simple and robust, the cost associated with these techniques can render them unrealistic. In contrast, intrusive UQ techniques modify the governing equations by replacing deterministic variables with stochastic variables, adding complexity, but making UQ cost effective. Our numerical framework, called multiUQ, introduces an intrusive UQ approach for gas-liquid flows, leveraging a polynomial chaos expansion of the stochastic variables: density, momentum, pressure, viscosity, and surface tension. The gas-liquid interface is captured using a conservative level set approach, including a modified reinitialization equation which is robust and quadrature free. A least-squares method is leveraged to compute the stochastic interface normal and curvature needed in the continuum surface force method for surface tension. The solver is tested by applying uncertainty to one or two variables and verifying results against the Monte Carlo approach. NSF Grant #1511325.
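The intrusive approach the abstract describes rests on polynomial chaos expansions. As a minimal one-dimensional sketch (not the multiUQ solver itself), a stochastic quantity can be expanded in probabilists' Hermite polynomials of a standard normal variable; the mean and variance then follow directly from the coefficients, which is what makes the intrusive route cheaper than sampling. The coefficients below are arbitrary example values:

```python
# One-dimensional Hermite polynomial chaos sketch:
#   u(xi) = sum_k c_k He_k(xi),  xi ~ N(0, 1).
# By orthogonality, mean = c_0 and variance = sum_{k>=1} c_k^2 * k!.
import math
import random

def hermite_he(k, x):
    # Probabilists' Hermite polynomials via the recurrence
    # He_{k+1}(x) = x He_k(x) - k He_{k-1}(x).
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for n in range(1, k):
        h0, h1 = h1, x * h1 - n * h0
    return h1

coeffs = [2.0, 0.5, 0.25]   # assumed example coefficients c_0, c_1, c_2
mean_exact = coeffs[0]
var_exact = sum(c * c * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)

# Cross-check against non-intrusive Monte Carlo sampling of the same expansion.
random.seed(0)
samples = []
for _ in range(100_000):
    xi = random.gauss(0, 1)
    samples.append(sum(c * hermite_he(k, xi) for k, c in enumerate(coeffs)))
mc_mean = sum(samples) / len(samples)
mc_var = sum((s - mc_mean) ** 2 for s in samples) / len(samples)
```

The Monte Carlo estimates converge to the closed-form moments, while the expansion gives them for free; that cost gap is the motivation for intrusive UQ.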
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel
2016-04-01
This contribution presents a methodology for defining optimal seasonal operating rules in multireservoir systems by coupling expert criteria and stochastic optimization. Both sources of information are combined using fuzzy logic. The structure of the operating rules is defined based on expert criteria, via a joint expert-technician framework consisting of a series of meetings, workshops and surveys carried out between reservoir managers and modelers. As a result, the decision-making process used by managers can be assessed and expressed using fuzzy logic: fuzzy rule-based systems are employed to represent the operating rules, and fuzzy regression procedures are used for forecasting future inflows. Once this is done, a stochastic optimization algorithm can be used to define optimal decisions and transform them into fuzzy rules. Finally, the optimal fuzzy rules and the inflow prediction scheme are combined into a Decision Support System for making seasonal forecasts and simulating the effect of different alternatives in response to the initial system state and the foreseen inflows. The approach presented has been applied to the Jucar River Basin (Spain). Reservoir managers explained how the system is operated, taking into account the reservoirs' states at the beginning of the irrigation season and the inflows forecast for that season. According to the information given by them, the Jucar River Basin operating policies were expressed via two fuzzy rule-based (FRB) systems that estimate the amount of water to be allocated to the users and how the reservoir storages should be balanced to guarantee those deliveries. A stochastic optimization model using Stochastic Dual Dynamic Programming (SDDP) was developed to define optimal decisions, which are transformed into optimal operating rules by embedding them into the two FRBs previously created. As a benchmark, historical records are used to develop alternative operating rules.
A fuzzy linear regression procedure was employed to forecast future inflows from the present and past hydrological and meteorological variables actually used by the reservoir managers to define likely inflow scenarios. A Decision Support System (DSS) was created coupling the FRB systems and the inflow prediction scheme in order to give the user a set of possible optimal releases in response to the reservoir states at the beginning of the irrigation season and the fuzzy inflow projections made using hydrological and meteorological information. The results show that the DSS created using the optimal FRB operating policies is able to increase the amount of water allocated to the users by 20 to 50 Mm3 per irrigation season with respect to the current policies. Consequently, the mechanism used to define optimal operating rules and transform them into a DSS is able to increase water deliveries in the Jucar River Basin, combining expert criteria and optimization algorithms in an efficient way. This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) and FEDER funds. It also has received funding from the European Union's Horizon 2020 research and innovation programme under the IMPREX project (grant agreement no: 641.811).
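As an illustration of what a fuzzy rule-based operating rule looks like, the sketch below uses triangular membership functions and a weighted-average defuzzification. The storage classes, breakpoints and release values are invented for illustration and are not the Jucar River Basin FRB systems:

```python
# Hypothetical fuzzy rule-based (FRB) operating rule: fuzzify the
# start-of-season storage, fire "if storage is LOW/MEDIUM/HIGH then
# release R" rules, and defuzzify with a weighted average.

def tri(x, a, b, c):
    # Triangular membership function with support (a, c) and peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def release_rule(storage):
    # (membership weight, consequent release in hm3) for each rule.
    rules = [
        (tri(storage, -1, 0, 300), 50.0),      # LOW storage  -> small release
        (tri(storage, 100, 300, 500), 120.0),  # MEDIUM       -> normal release
        (tri(storage, 300, 600, 601), 200.0),  # HIGH         -> large release
    ]
    num = sum(w * r for w, r in rules)
    den = sum(w for w, _ in rules)
    return num / den

low_release = release_rule(80)    # storage mostly LOW
high_release = release_rule(450)  # storage between MEDIUM and HIGH
```

Storage at 450 fires the MEDIUM and HIGH rules with weights 0.25 and 0.5, so the defuzzified release blends their consequents; optimization can then tune the rule consequents while keeping the interpretable structure elicited from managers.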
A stochastic electricity market clearing formulation with consistent pricing properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavala, Victor M.; Kim, Kibaek; Anitescu, Mihai
We argue that deterministic market clearing formulations introduce arbitrary distortions between day-ahead and expected real-time prices that bias economic incentives. We extend and analyze a previously proposed stochastic clearing formulation in which the social surplus function induces penalties between day-ahead and real-time quantities. We prove that the formulation yields bounded price distortions, and we show that adding a similar penalty term to transmission flows and phase angles ensures boundedness throughout the network. We prove that when the price distortions are zero, day-ahead quantities equal a quantile of their real-time counterparts. The undesired effects of price distortions suggest that stochastic settings provide significant benefits over deterministic ones that go beyond social surplus improvements. Finally, we propose additional metrics to evaluate these benefits.
NASA Astrophysics Data System (ADS)
Shirata, Kento; Inden, Yuki; Kasai, Seiya; Oya, Takahide; Hagiwara, Yosuke; Kaeriyama, Shunichi; Nakamura, Hideyuki
2016-04-01
We investigated the robust detection of surface electromyogram (EMG) signals based on the stochastic resonance (SR) phenomenon, in which the response to weak signals is optimized by adding noise, combined with multiple surface electrodes. Flexible carbon nanotube composite paper (CNT-cp) was applied to the surface electrode and showed performance comparable to that of conventional Ag/AgCl electrodes. The SR-based EMG signal system, integrating an 8-Schmitt-trigger network and the multiple-CNT-cp-electrode array, successfully detected weak EMG signals even when the subject's body was in motion, which was difficult to achieve using the conventional technique. The feasibility of the SR-based EMG detection technique was confirmed by demonstrating its applicability to robot hand control.
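The Schmitt trigger at the heart of such a detector can be sketched in a few lines: a sine below the hysteresis threshold never flips the trigger on its own, moderate noise makes the binary output track the signal, and excessive noise washes the response out again. The parameters below are illustrative, not those of the reported system:

```python
# Stochastic resonance sketch with a single Schmitt trigger: measure how
# strongly the binary output correlates with a subthreshold sine as the
# added noise level varies.  (Illustrative parameters only.)
import math
import random

def schmitt_correlation(noise_std, threshold=1.0, amp=0.5, n=20_000, seed=7):
    random.seed(seed)
    state, corr = -1.0, 0.0
    for i in range(n):
        s = amp * math.sin(2 * math.pi * i / 200.0)  # weak signal, amp < threshold
        x = s + random.gauss(0, noise_std)
        if state < 0 and x > threshold:      # switch high on upper threshold
            state = 1.0
        elif state > 0 and x < -threshold:   # switch low on lower threshold
            state = -1.0
        corr += state * s
    return corr / n

quiet = schmitt_correlation(0.0)   # no noise: trigger never switches
tuned = schmitt_correlation(0.7)   # moderate noise: resonance
loud = schmitt_correlation(5.0)    # excessive noise: response degrades
```

Sweeping `noise_std` traces the characteristic resonance curve: the signal-output correlation peaks at an intermediate noise level, which is the effect the multi-electrode EMG system exploits.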
Code of Federal Regulations, 2010 CFR
2010-01-01
... allocable to a particular cost objective (i.e., a specific function, project, process, or organization) if...) Direct materials. (4) Other direct costs. (5) Processing materials and chemicals. (6) Power and other... equipment. (10) Added factor includes general and administrative costs and other support costs that are...
A stochastic equilibrium model for the North American natural gas market
NASA Astrophysics Data System (ADS)
Zhuang, Jifang
This dissertation is an endeavor in the field of energy modeling for the North American natural gas market using a mixed complementarity formulation combined with stochastic programming. The genesis of the stochastic equilibrium model presented in this dissertation is the deterministic market equilibrium model developed in [Gabriel, Kiet and Zhuang, 2005]. Based on several improvements to this model, including new existence and uniqueness results, we present a multistage stochastic equilibrium model with uncertain demand for the deregulated North American natural gas market using the recourse method of stochastic programming. The market participants considered by the model are pipeline operators, producers, storage operators, peak gas operators, marketers and consumers. Pipeline operators are modeled with regulated tariffs but also involve "congestion pricing" as a mechanism to allocate scarce pipeline capacity. Marketers are modeled as Nash-Cournot players in sales to the residential and commercial sectors but price-takers in all other aspects. Consumers are represented by demand functions in the marketers' problem. Producers, storage operators and peak gas operators are price-takers, consistent with perfect competition. Also, two types of natural gas markets are included: the long-term and spot markets. Market participants make both high-level planning decisions (first-stage decisions) in the long-term market and daily operational decisions (recourse decisions) in the spot market, subject to their engineering, resource and political constraints as well as market constraints on both the demand and the supply side, so as to simultaneously maximize their expected profits given others' decisions. The model is shown to be an instance of a mixed complementarity problem (MiCP) under minor conditions.
The MiCP formulation is derived from applying the Karush-Kuhn-Tucker optimality conditions of the optimization problems faced by the market participants. Some theoretical results regarding the market prices in both markets are shown. We also illustrate the model on a representative, sample network of two production nodes, two consumption nodes with discretely distributed end-user demand and three seasons using four cases.
Jenouvrier, Stéphanie; Holland, Marika; Stroeve, Julienne; Barbraud, Christophe; Weimerskirch, Henri; Serreze, Mark; Caswell, Hal
2012-09-01
Sea ice conditions in the Antarctic affect the life cycle of the emperor penguin (Aptenodytes forsteri). We present a population projection for the emperor penguin population of Terre Adélie, Antarctica, by linking demographic models (stage-structured, seasonal, nonlinear, two-sex matrix population models) to sea ice forecasts from an ensemble of IPCC climate models. Based on maximum likelihood capture-mark-recapture analysis, we find that seasonal sea ice concentration anomalies (SICa) affect adult survival and breeding success. Demographic models show that both deterministic and stochastic population growth rates are maximized at intermediate values of annual SICa, because neither the complete absence of sea ice, nor heavy and persistent sea ice, would provide satisfactory conditions for the emperor penguin. We show that under some conditions the stochastic growth rate is positively affected by the variance in SICa. We identify an ensemble of five general circulation climate models whose output closely matches the historical record of sea ice concentration in Terre Adélie. The output of this ensemble is used to produce stochastic forecasts of SICa, which in turn drive the population model. Uncertainty is included by incorporating multiple climate models and by a parametric bootstrap procedure that includes parameter uncertainty due to both model selection and estimation error. The median of these simulations predicts a decline of the Terre Adélie emperor penguin population of 81% by the year 2100. We find a 43% chance of an even greater decline, of 90% or more. The uncertainty in population projections reflects large differences among climate models in their forecasts of future sea ice conditions. One such model predicts population increases over much of the century, but overall, the ensemble of models predicts that population declines are far more likely than population increases. We conclude that climate change is a significant risk for the emperor penguin.
Our analytical approach, in which demographic models are linked to IPCC climate models, is powerful and generally applicable to other species and systems. © 2012 Blackwell Publishing Ltd.
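The core quantity in such projections, the stochastic growth rate, can be sketched simply: draw an anomaly each year, map it to an annual growth multiplier, and average the log growth. The quadratic response and the numbers below are hypothetical, chosen only to reproduce the qualitative point that growth peaks at intermediate anomalies and that larger variability can push a population into decline:

```python
# Hypothetical stochastic growth-rate sketch: the annual growth
# multiplier lambda depends on a sea-ice anomaly through a made-up
# quadratic peaking at zero anomaly; the stochastic growth rate is the
# long-run mean of log(lambda).
import math
import random

def annual_lambda(sic_anomaly):
    # Growth peaks at intermediate (zero) anomaly; both heavy and absent
    # ice reduce it.  Floor keeps log() defined for extreme draws.
    return max(1e-6, 1.02 - 0.5 * sic_anomaly ** 2)

def stochastic_growth_rate(sic_std, years=100_000, seed=3):
    random.seed(seed)
    total = sum(math.log(annual_lambda(random.gauss(0, sic_std)))
                for _ in range(years))
    return total / years

r_small = stochastic_growth_rate(0.1)  # mild variability: growth
r_large = stochastic_growth_rate(0.4)  # strong variability: decline
```

Because the log-average penalizes bad years more than it rewards good ones, widening the anomaly distribution alone can flip the growth rate from positive to negative, the same mechanism that drives the projected declines.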
NASA Astrophysics Data System (ADS)
Wakazuki, Yasutaka; Hara, Masayuki; Fujita, Mikiko; Ma, Xieyao; Kimura, Fujio
2013-04-01
Regional scale climate change projections play an important role in assessing the influences of global warming and include statistical downscaling (SD) and dynamical downscaling (DD) approaches. In this study, a DD method is developed based on the pseudo-global-warming (PGW) method of Kimura and Kitoh (2007). In general, DD uses a regional climate model (RCM) with lateral boundary data. In the PGW method, the climatological mean difference estimated by GCMs is added to the objective analysis data (ANAL), and these data are used as the lateral boundary data in the future climate simulations. The ANAL is also used as the lateral boundary conditions of the present climate simulation. One merit of the PGW method is that the influence of GCM biases on RCM simulations is reduced. However, the PGW method does not treat climate changes in relative humidity, year-to-year variation, or short-term disturbances. The new downscaling method developed here is named the incremental dynamical downscaling and analysis system (InDDAS). InDDAS treats climate changes in relative humidity and year-to-year variations. On the other hand, the uncertainties of climate change projections estimated by many GCMs are large and not negligible. Thus, stochastic regional scale climate change projections are needed for assessments of the influences of global warming. Many RCM runs must be performed to produce stochastic information, but the computational costs are huge because the grid size of RCM runs should be small enough to resolve heavy rainfall phenomena. Therefore, the number of runs needed to produce stochastic information must be reduced. In InDDAS, the climatological differences added to ANAL become statistically pre-analyzed information. The climatological differences of many GCMs are divided into the mean climatological difference (MD) and departures from MD.
The departures are analyzed by principal component analysis, and positive and negative perturbations (positive and negative standard deviations multiplied by the departure patterns (eigenvectors)) with multiple modes are added to MD. Consequently, the most likely future states are calculated with the climatological difference of MD. For example, future states in cases where the temperature increase is large or small are calculated with MD plus the positive and negative perturbations of the first mode.
NASA Astrophysics Data System (ADS)
Rosenberg, D. E.; Aljuaidi, A. E.; Kaluarachchi, J. J.
2009-12-01
We include demands for water of different salinity concentrations as input parameters and decision variables in a regional hydro-economic optimization model. This specification includes separate demand functions for saline water. We then use stochastic non-linear programming to jointly identify the benefit maximizing set of infrastructure expansions, operational allocations, and use of different water quality types under climate variability. We present a detailed application for the Gaza Strip. The application considers building desalination and waste-water treatment plants and conveyance pipelines, initiating water conservation and leak reduction programs, plus allocating and transferring water of different qualities among agricultural, industrial, and urban sectors and among districts. Results show how to integrate a mix of supply enhancement, conservation, water quality improvement, and water quality management actions into a portfolio that can economically and efficiently respond to changes and uncertainties in surface and groundwater availability due to climate variability. We also show how to put drawn-down and saline Gaza aquifer water to more sustainable and economical use.
NASA Astrophysics Data System (ADS)
Moulds, S.; Buytaert, W.; Mijic, A.
2015-10-01
We present the lulcc software package, an object-oriented framework for land use change modelling written in the R programming language. The contribution of the work is to resolve the following limitations associated with the current land use change modelling paradigm: (1) the source code for model implementations is frequently unavailable, severely compromising the reproducibility of scientific results and making it impossible for members of the community to improve or adapt models for their own purposes; (2) ensemble experiments to capture model structural uncertainty are difficult because of fundamental differences between implementations of alternative models; and (3) additional software is required because existing applications frequently perform only the spatial allocation of change. The package includes a stochastic ordered allocation procedure as well as an implementation of the CLUE-S algorithm. We demonstrate its functionality by simulating land use change at the Plum Island Ecosystems site, using a data set included with the package. It is envisaged that lulcc will enable future model development and comparison within an open environment.
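The stochastic ordered allocation procedure mentioned above can be sketched as follows; this is a simplified illustration of the general idea (rank cells by suitability plus a random perturbation, then convert the top-ranked cells until demand is met), not the lulcc R API:

```python
# Simplified stochastic ordered allocation for land use change: cells
# are ranked by suitability plus Gaussian noise, and the demanded number
# of cells is converted in rank order.  Names and values are illustrative.
import random

def allocate(suitability, demand_cells, noise=0.1, seed=42):
    random.seed(seed)
    order = sorted(range(len(suitability)),
                   key=lambda i: suitability[i] + random.gauss(0, noise),
                   reverse=True)
    converted = set(order[:demand_cells])
    return [1 if i in converted else 0 for i in range(len(suitability))]

suit = [0.9, 0.2, 0.8, 0.1, 0.7, 0.3]  # per-cell suitability scores
newmap = allocate(suit, demand_cells=3)
```

With `noise=0` the allocation is purely deterministic (the three most suitable cells are always chosen); the stochastic perturbation lets repeated runs explore alternative but still plausible change maps.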
Rethinking "normal": The role of stochasticity in the phenology of a synchronously breeding seabird.
Youngflesh, Casey; Jenouvrier, Stephanie; Hinke, Jefferson T; DuBois, Lauren; St Leger, Judy; Trivelpiece, Wayne Z; Trivelpiece, Susan G; Lynch, Heather J
2018-05-01
Phenological changes have been observed in a variety of systems over the past century. There is concern that, as a consequence, ecological interactions are becoming increasingly mismatched in time, with negative consequences for ecological function. Significant spatial heterogeneity (inter-site) and temporal variability (inter-annual) can make it difficult to separate intrinsic, extrinsic and stochastic drivers of phenological variability. The goal of this study was to understand the timing and variability in breeding phenology of Adélie penguins under fixed environmental conditions and to use those data to identify a "null model" appropriate for disentangling the sources of variation in wild populations. Data on clutch initiation were collected from both wild and captive populations of Adélie penguins. Clutch initiation in the captive population was modelled as a function of year, individual and age to better understand phenological patterns observed in the wild population. Captive populations displayed as much inter-annual variability in breeding phenology as wild populations, suggesting that variability in breeding phenology is the norm and thus may be an unreliable indicator of environmental forcing. The distribution of clutch initiation dates was found to be moderately asymmetric (right skewed) both in the wild and in captivity, consistent with the pattern expected under social facilitation. The role of stochasticity in phenological processes has heretofore been largely ignored. However, these results suggest that inter-annual variability in breeding phenology can arise independent of any environmental or demographic drivers and that synchronous breeding can enhance inherent stochasticity. This complicates efforts to relate phenological variation to environmental variability in the wild. Accordingly, we must be careful to consider random forcing in phenological processes, lest we fit models to data dominated by random noise. 
This is particularly true for colonial species where breeding synchrony may outweigh each individual's effort to time breeding with optimal environmental conditions. Our study highlights the importance of identifying appropriate null models for studying phenology. © 2017 The Authors. Journal of Animal Ecology © 2017 British Ecological Society.
77 FR 45591 - Pacific Fishery Management Council; Public Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-01
... Fishery Management Council; Public Meeting AGENCY: National Marine Fisheries Service (NMFS), National... Pacific Fishery Management Council's (Pacific Council) Ad Hoc South of Humbug Pacific Halibut Workgroup..., monitoring, and allocation history of Pacific halibut in the area south of Humbug Mt. DATES: The conference...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannis, Georgios, E-mail: georgios.karagiannis@pnnl.gov; Lin, Guang, E-mail: guang.lin@pnnl.gov
2014-02-15
Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.
Jingyi, Zhu
2015-01-01
The detection mechanism of a carbon nanotube gas sensor based on a multi-stable stochastic resonance (MSR) model was studied in this paper. A numerical simulation model based on MSR was established, and a gas-ionizing experiment was performed in which electronic white noise was added to induce a 1.65 MHz periodic component in the carbon nanotube gas sensor. It was found that the signal-to-noise ratio (SNR) spectrum displayed two maximal values, which accorded with the change of the broken-line potential function. The results of the gas-ionizing experiment demonstrated that the 1.65 MHz periodic component exhibited multiple MSR phenomena, in accordance with the numerical simulation results. In this way, the numerical simulation approach provides an innovative method for studying the detection mechanism of carbon nanotube gas sensors.
A fault-tolerant small world topology control model in ad hoc networks for search and rescue
NASA Astrophysics Data System (ADS)
Tan, Mian; Fang, Ling; Wu, Yue; Zhang, Bo; Chang, Bowen; Holme, Petter; Zhao, Jing
2018-02-01
Due to their self-organized, multi-hop and distributed characteristics, ad hoc networks are useful in search and rescue. Topology control models need to be designed for energy-efficient, robust and fast communication in ad hoc networks. This paper proposes a topology control model specialized for search and rescue, Compensation Small World-Repeated Game (CSWRG), which integrates mobility models, the construction of small-world networks and a game-theoretic approach to the allocation of resources. Simulation results show that our mobility models can enhance the communication performance of the constructed small-world networks. Our strategy, based on a repeated game, can suppress selfish behavior and compensate agents that encounter selfish or faulty neighbors. This model could be useful for the design of ad hoc communication networks.
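The repeated-game idea of punishing selfish nodes and compensating their victims can be sketched as a simple score-keeping scheme: each node tracks a cooperation score per neighbor and allocates relay bandwidth in proportion to it, so defectors are progressively starved. The update rule and numbers below are hypothetical, not the CSWRG model itself:

```python
# Hypothetical repeated-game sketch: cooperation scores drive a
# proportional relay-bandwidth allocation, so persistent defectors are
# eventually cut off while cooperators split the recovered capacity.

def update_scores(scores, cooperated, gain=0.2, loss=0.4):
    # scores: node -> willingness to relay for that node, clipped to [0, 1].
    for node, acted in cooperated.items():
        if acted:
            scores[node] = min(1.0, scores[node] + gain)
        else:
            scores[node] = max(0.0, scores[node] - loss)
    return scores

def allocate_bandwidth(scores, total=10.0):
    # Proportional allocation of relay bandwidth by cooperation score.
    s = sum(scores.values())
    return {n: total * v / s for n, v in scores.items()}

scores = {"a": 0.5, "b": 0.5, "c": 0.5}
for _ in range(5):  # node "c" repeatedly defects
    scores = update_scores(scores, {"a": True, "b": True, "c": False})
bw = allocate_bandwidth(scores)
```

After a few rounds the defector's score hits zero and its bandwidth share with it, while the cooperating nodes divide the freed capacity; a compensation term would add credit to nodes that lost traffic to the defector.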
Multiscale Hy3S: hybrid stochastic simulation for supercomputers.
Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N
2006-02-24
Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single-cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset with a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translational elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of the NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems and analyze data.
We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.
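For reference, the exact stochastic simulation algorithm that hybrid methods accelerate can be sketched for a toy birth-death system. This is a generic Gillespie direct-method sketch with hypothetical rate constants, not Hy3S code:

```python
import math, random

def ssa(x0, kb, kd, t_end, seed=1):
    """Gillespie direct method for a birth-death process:
    ∅ -> X at rate kb, X -> ∅ at rate kd * x. Rates are hypothetical."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        a_birth, a_death = kb, kd * x
        a_total = a_birth + a_death
        if a_total == 0.0:
            break
        t += -math.log(1.0 - rng.random()) / a_total  # exponential waiting time
        if rng.random() * a_total < a_birth:
            x += 1   # birth event
        else:
            x -= 1   # death event
        times.append(t)
        counts.append(x)
    return times, counts

times, counts = ssa(x0=10, kb=5.0, kd=0.5, t_end=10.0)
```

Because every single reaction event is simulated, the cost grows with the total propensity, which is what motivates hybrid and leaping approximations for large systems.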
Dynamic versus static allocation policies in multipurpose multireservoir systems
NASA Astrophysics Data System (ADS)
Tilmant, A.; Goor, Q.; Pinte, D.; van der Zaag, P.
2007-12-01
As the competition for water is likely to increase in the near future due to socioeconomic development and population growth, water resources managers will face hard choices when allocating water between competing users. Because water is a vital resource used in multiple sectors, including the environment, the allocation is inherently a political and social process, which is likely to become increasingly scrutinized as the competition grows between the different sectors. Since markets are usually absent or ineffective, the allocation of water between competing demands is achieved administratively taking into account key objectives such as economic efficiency, equity and maintaining the ecological integrity. When crop irrigation is involved, water is usually allocated by a system of annual rights to use a fixed, static, volume of water. In a fully-allocated basin, moving from a static to a dynamic allocation process, whereby the policies are regularly updated according to the hydrologic status of the river basin, is the first step towards the development of river basin management strategies that increase the productivity of water. More specifically, in a multipurpose multireservoir system, continuously adjusting release and withdrawal decisions based on the latest hydrologic information will increase the benefits derived from the system. However, the extent to which such an adjustment can be achieved results from complex spatial and temporal interactions between the physical characteristics of the water resources system (storage, natural flows), the economic and social consequences of rationing and the impacts on natural ecosystems. The complexity of the decision-making process, which requires the continuous evaluation of numerous trade-offs, calls for the use of integrated hydrologic-economic models. 
This paper compares static and dynamic management approaches for a cascade of hydropower-irrigation reservoirs using stochastic dual dynamic programming (SDDP) formulations. As its name indicates, SDDP is an extension of SDP that removes the curse of dimensionality found in discrete SDP and can therefore be used to analyze large-scale water resources systems. For the static approach, the multiobjective (irrigation-hydropower) optimization problem is solved using the constraint method, i.e. net benefits from hydropower generation are maximized and irrigation water withdrawals are additional constraints. In the dynamic approach, the SDDP model seeks to maximize the net benefits of both hydropower and irrigation crop production. A cascade of 8 reservoirs in the Turkish and Syrian parts of the Euphrates river basin is used as a case study.
New "Tau-Leap" Strategy for Accelerated Stochastic Simulation.
Ramkrishna, Doraiswami; Shu, Che-Chi; Tran, Vu
2014-12-10
The "Tau-Leap" strategy for stochastic simulations of chemical reaction systems due to Gillespie and co-workers has had considerable impact on various applications. This strategy is reexamined with Chebyshev's inequality for random variables, as it provides a rigorous probabilistic basis for a measured τ-leap, thus adding significantly to simulation efficiency. It is also shown that existing strategies for simulation times have no probabilistic assurance that they satisfy the τ-leap criterion, while the use of Chebyshev's inequality leads to a specified degree of certainty with which the τ-leap criterion is satisfied. This reduces the loss of sample paths which do not comply with the τ-leap criterion. The performance of the present algorithm is assessed with respect to one discussed by Cao et al. (J. Chem. Phys. 2006, 124, 044109), a second pertaining to the binomial leap (Tian and Burrage, J. Chem. Phys. 2004, 121, 10356; Chatterjee et al., J. Chem. Phys. 2005, 122, 024112; Peng et al., J. Chem. Phys. 2007, 126, 224109), and a third regarding the midpoint Poisson leap (Peng et al., 2007; Gillespie, J. Chem. Phys. 2001, 115, 1716). The performance assessment is made by estimating the error in the histogram measured against that obtained with the so-called stochastic simulation algorithm. It is shown that the current algorithm displays notably less histogram error than its predecessor for a fixed computation time and, conversely, less computation time for a fixed accuracy. This computational advantage is an asset in repetitive calculations essential for modeling stochastic systems. The importance of stochastic simulations derives from diverse areas of application in the physical and biological sciences, process systems, and economics. Computational improvements such as those reported herein are therefore of considerable significance.
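A minimal fixed-step Poisson tau-leap for a toy birth-death system can be sketched as follows. The fixed tau here is a stand-in for the paper's Chebyshev-based step selection, which is not reproduced; rates are hypothetical:

```python
import math, random

def poisson(lam, rng):
    # Knuth's multiplication method; adequate for the small means used here
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap(x0, kb, kd, t_end, tau, seed=1):
    """Plain Poisson tau-leap for a birth-death process:
    ∅ -> X at rate kb, X -> ∅ at rate kd * x."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    xs = [x]
    while t < t_end:
        births = poisson(kb * tau, rng)
        deaths = poisson(kd * x * tau, rng)
        x = max(x + births - deaths, 0)   # clamp: a leap can overshoot zero
        t += tau
        xs.append(x)
    return xs

xs = tau_leap(x0=10, kb=5.0, kd=0.5, t_end=10.0, tau=0.1)
```

The clamp at zero illustrates the "lost sample path" problem the abstract refers to: a leap whose propensities change too much within tau violates the leap criterion, which is exactly what a principled tau selection rule is meant to control.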
Stochastic layer scaling in the two-wire model for divertor tokamaks
NASA Astrophysics Data System (ADS)
Ali, Halima; Punjabi, Alkesh; Boozer, Allen
2009-06-01
The question of magnetic field structure in the vicinity of the separatrix in divertor tokamaks is studied. The authors have investigated this problem earlier in a series of papers, using various mathematical techniques. In the present paper, the two-wire model (TWM) [Reiman, A. 1996 Phys. Plasmas 3, 906] is considered. It is noted that, in the TWM, it is useful to consider an extra equation expressing magnetic flux conservation. This equation adds no new information to the TWM, since it is derived from the TWM; it is, however, useful for controlling the step size in the numerical integration of the TWM equations. The TWM with the extra equation is called the flux-preserving TWM. Nevertheless, the technique is apparently still plagued by numerical inaccuracies when the perturbation level is low, resulting in an incorrect scaling of the stochastic layer width. The stochastic broadening of the separatrix in the flux-preserving TWM is compared with that in the low mn (poloidal mode number m and toroidal mode number n) map (LMN) [Ali, H., Punjabi, A., Boozer, A. and Evans, T. 2004 Phys. Plasmas 11, 1908]. The flux-preserving TWM and the LMN both give the Boozer-Rechester 0.5-power scaling of the stochastic layer width with the amplitude of the magnetic perturbation when the perturbation is sufficiently large [Boozer, A. and Rechester, A. 1978, Phys. Fluids 21, 682]. The flux-preserving TWM gives a larger stochastic layer width when the perturbation is low, while the LMN gives the correct scaling in the low-perturbation region. Area-preserving maps such as the LMN respect the Hamiltonian structure of field line trajectories and have the added advantage of computational efficiency. Also, for a 1½-degree-of-freedom Hamiltonian system such as field lines, maps do not give Arnold diffusion.
Airport security inspection process model and optimization based on GSPN
NASA Astrophysics Data System (ADS)
Mao, Shuainan
2018-04-01
Aiming at improving the efficiency of the airport security inspection process, a Generalized Stochastic Petri Net (GSPN) is used to establish a model of the security inspection process. The model is used to analyze the bottlenecks of the process, and a solution is given that can significantly improve efficiency and reduce waiting time by adding a station where passengers remove outer clothing and an additional X-ray detector.
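The effect of adding a parallel service station can be illustrated with the standard Erlang C formula for an M/M/c queue. This is a textbook stand-in for the paper's GSPN analysis, with hypothetical arrival and service rates:

```python
import math

def erlang_c_wait(lam, mu, c):
    """Mean queueing delay in an M/M/c system via the Erlang C formula.
    lam: arrival rate, mu: per-server service rate, c: number of servers."""
    rho = lam / (c * mu)
    assert rho < 1, "queue must be stable"
    a = lam / mu                                 # offered load in Erlangs
    s = sum(a**k / math.factorial(k) for k in range(c))
    p_wait = (a**c / math.factorial(c)) / ((1 - rho) * s + a**c / math.factorial(c))
    return p_wait / (c * mu - lam)

w1 = erlang_c_wait(lam=0.8, mu=1.0, c=1)   # one inspection lane
w2 = erlang_c_wait(lam=0.8, mu=1.0, c=2)   # an added parallel lane
```

Under these illustrative rates the single-lane mean wait is 4 time units, and the second lane cuts it by more than an order of magnitude, which is the qualitative effect the abstract describes.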
A stochastic approach for automatic generation of urban drainage systems.
Möderl, M; Butler, D; Rauch, W
2009-01-01
Typically, performance evaluation of newly developed methodologies is based on one or more case studies. The investigation of multiple real-world case studies is tedious and time consuming. Moreover, extrapolating conclusions from individual investigations to a general basis is arguable and sometimes even wrong. In this article a stochastic approach is presented to evaluate newly developed methodologies on a broader basis. For this approach the Matlab tool "Case Study Generator" is developed, which automatically generates a variety of different virtual urban drainage systems using boundary conditions, e.g. length of the urban drainage system, slope of the catchment surface, etc., as input. The layout of the sewer system is based on an adapted Galton-Watson branching process. The subcatchments are allocated considering a digital terrain model. Sewer system components are designed according to standard values. In total, 10,000 different virtual case studies of urban drainage systems are generated and simulated. The simulation results are then evaluated using a performance indicator for surface flooding. Comparison between results of the virtual and two real-world case studies indicates the promise of the method. The novelty of the approach is that it allows more general conclusions, in contrast to traditional evaluations with few case studies.
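The Galton-Watson branching step for generating a tree-shaped sewer layout can be sketched as follows. The offspring distribution is a hypothetical stand-in for the adapted process used in the Case Study Generator:

```python
import random

def galton_watson_layout(p_branch, max_depth, seed=2):
    """Grow a sewer-like tree: each node spawns 0, 1 or 2 children,
    with total branching probability p_branch, down to max_depth."""
    rng = random.Random(seed)
    edges = []
    frontier = [(0, 0)]        # (node id, depth); node 0 is the outfall
    next_id = 1
    while frontier:
        node, depth = frontier.pop()
        if depth >= max_depth:
            continue
        n_children = rng.choices([0, 1, 2],
                                 weights=[1 - p_branch, p_branch / 2, p_branch / 2])[0]
        for _ in range(n_children):
            edges.append((node, next_id))
            frontier.append((next_id, depth + 1))
            next_id += 1
    return edges

layout = galton_watson_layout(0.8, max_depth=6)
```

Because ids are assigned in creation order, every edge points from an earlier node to a later one, giving the dendritic (loop-free) structure typical of gravity sewer networks.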
Optimal crop selection and water allocation under limited water supply in irrigation
NASA Astrophysics Data System (ADS)
Stange, Peter; Grießbach, Ulrike; Schütze, Niels
2015-04-01
Due to climate change, extreme weather conditions such as droughts may have an increasing impact on irrigated agriculture. To cope with limited water resources in irrigation systems, a new decision support framework is developed which focuses on integrated management of both irrigation water supply and demand at the same time. For modeling the regional water demand, local (and site-specific) water demand functions are used which are derived from optimized agronomic responses at farm scale. To account for climate variability, the agronomic response is represented by stochastic crop water production functions (SCWPFs). These functions take into account different soil types, crops and stochastically generated climate scenarios. The SCWPFs are used to compute the water demand considering different conditions, e.g., variable and fixed costs. This generic approach enables the consideration both of multiple crops at farm scale and of the aggregated response to water pricing at regional scale, for full and deficit irrigation systems. Within the SAPHIR (SAxonian Platform for High Performance IRrigation) project a prototype of a decision support system is developed which helps to evaluate combined water supply and demand management policies.
Li, Shuangyan; Li, Xialian; Zhang, Dezhi; Zhou, Lingyun
2017-01-01
This study develops an optimization model to integrate facility location and inventory control for a three-level distribution network consisting of a supplier, multiple distribution centers (DCs), and multiple retailers. The integrated model addressed in this study simultaneously determines three types of decisions: (1) facility location (optimal number, location, and size of DCs); (2) allocation (assignment of suppliers to located DCs and retailers to located DCs, and corresponding optimal transport mode choices); and (3) inventory control decisions on order quantities, reorder points, and amount of safety stock at each retailer and opened DC. A mixed-integer programming model is presented, which considers the carbon emission taxes, multiple transport modes, stochastic demand, and replenishment lead time. The goal is to minimize the total cost, which covers the fixed costs of logistics facilities, inventory, transportation, and CO2 emission tax charges. The aforementioned optimal model was solved using commercial software LINGO 11. A numerical example is provided to illustrate the applications of the proposed model. The findings show that carbon emission taxes can significantly affect the supply chain structure, inventory level, and carbon emission reduction levels. The delay rate directly affects the replenishment decision of a retailer.
Measuring Efficiency of Secondary Healthcare Providers in Slovenia
Blatnik, Patricia; Bojnec, Štefan; Tušak, Matej
2017-01-01
The chief aim of this study was to analyze secondary healthcare providers' efficiency, focusing on the efficiency analysis of Slovene general hospitals. We intended to present a complete picture of the technical, allocative, and cost or economic efficiency of general hospitals. Methods: We researched these aspects of efficiency with two econometric methods. First, we calculated the necessary efficiency quotients with stochastic frontier analysis (SFA), which is realized by econometric estimation of stochastic frontier functions; then, with data envelopment analysis (DEA), we calculated the quotients based on the linear programming method. Results: The two chosen methods produced two different conclusions. The SFA method identified Celje General Hospital as the most efficient general hospital, whereas the DEA method identified Brežice General Hospital as the most efficient. Conclusion: Our results are a useful tool that can aid managers, payers, and designers of healthcare policy to better understand how general hospitals operate. The participants can accordingly decide with less difficulty on any further business operations of general hospitals, having the best practices of general hospitals at their disposal. PMID:28730180
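In the special case of one input and one output, the DEA efficiency score reduces to a simple productivity-ratio benchmark, which can be sketched as follows. The hospital data are hypothetical; the study's actual model uses multiple inputs and outputs and a linear-programming solver:

```python
def dea_efficiency(inputs, outputs):
    """CCR technical efficiency in the single-input, single-output case:
    each unit's output/input ratio relative to the best observed ratio."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# beds (input) vs. treated cases (output) for four hypothetical hospitals
eff = dea_efficiency([100, 150, 200, 120], [80, 135, 150, 90])
```

The unit attaining the best ratio scores 1.0 and defines the efficient frontier; every other score measures radial distance to that frontier, which is what the general multi-dimensional DEA program computes.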
Clustering of financial time series with application to index and enhanced index tracking portfolio
NASA Astrophysics Data System (ADS)
Dose, Christian; Cincotti, Silvano
2005-09-01
A stochastic-optimization technique based on time-series cluster analysis is described for index tracking and enhanced index tracking problems. Our methodology solves the problem in two steps, i.e., by first selecting a subset of stocks and then setting the weight of each stock as the result of an optimization process (asset allocation). The present formulation takes into account constraints on the number of stocks and on the fraction of capital invested in each of them, whilst not including transaction costs. Computational results based on clustering selection are compared to those of random techniques and show the importance of clustering in noise reduction and robust forecasting applications, in particular for enhanced index tracking.
Assessing the Total Factor Productivity of Cotton Production in Egypt
Rodríguez, Xosé A.; Elasraag, Yahia H.
2015-01-01
The main objective of this paper is to decompose the productivity growth of Egyptian cotton production. We employ the stochastic frontier approach and decompose the changes in total factor productivity (CTFP) growth into four components: technical progress (TP), changes in scale component (CSC), changes in allocative efficiency (CAE), and changes in technical efficiency (CTE). Considering a situation of scarce statistical information, we propose four alternative empirical models, with the purpose of looking for convergence in the results. The results provide evidence that in this production system total productivity does not increase, which is mainly due to the negative average contributions of CAE and TP. Policy implications are offered in light of the results. PMID:25625318
A scalable delivery framework and a pricing model for streaming media with advertisements
NASA Astrophysics Data System (ADS)
Al-Hadrusi, Musab; Sarhan, Nabil J.
2008-01-01
This paper presents a delivery framework for streaming media with advertisements and an associated pricing model. The delivery model combines the benefits of periodic broadcasting and stream merging. The advertisements' revenues are used to subsidize the price of the media content, and the price is determined based on the total ad viewing time. Moreover, this paper presents an efficient ad allocation scheme and three modified scheduling policies that are well suited to the proposed delivery framework. Furthermore, we study the effectiveness of the delivery framework and the various scheduling policies through extensive simulation, in terms of numerous metrics including customer defection probability, average number of ads viewed per client, price, arrival rate, profit, and revenue.
Predator-prey model for the self-organization of stochastic oscillators in dual populations
NASA Astrophysics Data System (ADS)
Moradi, Sara; Anderson, Johan; Gürcan, Ozgur D.
A predator-prey model of dual populations with stochastic oscillators is presented. A linear cross-coupling between the two populations is introduced that follows the coupling between the motions of a Wilberforce pendulum in two dimensions: one in the longitudinal and the other in the torsional plane. Within each population a Kuramoto-type competition between the phases is assumed, so the synchronization state of the whole system is controlled by these two types of competition. The results of the numerical simulations show that adding the linear cross-coupling interactions produces predator-prey oscillations between the two populations, which results in self-regulation of the system through a transfer of synchrony between them. The model captures several important features of the dynamical interplay between drift wave and zonal flow turbulence in magnetically confined plasmas, and a novel interpretation of the coupled drift wave-zonal flow dynamics in terms of synchronization of stochastic oscillators is discussed. Sara Moradi has benefited from a mobility grant funded by the Belgian Federal Science Policy Office and the MSCA of the European Commission (FP7-PEOPLE-COFUND-2008 nº 246540).
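A toy version of two Kuramoto populations with a linear cross-coupling between their mean fields can be sketched as follows. The equations and parameters are illustrative, not the paper's exact model:

```python
import math, random

def simulate(n, k_within, k_cross, dt, steps, seed=3):
    """Euler integration of two populations of phase oscillators with
    Kuramoto coupling inside each population and a cross-coupling to the
    other population's mean field. Returns the order parameters over time."""
    rng = random.Random(seed)
    th = [[rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)] for _ in range(2)]
    om = [[rng.gauss(0.0, 0.1) for _ in range(n)] for _ in range(2)]
    orders = []
    for _ in range(steps):
        # complex mean field z = r * exp(i*psi) of each population
        z = [sum(complex(math.cos(t), math.sin(t)) for t in th[p]) / n
             for p in range(2)]
        for p in range(2):
            r, psi = abs(z[p]), math.atan2(z[p].imag, z[p].real)
            rq, psiq = abs(z[1 - p]), math.atan2(z[1 - p].imag, z[1 - p].real)
            for i in range(n):
                dth = (om[p][i]
                       + k_within * r * math.sin(psi - th[p][i])
                       + k_cross * rq * math.sin(psiq - th[p][i]))
                th[p][i] += dt * dth
        orders.append((abs(z[0]), abs(z[1])))
    return orders

orders = simulate(n=30, k_within=1.0, k_cross=0.2, dt=0.05, steps=200)
```

The order parameter r of each population measures its internal synchrony; a transfer of synchrony between the populations shows up as anti-phased oscillations of the two r traces.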
Simplified management of ATM traffic
NASA Astrophysics Data System (ADS)
Luoma, Marko; Ilvesmaeki, Mika
1997-10-01
ATM has been under a thorough standardization process for more than ten years. Looking at it now, what have we achieved during this period? Originally, ATM was meant to be an easy and efficient protocol enabling varied services over a single network. What it is turning out to be is 'yet another ISDN': a network full of hopes and promises, but too difficult to implement and too expensive to market. The fact is that more and more 'nice features' are implemented at the cost of overloading the network with heavy management procedures. We therefore need to adopt a new approach, one that keeps a strong focus on what is necessary. This paper presents starting points for an alternative approach to traffic management, which we call 'the minimum management principle.' Choosing suitable service classes for the ATM network is made difficult by the fact that the more services one implements, the more management is needed. This is especially true for variable bit rate connections, which are usually treated based on stochastic models. A stochastic model, at its best, can only reveal momentary characteristics of the traffic stream, not its long-range behavior. Our assumption is that ATM will move towards the Internet in the sense that strict quality values will make little or no sense in the future. Therefore, stochastic modeling of variable bit rate connections seems to be of limited use. Nevertheless, we see that some traffic needs strict guarantees and that the only economic way of providing them is to use PCR allocation.
Skinner, James E; Meyer, Michael; Dalsey, William C; Nester, Brian A; Ramalanjaona, George; O’Neil, Brian J; Mangione, Antoinette; Terregino, Carol; Moreyra, Abel; Weiss, Daniel N; Anchin, Jerry M; Geary, Una; Taggart, Pamela
2008-01-01
Heart rate variability (HRV) reflects both cardiac autonomic function and risk of sudden arrhythmic death (AD). Indices of HRV based on linear stochastic models are independent risk factors for AD in postmyocardial infarction (MI) cohorts. Indices based on nonlinear deterministic models have a higher sensitivity and specificity for predicting AD in retrospective data. A new nonlinear deterministic model, the automated Point Correlation Dimension (PD2i), was prospectively evaluated for prediction of AD. Patients were enrolled (N = 918) in 6 emergency departments (EDs) upon presentation with chest pain and being determined to be at risk of acute MI (AMI) >7%. Brief digital ECGs (>1000 heartbeats, ∼15 min) were recorded and automated PD2i results obtained. Out-of-hospital AD was determined by modified Hinkle-Thaler criteria. All-cause mortality at 1 year was 6.2%, with 3.5% being ADs. Of the AD fatalities, 34% were without previous history of MI or diagnosis of AMI. The PD2i prediction of AD had sensitivity = 96%, specificity = 85%, negative predictive value = 99%, and relative risk >24.2 (p ≤ 0.001). HRV analysis by the time-dependent nonlinear PD2i algorithm can accurately predict risk of AD in an ED cohort and may have both life-saving and resource-saving implications for individual risk assessment. PMID:19209249
The Allocation of Teachers in Schools--An Alternative to the Class Size Dialogue.
ERIC Educational Resources Information Center
Loader, David N.
1978-01-01
This article looks beyond class size to such specifics as teachers' load, subject electives available, subject load, and different class groupings in developing a flow chart that gives added understanding and control over the variables relating to the deployment of teachers. (Author/IRT)
Bandwidth auction for SVC streaming in dynamic multi-overlay
NASA Astrophysics Data System (ADS)
Xiong, Yanting; Zou, Junni; Xiong, Hongkai
2010-07-01
In this paper, we study optimal bandwidth allocation for scalable video coding (SVC) streaming in multiple overlays. We model the whole bandwidth request and distribution process as a set of decentralized auction games between the competing peers. For the upstream peer, a bandwidth allocation mechanism is introduced to maximize the aggregate revenue. For the downstream peer, a dynamic bidding strategy is proposed that achieves maximum utility and efficient resource usage by collaborating with a content-aware layer dropping/adding strategy. The convergence of the proposed auction games is also theoretically proved. Experimental results show that the auction strategies can adapt to the dynamic joining of competing peers and video layers.
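A greedy revenue-maximizing allocation at an upstream peer, granting bandwidth to the highest per-unit bids first, can be sketched as follows. This is a hypothetical mechanism sketch; the paper's auction additionally models layer dropping/adding and proves convergence:

```python
def allocate_bandwidth(capacity, bids):
    """Grant bandwidth units to the highest per-unit prices first.
    bids: list of (peer, demand, unit_price) tuples."""
    alloc = {peer: 0 for peer, _, _ in bids}
    revenue = 0.0
    for peer, demand, price in sorted(bids, key=lambda b: -b[2]):
        grant = min(demand, capacity)
        alloc[peer] = grant
        revenue += grant * price
        capacity -= grant
        if capacity == 0:
            break
    return alloc, revenue

# upstream peer with 10 bandwidth units and three hypothetical bidders
alloc, rev = allocate_bandwidth(10, [("p1", 6, 2.0), ("p2", 6, 3.0), ("p3", 4, 1.0)])
```

Here p2's higher per-unit bid is filled first, p1 receives the residual capacity, and p3 is priced out, illustrating how revenue maximization at the upstream peer rations scarce upload bandwidth.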
Perez, Claudio A; Cohn, Theodore E; Medina, Leonel E; Donoso, José R
2007-08-31
Stochastic resonance (SR) is the counterintuitive phenomenon in which noise enhances detection of sub-threshold stimuli. The SR psychophysical threshold theory establishes that the required amplitude to exceed the sensory threshold barrier can be reached by adding noise to a sub-threshold stimulus. The aim of this study was to test the SR theory by comparing detection results from two different randomly-presented stimulus conditions. In the first condition, optimal noise was present during the whole attention interval; in the second, the optimal noise was restricted to the same time interval as the stimulus. SR threshold theory predicts no difference between the two conditions because noise helps the sub-threshold stimulus to reach threshold in both cases. The psychophysical experimental method used a 300 ms rectangular force pulse as a stimulus within an attention interval of 1.5 s, applied to the index finger of six human subjects in the two distinct conditions. For all subjects we show that in the condition in which the noise was present only when synchronized with the stimulus, detection was better (p<0.05) than in the condition in which the noise was delivered throughout the attention interval. These results provide the first direct evidence that SR threshold theory is incomplete and that a new phenomenon has been identified, which we call Coincidence-Enhanced Stochastic Resonance (CESR). We propose that CESR might occur because subject uncertainty is reduced when noise points at the same temporal window as the stimulus.
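The threshold-detection account of SR can be illustrated analytically: with Gaussian noise, the hit rate minus the false-alarm rate of a threshold detector peaks at an intermediate noise level. This is a generic textbook illustration, not the authors' experimental protocol:

```python
import math

def gauss_tail(x, sigma):
    """P(N(0, sigma^2) > x)."""
    return 0.5 * math.erfc(x / (sigma * math.sqrt(2.0)))

def detection_gain(signal, threshold, sigma):
    """Hit rate minus false-alarm rate for a threshold detector driven by
    a sub-threshold signal plus zero-mean Gaussian noise of std sigma."""
    hit = gauss_tail(threshold - signal, sigma)   # signal present
    fa = gauss_tail(threshold, sigma)             # noise alone
    return hit - fa

# sub-threshold stimulus (0.8) against threshold 1.0 at three noise levels
gains = {s: detection_gain(0.8, 1.0, s) for s in (0.05, 0.3, 5.0)}
```

With almost no noise the stimulus never crosses threshold, and with very large noise false alarms wash out the hits; the detection gain is largest at a moderate noise level, which is the signature SR effect.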
Evans, Steven N.; Hening, Alexandru; Schreiber, Sebastian J.
2015-01-01
We consider a population living in a patchy environment that varies stochastically in space and time. The population is composed of two morphs (that is, individuals of the same species with different genotypes). In terms of survival and reproductive success, the associated phenotypes differ only in their habitat selection strategies. We compute invasion rates corresponding to the rates at which the abundance of an initially rare morph increases in the presence of the other morph established at equilibrium. If both morphs have positive invasion rates when rare, then there is an equilibrium distribution such that the two morphs coexist; that is, there is a protected polymorphism for habitat selection. Alternatively, if one morph has a negative invasion rate when rare, then it is asymptotically displaced by the other morph under all initial conditions where both morphs are present. We refine the characterization of an evolutionary stable strategy for habitat selection from [Schreiber, 2012] in a mathematically rigorous manner. We provide a necessary and sufficient condition for the existence of an ESS that uses all patches and determine when using a single patch is an ESS. We also provide an explicit formula for the ESS when there are two habitat types. We show that adding environmental stochasticity results in an ESS that, when compared to the ESS for the corresponding model without stochasticity, spends less time in patches with larger carrying capacities and possibly makes use of sink patches, thereby practicing a spatial form of bet hedging. PMID:25151369
Analytical Models of Cross-Layer Protocol Optimization in Real-Time Wireless Sensor Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
The real-time interactions among the nodes of a wireless sensor network (WSN) to cooperatively process data from multiple sensors are modeled. Quality-of-service (QoS) metrics are associated with the quality of fused information: throughput, delay, packet error rate, etc. Multivariate point process (MVPP) models of discrete random events in WSNs establish stochastic characteristics of optimal cross-layer protocols. Discrete-event, cross-layer interactions in mobile ad hoc network (MANET) protocols have been modeled using a set of concatenated design parameters and associated resource levels by the MVPPs. Characterization of the "best" cross-layer designs for a MANET is formulated by applying the general theory of martingale representations to controlled MVPPs. Performance is described in terms of concatenated protocol parameters and controlled through conditional rates of the MVPPs. Modeling limitations to determination of closed-form solutions versus explicit iterative solutions for ad hoc WSN controls are examined.
Optimal Operation of Energy Storage in Power Transmission and Distribution
NASA Astrophysics Data System (ADS)
Akhavan Hejazi, Seyed Hossein
In this thesis, we investigate optimal operation of energy storage units in power transmission and distribution grids. At transmission level, we investigate the problem where an investor-owned, independently operated energy storage system seeks to offer energy and ancillary services in the day-ahead and real-time markets. We specifically consider the case where a significant portion of the power generated in the grid is from renewable energy resources and there exists significant uncertainty in system operation. In this regard, we formulate a stochastic programming framework to choose optimal energy and reserve bids for the storage units that takes into account the fluctuating nature of the market prices due to the randomness in the renewable power generation availability. At distribution level, we develop a comprehensive data set to model various stochastic factors on power distribution networks, with focus on networks that have high penetration of electric vehicle charging load and distributed renewable generation. Furthermore, we develop a data-driven stochastic model for energy storage operation at distribution level, where the distribution of nodal voltage and line power flow are modelled as stochastic functions of the energy storage unit's charge and discharge schedules. In particular, we develop new closed-form stochastic models for such key operational parameters in the system. Our approach is analytical and allows formulating tractable optimization problems. Yet, it does not involve any restricting assumption on the distribution of random parameters; hence, it results in accurate modeling of uncertainties. By considering the specific characteristics of random variables, such as their statistical dependencies and often irregularly-shaped probability distributions, we propose a non-parametric chance-constrained optimization approach to operate and plan energy storage units in power distribution grids.
In the proposed stochastic optimization, we consider uncertainty from various elements, such as solar photovoltaic generation, electric vehicle chargers, and residential baseloads, in the form of discrete probability functions. In the last part of this thesis, we address some other resources and concepts for enhancing the operation of power distribution and transmission systems. In particular, we propose a new framework to determine the best sites, sizes, and optimal payment incentives under special contracts for committed-type distributed generation (DG) projects to offset distribution network investment costs. In this framework, the aim is to allocate DGs such that the profit gained by the distribution company is maximized while each DG unit's individual profit is also taken into account to assure that private DG investment remains economical.
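A minimal sketch of the non-parametric chance-constrained idea, under strong simplifying assumptions (a single line limit, uncertainty reduced to a sampled scalar load; the function and names are hypothetical, not from the thesis): the constraint P(load − d > capacity) ≤ ε is enforced directly from samples via an empirical quantile, with no distributional assumption:

```python
import numpy as np

def min_discharge(load_samples, line_capacity, eps=0.05):
    """Smallest storage discharge d such that the empirical probability
    of overloading the line, P(load - d > capacity), is at most eps.
    Uses only the sample (1 - eps)-quantile: no parametric assumption."""
    q = np.quantile(load_samples, 1.0 - eps)
    return max(0.0, q - line_capacity)
```

For example, with 100 historical load samples and a line limit of 90, `min_discharge` returns the headroom needed so that at most 5% of sampled scenarios overload the line.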
Use of a Portable Stimulator to Treat GWI
2016-10-01
AWARD NUMBER: W81XWH-14-1-0598. PRINCIPAL INVESTIGATOR: Jorge M... [Figure residue: center-of-pressure equilibrium in veterans during stochastic-noise electrical stimulation; axes were electrical stimulation time (sec) vs. current (mA).]
NASA Technical Reports Server (NTRS)
Davis, Brynmor; Kim, Edward; Piepmeier, Jeffrey; Hildebrand, Peter H. (Technical Monitor)
2001-01-01
Many new Earth remote-sensing instruments are embracing both the advantages and added complexity that result from interferometric or fully polarimetric operation. To increase instrument understanding and functionality a model of the signals these instruments measure is presented. A stochastic model is used as it recognizes the non-deterministic nature of any real-world measurements while also providing a tractable mathematical framework. A stationary, Gaussian-distributed model structure is proposed. Temporal and spectral correlation measures provide a statistical description of the physical properties of coherence and polarization-state. From this relationship the model is mathematically defined. The model is shown to be unique for any set of physical parameters. A method of realizing the model (necessary for applications such as synthetic calibration-signal generation) is given and computer simulation results are presented. The signals are constructed using the output of a multi-input multi-output linear filter system, driven with white noise.
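The realization method described, filtering white noise to obtain signals with prescribed statistical structure, can be illustrated in miniature. The sketch below is illustrative, not the paper's construction: a 2-input, 2-output memoryless instance in which a Cholesky-factor mixing matrix imposes a target cross-correlation between two Gaussian channels; temporal (spectral) correlation would be added by following each channel with a linear filter:

```python
import numpy as np

def correlated_gaussian_pair(n, rho, seed=0):
    """Two zero-mean, unit-variance Gaussian signals with target
    cross-correlation rho, built by linearly mixing two independent
    white-noise inputs (a 2-input, 2-output linear-system instance)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((2, n))                # independent white noise
    mix = np.array([[1.0, 0.0],
                    [rho, np.sqrt(1.0 - rho**2)]])  # Cholesky factor of target covariance
    return mix @ w
```

The same idea extends to more channels (larger mixing matrix) and to coloured spectra (replace the memoryless mix with FIR/IIR filters), which is the multi-input multi-output filter system the abstract refers to.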
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-29
... allocations for Atlantic migratory group cobia and establishes control rules for king mackerel, Spanish... migratory groups for cobia; and establishes annual catch limits (ACLs), annual catch targets (ACTs), and... be added back into the FMP at a later date. Cobia Migratory Groups This final rule establishes two...
ERIC Educational Resources Information Center
Bryant, Richard A.; Moulds, Michelle L.; Guthrie, Rachel M.; Dang, Suzanne T.; Mastrodomenico, Julie; Nixon, Reginald D. V.; Felmingham, Kim L.; Hopwood, Sally; Creamer, Mark
2008-01-01
Previous studies have reported that adding cognitive restructuring (CR) to exposure therapy does not enhance treatment gains in posttraumatic stress disorder (PTSD). This study investigated the extent to which CR would augment treatment response when provided with exposure therapy. The authors randomly allocated 118 civilian trauma survivors with…
Navy Littoral Combat Ship (LCS)/Frigate Program: Background and Issues for Congress
2015-12-22
the ships. The Navy continues to review manning to determine appropriate levels, and is adding 20 berths to all seaframes. The increased berthing ...ships would determine the allocation of the three FY2010 ships, with the winning team getting two of the FY2010 ships and the other team getting one
An Exploration of Challenges Facing Division III Athletic Directors
ERIC Educational Resources Information Center
Engbers, Jeffrey L.
2011-01-01
The purpose of this study was to establish a basic understanding of the challenges associated with directing athletic programs at NCAA Division III Institutions. Specifically, this study identified the frequency, intensity, and time allocated to common challenges facing the position of the NCAA Division III AD. The challenges were examined using…
NASA Astrophysics Data System (ADS)
Pan, Wei; Wang, Xianjia; Zhong, Yong-guang; Yu, Lean; Jie, Cao; Ran, Lun; Qiao, Han; Wang, Shouyang; Xu, Xianhao
2012-06-01
Data communication service has an important influence on e-commerce. The key challenge for users is, ultimately, to select a suitable provider. In this article, however, we focus not on this aspect but on the viewpoint and decision-making of providers regarding order allocation and pricing policy when orders exceed service capacity. This is a multiple-criteria decision-making problem, with criteria such as profit and cancellation ratio. Moreover, in realistic situations much of the input information is uncertain, so the problem becomes very complex in a real-life environment. In this setting, fuzzy set theory is well suited to the problem. Our fuzzy model is formulated to simultaneously consider the imprecision of information, price-sensitive demand, stochastic variables, cancellation fees, and a general membership function. To solve the problem, a new fuzzy programming method is developed. Finally, a numerical example illustrates the proposed method. The results show that it is effective for determining the suitable order set and pricing policy of a provider in data communication service with different quality-of-service (QoS) levels.
Asymptotic theory of time-varying social networks with heterogeneous activity and tie allocation.
Ubaldi, Enrico; Perra, Nicola; Karsai, Márton; Vezzani, Alessandro; Burioni, Raffaella; Vespignani, Alessandro
2016-10-24
The dynamics of social networks are driven by the interplay between diverse mechanisms that still challenge our theoretical and modelling efforts. Among them, two are known to play a central role in shaping the networks' evolution, namely the heterogeneous propensity of individuals to i) be socially active and ii) establish new social relationships with their alters. Here, we empirically characterise these two mechanisms in seven real networks describing temporal human interactions in three different settings: scientific collaborations, Twitter mentions, and mobile phone calls. We find that the individuals' social activity and their strategy in choosing the ties to which to allocate their social interactions can be quantitatively described and encoded in a simple stochastic network modelling framework. The Master Equation of the model can be solved in the asymptotic limit. The analytical solutions provide an explicit description of both the system dynamics and the dynamical scaling laws characterising crucial aspects of the evolution of the networks. The analytical predictions accurately match the empirical observations, thus validating the theoretical approach. Our results provide a rigorous dynamical systems framework that can be extended to include other processes shaping social dynamics and to generate data-driven predictions for the asymptotic behaviour of social networks.
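A toy version of this kind of modelling framework can be written in a few lines, assuming one common functional choice for the exploration probability (the (1 + k/c)^(−β) form); all parameter names and defaults here are illustrative, not fitted to the paper's data:

```python
import random
from collections import Counter

def simulate(n_agents, steps, c=1.0, beta=1.0, activity=0.1, seed=0):
    """Activity-driven network growth with tie reinforcement: an active
    agent opens a NEW tie with probability (1 + k/c)**(-beta), where k
    is its current degree; otherwise it re-contacts an existing alter."""
    rng = random.Random(seed)
    ties = {i: set() for i in range(n_agents)}
    contacts = Counter()  # (i, j) interaction counts, i < j
    for _ in range(steps):
        for i in range(n_agents):
            if rng.random() >= activity:
                continue                       # agent i inactive this step
            k = len(ties[i])
            if k == 0 or rng.random() < (1 + k / c) ** (-beta):
                j = rng.randrange(n_agents)    # explore: random new alter
                if j == i:
                    continue
                ties[i].add(j); ties[j].add(i)
            else:
                j = rng.choice(sorted(ties[i]))  # exploit: reinforce old tie
            contacts[tuple(sorted((i, j)))] += 1
    return ties, contacts
```

Sweeping `beta` interpolates between ever-expanding ego networks (small `beta`) and strongly reinforced, nearly fixed social circles (large `beta`), the two regimes the asymptotic theory distinguishes.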
Active-Pixel Image Sensor With Analog-To-Digital Converters
NASA Technical Reports Server (NTRS)
Fossum, Eric R.; Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.
1995-01-01
Proposed single-chip integrated-circuit image sensor contains 128 x 128 array of active pixel sensors at 50-micrometer pitch. Output terminals of all pixels in each given column connected to analog-to-digital (A/D) converter located at bottom of column. Pixels scanned in semiparallel fashion, one row at time; during time allocated to scanning row, outputs of all active pixel sensors in row fed to respective A/D converters. Design of chip based on complementary metal oxide semiconductor (CMOS) technology, and individual circuit elements fabricated according to 2-micrometer CMOS design rules. Active pixel sensors designed to operate at video rate of 30 frames/second, even at low light levels. A/D scheme based on first-order Sigma-Delta modulation.
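The A/D scheme mentioned, first-order Sigma-Delta modulation, can be sketched as a few lines of software (a behavioural model, not the chip's circuit): the loop integrates the error between the input and the 1-bit feedback, so the running mean of the output bit stream tracks the input level:

```python
def sigma_delta(samples):
    """First-order sigma-delta modulator: accumulate the error between
    the input (assumed in [-1, 1]) and the 1-bit feedback, then output
    the sign of the accumulator as the next bit."""
    integrator, out, bits = 0.0, 1.0, []
    for x in samples:
        integrator += x - out                  # accumulate quantization error
        out = 1.0 if integrator >= 0 else -1.0  # 1-bit quantizer
        bits.append(out)
    return bits
```

Averaging (decimating) the bit stream recovers the input: for a constant input of 0.5, about three quarters of the output bits are +1, so the bit mean converges to 0.5.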
Designing Two-Layer Optical Networks with Statistical Multiplexing
NASA Astrophysics Data System (ADS)
Addis, B.; Capone, A.; Carello, G.; Malucelli, F.; Fumagalli, M.; Pedrin Elli, E.
The possibility of adding multi-protocol label switching (MPLS) support to transport networks is considered an important opportunity by telecom carriers that want to add packet services and applications to their networks. However, the question that arises is whether it is suitable to have MPLS nodes just at the edge of the network to collect packet traffic from users, or also to introduce MPLS facilities on a subset of the core nodes in order to exploit packet switching flexibility and multiplexing, thus achieving better bandwidth allocation. In this article, we address this complex decisional problem with the support of a mathematical programming approach. We consider two-layer networks where MPLS is overlaid on top of transport networks, either synchronous digital hierarchy (SDH) or wavelength division multiplexing (WDM), depending on the required link speed. The decisions take into account the trade-off between the cost of adding MPLS support in the core nodes and the savings in link bandwidth allocation due to the statistical multiplexing and traffic grooming effects induced by MPLS nodes. The traffic matrix specifies for each point-to-point request a pair of values: a mean traffic value and an additional one. Under this traffic model, the effect of statistical multiplexing on a link allows the allocation of a capacity equal to the sum of all the mean values of the traffic demands routed on the link plus only the highest additional one. The proposed approach is suitable for solving real instances in reasonable time.
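The statistical-multiplexing rule stated in the abstract translates directly into code. A minimal sketch (the function name and input layout are illustrative): each demand routed on a link is a (mean, additional) pair, and the required link capacity is the sum of the means plus only the largest additional value:

```python
def link_capacity(demands):
    """Capacity needed on a link under the article's traffic model:
    demands is a list of (mean, additional) pairs; allocate the sum of
    all mean rates plus only the single largest additional rate."""
    means = sum(m for m, a in demands)
    peak_extra = max((a for m, a in demands), default=0.0)
    return means + peak_extra
```

For three demands (10, 5), (20, 8) and (5, 2), multiplexed allocation needs 35 + 8 = 43 units, versus 50 for a worst-case (mean + additional per demand) allocation, which is exactly the saving that makes core MPLS nodes attractive.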
What foods are US supermarkets promoting? A content analysis of supermarket sales circulars.
Martin-Biggers, Jennifer; Yorkin, Meredith; Aljallad, Carena; Ciecierski, Caroline; Akhabue, Ivbaria; McKinley, Jessica; Hernandez, Katherine; Yablonsky, Courtney; Jackson, Rachel; Quick, Virginia; Byrd-Bredbenner, Carol
2013-03-01
This study compared the types of foods advertised in supermarket newspaper circulars across geographic region (US Census regions: northeast [n=9], midwest [n=15], south [n=14], and west [n=13]), obesity-rate region (i.e., states with CDC adult obesity rates of <25% [n=14], 25 to <30% [n=24], and ≥ 30% [n=13]), and with MyPlate recommendations. All food advertisements on the first page of each circular were measured (±0.12-in.) to determine the proportion of space occupied and categorized according to food group. Overall, ≥ 50% of the front page of supermarket sales circulars was devoted to protein foods and grains; fruits, vegetables, and dairy, combined, were allocated only about 25% of the front page. The southern geographic region and the highest obesity-rate region both devoted significantly more advertising space to sweets, particularly sugar-sweetened beverages. The lowest obesity-rate region and western geographic region allocated the most space to fruits. Vegetables were allocated the least space in the western geographic region. Grains were the only food group represented in ads in proportions approximately equal to amounts depicted in the MyPlate icon. Protein foods exceeded and fruits, dairy, and vegetables fell below comparable MyPlate proportional areas. Findings suggest supermarket ads do not consistently emphasize foods that support healthy weight and MyPlate recommendations. More research is needed to determine how supermarket newspaper circulars can be used to promote healthy dietary patterns. Copyright © 2012 Elsevier Ltd. All rights reserved.
OPTN/SRTR 2011 Annual Data Report: lung.
Valapour, M; Paulson, K; Smith, J M; Hertz, M I; Skeans, M A; Heubner, B M; Edwards, L B; Snyder, J J; Israni, A K; Kasiske, B L
2013-01-01
Lungs are allocated in part based on the Lung Allocation Score (LAS), which considers risk of death without transplant and posttransplant. Wait-list additions have been increasing steadily after an initial decline following LAS implementation. In 2011, the largest number of adult candidates were added to the waiting list in a single year since 1998; donation and transplant rates have been unable to keep pace with wait-list additions. Candidates aged 65 years or older have been added faster than candidates in other age groups. After an initial decline following LAS implementation, wait-list mortality increased to 15.7 per 100 wait-list years in 2011. Short- and long-term graft survival improved in 2011; 10-year graft failure fell to an all-time low. Since 1998, the number of new pediatric (aged 0-11 years) candidates added yearly to the waiting list has declined. In 2011, 19 pediatric lung transplants were performed, a transplant rate of 34.7 per 100 wait-list years. The percentage of patients hospitalized before transplant has not changed. Both graft and patient survival have continued to improve over the past decade. Posttransplant complications for pediatric lung transplant recipients, similar to complications for adult recipients, include hypertension, renal dysfunction, diabetes, bronchiolitis obliterans syndrome, and malignancy. © Copyright 2012 The American Society of Transplantation and the American Society of Transplant Surgeons.
A system dynamics model of a large R&D program
NASA Astrophysics Data System (ADS)
Ahn, Namsung
Organizations with large R&D activities must deal with a hierarchy of decisions regarding resource allocation. At the highest level, the decision concerns the total allocation to R&D as some portion of revenue. The middle level deals with the allocation among phases of the R&D process. The lowest level concerns the allocation of resources to specific projects within a specific phase. This study focuses on developing an R&D model for the middle level of allocation, i.e., the allocation among phases of research such as basic research, development, and demonstration. The methodology used to develop the R&D model is System Dynamics. Our modeling concept is innovative in representing each phase of R&D as consisting of two parts: projects under way, and an inventory of successful but not-yet-exploited projects. In a simple world, this concept can yield an exact analytical solution for the allocation of resources among phases. But in the real world, the concept should be improved by adding more complex structures with nonlinear behaviors. Two particular nonlinear feedbacks are incorporated into the R&D model. The probability of success for any specific project is assumed to depend partly on the resources allocated to the project. Further, the time required to reach a conclusion regarding the success or failure of a project is also assumed to depend on the level of resources allocated. In addition, the number of successful projects partly depends on the inventory of potential ideas in the previous stage that can be exploited. This model can provide R&D management with insights into the effect of changing allocations to phases, whether those changes are internally or externally driven. With this model, it is possible to study the effectiveness of management decisions in a continuous fashion. Managers can predict payoffs for a host of different policies.
In addition, as new research results accumulate, a reassessment of program goals can be implemented easily and allocations adjusted to continuously enhance the likelihood of success and to optimize payoffs. Finally, this model can give managers a quantitative rationale for program evaluation and permit the quantitative assessment of various externally imposed changes. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
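The two-part phase concept (projects under way plus an inventory of successful, not-yet-exploited projects) can be sketched as a small stock-and-flow simulation. This is a toy linear version without the thesis's nonlinear feedbacks; all rates, names and defaults are illustrative:

```python
def simulate_pipeline(steps, inflow=10.0, p_success=(0.5, 0.4, 0.3), rate=0.2):
    """Each R&D phase k holds two stocks: projects under way (under[k])
    and an inventory of successful, not-yet-exploited projects (stock[k])
    that feeds starts in the next phase. Discrete-time Euler stepping."""
    n = len(p_success)
    under = [0.0] * n
    stock = [0.0] * n
    for _ in range(steps):
        for k in range(n):
            # Phase 0 starts from external funding; later phases exploit
            # a fraction of the upstream inventory of successes.
            starts = inflow if k == 0 else rate * stock[k - 1]
            if k > 0:
                stock[k - 1] -= starts
            done = rate * under[k]              # projects reaching a conclusion
            under[k] += starts - done
            stock[k] += p_success[k] * done     # only successes join the inventory
    return under, stock
```

Running this to steady state shows how a change in one phase's throughput or success probability propagates downstream, which is the kind of allocation question the full System Dynamics model addresses with its nonlinear feedbacks.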
Production, depreciation and the size distribution of firms
NASA Astrophysics Data System (ADS)
Ma, Qi; Chen, Yongwang; Tong, Hui; Di, Zengru
2008-05-01
Many empirical studies indicate that firm size distributions in different industries or countries exhibit similar characteristics. Among them, the fact that many firm size distributions obey a power law, especially in the upper tail, has been most discussed. Here we present an agent-based model to describe the evolution of manufacturing firms. Some basic economic behaviors are taken into account: production with decreasing marginal returns, preferential allocation of investments, and stochastic depreciation. The model gives a steady-state size distribution of firms that obeys a power law. The effect of the parameters on the power exponent is analyzed. Theoretical results are derived from both the Fokker-Planck equation and the Kesten process, and they are well consistent with the numerical results.
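The stochastic-depreciation mechanism is closely related to the Kesten process the authors invoke. A minimal sketch (parameter values illustrative, not the paper's): multiplicative shocks with E[log A] < 0 plus an additive floor yield a stationary size distribution whose upper tail follows a power law:

```python
import random

def firm_sizes(n_firms, steps=2000, mu=-0.1, sigma=0.4, b=1.0, seed=0):
    """Sample from the Kesten recursion S <- A*S + B with lognormal
    multiplicative shock A (E[log A] = mu < 0 ensures stationarity) and
    additive floor B = b. The upper tail of the stationary law is a
    power law even though each shock is lognormal."""
    rng = random.Random(seed)
    sizes = []
    for _ in range(n_firms):
        s = 1.0
        for _ in range(steps):
            s = rng.lognormvariate(mu, sigma) * s + b
        sizes.append(s)
    return sizes
```

Plotting the empirical tail P(S > s) of the returned sample on log-log axes shows the approximately straight upper tail characteristic of Kesten dynamics.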
Chen, Bor-Sen; Hsu, Chih-Yuan
2012-10-26
Collective rhythms of gene regulatory networks have been a subject of considerable interest for biologists and theoreticians, in particular the synchronization of dynamic cells mediated by intercellular communication. Synchronization of a population of synthetic genetic oscillators is an important design goal in practical applications, because such a population distributed over different host cells needs to exploit molecular phenomena simultaneously in order for a biological phenomenon to emerge. However, this synchronization may be corrupted by intrinsic kinetic parameter fluctuations and extrinsic environmental molecular noise. Therefore, robust synchronization is an important design topic for nonlinear stochastic coupled synthetic genetic oscillators with intrinsic kinetic parameter fluctuations and extrinsic molecular noise. Initially, the condition for robust synchronization of synthetic genetic oscillators was derived based on a Hamilton-Jacobi inequality (HJI). We found that if the synchronization robustness confers enough intrinsic robustness to tolerate intrinsic parameter fluctuations and enough extrinsic robustness to filter the environmental noise, then robust synchronization of coupled synthetic genetic oscillators is guaranteed. If the synchronization robustness of a population of nonlinear stochastic coupled synthetic genetic oscillators distributed over different host cells cannot be maintained, then robust synchronization can be enhanced by external control input through quorum-sensing molecules.
In order to simplify the analysis and design of robust synchronization of nonlinear stochastic synthetic genetic oscillators, a fuzzy interpolation method was employed to interpolate several local linear stochastic coupled systems to approximate the nonlinear stochastic coupled system, so that the HJI-based synchronization design problem could be replaced by a simple linear matrix inequality (LMI)-based design problem, which can be solved easily with the help of the LMI toolbox in MATLAB. If the synchronization robustness criterion (synchronization robustness ≥ intrinsic robustness + extrinsic robustness) is satisfied, then the stochastic coupled synthetic oscillators can be robustly synchronized in spite of intrinsic parameter fluctuations and extrinsic noise. If the criterion is violated, an external control scheme that adds inducer can be designed to improve the synchronization robustness of the coupled synthetic genetic oscillators. The investigated robust synchronization criteria and proposed external control method are useful for a population of coupled synthetic networks with emergent synchronization behavior, especially for multi-cellular, engineered networks.
Neural cryptography with feedback.
Ruttor, Andreas; Kinzel, Wolfgang; Shacham, Lanir; Kanter, Ido
2004-04-01
Neural cryptography is based on a competition between attractive and repulsive stochastic forces. A feedback mechanism is added to neural cryptography which increases the repulsive forces. Using numerical simulations and an analytic approach, the probability of a successful attack is calculated for different model parameters. Scaling laws are derived which show that feedback improves the security of the system. In addition, a network with feedback generates a pseudorandom bit sequence which can be used to encrypt and decrypt a secret message.
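Neural cryptography is usually realized with tree parity machines; the sketch below is a bare-bones version of that construction (without the feedback mechanism this paper adds; hyperparameters illustrative). Two machines trained on shared random inputs, updating only when their outputs agree, gradually synchronize their bounded integer weights, which then serve as a shared key:

```python
import random

class TPM:
    """Tree parity machine: K hidden units, N inputs each, integer
    weights bounded in [-L, L]. Two TPMs trained on common random
    inputs synchronize via mutual learning."""
    def __init__(self, K=3, N=10, L=3, seed=None):
        rng = random.Random(seed)
        self.K, self.N, self.L = K, N, L
        self.w = [[rng.randint(-L, L) for _ in range(N)] for _ in range(K)]

    def output(self, x):
        # Hidden-unit signs (ties mapped to -1 here, a convention choice),
        # overall output is their parity (product).
        self.sigma = [1 if sum(wi * xi for wi, xi in zip(row, xs)) > 0 else -1
                      for row, xs in zip(self.w, x)]
        tau = 1
        for s in self.sigma:
            tau *= s
        return tau

    def update(self, x, tau):
        # Hebbian rule: only hidden units agreeing with the common
        # output move, and weights are clipped back into [-L, L].
        for k in range(self.K):
            if self.sigma[k] == tau:
                for i in range(self.N):
                    w = self.w[k][i] + x[k][i] * tau
                    self.w[k][i] = max(-self.L, min(self.L, w))
```

In a synchronization run, both parties evaluate `output` on the same random ±1 input block and call `update` only when the two outputs match; the paper's feedback extension modifies how subsequent inputs are generated to strengthen the repulsive forces against an attacker.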
Stochastic models for atomic clocks
NASA Technical Reports Server (NTRS)
Barnes, J. A.; Jones, R. H.; Tryon, P. V.; Allan, D. W.
1983-01-01
For the atomic clocks used in the National Bureau of Standards Time Scales, an adequate model is the superposition of white FM, random walk FM, and linear frequency drift for times longer than about one minute. The model was tested on several clocks using maximum likelihood techniques for parameter estimation and the residuals were acceptably random. Conventional diagnostics indicate that additional model elements contribute no significant improvement to the model even at the expense of the added model complexity.
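The clock model, a superposition of white FM, random-walk FM, and linear frequency drift, is straightforward to simulate. A minimal sketch (the noise coefficients are illustrative placeholders, not NBS-estimated values):

```python
import numpy as np

def clock_frequency(n, tau=1.0, h_wfm=1e-12, h_rwfm=1e-14, drift=1e-15, seed=0):
    """Fractional-frequency time series sampled every tau seconds, as the
    superposition of white FM, random-walk FM (integrated white noise),
    and linear frequency drift."""
    rng = np.random.default_rng(seed)
    white = h_wfm * rng.standard_normal(n)            # white FM
    rwalk = h_rwfm * np.cumsum(rng.standard_normal(n))  # random-walk FM
    t = np.arange(n) * tau
    return white + rwalk + drift * t                  # add linear drift
```

Synthetic series like this are what one would feed to a maximum-likelihood or Allan-variance estimator to check that the three components can be separated, mirroring the residual-whiteness test described in the abstract.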
QoS-Oriented High Dynamic Resource Allocation in Vehicular Communication Networks
2014-01-01
Vehicular ad hoc networks (VANETs) are emerging as a new research area and are attracting increasing attention from both industry and research communities. In this context, a dynamic resource allocation policy that maximizes the use of available resources and meets the quality-of-service (QoS) requirements of constraining applications is proposed. It combines a fair packet scheduling policy with a new adaptive QoS-oriented call admission control (CAC) scheme based on vehicle density variation. This scheme decides whether a connection request is to be admitted into the system, while providing fair access and guaranteeing the desired throughput. The proposed algorithm showed good performance when tested in a real-world environment. PMID:24616639
NASA Astrophysics Data System (ADS)
Sinsky, E.; Zhu, Y.; Li, W.; Guan, H.; Melhauser, C.
2017-12-01
Optimal forecast quality is crucial for the preservation of life and property. Improving monthly forecast performance over both the tropics and extra-tropics requires attention to various physical aspects such as the representation of the underlying SST, the model physics, and the representation of model physics uncertainty in an ensemble forecast system. This work focuses on the impact of stochastic physics, SST and the convection scheme on forecast performance at the sub-seasonal scale over the tropics and extra-tropics, with emphasis on the Madden-Julian Oscillation (MJO). A 2-year period is evaluated using the National Centers for Environmental Prediction (NCEP) Global Ensemble Forecast System (GEFS). Three experiments with configurations different from the operational GEFS were performed to illustrate the impact of the stochastic physics, SST and convection scheme. These experiments are compared against a control experiment (CTL), which consists of the operational GEFS with its integration extended from 16 to 35 days. The three configurations are: 1) SPs, which uses Stochastically Perturbed Physics Tendencies (SPPT), Stochastic Perturbed Humidity (SHUM) and Stochastic Kinetic Energy Backscatter (SKEB); 2) SPs+SST_bc, which combines SPs with a bias-corrected forecast SST from the NCEP Climate Forecast System Version 2 (CFSv2); and 3) SPs+SST_bc+SA_CV, which combines SPs, the bias-corrected forecast SST and a scale-aware convection scheme. Compared with the CTL experiment, SPs shows substantial improvement. MJO skill improved by about 4 lead days over the 2-year period. Improvement is also seen over the extra-tropics due to the updated stochastic physics, with a 3.1% and a 4.2% improvement during weeks 3 and 4 over the northern and southern hemispheres, respectively. Improvement is also seen when the bias-corrected CFSv2 SST is combined with SPs.
Additionally, forecast performance improves when the scale-aware convection scheme is added (SPs+SST_bc+SA_CV), especially over the tropics. Among the three experiments, SPs+SST_bc+SA_CV gives the best MJO forecast skill.
Sonuga-Barke, Edmund J S; Thompson, Margaret; Daley, David; Laver-Bradbury, Cathy
2004-11-01
The effectiveness of parent training (PT) delivered as part of specialist tier-two services for preschool AD/HD children has recently been demonstrated. This study assessed the effectiveness of the same PT programme when delivered as part of routine primary care by non-specialist nurses. A sample of 89 3-year-old children with preschool AD/HD took part in a controlled trial of an eight-week (one hour a week), health-visitor-delivered PT package. Children, allocated randomly to PT (n = 59) and waiting-list control (WLC; n = 30) groups, were compared. PT did not reduce AD/HD symptoms. Maternal well-being decreased in both PT and WLC groups. While PT is an effective intervention for preschool AD/HD when delivered in specialized settings, these benefits do not appear to generalize when the programme is delivered as part of routine primary care by non-specialist nurses.
Modelling and performance analysis of clinical pathways using the stochastic process algebra PEPA.
Yang, Xian; Han, Rui; Guo, Yike; Bradley, Jeremy; Cox, Benita; Dickinson, Robert; Kitney, Richard
2012-01-01
Hospitals nowadays have to serve numerous patients with limited medical staff and equipment while maintaining healthcare quality. Clinical pathway informatics is regarded as an efficient way to solve a series of hospital challenges. To date, conventional research has lacked a mathematical model to describe clinical pathways. Existing vague descriptions cannot accurately capture the complexities of clinical pathways, which hinders their effective management and further optimization. Given this motivation, this paper presents a clinical pathway management platform, the Imperial Clinical Pathway Analyzer (ICPA). By extending the stochastic process algebra PEPA (Performance Evaluation Process Algebra), ICPA introduces a clinical-pathway-specific model: clinical pathway PEPA (CPP). ICPA can simulate stochastic behaviours of a clinical pathway by extracting information from public clinical databases and other related documents using CPP. Thus, the performance of a clinical pathway, including its throughput, resource utilisation and passage time, can be quantitatively analysed. A typical clinical pathway for stroke, extracted from a UK hospital, is used to illustrate the effectiveness of ICPA. Three application scenarios are tested using ICPA: 1) redundant resources are identified and removed, so the number of patients served is maintained at lower cost; 2) the patient passage time is estimated, providing the likelihood that patients can leave hospital within a specific period; 3) the maximum number of input patients is found, helping hospitals decide whether they can serve more patients with the existing resource allocation.
ICPA is an effective platform for clinical pathway management: 1) ICPA can describe a variety of components (state, activity, resource and constraints) in a clinical pathway, thus facilitating the proper understanding of complexities involved in it; 2) ICPA supports the performance analysis of clinical pathway, thereby assisting hospitals to effectively manage time and resources in clinical pathway.
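The passage-time analysis described in this abstract can be sketched in miniature as a Gillespie-style simulation of the continuous-time Markov chain underlying a pathway model. The three stages, their exponential activity rates, and the 22.5-hour analytic mean below are illustrative assumptions, not values from ICPA or the paper's stroke pathway.

```python
import random

# Hypothetical three-stage pathway with exponential activity rates
# (events per hour); stages and rates are made up for illustration.
RATES = {
    ("triage", "ct_scan"): 2.0,
    ("ct_scan", "ward"): 0.5,
    ("ward", "discharge"): 0.05,
}

def simulate_passage_time(rng):
    """One run through the pathway; returns total sojourn time in hours."""
    state, t = "triage", 0.0
    while state != "discharge":
        # Each stage has a single outgoing activity in this toy pathway,
        # so the sojourn time is one exponential sample.
        (nxt, rate), = [(dst, r) for (src, dst), r in RATES.items()
                        if src == state]
        t += rng.expovariate(rate)
        state = nxt
    return t

rng = random.Random(42)
times = [simulate_passage_time(rng) for _ in range(5000)]
mean_passage = sum(times) / len(times)
# Analytic mean = 1/2.0 + 1/0.5 + 1/0.05 = 22.5 hours
print(round(mean_passage, 1))
```

The empirical distribution of `times` plays the role of ICPA's passage-time analysis: its quantiles give the likelihood that a patient leaves within a given period.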
Analysis of Phase-Type Stochastic Petri Nets With Discrete and Continuous Timing
NASA Technical Reports Server (NTRS)
Jones, Robert L.; Goode, Plesent W. (Technical Monitor)
2000-01-01
The Petri net formalism is useful in studying many discrete-state, discrete-event systems exhibiting concurrency, synchronization, and other complex behavior. As a bipartite graph, the net can conveniently capture salient aspects of the system. As a mathematical tool, the net can specify an analyzable state space. Indeed, one can reason about certain qualitative properties (from state occupancies) and how they arise (the sequence of events leading there). By introducing deterministic or random delays, the model is forced to sojourn in states some amount of time, giving rise to an underlying stochastic process, one that can be specified in a compact way and capable of providing quantitative, probabilistic measures. We formalize a new non-Markovian extension to the Petri net that captures both discrete and continuous timing in the same model. The approach affords efficient, stationary analysis in most cases and efficient transient analysis under certain restrictions. Moreover, this new formalism has the added benefit in modeling fidelity stemming from the simultaneous capture of discrete- and continuous-time events (as opposed to capturing only one and approximating the other). We show how the underlying stochastic process, which is non-Markovian, can be resolved into simpler Markovian problems that enjoy efficient solutions. Solution algorithms are provided that can be easily programmed.
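The mixed discrete/continuous timing that the abstract highlights can be illustrated with a single-transition race, though this toy sketch is not the paper's phase-type formalism: a token leaves a place either via a deterministic "timeout" transition or via an exponentially timed "service" transition, whichever fires first. The delay and rate values are assumptions for illustration.

```python
import random

# Toy race between discrete (deterministic) and continuous (exponential)
# timing in a stochastic-Petri-net-like model; numbers are illustrative.
def fire_once(rng, timeout=1.0, service_rate=2.0):
    service_time = rng.expovariate(service_rate)
    # The transition with the shorter sampled delay fires and
    # consumes the token.
    if service_time < timeout:
        return "service", service_time
    return "timeout", timeout

rng = random.Random(0)
outcomes = [fire_once(rng)[0] for _ in range(10000)]
p_service = outcomes.count("service") / len(outcomes)
# Analytically, P(service fires first) = 1 - exp(-2.0 * 1.0) ~ 0.865
print(round(p_service, 3))
```

Capturing both timing types directly, as here, avoids approximating the deterministic delay with a chain of exponential stages.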
Arnold, Samuel R C; Riches, Vivienne C; Stancliffe, Roger J
2015-09-01
Internationally, various approaches are used for the allocation of individualized funding. When using a data-based approach, a key question is the predictive validity of adaptive behavior versus support needs assessment. This article reports on a subset of data from a larger project that allowed for a comparison of support needs and adaptive behavior assessments when predicting person-centered funding allocation. The first phase of the project involved a trial of the Inventory for Client and Agency Planning (ICAP) adaptive behavior and Instrument for the Classification and Assessment of Support Needs (I-CAN)-Brief Research version support needs assessments. Participants were in receipt of an individual support package allocated using a person-centered planning process, and were stable in their support arrangements. Regression analysis showed that the most useful items in predicting funding allocation came from the I-CAN-Brief Research. No additional variance could be explained by adding the ICAP, or by using the ICAP alone. A further unique approach of including only items from the I-CAN-Brief Research marked as funded supports showed high predictive validity. It appears that support need is a more effective predictor of resource need than adaptive behavior.
Modeling Limited Foresight in Water Management Systems
NASA Astrophysics Data System (ADS)
Howitt, R.
2005-12-01
The inability to forecast future water supplies means that their management inevitably occurs under conditions of limited foresight. Three modeling problems arise: first, what type of objective function is a manager with limited foresight optimizing? Second, how can we measure these objectives? Third, can objective functions that incorporate uncertainty be integrated within the structure of optimizing water management models? The paper reviews the concepts of relative risk aversion and intertemporal substitution that underlie stochastic dynamic preference functions. Some initial results from the estimation of such functions for four different dam operations in northern California are presented and discussed. It appears that the path of previous water decisions and states influences the decision-makers' willingness to trade off water supplies between periods. A compromise modeling approach is developed that incorporates carry-over value functions under limited foresight within a broader network-optimization water management model. The approach uses annual carry-over value functions derived from small-dimension stochastic dynamic programs embedded within a larger-dimension water allocation network. The disaggregation of the carry-over value functions to the broader network is extended using the space rule concept. Initial results suggest that solving such annual nonlinear network optimizations is comparable to, or faster than, solving linear network problems over long time series.
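A carry-over value function of the kind embedded in the network model can be sketched with a tiny stochastic dynamic program: storage carried into next year has a value V(s) computed by value iteration over random inflows and a concave benefit of within-year releases. All numbers below are illustrative, not the paper's estimated California dam functions.

```python
# Toy reservoir: choose carry-over storage s2 after observing inflow q,
# releasing s + q - s2; numbers are illustrative assumptions.
STORAGE = range(11)             # feasible carry-over storage levels
INFLOWS = (2, 4, 6)             # equally likely annual inflows
BETA = 0.95                     # annual discount factor

def benefit(release):
    return release ** 0.5       # diminishing marginal value of released water

def carry_over_values(n_iter=300):
    V = {s: 0.0 for s in STORAGE}
    for _ in range(n_iter):     # value iteration to (near) convergence
        V = {
            s: sum(
                max(benefit(s + q - s2) + BETA * V[s2]
                    for s2 in STORAGE if s2 <= s + q)
                for q in INFLOWS
            ) / len(INFLOWS)
            for s in STORAGE
        }
    return V

V = carry_over_values()
# The carry-over value function should be increasing in storage.
assert all(V[s] < V[s + 1] for s in range(10))
print(round(V[0], 2), round(V[10], 2))
```

The resulting V can then stand in for the future inside a larger single-year network allocation, which is the compromise the abstract describes.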
Interactions among invasive plants: Lessons from Hawai‘i
D'Antonio, Carla M.; Ostertag, Rebecca; Cordell, Susan; Yelenik, Stephanie G.
2017-01-01
Most ecosystems have multiple plant invaders rather than a single invader species, yet ecological studies and management actions focus largely on single invaders. There is a need for general principles regarding invader interactions across varying environmental conditions, so that secondary invasions can be anticipated and managers can allocate resources toward pretreatment or postremoval actions. By reviewing removal experiments conducted in three Hawaiian ecosystems (a dry tropical forest, a seasonally dry mesic forest, and a lowland wet forest), we evaluate the roles that environmental harshness, priority effects, productivity potential, and species interactions play in influencing secondary invasions, defined here as invasions that are influenced either positively (facilitation) or negatively (inhibition/priority effects) by existing invaders. We generate a conceptual model with a surprise index to describe whether long-term plant invader composition and dominance are predictable or stochastic after a system perturbation such as a removal experiment. Under extremely low resource availability, the surprise index is low, whereas in intermediate-resource environments, invader dominance is more stochastic and the surprise index is high. At high resource levels, the surprise index is intermediate: invaders are likely abundant in the environment, but their response to a perturbation is more predictable than at intermediate resource levels. We suggest further testing across environmental gradients to determine key variables that dictate the predictability of postremoval invader composition.
Li, Shuangyan; Li, Xialian; Zhang, Dezhi; Zhou, Lingyun
2017-01-01
This study develops an optimization model to integrate facility location and inventory control for a three-level distribution network consisting of a supplier, multiple distribution centers (DCs), and multiple retailers. The integrated model addressed in this study simultaneously determines three types of decisions: (1) facility location (optimal number, location, and size of DCs); (2) allocation (assignment of suppliers to located DCs and retailers to located DCs, and corresponding optimal transport mode choices); and (3) inventory control decisions on order quantities, reorder points, and amount of safety stock at each retailer and opened DC. A mixed-integer programming model is presented, which considers the carbon emission taxes, multiple transport modes, stochastic demand, and replenishment lead time. The goal is to minimize the total cost, which covers the fixed costs of logistics facilities, inventory, transportation, and CO2 emission tax charges. The aforementioned optimal model was solved using commercial software LINGO 11. A numerical example is provided to illustrate the applications of the proposed model. The findings show that carbon emission taxes can significantly affect the supply chain structure, inventory level, and carbon emission reduction levels. The delay rate directly affects the replenishment decision of a retailer. PMID:28103246
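At toy scale, the location-allocation core of such a model can be solved by brute force rather than with LINGO: enumerate which DCs to open, assign each retailer to its cheapest open DC, and keep the cheapest plan. The fixed and transport costs below are made up for illustration, and inventory, carbon tax, and lead-time terms are omitted.

```python
from itertools import combinations

# Hypothetical costs: fixed cost of opening each DC, and per-retailer
# shipping cost to each DC. Values are illustrative assumptions.
FIXED = {"DC1": 100, "DC2": 80, "DC3": 120}
TRANSPORT = {
    "R1": {"DC1": 10, "DC2": 40, "DC3": 25},
    "R2": {"DC1": 35, "DC2": 15, "DC3": 20},
    "R3": {"DC1": 50, "DC2": 22, "DC3": 12},
}

def best_plan():
    dcs = list(FIXED)
    best = (float("inf"), None, None)
    for k in range(1, len(dcs) + 1):
        for opened in combinations(dcs, k):
            # Assign each retailer to its cheapest open DC.
            assign = {r: min(opened, key=lambda d: TRANSPORT[r][d])
                      for r in TRANSPORT}
            cost = (sum(FIXED[d] for d in opened)
                    + sum(TRANSPORT[r][assign[r]] for r in TRANSPORT))
            if cost < best[0]:
                best = (cost, opened, assign)
    return best

cost, opened, assign = best_plan()
print(cost, opened)
```

In this instance opening DC2 alone is optimal (80 + 40 + 15 + 22 = 157), cheaper than opening DC3 alone (177) or any multi-DC plan; the full paper's mixed-integer model adds inventory and emission-tax terms on top of exactly this trade-off.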
A Context-Aware Paradigm for Information Discovery and Dissemination in Mobile Environments
ERIC Educational Resources Information Center
Lundquist, Doug
2011-01-01
The increasing power and ubiquity of mobile wireless devices are enabling real-time information delivery for many diverse applications. A crucial question is how to allocate finite network resources efficiently and fairly despite the uncertainty common in highly dynamic mobile ad hoc networks. We propose a set of routing protocols, Self-Balancing…
Navy Littoral Combat Ship (LCS)/Frigate Program: Background and Issues for Congress
2016-01-05
and is adding 20 berths to all seaframes. The increased berthing supports small increases in the size of the core crew, mission package detachments...for both the FY2009 ships and the FY2010 ships would determine the allocation of the three FY2010 ships, with the winning team getting two of the
The Massachusetts Community College Performance-Based Funding Formula: A New Model for New England?
ERIC Educational Resources Information Center
Salomon-Fernandez, Yves
2014-01-01
The Massachusetts community college system is entering a second year with funding for each of its 15 schools determined using a new performance-based formula. Under the new model, 50% of each college's allocation is based on performance on metrics related to enrollment and student success, with added incentives for "at-risk" students…
ERIC Educational Resources Information Center
Hatfield, Laura M.; Hatfield, Lance C.
2014-01-01
A corporate sponsorship program, built on an appropriate knowledge base, can help bridge the funding gap that exists in interscholastic athletics between the cost of fielding teams and decreasing budget allocations. This article describes sponsorship in its most effective form as a business concept, the value that athletic departments provide to…
Introducing priority setting and resource allocation in home and community care programs.
Urquhart, Bonnie; Mitton, Craig; Peacock, Stuart
2008-01-01
To use evidence from research to identify and implement priority setting and resource allocation that incorporates both ethical practices and economic principles. Program budgeting and marginal analysis (PBMA) is based on two key economic principles: opportunity cost (i.e. doing one thing instead of another) and the margin (i.e. resource allocation should result in maximum benefit for available resources). An ethical framework for priority setting and resource allocation known as Accountability for Reasonableness (A4R) focuses on making sure that resource allocations are based on a fair decision-making process. It includes the following four conditions: publicity; relevance; appeals; and enforcement. More recent literature on the topic suggests that a fifth condition, that of empowerment, should be added to the Framework. The 2007-08 operating budget for Home and Community Care, excluding the residential sector, was developed using PBMA and incorporating the A4R conditions. Recommendations developed using PBMA were forwarded to the Executive Committee, approved and implemented for the 2007-08 fiscal year operating budget. In addition there were two projects approved for approximately $200,000. PBMA is an improvement over previous practice. Managers of Home and Community Care are committed to using the process for the 2008-09 fiscal year operating budget and expanding its use to include mental health and addictions services. In addition, managers of public health prevention and promotion services are considering using the process.
NASA Astrophysics Data System (ADS)
Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi
2016-04-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, turns the estimation of the flow response into a difficult stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach to the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational cost (a few MC runs), making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method.
These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction in the prior knowledge of input distributions. We provide various examples and comparisons with MC simulations to illustrate the performance of the method.
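The core idea of such a distribution method can be sketched in miniature, though this is not the paper's Buckley-Leverett solver: if the saturation at a fixed point is a deterministic, monotone mapping S = g(V) of a scalar random quantity V (here a hypothetical lognormal front-speed ratio), then the P10/P50/P90 of S follow directly from quantiles of V, with no large Monte Carlo study. The mapping g and the lognormal parameters are assumptions for illustration.

```python
import math
import random
from statistics import NormalDist

MU, SIGMA = 0.0, 0.5            # assumed lognormal parameters for V

def g(v):
    return 1.0 / (1.0 + v)      # monotone decreasing toy mapping V -> S

def v_quantile(p):
    # Lognormal inverse CDF via the standard normal quantile.
    return math.exp(MU + SIGMA * NormalDist().inv_cdf(p))

# Since g is decreasing, P10/P50/P90 of S come from P90/P50/P10 of V.
s_p10, s_p50, s_p90 = (g(v_quantile(p)) for p in (0.9, 0.5, 0.1))

# Brute-force Monte Carlo cross-check of the median.
rng = random.Random(1)
samples = sorted(g(rng.lognormvariate(MU, SIGMA)) for _ in range(20000))
mc_p50 = samples[len(samples) // 2]
print(round(s_p50, 3), round(mc_p50, 3))
```

Here the median is exactly g(e^MU) = 0.5, and the 20,000-sample MC run only confirms what three quantile evaluations already gave; this cost asymmetry is what makes the distribution method attractive.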
Deterministic Methods in Stochastic Optimal Control.
1992-10-01
as (0.1) by adding a correction term to the drift. Let us consider the stochastic optimal control problem (0.1), (0.2). The dynamic progra...with anticipative drift) which will be done in Section 1 using the decomposition of solutions of SDE's (see Kunita [14, p. 268] and Ocone and...programming. In the case when nonanticipating controls appear in the drift, the Wong-Zakai convergence result states that under smoothness and boundedness
Intrinsic map dynamics exploration for uncharted effective free-energy landscapes
Covino, Roberto; Coifman, Ronald R.; Gear, C. William; Georgiou, Anastasia S.; Kevrekidis, Ioannis G.
2017-01-01
We describe and implement a computer-assisted approach for accelerating the exploration of uncharted effective free-energy surfaces (FESs). More generally, the aim is the extraction of coarse-grained, macroscopic information from stochastic or atomistic simulations, such as molecular dynamics (MD). The approach functionally links the MD simulator with nonlinear manifold learning techniques. The added value comes from biasing the simulator toward unexplored phase-space regions by exploiting the smoothness of the gradually revealed intrinsic low-dimensional geometry of the FES. PMID:28634293
Executing effective road safety advertising: are big production budgets necessary?
Donovan, R J; Jalleh, G; Henley, N
1999-05-01
Twelve (12) road safety television commercials (TVCs) ranging in production costs from $A15,000 to $A250,000 (current prices) were evaluated using standard advertising pre-test procedures. The twelve ads covered four road safety behaviours (speeding; drink driving; fatigue; and inattention), and included a variety of executional types within and across behaviours. One ad for each of the four behaviours was an expensive TAC ad ($A200,000 or more). The testing procedure assessed respondents' self-reported impact of the ad on their future intentions to comply with the road safety behaviour advocated in the ad. Just under 1000 appropriately screened motor vehicle driver's licence holders were recruited via street intercept methods and randomly allocated to one of the twelve ad exposure conditions. The results showed that while the two best performing ads were highly dramatic TAC ads showing graphic crash scenes, these were also the most expensive ads to produce and, being 60 and 90 s, the most expensive to air. In several cases, 30 s low-cost talking-heads testimonials performed equally as well as their far more expensive counterparts. We conclude that big production budgets may not be necessary to create effective road safety advertising.
Gomes, Cristina M; McCullough, Michael E
2015-12-01
Shariff and Norenzayan (2007) discovered that people allocate more money to anonymous strangers in a dictator game following a scrambled sentence task that involved words with religious meanings. We conducted a direct replication of key elements of Shariff and Norenzayan's (2007) Experiment 2, with some additional changes. Specifically, we (a) collected data from a much larger sample of participants (N = 650); (b) added a second religious priming condition that attempted to prime thoughts of religion less conspicuously; (c) modified the wording of some of their task explanations to avoid deceiving our participants; (d) added a more explicit awareness probe; (e) reduced prime-probe time; and (f) performed statistical analyses that are more appropriate for non-normal data. We did not find a statistically significant effect for religious priming. Additional tests for possible between-subjects moderators of the religious priming effect also yielded nonsignificant results. A small-scale meta-analysis, which included all known studies investigating the effect of religious priming on dictator game offers, suggested that the mean effect size is not different from zero, although the wide confidence intervals indicate that conclusions regarding this effect should be drawn with caution. Finally, we found some evidence of small-study effects: studies with larger samples tended to produce smaller effects (a pattern consistent with publication bias). Overall, these results suggest that the effects of religious priming on dictator game allocations might be either not reliable or else quite sensitive to differences in methods or in the populations in which the effect has been examined. (c) 2015 APA, all rights reserved.
Kiarie, E; Walsh, M C; Romero, L F; Arent, S; Ravindran, V
2017-09-01
The effects of adaptation (AD) to xylanase-supplemented diets on nutrient and fiber utilization in 21-d-old broilers were investigated. Six treatments, arranged in two levels of AD (starting at d 0 or d 14 of age) and three levels of xylanase (0 or 2,500 or 5,000 xylanase units/kg feed) were used. All diets had 500 phytase U/kg and 0.3% TiO2 as indigestible marker. A total of 384 d old male broiler (Ross 308) chicks were divided into two groups. The first group was assigned on weight basis to 24 cages (8 chicks per cage) and randomly allocated to the diets from d 0. Birds in the second group were reared on a commercial starter diet in the same room for 13 d. On d 14, the birds were individually weighed, assigned on weight basis to 24 cages (8 chicks per cage), and randomly allocated to the diets. Birds had free access to experimental diets and water. Excreta samples were collected from d 18 to 21. On d 21, all birds were euthanized to access ileal digesta. There was no interaction (P > 0.05) between AD and xylanase on the apparent ileal digestibility (AID) and apparent retention (AR) of components. The main effect of AD was such that the birds exposed to diets for 7 d (d 14 to 21) had higher (P < 0.01) AID of energy than those exposed for 21 d (d 0 to 21). In contrast, birds exposed to diets for 21 d had higher (P < 0.05) AMEn and AR of neutral detergent fiber. Xylanase improvements (P < 0.01) in the AID of energy and AMEn were dose dependent and coincided with linear improvements (P < 0.05) in the AID of nitrogen, fat, and starch. In conclusion, xylanase improvements on retention of fiber and nutrients were independent of AD (7 or 21 d) suggesting that the xylanase effects are not transitional. Greater retention of fiber with longer AD is suggestive of possible microbial adaptation. © 2017 Poultry Science Association Inc.
2007-06-01
other databases such as MySQL, Oracle, and Derby will be added to future versions of the program. Setting a factor requires more than changing a single...Non-Penetrating vs. Penetrating Results...a. Coverage...Interaction Profile for D U-2 and C RQ-4...Figure 59. R-Squared vs. Number of Regression Tree
26 CFR 1.409(p)-1T - Prohibited allocations of securities in an S corporation (temporary).
Code of Federal Regulations, 2012 CFR
2012-04-01
....4975-11(c) and (d) of this chapter) that, for the nonallocation year, would otherwise have been added... the plan year. Thus, the fair market value of assets in the disqualified person's account that... on the value of the stock of the S corporation, such as appreciation in such value. Thus, synthetic...
26 CFR 1.409(p)-1T - Prohibited allocations of securities in an S corporation (temporary).
Code of Federal Regulations, 2013 CFR
2013-04-01
....4975-11(c) and (d) of this chapter) that, for the nonallocation year, would otherwise have been added... the plan year. Thus, the fair market value of assets in the disqualified person's account that... on the value of the stock of the S corporation, such as appreciation in such value. Thus, synthetic...
26 CFR 1.409(p)-1T - Prohibited allocations of securities in an S corporation (temporary).
Code of Federal Regulations, 2011 CFR
2011-04-01
....4975-11(c) and (d) of this chapter) that, for the nonallocation year, would otherwise have been added... the plan year. Thus, the fair market value of assets in the disqualified person's account that... on the value of the stock of the S corporation, such as appreciation in such value. Thus, synthetic...
19 CFR 351.525 - Calculation of ad valorem subsidy rate and attribution of subsidy to a product.
Code of Federal Regulations, 2010 CFR
2010-04-01
... by dividing the amount of the benefit allocated to the period of investigation or review by the sales... under paragraph (b) of this section. Normally, the Secretary will determine the sales value of a product... product. (i) In general. If a subsidy is tied to the production or sale of a particular product, the...
International Co-operation for Educational Reform in the 1980s
ERIC Educational Resources Information Center
Adiseshiah, Malcolm S.
1978-01-01
Sets forth four general propositions for educational reform in the 1980s: (1) costs should be lower than the wasteful costs of the 1960s and 1970s; (2) dead wood from past programs should be eliminated; (3) new programs should not be added to the budget automatically; (4) money being wasted in non-productive programs should be allocated to…
Theory and methods for measuring the effective multiplication constant in ADS
NASA Astrophysics Data System (ADS)
Rugama Saez, Yolanda
2001-10-01
In this thesis, an absolute measurement technique for subcriticality determination is presented. An ADS is a hybrid system in which a subcritical core is fed by a proton accelerator. There are different proposals for defining an ADS: one is to use plutonium and minor actinides from power plant waste as fuel to be transmuted into non-radioactive isotopes (transmuter/burner, ATW); another is to use a Th232-U233 cycle (Energy Amplifier), thorium being an abundant fertile isotope of interest. The development of accelerator driven systems (ADS) requires methods to monitor and control the subcriticality of this kind of system without interfering with its normal operation mode. To this end, we have applied noise analysis techniques that allow us to characterise the system while it is operating. The method presented in this thesis is based on the stochastic neutron and photon transport theory, which can be implemented by presently available neutron/photon transport codes. In this work, we first analyse the stochastic transport theory, which has been applied to define a parameter for subcritical reactivity monitoring measurements. We then give the main limitations of, and recommendations for, this subcriticality monitoring methodology. Building on the theoretical methodology of the first part of this thesis, a monitoring measurement technique has been developed and verified using two coupled Monte Carlo programs. The first, LAHET, simulates spallation collisions and high-energy transport; the other, MCNP-DSP, is used to estimate the counting statistics from a neutron/photon ray counter in a fissile system, as well as the transport of neutrons with energies below 20 MeV. By coupling both codes we developed the LAHET/MCNP-DSP code, which can simulate the total process in the ADS from the proton interaction to the detector signal processing.
In these simulations, we compute the cross-power spectral densities between pairs of detectors located inside the system, which serve as the measured parameter. From the comparison of the theoretical predictions with the Monte Carlo simulations, we obtain some practical and simple methods to determine the system multiplication constant. (Abstract shortened by UMI.)
Improving Allocation And Management Of The Health Workforce In Zambia.
Walsh, Fiona J; Musonda, Mutinta; Mwila, Jere; Prust, Margaret Lippitt; Vosburg, Kathryn Bradford; Fink, Günther; Berman, Peter; Rockers, Peter C
2017-05-01
Building a health workforce in low-income countries requires a focused investment of time and resources, and ministries of health need tools to create staffing plans and prioritize spending on staff for overburdened health facilities. In Zambia a demand-based workload model was developed to calculate the number of health workers required to meet demands for essential health services and inform a rational and optimized strategy for deploying new public-sector staff members to the country's health facilities. Between 2009 and 2011 Zambia applied this optimized deployment policy, allocating new health workers to areas with the greatest demand for services. The country increased its health worker staffing in districts with fewer than one health worker per 1,000 people by 25.2 percent, adding 949 health workers to facilities that faced severe staffing shortages. At facilities that had had low staffing levels, adding a skilled provider was associated with an additional 103 outpatient consultations per quarter. Policy makers in resource-limited countries should consider using strategic approaches to identifying and deploying a rational distribution of health workers to provide the greatest coverage of health services to their populations. Project HOPE—The People-to-People Health Foundation, Inc.
Hamilton, W; Turner-Bowker, D; Celebucki, C; Connolly, G
2002-01-01
Design: Expenditures on cigarette advertisements in national magazines in the USA are compared for three periods: January to November 1998, December 1998 to June 2000, and July 2000 to November 2001. The analysis focuses on magazines in which at least 15% of readers are youths under age 18. Regression models test for the significance of period differences after controlling for seasonal and long-term patterns. Data sources: Commercially maintained data on advertising in US magazines and on magazine readership by age. Key measures: Monthly cigarette ad expenditures in magazines with 15%+ youth readership, and monthly proportion of ad expenditures in 15%+ youth magazines. Results: Cigarette advertising expenditures in magazines with 15%+ youth readership increased dramatically after implementation of the Master Settlement Agreement (MSA) and fell dramatically after public pressure. The percentage allocation of expenditures to 15%+ magazines fell significantly in both periods. Results differ somewhat by company. Conclusions: The tobacco industry response to the MSA was at best modest, reducing proportional allocations of advertising to youth magazines but increasing the absolute amount of such advertising. The value of public pressure was seen in substantial reductions in both absolute and proportional spending on youth magazines, although not by all companies. PMID:12034983
NASA Astrophysics Data System (ADS)
López, Cristian; Zhong, Wei; Lu, Siliang; Cong, Feiyun; Cortese, Ignacio
2017-12-01
Vibration signals are widely used for bearing fault detection and diagnosis. When signals are acquired in the field, the periodic fault signature is usually weak and concealed by noise. Various de-noising methods have been developed to extract the target signal from the raw signal. Stochastic resonance (SR) changes the traditional de-noising process: the weak periodic fault signal can be identified by adding a potential term to the dynamics driven by the raw signal and solving a differential equation. However, current SR methods have deficiencies such as limited filtering performance, restriction to low-frequency input signals, and sequential search for optimum parameters. Consequently, in this study, we explore the application of SR based on the FitzHugh-Nagumo (FHN) potential to rolling bearing vibration signals. In addition, we improve the search for the optimum SR parameters by using particle swarm optimization (PSO). The effectiveness of the proposed method is verified using both simulated and real bearing data sets.
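The SR mechanism the abstract relies on can be sketched with the classic bistable (double-well) potential rather than the paper's FHN model: a weak periodic drive plus noise pushes the state between the two wells, amplifying the periodic component. All parameters below are illustrative assumptions chosen so that well-hopping is visible.

```python
import math
import random

# Classic bistable SR: U(x) = -a x^2 / 2 + b x^4 / 4, weak periodic drive,
# additive noise; Euler-Maruyama integration. Parameters are illustrative.
A, B = 1.0, 1.0
AMP, FREQ = 0.3, 0.01           # weak periodic "fault" signal
NOISE = 0.5                     # noise intensity D
DT, N = 0.1, 20000

rng = random.Random(7)
x, out = 0.0, []
for n in range(N):
    drive = AMP * math.sin(2 * math.pi * FREQ * n * DT)
    kick = math.sqrt(2 * NOISE * DT) * rng.gauss(0, 1)
    # dx = (a x - b x^3 + drive) dt + sqrt(2 D dt) * N(0, 1)
    x += (A * x - B * x ** 3 + drive) * DT + kick
    out.append(x)

# The state should visit both wells near +/- sqrt(a/b) = +/- 1.
print(round(max(out), 2), round(min(out), 2))
```

In a diagnosis setting one would then inspect the spectrum of `out` at the drive frequency; the paper's contribution is replacing this potential with FHN and tuning the parameters with PSO instead of a sequential search.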
NASA Technical Reports Server (NTRS)
Mookerjee, P.; Molusis, J. A.; Bar-Shalom, Y.
1985-01-01
An investigation of the properties important for the design of stochastic adaptive controllers for the higher harmonic control of helicopter vibration is presented. Three different model types are considered for the transfer relationship between the helicopter higher harmonic control input and the vibration output: (1) nonlinear; (2) linear with slow time varying coefficients; and (3) linear with constant coefficients. The stochastic controller formulations and solutions are presented for a dual, cautious, and deterministic controller for both linear and nonlinear transfer models. Extensive simulations are performed with the various models and controllers. It is shown that the cautious adaptive controller can sometimes result in unacceptable vibration control. A new second order dual controller is developed which is shown to modify the cautious adaptive controller by adding numerator and denominator correction terms to the cautious control algorithm. The new dual controller is simulated on a simple single-control vibration example and is found to achieve excellent vibration reduction and significantly improves upon the cautious controller.
Sander, Beate; Nizam, Azhar; Garrison, Louis P.; Postma, Maarten J.; Halloran, M. Elizabeth; Longini, Ira M.
2013-01-01
Objectives To project the potential economic impact of pandemic influenza mitigation strategies from a societal perspective in the United States. Methods We use a stochastic agent-based model to simulate pandemic influenza in the community. We compare 17 strategies: targeted antiviral prophylaxis (TAP) alone and in combination with school closure as well as prevaccination. Results In the absence of intervention, we predict a 50% attack rate with an economic impact of $187 per capita as loss to society. Full TAP is the most effective single strategy, reducing number of cases by 54% at the lowest cost to society ($127 per capita). Prevaccination reduces number of cases by 48% and is the second least costly alternative ($140 per capita). Adding school closure to full TAP or prevaccination further improves health outcomes, but increases total cost to society by approximately $2700 per capita. Conclusion Full targeted antiviral prophylaxis is an effective and cost-saving measure for mitigating pandemic influenza. PMID:18671770
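The intervention comparison in this abstract can be caricatured with a chain-binomial community model, which is a toy stand-in for the authors' agent-based simulator: an intervention is modeled simply as a cut in the per-contact transmission parameter. Population size, seeding, and rates are assumptions for illustration.

```python
import random

def attack_rate(beta, rng, n=2000, seeds=20, gamma=0.5):
    """Discrete-time chain-binomial SIR; returns the final attack rate."""
    s, i, r = n - seeds, seeds, 0
    while i > 0:
        p_inf = 1.0 - (1.0 - beta / n) ** i   # per-susceptible risk this step
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        new_rec = sum(rng.random() < gamma for _ in range(i))
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r / n

rng = random.Random(3)
baseline = attack_rate(beta=1.0, rng=rng)     # R0 ~ beta/gamma = 2
mitigated = attack_rate(beta=0.6, rng=rng)    # e.g. prophylaxis cuts beta
print(round(baseline, 2), round(mitigated, 2))
```

Multiplying the attack-rate difference by per-case costs gives the flavor of the societal cost comparison in the paper, though the real model also prices antivirals, vaccines, and school-closure productivity losses.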
InterSpread Plus: a spatial and stochastic simulation model of disease in animal populations.
Stevenson, M A; Sanson, R L; Stern, M W; O'Leary, B D; Sujau, M; Moles-Benfell, N; Morris, R S
2013-04-01
We describe the spatially explicit, stochastic simulation model of disease spread, InterSpread Plus, in terms of its epidemiological framework, operation, and mode of use. The input data required by the model, the method for simulating contact and infection spread, and methods for simulating disease control measures are described. Data and parameters that are essential for disease simulation modelling using InterSpread Plus are distinguished from those that are non-essential, and it is suggested that a rational approach to simulating disease epidemics using this tool is to start with core data and parameters, adding additional layers of complexity if and when the specific requirements of the simulation exercise require it. We recommend that simulation models of disease are best developed as part of epidemic contingency planning so decision makers are familiar with model outputs and assumptions and are well-positioned to evaluate their strengths and weaknesses to make informed decisions in times of crisis. Copyright © 2012 Elsevier B.V. All rights reserved.
Neutral Community Dynamics and the Evolution of Species Interactions.
Coelho, Marco Túlio P; Rangel, Thiago F
2018-04-01
A contemporary goal in ecology is to determine the ecological and evolutionary processes that generate recurring structural patterns in mutualistic networks. One of the great challenges is testing the capacity of neutral processes to replicate observed patterns in ecological networks, since the original formulation of the neutral theory lacks trophic interactions. Here, we develop a stochastic-simulation neutral model that adds trophic interactions to the neutral theory of biodiversity. Without invoking ecological differences among individuals of different species, and assuming that ecological interactions emerge randomly, we demonstrate that a spatially explicit multitrophic neutral model is able to capture the recurrent structural patterns of mutualistic networks (i.e., degree distribution, connectance, nestedness, and phylogenetic signal of species interactions). Nonrandom species distributions, caused by probabilistic events of migration and speciation, create nonrandom network patterns. These findings have broad implications for the interpretation of niche-based processes as drivers of ecological networks, as well as for the integration of network structures with demographic stochasticity.
NASA Astrophysics Data System (ADS)
Arfawi Kurdhi, Nughthoh; Adi Diwiryo, Toray; Sutanto
2016-02-01
This paper presents an integrated single-vendor two-buyer production-inventory model with stochastic demand and service level constraints. Shortages are permitted in the model and are partially backordered and partially lost. The lead-time demand is assumed to follow a normal distribution, and the lead time can be reduced by adding crashing cost. The lead time and ordering cost reductions are interdependent, with a logarithmic functional relationship. A service level constraint corresponding to each buyer is included in the model in order to limit the level of inventory shortages. The purpose of this research is to minimize the joint total cost of the inventory model by finding the optimal order quantity, safety stock, lead time, and number of lots delivered in one production run. The optimal production-inventory policy, obtained by the Lagrange method, is shaped to account for the service level restrictions. Finally, a numerical example and a sensitivity analysis of the key parameters are presented to illustrate the results of the proposed model.
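The service-level computation described above can be sketched numerically. Under the stated assumption of normally distributed lead-time demand, the expected shortage per cycle follows from the standard normal loss function, and a fill-rate constraint then pins down the safety factor. All parameter values below are illustrative placeholders, not figures from the paper:

```python
import math

def phi(x):
    # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_shortage(k, sigma_L):
    # E[units short per cycle] = sigma_L * G(k), where G is the
    # standard normal loss function G(k) = phi(k) - k * (1 - Phi(k))
    return sigma_L * (phi(k) - k * (1.0 - Phi(k)))

# Illustrative parameters (not from the paper):
Q = 150.0                    # order quantity
mu_L, sigma_L = 20.0, 7.0    # lead-time demand mean and std dev
beta = 0.995                 # target fill rate (service level)

# Smallest safety factor k meeting the fill-rate constraint
# expected_shortage(k) / Q <= 1 - beta:
k = 0.0
while expected_shortage(k, sigma_L) / Q > 1.0 - beta:
    k += 0.01
safety_stock = k * sigma_L
reorder_point = mu_L + safety_stock
```

The same loss-function machinery underlies the joint vendor-buyer optimization in the paper; here it is isolated for a single buyer.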
Slowing and Loss of Complexity in Alzheimer's EEG: Two Sides of the Same Coin?
Dauwels, Justin; Srinivasan, K.; Ramasubba Reddy, M.; Musha, Toshimitsu; Vialatte, François-Benoît; Latchoumane, Charles; Jeong, Jaeseung; Cichocki, Andrzej
2011-01-01
Medical studies have shown that EEG of Alzheimer's disease (AD) patients is “slower” (i.e., contains more low-frequency power) and is less complex compared to age-matched healthy subjects. The relation between those two phenomena has not yet been studied, and they are often silently assumed to be independent. In this paper, it is shown that both phenomena are strongly related. Strong correlation between slowing and loss of complexity is observed in two independent EEG datasets: (1) EEG of predementia patients (a.k.a. Mild Cognitive Impairment; MCI) and control subjects; (2) EEG of mild AD patients and control subjects. The two data sets are from different patients, different hospitals and obtained through different recording systems. The paper also investigates the potential of EEG slowing and loss of EEG complexity as indicators of AD onset. In particular, relative power and complexity measures are used as features to classify the MCI and MiAD patients versus age-matched control subjects. When combined with two synchrony measures (Granger causality and stochastic event synchrony), classification rates of 83% (MCI) and 98% (MiAD) are obtained. By including the compression ratios as features, slightly better classification rates are obtained than with relative power and synchrony measures alone. PMID:21584257
A general CPL-AdS methodology for fixing dynamic parameters in dual environments.
Huang, De-Shuang; Jiang, Wen
2012-10-01
The Continuous Point Location with Adaptive d-ary Search (CPL-AdS) strategy is efficient at solving stochastic point location (SPL) problems. However, the strategy has one bottleneck: when the dimension of the feature, i.e., the number of subintervals into which the interval is divided at each iteration, d, is large, the decision table for the elimination process becomes practically unavailable. On the other hand, a larger d generally allows the CPL-AdS strategy to avoid oscillation and converge faster. This paper presents a generalized universal decision formula to resolve this bottleneck. In fact, this decision formula has wider use beyond SPL problems, for instance in deterministic point location problems and in searching data on a Single Instruction Stream-Multiple Data Stream machine based on the Concurrent Read, Exclusive Write parallel computer model. Meanwhile, we generalize the CPL-AdS strategy with an extended formula that is capable of tracking an unknown dynamic parameter λ in both informative and deceptive environments. Furthermore, we employ different learning automata in the generalized CPL-AdS method to determine whether a faster learning algorithm leads to a better realization of the generalized CPL-AdS method. All of these contributions are important both in theory and in practical applications. Finally, extensive experiments show that our proposed approaches are efficient and feasible.
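For intuition, the underlying SPL setting can be illustrated with a much simpler scheme than CPL-AdS: a learner tracks an unknown point λ in [0, 1] using an oracle whose directional hints are correct only with probability p > 0.5. This sketch uses a decaying-step stochastic approximation rule, not the paper's decision-table mechanism, and all parameters are illustrative:

```python
import random

def estimate_lambda(lam, p=0.8, steps=20000, seed=1):
    # Stochastic point location: the oracle says "higher"/"lower"
    # truthfully with probability p; the learner nudges its estimate
    # by a decaying step size (Robbins-Monro-style schedule).
    rng = random.Random(seed)
    x = 0.5
    for t in range(1, steps + 1):
        truthful = rng.random() < p
        says_higher = (lam > x) if truthful else (lam <= x)
        step = 1.0 / (t ** 0.6 + 10)
        x += step if says_higher else -step
        x = min(1.0, max(0.0, x))   # stay inside [0, 1]
    return x

est = estimate_lambda(0.3)
```

In a deceptive environment (p < 0.5) the hints would have to be inverted first, which is essentially what the extended formula in the paper handles.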
A Nonlinear Dynamical Systems based Model for Stochastic Simulation of Streamflow
NASA Astrophysics Data System (ADS)
Erkyihun, S. T.; Rajagopalan, B.; Zagona, E. A.
2014-12-01
Traditional time series methods model the evolution of the underlying process as a linear or nonlinear function of the autocorrelation. These methods capture the distributional statistics but are incapable of providing insights into the dynamics of the process, the potential regimes, and predictability. This work develops a nonlinear dynamical model for stochastic simulation of streamflows. First, wavelet spectral analysis is employed on the flow series to isolate dominant orthogonal quasi-periodic time series components. The periodic bands are summed to form the 'signal' component of the time series, with the residual forming the 'noise' component. Next, the underlying nonlinear dynamics of this combined band time series is recovered: the univariate time series is embedded in a d-dimensional space with an appropriate lag T to reconstruct the state space in which the dynamics unfolds. Predictability is assessed by quantifying the divergence of trajectories in the state space with time, via Lyapunov exponents. The nonlinear dynamics, in conjunction with K-nearest-neighbor time resampling, is used to simulate the combined band, to which the noise component is added to simulate the time series. We demonstrate this method by applying it to the data at Lees Ferry, which comprises both the paleo-reconstructed and naturalized historic annual flows spanning 1490-2010. We identify interesting dynamics of the signal in the flow series and epochal behavior of predictability. These will be of immense use for water resources planning and management.
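The embedding-and-resampling step can be sketched as follows. A hypothetical series stands in for the combined 'signal' band, the state space is reconstructed by time-delay embedding, and successors of the k nearest historical neighbors are resampled; the values of d, tau, and k are illustrative, not those calibrated for Lees Ferry:

```python
import math
import random

def embed(series, d, tau):
    # Time-delay embedding: x_t -> (x_t, x_{t-tau}, ..., x_{t-(d-1)tau})
    return [tuple(series[t - j * tau] for j in range(d))
            for t in range((d - 1) * tau, len(series))]

def knn_simulate(series, d=3, tau=1, k=5, length=200, seed=0):
    # K-nearest-neighbor time resampling in the reconstructed state
    # space: find the k historical states closest to the current one
    # and append the successor of a randomly chosen neighbor.
    rng = random.Random(seed)
    states = embed(series, d, tau)
    sim = list(series[: (d - 1) * tau + 1])
    for _ in range(length):
        cur = tuple(sim[-1 - j * tau] for j in range(d))
        dists = sorted((math.dist(cur, s), i)
                       for i, s in enumerate(states[:-1]))
        _, i = dists[rng.randrange(k)]
        sim.append(series[(d - 1) * tau + i + 1])  # neighbor's successor
    return sim

# Illustrative stand-in for the combined signal band: a noisy cycle
rng = random.Random(42)
base = [math.sin(0.3 * t) + 0.1 * rng.random() for t in range(300)]
sim = knn_simulate(base)
```

In the full method the simulated signal band would then be recombined with a resampled noise component.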
Reactive power planning under high penetration of wind energy using Benders decomposition
Xu, Yan; Wei, Yanli; Fang, Xin; ...
2015-11-05
This study addresses the optimal allocation of reactive power volt-ampere reactive (VAR) sources under the paradigm of high penetration of wind energy. Reactive power planning (RPP) in this condition involves a high level of uncertainty because of wind power characteristics. To properly model wind generation uncertainty, a multi-scenario optimal power flow framework that considers the voltage stability constraint under the worst wind scenario and transmission N-1 contingency is developed. The objective of RPP in this study is to minimise the total cost, including the VAR investment cost and the expected generation cost. RPP under this condition is therefore modelled as a two-stage stochastic programming problem that optimises the VAR location and size in one stage, minimises the fuel cost in the other stage, and eventually finds the global optimal RPP results iteratively. Benders decomposition is used to solve this model, with an upper-level problem (master problem) for VAR allocation optimisation and a lower-level problem (sub-problem) for generation cost minimisation. The impact of the potential reactive power support from doubly-fed induction generators (DFIG) is also analysed. Lastly, case studies on the IEEE 14-bus and 118-bus systems are provided to verify the proposed method.
Economic Impact of Harvesting Corn Stover under Time Constraint: The Case of North Dakota
Maung, Thein A.; Gustafson, Cole R.
2013-01-01
This study examines the impact of stochastic harvest field time on the profit-maximizing potential of corn cob/stover collection in North Dakota. Three harvest options are analyzed using mathematical programming models. Our findings show that under the first, grain-only harvest option, farmers are able to complete harvesting corn grain and achieve maximum net income in a fairly short amount of time with existing combine technology. However, under the second, simultaneous corn grain and cob (one-pass) harvest option, farmers generate lower net income than under the first option. This is due to the slowdown in combine harvest capacity as a consequence of harvesting corn cobs. Under the third option of separate corn grain and stover (two-pass) harvest, time allocation is the main challenge, and our evidence shows that with limited harvest field time available, farmers find it optimal to allocate most of their time to harvesting grain and then to harvest and bale stover if time permits at the end of the harvest season. The overall findings suggest that it would be more economically efficient to allow a firm specialized in collecting biomass feedstock to participate in the cob/stover harvest business.
Stochastic Models for Precipitable Water in Convection
NASA Astrophysics Data System (ADS)
Leung, Kimberly
Atmospheric precipitable water vapor (PWV) is the amount of water vapor in the atmosphere within a vertical column of unit cross-sectional area and is a critically important parameter of precipitation processes. However, accurate high-frequency and long-term observations of PWV in the sky were impossible until the availability of modern instruments such as radar. The United States Department of Energy (DOE)'s Atmospheric Radiation Measurement (ARM) Program facility made the first systematic and high-resolution observations of PWV at Darwin, Australia since 2002. At a resolution of 20 seconds, this time series allowed us to examine the volatility of PWV, including fractal behavior with dimension equal to 1.9, higher than the Brownian motion dimension of 1.5. Such strong fractal behavior calls for stochastic differential equation modeling in an attempt to address some of the difficulties of convective parameterization in various kinds of climate models, ranging from general circulation models (GCM) to weather research forecasting (WRF) models. This important observed data at high resolution can capture the fractal behavior of PWV and enables stochastic exploration into the next generation of climate models which considers scales from micrometers to thousands of kilometers. As a first step, this thesis explores a simple stochastic differential equation model of water mass balance for PWV and assesses accuracy, robustness, and sensitivity of the stochastic model. A 1000-day simulation allows for the determination of the best-fitting 25-day period as compared to data from the TWP-ICE field campaign conducted out of Darwin, Australia in early 2006. The observed data and this portion of the simulation had a correlation coefficient of 0.6513 and followed similar statistics and low-resolution temporal trends. 
Building on the point model foundation, a similar algorithm was applied to the National Center for Atmospheric Research (NCAR)'s existing single-column model as a test-of-concept for eventual inclusion in a general circulation model. The stochastic scheme was designed to be coupled with the deterministic single-column simulation by modifying results of the existing convective scheme (Zhang-McFarlane) and was able to produce a 20-second resolution time series that effectively simulated observed PWV, as measured by correlation coefficient (0.5510), fractal dimension (1.9), statistics, and visual examination of temporal trends. Results indicate that simulation of a highly volatile time series of observed PWV is certainly achievable and has potential to improve prediction capabilities in climate modeling. Further, this study demonstrates the feasibility of adding a mathematics- and statistics-based stochastic scheme to an existing deterministic parameterization to simulate observed fractal behavior.
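The flavor of the point model above can be illustrated with a mean-reverting stochastic differential equation for column water, integrated with the Euler-Maruyama scheme at the 20-second observation resolution. The drift, noise amplitude, and equilibrium value below are illustrative placeholders, not the thesis's calibrated water-mass-balance terms:

```python
import random

def simulate_pwv(w0=50.0, w_eq=50.0, tau=2.0, sigma=8.0,
                 dt=20.0 / 86400.0, days=25, seed=7):
    # Euler-Maruyama integration of dW = -(W - W_eq)/tau dt + sigma dB,
    # a toy mean-reverting stand-in for a stochastic water-mass
    # balance (time in days, W in mm; all values illustrative).
    rng = random.Random(seed)
    n = int(round(days / dt))
    w, path = w0, [w0]
    for _ in range(n):
        dB = rng.gauss(0.0, dt ** 0.5)
        w += -(w - w_eq) / tau * dt + sigma * dB
        w = max(w, 0.0)  # precipitable water cannot be negative
        path.append(w)
    return path

path = simulate_pwv()
mean_pwv = sum(path) / len(path)
```

Reproducing fractal behavior like the observed dimension of 1.9 would require rougher (e.g. fractional) noise; plain Brownian forcing as used here gives sample paths of dimension 1.5.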
BELOWGROUND NITROGEN UPTAKE AND ALLOCATION ...
Anthropogenic nitrogen inputs coupled with rising sea level complicate predictions of marsh stability. As marsh stability is a function of its vegetation, it is important to understand the mechanisms that drive community dynamics. Many studies have examined aboveground dynamics and nutrient cycling, but few have studied the belowground uptake and allocation of nitrogen. Literature suggests that D. spicata may dominate the marsh platform in nutrient-rich conditions, though the mechanism driving the vegetation shift is unclear. Our study examines belowground nutrient uptake and allocation underlying these patterns. To determine whether D. spicata is a more efficient scavenger of nutrients than S. alterniflora, we performed a 15N pulse-chase experiment. Tracer was added to mesocosms growing D. spicata and S. alterniflora in monoculture. After the initial pulse, a subset of pots was sacrificed weekly and partitioned into detailed depth intervals for 15N analysis of several belowground pools: live coarse and fine roots, live rhizomes, dead organic matter, and bulk sediment. Comparisons between D. spicata and S. alterniflora uptake and allocation can explain mechanisms of competitive advantage and predictions of D. spicata dominance. Additionally, we used denitrification enzyme assays (DEA) and greenhouse gas slurries to quantify denitrification rates and potentials. Initial results suggest that the vegetation types support similar N-relevant microbial communities.
Durkin, Sarah; Wakefield, Melanie; Spittal, Matt
2006-12-01
In the context of concerns about unintended "boomerang" influences of advertising, this study aimed to examine the effects of nicotine replacement therapy (NRT) and Zyban advertising on youth perceptions of the ease of quitting, the health risks of smoking, and future intentions to smoke. 718 youth aged 14-16 years were randomly allocated to view four television ads promoting either: NRT; Zyban; non-pharmaceutical cessation services (telephone Quitline); or non-cessation messages on sun protection. Questionnaire measures were administered before and after viewing the ads. There were no effects of advertising exposure on perceived health effects of smoking or intentions to smoke. Compared with the sun protection ads, but not the Quitline ads, those exposed to NRT ads reported stronger perceptions about the ease of quitting, but non-susceptible non-smokers primarily drove this difference. This study suggests that exposure to NRT and Zyban advertising in an experimental context does not reliably influence youth smoking-related beliefs, especially among those vulnerable to becoming regular smokers.
Using stochastic dynamic programming to support catchment-scale water resources management in China
NASA Astrophysics Data System (ADS)
Davidsen, Claus; Pereira-Cardenal, Silvio Javier; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter
2013-04-01
A hydro-economic modelling approach is used to optimize reservoir management at the river basin level. We demonstrate the potential of this integrated approach on the Ziya River basin, a complex basin on the North China Plain south-east of Beijing. The area is subject to severe water scarcity due to low and extremely seasonal precipitation, and the intense agricultural production is highly dependent on irrigation. Large reservoirs provide water storage for dry months, while groundwater and the external South-to-North Water Transfer Project are alternative sources of water. An optimization model based on stochastic dynamic programming has been developed. The objective function is to minimize the total cost of supplying water to the users while satisfying minimum ecosystem flow constraints. Each user group (agriculture, domestic and industry) is characterized by fixed demands, fixed water allocation costs for the different water sources (surface water, groundwater and external water) and fixed costs of water supply curtailment. The multiple reservoirs in the basin are aggregated into a single reservoir to reduce the dimensionality of the decision problem. Water availability is estimated using a hydrological model based on the Budyko framework, forced with 51 years of observed daily rainfall and temperature data. 23 years of observed discharge from an in-situ station located downstream of a remote mountainous catchment are used for model calibration. Runoff serial correlation is described by a Markov chain, which is used to generate monthly runoff scenarios for the reservoir. The optimal cost at a given reservoir state and stage is calculated as the minimum sum of immediate and future costs. Based on the total costs for all states and stages, water value tables are generated, containing the marginal value of stored water as a function of the month, the inflow state and the reservoir state.
The water value tables are used to guide allocation decisions in simulation mode. The performance of the operation rules based on water value tables was evaluated. The approach was used to assess the performance of alternative development scenarios and infrastructure projects successfully in the case study region.
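The backward recursion that produces such water value tables can be sketched with a coarse discretization. The storage grid, two-state Markov inflow chain, demand, and curtailment cost below are hypothetical placeholders, not the calibrated Ziya basin values:

```python
# Backward stochastic dynamic programming over discretized reservoir
# storage and inflow states. All numbers are hypothetical placeholders.

STORAGE = [0, 25, 50, 75, 100]   # storage grid (units of volume)
INFLOWS = [10, 30]               # dry / wet monthly inflow states
P = [[0.7, 0.3],                 # Markov chain: P[i][j] =
     [0.4, 0.6]]                 # Pr(next inflow j | current i)
DEMAND = 40
CURTAIL_COST = 5.0               # cost per unit of unmet demand

def sdp_water_values(months=12):
    # future[s][q]: expected cost-to-go at storage index s, inflow state q
    future = [[0.0] * len(INFLOWS) for _ in STORAGE]
    for _ in range(months):
        current = [[0.0] * len(INFLOWS) for _ in STORAGE]
        for si, s in enumerate(STORAGE):
            for qi, q in enumerate(INFLOWS):
                best = float("inf")
                for release in range(0, s + q + 1, 5):
                    end = s + q - release
                    if end > STORAGE[-1]:
                        continue   # would spill past the grid
                    # snap end-of-month storage to the nearest grid point
                    ei = min(range(len(STORAGE)),
                             key=lambda j: abs(STORAGE[j] - end))
                    immediate = CURTAIL_COST * max(0, DEMAND - release)
                    expected = sum(P[qi][qj] * future[ei][qj]
                                   for qj in range(len(INFLOWS)))
                    best = min(best, immediate + expected)
                current[si][qi] = best
        future = current
    return future

values = sdp_water_values()
```

The marginal value of stored water, e.g. the difference in cost-to-go between adjacent storage states divided by the storage increment, is what the water value tables record to guide allocation in simulation mode.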
NASA Astrophysics Data System (ADS)
Babaee, Hessam; Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em
2017-09-01
We develop a new robust methodology for the stochastic Navier-Stokes equations based on the dynamically-orthogonal (DO) and bi-orthogonal (BO) methods [1-3]. Both approaches are variants of a generalized Karhunen-Loève (KL) expansion in which both the stochastic coefficients and the spatial basis evolve according to system dynamics, hence capturing the low-dimensional structure of the solution. The DO and BO formulations are mathematically equivalent [3], but they exhibit computationally complementary properties. Specifically, the BO formulation may fail due to crossing of the eigenvalues of the covariance matrix, while both BO and DO become unstable when the covariance matrix has a high condition number or zero eigenvalues. To this end, we combine the two methods into a robust hybrid framework, and in addition we employ a pseudo-inverse technique to invert the covariance matrix. The robustness of the proposed method stems from addressing the following issues in the DO/BO formulation: (i) eigenvalue crossing: we resolve the issue of eigenvalue crossing in the BO formulation by switching to DO near an eigenvalue crossing, using the equivalence theorem, and switching back to BO when the distance between eigenvalues is larger than a threshold value; (ii) ill-conditioned covariance matrix: we utilize a pseudo-inverse strategy to invert the covariance matrix; (iii) adaptivity: we utilize an adaptive strategy to add or remove modes to resolve the covariance matrix up to a threshold value. In particular, we introduce a soft-threshold criterion to allow the system to adapt to a newly added or removed mode and therefore avoid repetitive and unnecessary mode addition/removal. When the total variance approaches zero, we show that the DO/BO formulation becomes equivalent to the evolution equation of the Optimally Time-Dependent modes [4].
We demonstrate the capability of the proposed methodology with several numerical examples, namely (i) stochastic Burgers equation: we analyze the performance of the method in the presence of eigenvalue crossing and zero eigenvalues; (ii) stochastic Kovasznay flow: we examine the method in the presence of a singular covariance matrix; and (iii) we examine the adaptivity of the method for an incompressible flow over a cylinder where for large stochastic forcing thirteen DO/BO modes are active.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 0.2 nautical miles; (iv) The aircraft's SDA must be 2; and (v) The aircraft's SIL must be 3. (2... geometric position no later than 2.0 seconds from the time of measurement of the position to the time of transmission. (2) Within the 2.0 total latency allocation, a maximum of 0.6 seconds can be uncompensated...
R and D Resource Allocations by Selected Foreign Countries,
1974-01-01
sustain themselves by innovations which have their source in theory. Prior to the second half of the nineteenth century it was rare to find... [garbled table residue; recoverable fragments: "Changing Priorities for Government R&D"; R&D as a percentage of GNP] The OECD has used one other
Balasubramanian, Hari; Biehl, Sebastian; Dai, Longjie; Muriel, Ana
2014-03-01
Appointments in primary care are of two types: 1) prescheduled appointments, which are booked in advance of a given workday; and 2) same-day appointments, which are booked as calls come during the workday. The challenge for practices is to provide preferred time slots for prescheduled appointments and yet see as many same-day patients as possible during regular work hours. It is also important, to the extent possible, to match same-day patients with their own providers (so as to maximize continuity of care). In this paper, we present a mathematical framework (a stochastic dynamic program) for same-day patient allocation in multi-physician practices in which calls for same-day appointments come in dynamically over a workday. Allocation decisions have to be made in the presence of prescheduled appointments and without complete demand information. The objective is to maximize a weighted measure that includes the number of same-day patients seen during regular work hours as well as the continuity provided to these patients. Our experimental design is motivated by empirical data we collected at a 3-provider family medicine practice in Massachusetts. Our results show that the location of prescheduled appointments - i.e. where in the day these appointments are booked - has a significant impact on the number of same-day patients a practice can see during regular work hours, as well as the continuity the practice is able to provide. We find that a 2-Blocks policy which books prescheduled appointments in two clusters - early morning and early afternoon - works very well. We also provide a simple, easily implementable policy for schedulers to assign incoming same-day requests to appointment slots. Our results show that this policy provides near-optimal same-day assignments in a variety of settings.
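The flavor of the allocation problem can be sketched with a greedy stand-in for the stochastic dynamic program: same-day calls arrive slot by slot and are assigned to the earliest open slot, preferring the caller's own provider for continuity. The 2-Blocks placement and all parameters are illustrative:

```python
import random

def simulate_day(prescheduled_slots, n_slots=16, n_providers=3,
                 arrival_prob=0.5, seed=3):
    # Greedy same-day assignment sketch (not the paper's dynamic
    # program): assign each arriving call to the earliest open slot,
    # trying the caller's own provider first.
    rng = random.Random(seed)
    # open_[p][t] is True if provider p's slot t is free
    open_ = [[t not in prescheduled_slots[p] for t in range(n_slots)]
             for p in range(n_providers)]
    seen, continuity = 0, 0
    for t in range(n_slots):
        if rng.random() < arrival_prob:        # a same-day call arrives
            own = rng.randrange(n_providers)   # caller's own provider
            choices = [own] + [p for p in range(n_providers) if p != own]
            for p in choices:
                free = [s for s in range(t, n_slots) if open_[p][s]]
                if free:
                    open_[p][free[0]] = False
                    seen += 1
                    continuity += (p == own)
                    break
    return seen, continuity

# 2-Blocks: prescheduled visits clustered early morning / early afternoon
two_blocks = [{0, 1, 8, 9} for _ in range(3)]
seen, cont = simulate_day(two_blocks)
```

A proper treatment would, as in the paper, optimize assignments against the demand distribution rather than greedily.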
Modified allocation capacitated planning model in blood supply chain management
NASA Astrophysics Data System (ADS)
Mansur, A.; Vanany, I.; Arvitrida, N. I.
2018-04-01
Blood supply chain management (BSCM) is a complex process involving many cooperating stakeholders. BSCM spans four echelon processes: blood collection or procurement, production, inventory, and distribution. This research develops an optimization model for blood distribution planning. The efficiency of decentralization and centralization policies in a blood distribution chain is compared by optimizing the amount of blood delivered from a blood center to a blood bank. The model is developed from an allocation formulation of a capacitated planning model. In the first stage, the capacity and the cost of transportation are considered to create an initial capacitated planning model. Then, the inventory holding and shortage costs are added to the model. These additional inventory cost parameters make the model more realistic and accurate.
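The allocation structure can be sketched as a tiny capacitated planning instance: one blood center, two blood banks, transportation cost plus the added holding and shortage costs, solved here by brute-force enumeration. All capacities, demands, and costs are hypothetical:

```python
from itertools import product

# Toy capacitated allocation: one blood center supplying two blood
# banks. All capacities, demands, and costs are hypothetical.
SUPPLY = 100            # blood center capacity (units)
DEMAND = [60, 50]       # demand at each blood bank
TRANSPORT = [2.0, 3.0]  # cost per unit shipped to each bank
HOLDING = 0.5           # cost per unit of excess stock at a bank
SHORTAGE = 10.0         # cost per unit of unmet demand

def total_cost(ship):
    # ship[i] = units sent to bank i; infeasible if over capacity
    if sum(ship) > SUPPLY:
        return float("inf")
    cost = 0.0
    for i, x in enumerate(ship):
        cost += TRANSPORT[i] * x
        cost += HOLDING * max(0, x - DEMAND[i])
        cost += SHORTAGE * max(0, DEMAND[i] - x)
    return cost

# Enumerate shipments in steps of 5 units and keep the cheapest plan
grid = range(0, SUPPLY + 1, 5)
best = min(product(grid, grid), key=total_cost)
```

At realistic scale this becomes a linear or mixed-integer program rather than an enumeration; the point here is how holding and shortage costs reshape the pure transportation solution.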
Vogt, Florian M; Theysohn, Jens M; Michna, Dariusz; Hunold, Peter; Neudorf, Ulrich; Kinner, Sonja; Barkhausen, Jörg; Quick, Harald H
2013-09-01
To evaluate time-resolved interleaved stochastic trajectories (TWIST) contrast-enhanced 4D magnetic resonance angiography (MRA) and compare it with 3D FLASH MRA in patients with congenital heart and vessel anomalies. Twenty-six patients with congenital heart and vessel anomalies underwent contrast-enhanced MRA with both 3D FLASH and 4D TWIST MRA. Images were subjectively evaluated regarding total image quality, artefacts, diagnostic value and added diagnostic value of 4D dynamic imaging. Quantitative comparison included signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and vessel sharpness measurements. Three-dimensional FLASH MRA was judged to be significantly better in terms of image quality (4.0 ± 0.6 vs 3.4 ± 0.6, P < 0.05) and artefacts (3.8 ± 0.4 vs 3.3 ± 0.5, P < 0.05); no difference in diagnostic value was found (4.2 ± 0.4 vs 4.0 ± 0.4); important additional functional information was found in 21/26 patients. SNR and CNR were higher in the pulmonary trunk with 4D TWIST, but slightly higher in the systemic arteries with 3D FLASH. No difference in vessel sharpness delineation was found. Although image quality was inferior compared with 3D FLASH MRA, 4D TWIST MRA yields robust images, and the dynamic acquisition provides added diagnostic value. Thus, 4D TWIST MRA is an attractive alternative to 3D FLASH MRA. • New magnetic resonance angiography (MRA) techniques are increasingly introduced for congenital cardiovascular problems. • Time-resolved angiography with interleaved stochastic trajectories (TWIST) is an example. • Four-dimensional TWIST MRA provided inferior image quality compared to 3D FLASH MRA but without significant difference in vessel sharpness. • Four-dimensional TWIST MRA gave added diagnostic value.
NASA Astrophysics Data System (ADS)
Zhao, J.; Cai, X.; Wang, Z.
2009-12-01
It has been well recognized that market-based systems can have significant advantages over administered systems for water allocation. However, there are still few successful water markets around the world, and administered systems remain common in water allocation management practice. This paradox has been under discussion for decades and still calls for attention in both research and practice. This paper explores some insights into the paradox and tries to address why market systems have not been widely implemented for water allocation. Adopting the theory of agent-based systems, we develop a consistent analytical model to interpret both systems. First, we derive some theorems based on the analytical model with respect to the necessary conditions for economic efficiency of water allocation. Following that, the agent-based model is used to illustrate the coherence and difference between administered and market-based systems. The two systems are compared from three aspects: 1) the driving forces acting on the system state, 2) system efficiency, and 3) equity. Regarding economic efficiency, a penalty on the violation of water use permits (or rights) under an administered system can lead to system-wide economic efficiency while remaining acceptable to some agents, which follows the theory of so-called rational violation. Ideal equity will be realized if the penalty equals the incentive under an administered system, and if transaction costs are zero under a market system. The performances of both the agents and the overall system are explained for an administered system and a market system, respectively. The performances of agents are subject to different mechanisms of interaction between agents under the two systems.
The system emergence (i.e., system benefit, equilibrium market price, etc.), resulting from the performance at the agent level, reflects the different mechanisms of the two systems: the "invisible hand" in the market system and administrative measures (penalty and subsidy) in the administered system. Furthermore, the impact of hydrological uncertainty on the performance of water users under the two systems is analyzed by extending the deterministic model to a stochastic one subject to the uncertainty of water availability. It is found that the system response to hydrologic uncertainty depends on the risk management mechanism - sharing risk equally among the agents or assigning prescribed priorities to some agents. Figure 1. Agent formulation and its implications in the administered system and the market-based system
NASA Astrophysics Data System (ADS)
McWilliams, J. C.; Lane, E.; Melville, K.; Restrepo, J.; Sullivan, P.
2004-12-01
Oceanic surface gravity waves are approximately irrotational, weakly nonlinear, and conservative, and they have a much shorter time scale than oceanic currents and longer waves (e.g., infragravity waves) --- except where the primary surface waves break. This provides a framework for an asymptotic theory, based on separation of time (and space) scales, of wave-averaged effects associated with the conservative primary wave dynamics combined with a stochastic representation of the momentum transfer and induced mixing associated with non-conservative wave breaking. Such a theory requires only modest information about the primary wave field from measurements or operational model forecasts and thus avoids the enormous burden of calculating the waves on their intrinsically small space and time scales. For the conservative effects, the result is a vortex force associated with the primary wave's Stokes drift; a wave-averaged Bernoulli head and sea-level set-up; and an incremental material advection by the Stokes drift. This can be compared to the "radiation stress" formalism of Longuet-Higgins, Stewart, and Hasselmann; it is shown to be a preferable representation since the radiation stress is trivial at its apparent leading order. For the non-conservative breaking effects, a population of stochastic impulses is added to the current and infragravity momentum equations with distribution functions taken from measurements. In offshore wind-wave equilibria, these impulses replace the conventional surface wind stress and cause significant differences in the surface boundary layer currents and entrainment rate, particularly when acting in combination with the conservative vortex force. In the surf zone, where breaking associated with shoaling removes nearly all of the primary wave momentum and energy, the stochastic forcing plays an analogous role as the widely used nearshore radiation stress parameterizations. 
This talk describes the theoretical framework and presents some preliminary solutions using it. McWilliams, J.C., J.M. Restrepo, & E.M. Lane, 2004: An asymptotic theory for the interaction of waves and currents in coastal waters. J. Fluid Mech. 511, 135-178. Sullivan, P.P., J.C. McWilliams, & W.K. Melville, 2004: The oceanic boundary layer driven by wave breaking with stochastic variability. J. Fluid Mech. 507, 143-174.
NASA Astrophysics Data System (ADS)
Krogh-Madsen, Trine; Kold Taylor, Louise; Skriver, Anne D.; Schaffer, Peter; Guevara, Michael R.
2017-09-01
The transmembrane potential is recorded from small isopotential clusters of 2-4 embryonic chick ventricular cells spontaneously generating action potentials. We analyze the cycle-to-cycle fluctuations in the time between successive action potentials (the interbeat interval or IBI). We also convert an existing model of electrical activity in the cluster, which is formulated as a Hodgkin-Huxley-like deterministic system of nonlinear ordinary differential equations describing five individual ionic currents, into a stochastic model consisting of a population of ~20 000 independently and randomly gating ionic channels, with the randomness being set by a real physical stochastic process (radio static). This stochastic model, implemented using the Clay-DeFelice algorithm, reproduces the fluctuations seen experimentally: e.g., the coefficient of variation (standard deviation/mean) of IBI is 4.3% in the model vs. the 3.9% average value of the 17 clusters studied. The model also replicates all but one of several other quantitative measures of the experimental results, including the power spectrum and correlation integral of the voltage, as well as the histogram, Poincaré plot, serial correlation coefficients, power spectrum, detrended fluctuation analysis, approximate entropy, and sample entropy of IBI. The channel noise from one particular ionic current (IKs), which has channel kinetics that are relatively slow compared to those of the other currents, makes the major contribution to the fluctuations in IBI. Reproduction of the experimental coefficient of variation of IBI by adding a Gaussian white noise-current into the deterministic model necessitates using an unrealistically high noise-current amplitude.
Indeed, a major implication of the modelling results is that, given the wide range of time-scales over which the various species of channels open and close, only a cell-specific stochastic model that is formulated taking into consideration the widely different ranges in the frequency content of the channel-noise produced by the opening and closing of several different types of channels will be able to reproduce precisely the various effects due to membrane noise seen in a particular electrophysiological preparation.
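The abstract's central point, that fluctuation magnitude depends on the channel population and its kinetics, can be illustrated with a minimal toy simulation. This is a sketch only, not the Clay-DeFelice algorithm, and the rate constants are invented for illustration: a population of independent two-state channels whose open-fraction noise shrinks roughly as 1/sqrt(N).

```python
import random
import statistics

def open_fraction_trace(n_channels, k_open=0.2, k_close=0.8, dt=0.05,
                        steps=1000, seed=1):
    """Simulate n_channels independent two-state (closed <-> open) ion
    channels with first-order rate constants and return the open fraction
    at each step after a burn-in period."""
    rng = random.Random(seed)
    is_open = [False] * n_channels
    p_open, p_close = k_open * dt, k_close * dt   # per-step transition probs
    trace = []
    for t in range(steps):
        for i in range(n_channels):
            if rng.random() < (p_close if is_open[i] else p_open):
                is_open[i] = not is_open[i]
        if t >= steps // 2:                        # discard the transient
            trace.append(sum(is_open) / n_channels)
    return trace

# Channel noise shrinks with population size, roughly as 1/sqrt(N)
for n in (100, 5000):
    tr = open_fraction_trace(n)
    print(n, round(statistics.stdev(tr) / statistics.mean(tr), 3))
```

With equal kinetics the equilibrium open probability is k_open/(k_open+k_close) = 0.2, and the larger population shows a markedly smaller coefficient of variation.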
2013-05-01
Niacin has potentially favourable effects on lipids, but its effect on cardiovascular outcomes is uncertain. HPS2-THRIVE is a large randomized trial assessing the effects of extended release (ER) niacin in patients at high risk of vascular events. Prior to randomization, 42 424 patients with occlusive arterial disease were given simvastatin 40 mg plus, if required, ezetimibe 10 mg daily to standardize their low-density lipoprotein (LDL)-lowering therapy. The ability to remain compliant with ER niacin 2 g plus laropiprant 40 mg daily (ERN/LRPT) for ~1 month was then assessed in 38 369 patients and about one-third were excluded (mainly due to niacin side effects). A total of 25 673 patients were randomized between ERN/LRPT daily vs. placebo and were followed for a median of 3.9 years. By the end of the study, 25% of participants allocated ERN/LRPT vs. 17% allocated placebo had stopped their study treatment. The most common medical reasons for stopping ERN/LRPT were related to skin, gastrointestinal, diabetes, and musculoskeletal side effects. When added to statin-based LDL-lowering therapy, allocation to ERN/LRPT increased the risk of definite myopathy [75 (0.16%/year) vs. 17 (0.04%/year): risk ratio 4.4; 95% CI 2.6-7.5; P < 0.0001]; 7 vs. 5 were rhabdomyolysis. Any myopathy (definite or incipient) was more common among participants in China [138 (0.66%/year) vs. 27 (0.13%/year)] than among those in Europe [17 (0.07%/year) vs. 11 (0.04%/year)]. Consecutive alanine transaminase >3× upper limit of normal, in the absence of muscle damage, was seen in 48 (0.10%/year) ERN/LRPT vs. 30 (0.06%/year) placebo allocated participants. The risk of myopathy was increased by adding ERN/LRPT to simvastatin 40 mg daily (with or without ezetimibe), particularly in Chinese patients whose myopathy rates on simvastatin were higher. Despite the side effects of ERN/LRPT, among individuals who were able to tolerate it for ~1 month, three-quarters continued to take it for ~4 years.
26 CFR 1.907(c)-3 - FOGEI and FORI taxes (for taxable years beginning after December 31, 1982).
Code of Federal Regulations, 2012 CFR
2012-04-01
... such a percentage of value solely for purposes of making the tax allocation in paragraph (a)(4) of this... creditable taxes under section 901, that the fair market value of the oil at the port is $10 per barrel, and... added to the oil beyond the well-head which is part of Y's tax base ($10-$9). (v) The royalty deductions...
Siirala, Eriikka; Peltonen, Laura-Maria; Lundgrén-Laine, Heljä; Salanterä, Sanna; Junttila, Kristiina
2016-09-01
To describe the tactical and operational decisions made by nurse managers when managing the daily unit operation in peri-operative settings. Management is challenging as situations change rapidly and decisions are constantly made. Understanding decision-making in this complex environment helps to develop decision support systems for nurse managers' operational and tactical decision-making. Descriptive cross-sectional design. Data were collected from 20 nurse managers with the think-aloud method during the busiest working hours and analysed using thematic content analysis. Nurse managers made over 700 decisions, either ad hoc (n = 289), near-future (n = 268) or long-term (n = 187) in nature. Decisions were often made simultaneously and with many interruptions. Ad hoc decisions covered staff allocation, ensuring adequate staffing, rescheduling surgical procedures, confirmation of tangible resources and following up the daily unit operation. Near-future decisions covered planning of surgical procedures and tangible resources, and planning of staff allocation. Long-term decisions covered human resources, nursing development, supplies and equipment, and unit finances. Decision-making was vulnerable to interruptions, which sometimes complicated the managing tasks. The results can be used when planning decision support systems and when defining nurse managers' tasks in peri-operative settings. © 2016 John Wiley & Sons Ltd.
OPTN/SRTR 2013 Annual Data Report: lung.
Valapour, M; Skeans, M A; Heubner, B M; Smith, J M; Hertz, M I; Edwards, L B; Cherikh, W S; Callahan, E R; Snyder, J J; Israni, A K; Kasiske, B L
2015-01-01
Lungs are allocated to adult and adolescent transplant candidates (aged ⩾ 12 years) on the basis of age, geography, blood type compatibility, and the lung allocation score (LAS), which reflects risk of waitlist mortality and probability of posttransplant survival. In 2013, 2394 adult candidates were added to the list, the most of any year. Overall median waiting time for candidates listed in 2013 was 4.0 months. The preferred procedure remained bilateral lung transplant, representing approximately 70% of lung transplants in 2013. Measures of short-term and long-term survival have plateaued since the implementation of the LAS in 2005. The number of new child candidates (aged 0-11 years) added to the lung transplant waiting list increased to 39 in 2013. A total of 28 lung transplants were performed in child recipients: 3 for ages younger than 1 year, 9 for ages 1 to 5 years, and 16 for ages 6 to 11 years. A diagnosis of pulmonary hypertension was associated with higher survival rates than cystic fibrosis or other diagnoses (pulmonary fibrosis, bronchiolitis obliterans, bronchopulmonary dysplasia). For child candidates, infection was the leading cause of death in year 1 posttransplant, and graft failure in years 2 to 5. © Copyright 2015 The American Society of Transplantation and the American Society of Transplant Surgeons.
Montovan, Kathryn J; Karst, Nathaniel; Jones, Laura E; Seeley, Thomas D
2013-11-07
In the beeswax combs of honey bees, the cells of brood, pollen, and honey have a consistent spatial pattern that is sustained throughout the life of a colony. This spatial pattern is believed to emerge from simple behavioral rules that specify how the queen moves, where foragers deposit honey/pollen and how honey/pollen is consumed from cells. Prior work has shown that a set of such rules can explain the formation of the allocation pattern starting from an empty comb. We show that these rules cannot maintain the pattern once the brood start to vacate their cells, and we propose new, biologically realistic rules that better sustain the observed allocation pattern. We analyze the three resulting models by performing hundreds of simulation runs over many gestational periods and a wide range of parameter values. We develop new metrics for pattern assessment and employ them in analyzing pattern retention over each simulation run. Applied to our simulation results, these metrics show that altering an accepted model for honey/pollen consumption based on local information can stabilize the cell allocation pattern over time. We also show that adding global information, by biasing the queen's movements towards the center of the comb, expands the parameter regime over which pattern retention occurs. © 2013 Published by Elsevier Ltd. All rights reserved.
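A pattern-assessment metric of the general kind the abstract mentions can be sketched in a few lines. This is a hypothetical metric invented here for illustration, not the authors' actual definition: the mean distance of brood cells from the comb centre, normalised so that a compact central brood region scores low and a scattered one scores high.

```python
import math

def brood_compactness(grid):
    """Hypothetical pattern metric: mean distance of brood cells ('B')
    from the comb centre, normalised by the comb half-diagonal.
    Lower values indicate a more compact, central brood region."""
    rows, cols = len(grid), len(grid[0])
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    rmax = math.hypot(cy, cx)
    dists = [math.hypot(r - cy, c - cx)
             for r in range(rows) for c in range(cols) if grid[r][c] == 'B']
    return sum(dists) / (len(dists) * rmax) if dists else float('nan')

central = ["HHHHH",
           "HBBBH",
           "HBBBH",
           "HBBBH",
           "HHHHH"]
scattered = ["BHHHB",
             "HHHHH",
             "HHBHH",
             "HHHHH",
             "BHHHB"]
print(brood_compactness(central) < brood_compactness(scattered))  # → True
```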
Aboulafia-Brakha, T; Suchecki, D; Gouveia-Paulino, F; Nitrini, R; Ptak, R
2014-01-01
Family caregivers of patients with dementia frequently experience psychological stress, depression and disturbed psychophysiological activity, with increased levels of diurnal cortisol secretion. To compare the effects of a cognitive-behavioural group therapy (CBT) to a psychoeducation group programme (EDUC) on cortisol secretion in caregivers of patients with moderate Alzheimer's disease (AD). Caregivers of AD outpatients were semi-randomly allocated to one of two intervention programmes (CBT or EDUC) consisting of eight weekly sessions. Twenty-six participants completed the study. Before and after intervention, salivary cortisol was collected at four different times of the day. Effects of the interventions were evaluated with self-report psychological scales and questionnaires related to functional abilities and neuropsychiatric symptoms of the AD relative. Only in the CBT group did salivary cortisol levels significantly decrease after intervention, with a large effect size and high achieved power. Both groups reported a reduction of neuropsychiatric symptoms of their AD relative after intervention. Psychoeducation for caregivers may contribute to a reduction of neuropsychiatric symptoms of AD patients while CBT additionally attenuates psychophysiological responses to stressful situations in caregivers, by reducing diurnal cortisol levels. This may lead to a positive impact in the general health of the caregiver, eventually resulting in better care of the AD patient.
Impaired Processing of Serial Order Determines Working Memory Impairments in Alzheimer's Disease.
De Belder, Maya; Santens, Patrick; Sieben, Anne; Fias, Wim
2017-01-01
Working memory (WM) problems are commonly observed in Alzheimer's disease (AD), but the affected mechanisms leading to impaired WM are still insufficiently understood. The ability to efficiently process serial order in WM has been demonstrated to be fundamental to fluent daily life functioning. The decreased capability to mentally process serial position in WM has been put forward as the underlying explanation for generally compromised WM performance. Determine which mechanisms, such as order processing, are responsible for deficient WM functioning in AD. A group of AD patients (n = 32) and their partners (n = 25), assigned to the control group, were submitted to an extensive battery of neuropsychological and experimental tasks, assessing general cognitive state and functioning of several aspects related to serial order WM. The results revealed an impaired ability to bind item information to serial position within WM in AD patients compared to controls. It was additionally observed that AD patients experienced specific difficulties with directing spatial attention when searching for item information stored in WM. The processing of serial order and the allocation of attentional resources are both disrupted, explaining the generally reduced WM functioning in AD patients. Further studies should now clarify whether this observation could explain disease-related problems for other cognitive functions such as verbal expression, auditory comprehension, or planning.
Hannouche, Ali; Chebbo, Ghassan; Joannis, Claude; Gasperi, Johnny; Gromaire, Marie-Christine; Moilleron, Régis; Barraud, Sylvie; Ruban, Véronique
2017-12-01
This article describes a stochastic method to calculate annual pollutant loads and its application over several years at the outlet of three catchments drained by separate storm sewers. A stochastic methodology using Monte Carlo simulations is proposed for assessing annual pollutant loads, as well as the associated uncertainties, from a few event sampling campaigns and/or continuous turbidity measurements (representative of the total suspended solids (TSS) concentration). Indeed, in the latter case, the proposed method takes into account the correlation between pollutants and TSS. The developed method was applied to data acquired within the French research project "INOGEV" (innovations for a sustainable management of urban water) at the outlet of three urban catchments drained by separate storm sewers. Ten or so event sampling campaigns for a large range of pollutants (46 pollutants and 2 conventional water quality parameters: TSS and total organic carbon (TOC)) are combined with hundreds of rainfall events for which at least one of three continuously monitored parameters (rainfall intensity, flow rate, and turbidity) is available. Results obtained for the three catchments show that the annual pollutant loads can be estimated with uncertainties ranging from 10 to 60%, and the added value of turbidity monitoring for lowering the uncertainty is demonstrated. A low inter-annual and inter-site variability of pollutant loads, for many of the studied pollutants, is observed with respect to the estimated uncertainties, and can be explained mainly by annual precipitation.
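The Monte Carlo idea behind such load estimates can be sketched very simply. This is an illustrative toy, not the INOGEV methodology: the event volumes, concentration distribution, and coefficient of variation below are invented, and the real method additionally exploits the pollutant-TSS correlation.

```python
import random

def annual_load_mc(event_volumes_m3, conc_mean, conc_cv, n_draws=5000, seed=7):
    """Monte Carlo estimate of an annual pollutant load (kg): for each draw,
    sample an uncertain event-mean concentration per rainfall event and sum
    the event loads; report a 95% interval and the median over all draws."""
    rng = random.Random(seed)
    sd = conc_mean * conc_cv
    totals = []
    for _ in range(n_draws):
        total = 0.0
        for v in event_volumes_m3:
            c = max(0.0, rng.gauss(conc_mean, sd))   # mg/L, i.e. g/m3
            total += c * v / 1000.0                  # g -> kg
        totals.append(total)
    totals.sort()
    return (totals[int(0.025 * n_draws)], totals[n_draws // 2],
            totals[int(0.975 * n_draws)])

lo, med, hi = annual_load_mc([120.0] * 80, conc_mean=150.0, conc_cv=0.4)
print(round(lo), round(med), round(hi))
```

The width of the interval relative to the median is the kind of "10 to 60%" uncertainty figure the abstract reports.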
A Model for Temperature Fluctuations in a Buoyant Plume
NASA Astrophysics Data System (ADS)
Bisignano, A.; Devenish, B. J.
2015-11-01
We present a hybrid Lagrangian stochastic model for buoyant plume rise from an isolated source that includes the effects of temperature fluctuations. The model is based on that of Webster and Thomson (Atmos Environ 36:5031-5042, 2002) in that it is a coupling of a classical plume model in a crossflow with stochastic differential equations for the vertical velocity and temperature (which are themselves coupled). The novelty lies in the addition of the latter stochastic differential equation. Parametrizations of the plume turbulence are presented that are used as inputs to the model. The root-mean-square temperature is assumed to be proportional to the difference between the centreline temperature of the plume and the ambient temperature. The constant of proportionality is tuned by comparison with equivalent statistics from large-eddy simulations (LES) of buoyant plumes in a uniform crossflow and linear stratification. We compare plume trajectories for a wide range of crossflow velocities and find that the model generally compares well with the equivalent LES results particularly when added mass is included in the model. The exception occurs when the crossflow velocity component becomes very small. Comparison of the scalar concentration, both in terms of the height of the maximum concentration and its vertical spread, shows similar behaviour. The model is extended to allow for realistic profiles of ambient wind and temperature and the results are compared with LES of the plume that emanated from the explosion and fire at the Buncefield oil depot in 2005.
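The stochastic differential equation at the core of such Lagrangian models can be sketched in isolation. This is a stationary Ornstein-Uhlenbeck sketch for vertical velocity with invented parameters, not the paper's coupled velocity-temperature system or its plume-specific turbulence parametrizations.

```python
import math
import random

def ou_velocity(sigma_w=1.0, tau=10.0, dt=0.1, n=20000, seed=3):
    """Euler-Maruyama integration of an Ornstein-Uhlenbeck equation for
    vertical velocity, dw = -(w/tau) dt + sqrt(2 sigma_w^2 / tau) dW,
    a standard building block of Lagrangian stochastic dispersion models.
    The stationary variance is sigma_w^2 and the decorrelation time is tau."""
    rng = random.Random(seed)
    w = 0.0
    out = []
    for _ in range(n):
        w += -w / tau * dt + math.sqrt(2.0 * sigma_w**2 * dt / tau) * rng.gauss(0.0, 1.0)
        out.append(w)
    return out

ws = ou_velocity()
m = sum(ws) / len(ws)
v = sum((x - m) ** 2 for x in ws) / len(ws)
print(round(m, 2), round(v, 2))   # near 0 and sigma_w^2 respectively
```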
NASA Astrophysics Data System (ADS)
Schönborn, Jan Boyke; Saalfrank, Peter; Klamroth, Tillmann
2016-01-01
We combine the stochastic pulse optimization (SPO) scheme with the time-dependent configuration interaction singles method in order to control the high frequency response of a simple molecular model system to a tailored femtosecond laser pulse. For this purpose, we use H2 treated in the fixed nuclei approximation. The SPO scheme, like similar genetic algorithms, is especially suited to control highly non-linear processes, which we consider here in the context of high harmonic generation. We will demonstrate that SPO can be used to realize a "non-harmonic" response of H2 to a laser pulse. Specifically, we will show how adding low intensity side frequencies to the dominant carrier frequency of the laser pulse and stochastically optimizing their contribution can create a high-frequency spectral signal of significant intensity, not harmonic to the carrier frequency. At the same time, it is possible to suppress the harmonic signals in the same spectral region, although the carrier frequency is kept dominant during the optimization.
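The accept-if-better stochastic loop that SPO-style schemes use can be sketched on a toy nonlinearity. Everything below is invented for illustration: a cubic "medium" stands in for the molecular response (the real objective comes from TD-CIS calculations), and the carrier frequency, side-band frequency, and target frequency are arbitrary. The point is only the mechanism: randomly perturb weak side-band amplitudes and keep a perturbation whenever power at a chosen non-harmonic frequency increases.

```python
import cmath
import math
import random

def spectral_power(signal, freq, dt):
    """Power of one frequency component via a direct DFT projection."""
    n = len(signal)
    z = sum(s * cmath.exp(-2j * math.pi * freq * k * dt) for k, s in enumerate(signal))
    return abs(z / n) ** 2

def response(amps, freqs, carrier=1.0, dt=0.05, n=2000):
    """Toy non-linear 'medium': cubic response to carrier plus weak side bands."""
    out = []
    for k in range(n):
        t = k * dt
        field = math.cos(2 * math.pi * carrier * t) + sum(
            a * math.cos(2 * math.pi * f * t) for a, f in zip(amps, freqs))
        out.append(field ** 3)   # stand-in for a strongly non-linear response
    return out

def spo(freqs, target, iters=150, seed=5):
    """Stochastic pulse optimization sketch: accept random side-band
    perturbations only when they raise power at the target frequency."""
    rng = random.Random(seed)
    amps = [0.0] * len(freqs)
    best = spectral_power(response(amps, freqs), target, 0.05)
    for _ in range(iters):
        trial = [min(max(0.0, a + rng.gauss(0.0, 0.05)), 0.3) for a in amps]
        p = spectral_power(response(trial, freqs), target, 0.05)
        if p > best:
            amps, best = trial, p
    return amps, best

# a side band at 0.3 feeds the non-harmonic mixing product at 2*1.0 + 0.3 = 2.3
amps, best = spo([0.3], 2.3)
print([round(a, 2) for a in amps], best)
```

With no side band the cubic response contains only the first and third harmonics, so power at 2.3 starts near zero; the loop then drives the side-band amplitude up to its cap.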
Renormalizing a viscous fluid model for large scale structure formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Führer, Florian; Rigopoulos, Gerasimos, E-mail: fuhrer@thphys.uni-heidelberg.de, E-mail: gerasimos.rigopoulos@ncl.ac.uk
2016-02-01
Using the Stochastic Adhesion Model (SAM) as a simple toy model for cosmic structure formation, we study renormalization and the removal of the cutoff dependence from loop integrals in perturbative calculations. SAM shares the same symmetry with the full system of continuity+Euler equations and includes a viscosity term and a stochastic noise term, similar to the effective theories recently put forward to model CDM clustering. We show in this context that if the viscosity and noise terms are treated as perturbative corrections to standard Eulerian perturbation theory, they are necessarily non-local in time. To ensure Galilean invariance, higher order vertices related to the viscosity and the noise must then be added, and we explicitly show at one loop that these terms act as counter terms for vertex diagrams. The Ward identities ensure that the non-local-in-time theory can be renormalized consistently. Another possibility is to include the viscosity in the linear propagator, resulting in exponential damping at high wavenumber. The resulting local-in-time theory is then renormalizable to one loop, requiring fewer free parameters for its renormalization.
Sparse cliques trump scale-free networks in coordination and competition
Gianetto, David A.; Heydari, Babak
2016-01-01
Cooperative behavior, a natural, pervasive and yet puzzling phenomenon, can be significantly enhanced by networks. Many studies have shown how global network characteristics affect cooperation; however, it is difficult to understand how this occurs from global factors alone, and low-level network building blocks, or motifs, are necessary. In this work, we systematically alter the structure of scale-free and clique networks and show, through a stochastic evolutionary game theory model, that cooperation on cliques increases linearly with community motif count. We further show that, for reactive stochastic strategies, network modularity improves cooperation in the anti-coordination Snowdrift game and the Prisoner’s Dilemma game but not in the Stag Hunt coordination game. We also confirm the negative effect of the scale-free graph on cooperation when effective payoffs are used. On the flip side, clique graphs are highly cooperative across social environments. Adding cycles to the acyclic scale-free graph increases cooperation when multiple games are considered; however, cycles have the opposite effect on how forgiving agents are when playing the Prisoner’s Dilemma game. PMID:26899456
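A minimal stochastic evolutionary game on a graph, the general class of model the abstract describes, can be sketched as follows. This is a generic Fermi-rule imitation process with invented payoff values and update parameters, not the authors' model: agents play the Snowdrift game against their neighbours and stochastically copy better-performing neighbours.

```python
import math
import random

def fermi_imitation(adj, payoffs, rounds=400, beta=1.0, seed=2):
    """Stochastic evolutionary game on a graph: each agent plays its current
    strategy (True = cooperate) against all neighbours; repeatedly, a random
    agent imitates a random neighbour with probability given by the Fermi
    rule, 1/(1 + exp(-beta * (payoff_neighbour - payoff_self)))."""
    rng = random.Random(seed)
    n = len(adj)
    strat = [rng.random() < 0.5 for _ in range(n)]
    def payoff(i):
        return sum(payoffs[(strat[i], strat[j])] for j in adj[i])
    for _ in range(rounds * n):
        i = rng.randrange(n)
        j = rng.choice(adj[i])
        if rng.random() < 1.0 / (1.0 + math.exp(-beta * (payoff(j) - payoff(i)))):
            strat[i] = strat[j]
    return sum(strat) / n   # final cooperation fraction

# Snowdrift payoffs (benefit b, cost c): cooperation can coexist with defection
b, c = 1.0, 0.5
snowdrift = {(True, True): b - c / 2, (True, False): b - c,
             (False, True): b, (False, False): 0.0}
clique = [[j for j in range(20) if j != i] for i in range(20)]  # complete graph
print(fermi_imitation(clique, snowdrift))   # typically an interior mixture
```

On a well-mixed (complete) graph the dynamics hover near the Snowdrift interior equilibrium rather than fixating at full cooperation or full defection.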
Non-Pharmacological Countermeasure to Decrease Landing Sickness and Improve Functional Performance
NASA Technical Reports Server (NTRS)
Rosenberg, M. J. F.; Kreutzberg, G. A.; Galvan-Garza, R. C.; Mulavara, A. P.; Reschke, M. F.
2017-01-01
Upon return from long-duration spaceflight, 100% of crewmembers experience motion sickness (MS) symptoms. The interactions between crewmembers' adaptation to a gravitational transition, the performance decrements resulting from MS and/or use of promethazine (PMZ), and the constraints imposed by mission task demands could significantly challenge and limit an astronaut's ability to perform functional tasks during gravitational transitions. Stochastic resonance (SR) is a "noise benefit": adding noise to a system can increase the information it carries. Stochastic vestibular stimulation (SVS), or low levels of noise applied to the vestibular system, improves balance and locomotor performance (Goel et al. 2015, Mulavara et al. 2011, 2015). In hemi-lesioned rat models, Samoudi et al. 2012 found that SVS increased GABA release on the lesioned, but not the intact, side. Activation of the GABA pathway is important in modulating MS and promoting adaptability (Cohen 2008) and was seen to reverse MS symptoms in rats after unilateral labyrinthectomy (Magnusson et al. 2000). Thus, SVS could be used to promote GABA pathways to reduce MS and promote adaptability, eliminating the need for PMZ or other performance-inhibiting drugs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schönborn, Jan Boyke; Saalfrank, Peter; Klamroth, Tillmann, E-mail: klamroth@uni-potsdam.de
2016-01-28
We combine the stochastic pulse optimization (SPO) scheme with the time-dependent configuration interaction singles method in order to control the high frequency response of a simple molecular model system to a tailored femtosecond laser pulse. For this purpose, we use H2 treated in the fixed nuclei approximation. The SPO scheme, like similar genetic algorithms, is especially suited to control highly non-linear processes, which we consider here in the context of high harmonic generation. We will demonstrate that SPO can be used to realize a “non-harmonic” response of H2 to a laser pulse. Specifically, we will show how adding low intensity side frequencies to the dominant carrier frequency of the laser pulse and stochastically optimizing their contribution can create a high-frequency spectral signal of significant intensity, not harmonic to the carrier frequency. At the same time, it is possible to suppress the harmonic signals in the same spectral region, although the carrier frequency is kept dominant during the optimization.
Sparse cliques trump scale-free networks in coordination and competition
NASA Astrophysics Data System (ADS)
Gianetto, David A.; Heydari, Babak
2016-02-01
Cooperative behavior, a natural, pervasive and yet puzzling phenomenon, can be significantly enhanced by networks. Many studies have shown how global network characteristics affect cooperation; however, it is difficult to understand how this occurs from global factors alone, and low-level network building blocks, or motifs, are necessary. In this work, we systematically alter the structure of scale-free and clique networks and show, through a stochastic evolutionary game theory model, that cooperation on cliques increases linearly with community motif count. We further show that, for reactive stochastic strategies, network modularity improves cooperation in the anti-coordination Snowdrift game and the Prisoner’s Dilemma game but not in the Stag Hunt coordination game. We also confirm the negative effect of the scale-free graph on cooperation when effective payoffs are used. On the flip side, clique graphs are highly cooperative across social environments. Adding cycles to the acyclic scale-free graph increases cooperation when multiple games are considered; however, cycles have the opposite effect on how forgiving agents are when playing the Prisoner’s Dilemma game.
An illustration of new methods in machine condition monitoring, Part I: stochastic resonance
NASA Astrophysics Data System (ADS)
Worden, K.; Antoniadou, I.; Marchesiello, S.; Mba, C.; Garibaldi, L.
2017-05-01
There have been many recent developments in the application of data-based methods to machine condition monitoring. A powerful methodology based on machine learning has emerged, where diagnostics are based on a two-step procedure: extraction of damage-sensitive features, followed by unsupervised learning (novelty detection) or supervised learning (classification). The objective of the current pair of papers is simply to illustrate one state-of-the-art procedure for each step, using synthetic data representative of reality in terms of size and complexity. The first paper in the pair will deal with feature extraction. Although some papers have appeared in the recent past considering stochastic resonance as a means of amplifying damage information in signals, they have largely relied on ad hoc specifications of the resonator used. In contrast, the current paper will adopt a principled optimisation-based approach to the resonator design. The paper will also show that a discrete dynamical system can provide all the benefits of a continuous system, but also provide a considerable speed-up in terms of simulation time in order to facilitate the optimisation approach.
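The basic stochastic resonance effect that motivates such feature extractors can be shown with a far simpler device than the optimised resonator of the paper. This sketch, with invented signal and threshold parameters, uses a hard threshold rather than a designed dynamical resonator: a subthreshold sinusoid crosses the threshold only with the help of noise, so the output tracks the signal best at an intermediate noise level.

```python
import math
import random

def threshold_detector(noise_sd, amp=0.5, thresh=1.0, n=8000, seed=11):
    """Pass a subthreshold sinusoid plus Gaussian noise through a hard
    threshold and return the sample correlation between the 0/1 output and
    the input signal. Too little noise gives no crossings at all; too much
    noise swamps the signal; an intermediate level is best (stochastic
    resonance)."""
    rng = random.Random(seed)
    xs, ys = [], []
    for k in range(n):
        s = amp * math.sin(2 * math.pi * k / 200.0)
        y = 1.0 if s + rng.gauss(0.0, noise_sd) > thresh else 0.0
        xs.append(s)
        ys.append(y)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / math.sqrt(vx * vy) if vy > 0 else 0.0

for sd in (0.05, 0.4, 3.0):
    print(sd, round(threshold_detector(sd), 3))
```

The middle noise level yields the highest correlation, the signature non-monotonic curve of stochastic resonance.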
Pineda, Angel R; Barrett, Harrison H
2004-02-01
The current paradigm for evaluating detectors in digital radiography relies on Fourier methods, which assume a shift-invariant and statistically stationary description of the imaging system. The theoretical justification for the use of Fourier methods is based on a uniform background fluence and an infinite detector. In practice, the background fluence is not uniform and detector size is finite. We study the effect of stochastic blurring and structured backgrounds on the correlation between Fourier-based figures of merit and Hotelling detectability. A stochastic model of the blurring leads to behavior similar to what is observed by adding electronic noise to the deterministic blurring model. Background structure destroys shift invariance. Anatomical variation makes the covariance matrix of the data less amenable to Fourier methods by introducing long-range correlations. It is desirable to have figures of merit that can account for all the sources of variation, some of which are not stationary. For such cases, we show that the commonly used figures of merit based on the discrete Fourier transform can provide an inaccurate estimate of Hotelling detectability.
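The Hotelling detectability the abstract refers to is SNR² = Δsᵀ K⁻¹ Δs, where Δs is the difference signal and K the data covariance. A small self-contained sketch (toy 1-D detector and an invented rank-one "anatomical" covariance, not the paper's model) shows the point: a stationarity-based figure of merit matches Hotelling SNR² for white noise but overestimates it once long-range background correlations are added.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def hotelling_snr2(ds, K):
    """Hotelling detectability: SNR^2 = ds^T K^{-1} ds."""
    return sum(a * b for a, b in zip(ds, solve(K, ds)))

n = 8
ds = [1.0 if 2 <= i <= 4 else 0.0 for i in range(n)]      # difference signal
sigma2 = 1.0
K_white = [[sigma2 if i == j else 0.0 for j in range(n)] for i in range(n)]
# stationary (white-noise) figure of merit reduces to ||ds||^2 / sigma^2 here
fom_stationary = sum(d * d for d in ds) / sigma2
print(hotelling_snr2(ds, K_white), fom_stationary)        # agree for white noise

# long-range 'anatomical' correlation: rank-one background structure b b^T
bvec = [1.0] * n
K_anat = [[K_white[i][j] + 0.5 * bvec[i] * bvec[j] for j in range(n)] for i in range(n)]
print(hotelling_snr2(ds, K_anat))   # smaller than fom_stationary
```

By the Sherman-Morrison identity the structured case gives 3 − 0.1·(b·Δs)² = 2.1, versus 3.0 from the stationary figure of merit.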
Martin, Justin W.; Cushman, Fiery
2015-01-01
When a cooperative partner defects, at least two types of response are available: Punishment, aimed at modifying behavior, and ostracism, aimed at avoiding further social interaction with the partner. These options, termed partner control and partner choice, have been distinguished at behavioral and evolutionary levels. However, little work has compared their cognitive bases. Do these disparate behaviors depend on common processes of moral evaluation? Specifically, we assess whether they show identical patterns of dependence on two key dimensions of moral evaluation: A person’s intentions, and the outcomes that they cause. We address this issue in a “trembling hand” economic game. In this game, an allocator divides a monetary stake between themselves and a responder based on a stochastic mechanism. This allows for dissociations between the allocator’s intent and the actual outcome. Responders were either given the opportunity to punish or reward the allocator (partner control) or to switch to a different partner for a subsequent round of play (partner choice). Our results suggest that partner control and partner choice behaviors are supported by distinct underlying cognitive processes: Partner control exhibits greater sensitivity to the outcomes a partner causes, while partner choice is influenced almost exclusively by a partner’s intentions. This cognitive dissociation can be understood in light of the unique adaptive functions of partner control and partner choice. PMID:25915550
NASA Astrophysics Data System (ADS)
Li, Zhi; Li, Chunhui; Wang, Xuan; Peng, Cong; Cai, Yanpeng; Huang, Weichen
2018-01-01
Water-resource problems restrict the sustainable development of cities with water shortages. Based on system dynamics (SD) theory, a model of sustainable utilization of water resources was established using the STELLA software. This model consists of four subsystems: a population system, an economic system, a water supply system and a water demand system. The boundaries of the four subsystems are vague, but they are closely related and interdependent. The model is applied to Zhengzhou City, China, which has a serious water shortage and a very pronounced gap between water supply and demand. The model was verified with data from 2009 to 2013. The results show that the water demand of Zhengzhou City will reach 2.57 billion m3 in 2020. A water resources optimization model is then developed based on interval-parameter two-stage stochastic programming. The objective of the model is to allocate water resources to each water sector at the lowest cost while satisfying the minimum water demand. Using the simulation results, decision makers can easily weigh the costs of the system, the water allocation objectives, and the system risk. The hybrid system dynamics and optimization approach is a rational attempt to support water resources management in many cities, particularly those facing potential water shortage, and it is solidly supported by previous studies and collected data.
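The two-stage stochastic programming structure can be illustrated with a drastically simplified, newsvendor-style sketch; this is not the paper's interval-parameter model, and all numbers are invented. A first-stage allocation target is chosen before the uncertain supply is known, and a second-stage penalty is paid for every promised unit the realized supply cannot cover.

```python
def expected_net_benefit(x, scenarios, benefit=3.0, penalty=5.0):
    """First stage: promise x units of water. Second stage (recourse): pay a
    penalty for every promised unit the realized supply cannot deliver.
    scenarios is a list of (supply, probability) pairs."""
    shortfall = sum(p * max(0.0, x - q) for q, p in scenarios)
    return benefit * x - penalty * shortfall

scenarios = [(100.0, 0.3), (150.0, 0.4), (200.0, 0.3)]  # (supply, probability)
# the objective is piecewise linear, so an optimum lies at a scenario value
candidates = [q for q, _ in scenarios]
best = max(candidates, key=lambda x: expected_net_benefit(x, scenarios))
print(best, expected_net_benefit(best, scenarios))  # → 150.0 375.0
```

Promising the middle scenario's supply balances the benefit of a larger allocation against the expected shortage penalty, exactly the supply/demand trade-off the abstract describes.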
C4I Community of Interest C2 Roadmap
2015-03-24
QoS-based services – Digital policy-based prioritization – Dynamic bandwidth allocation – Automated network management ... (Co-Site Mitigation) NC-3 • LPD/LPI Comms NC-4 • Increased Range NC-7 • Increased Loss Tolerance & Recovery NC-7 • Mobile Ad Hoc Networking NC-8 ... Algorithms and Software • Systems and Processes • Networks and Communications • Radios and Apertures • Networks • Information
ROSA: Distributed Joint Routing and Dynamic Spectrum Allocation in Cognitive Radio Ad Hoc Networks
2010-03-01
Aug. 1999. [20] I. N. Psaromiligkos and S. N. Batalama. Rapid Combined Synchronization/Demodulation Structures for DS-CDMA Systems - Part II: Finite...Medley. Rapid Combined Synchronization/Demodulation Structures for DS-CDMA Systems - Part I: Algorithmic developments. IEEE Transactions on...multiple access (CDMA) [21][20] allow concurrent co-located communications so that a message from node i to node j can be correctly received even if
Transforming the Air Traffic Management System -- Why Is It So Hard?
2012-11-08
Aircraft Systems Integration The Equity Concept Chocolate Cake Problem: How can I distribute this cake equitably among each of the students sitting...net-centric system. – Timely, common information will be available to all (humans and machines) to help them make their decisions. – While any change...prioritization done when scarce resources must be allocated? (Remember how hard it was to distribute the chocolate cake!) ADS-B In-Trail Procedures
Exploiting vibrational resonance in weak-signal detection
NASA Astrophysics Data System (ADS)
Ren, Yuhao; Pan, Yan; Duan, Fabing; Chapeau-Blondeau, François; Abbott, Derek
2017-08-01
In this paper, we present the first exploitation of the vibrational resonance (VR) effect for detecting weak signals in the presence of strong background noise. By injecting a series of sinusoidal interference signals of the same amplitude but with different frequencies into a generalized correlation detector, we show that the detection probability can be maximized at an appropriate interference amplitude. Based on a dual-Dirac probability density model, we compare the VR method with the stochastic resonance approach of adding dichotomous noise. The compared results indicate that the VR method can achieve a higher detection probability for a wider variety of noise distributions.
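The vibrational resonance effect itself can be demonstrated with a toy threshold detector; this sketch is a simplified illustration with invented parameters, not the paper's dual-Dirac model or generalized correlation detector. Unlike stochastic resonance, the helper here is a deterministic high-frequency interference tone: at zero interference the weak signal never crosses the threshold, at an intermediate amplitude the output tracks the signal well, and at very large amplitude the modulation washes out.

```python
import math
import random

def lf_tracking(hf_amp, lf_amp=0.3, thresh=1.0, noise_sd=0.1, n=8000, seed=9):
    """Hard-threshold detector driven by a weak low-frequency signal plus an
    injected high-frequency interference of amplitude hf_amp; returns the
    sample correlation between the binary output and the weak signal."""
    rng = random.Random(seed)
    xs, ys = [], []
    for k in range(n):
        s = lf_amp * math.sin(2 * math.pi * k / 400.0)   # weak signal
        v = hf_amp * math.sin(2 * math.pi * k / 7.0)     # injected interference
        y = 1.0 if s + v + rng.gauss(0.0, noise_sd) > thresh else 0.0
        xs.append(s)
        ys.append(y)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / math.sqrt(vx * vy) if vy > 0 else 0.0

for b in (0.0, 1.0, 6.0):
    print(b, round(lf_tracking(b), 3))
```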
Double diffusivity model under stochastic forcing
NASA Astrophysics Data System (ADS)
Chattopadhyay, Amit K.; Aifantis, Elias C.
2017-05-01
The "double diffusivity" model was proposed in the late 1970s, and reworked in the early 1980s, as a continuum counterpart to existing discrete models of diffusion corresponding to high diffusivity paths, such as grain boundaries and dislocation lines. It was later rejuvenated in the 1990s to interpret experimental results on diffusion in polycrystalline and nanocrystalline specimens where grain boundaries and triple grain boundary junctions act as high diffusivity paths. Technically, the model takes the form of a system of coupled Fick-type diffusion equations representing "regular" and "high" diffusivity paths, with "source terms" accounting for the mass exchange between the two paths. The model remit was extended by analogy to describe flow in porous media with double porosity, as well as to model heat conduction in media with two nonequilibrium local temperature baths, e.g., ion and electron baths. Uncoupling of the two partial differential equations leads to a higher-order diffusion equation, solutions of which could be obtained in terms of classical diffusion equation solutions. Similar equations could also be derived within an "internal length" gradient (ILG) mechanics formulation applied to diffusion problems, i.e., by introducing nonlocal effects, together with inertia and viscosity, in a mechanics-based formulation of diffusion theory. While the model has been remarkably successful in studies related to various aspects of transport in inhomogeneous media with deterministic microstructures and nanostructures, its implications in the presence of stochasticity have not yet been considered. This issue becomes particularly important in the case of diffusion in nanopolycrystals, whose deterministic ILG-based theoretical calculations predict a relaxation time that is only about one-tenth of the actual experimentally verified time scale.
This article provides the "missing link" in this estimation by adding a vital element in the ILG structure, that of stochasticity, that takes into account all boundary layer fluctuations. Our stochastic-ILG diffusion calculation confirms rapprochement between theory and experiment, thereby benchmarking a new generation of gradient-based continuum models that conform closer to real-life fluctuating environments.
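The coupled Fick-type structure described above can be illustrated with a minimal explicit finite-difference sketch: a regular path c1 and a high-diffusivity path c2 exchange mass through source terms. All parameter values are invented for illustration, not taken from the paper.

```python
# Explicit finite-difference sketch of the coupled Fick-type ("double
# diffusivity") equations: c1 is the regular path, c2 the high-diffusivity
# path; the +/- k terms are the mass-exchange "source terms". Zero-flux
# boundaries are imposed by reflecting indices. Parameters are illustrative.

def double_diffusion(n=50, steps=2000, dt=1e-4, dx=0.02,
                     D1=0.01, D2=0.1, k1=1.0, k2=0.5):
    c1 = [1.0 if i == n // 2 else 0.0 for i in range(n)]  # initial pulse
    c2 = [0.0] * n
    for _ in range(steps):
        def lap(c, i):
            # discrete Laplacian with reflecting (zero-flux) boundaries
            return (c[max(i - 1, 0)] - 2 * c[i] + c[min(i + 1, n - 1)]) / dx**2
        c1, c2 = (
            [c1[i] + dt * (D1 * lap(c1, i) - k1 * c1[i] + k2 * c2[i]) for i in range(n)],
            [c2[i] + dt * (D2 * lap(c2, i) + k1 * c1[i] - k2 * c2[i]) for i in range(n)],
        )
    return c1, c2
```

Because the exchange terms cancel in the sum and zero-flux boundaries conserve the diffusive flux, total mass across both paths is conserved while the pulse spreads preferentially along the fast path.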
Inferring microbial interaction networks from metagenomic data using SgLV-EKF algorithm.
Alshawaqfeh, Mustafa; Serpedin, Erchin; Younes, Ahmad Bani
2017-03-27
Inferring the microbial interaction networks (MINs) and modeling their dynamics are critical in understanding the mechanisms of the bacterial ecosystem and designing antibiotic and/or probiotic therapies. Recently, several approaches were proposed to infer MINs using the generalized Lotka-Volterra (gLV) model. A main drawback of these models is that they consider only measurement noise, without taking into account uncertainties in the underlying dynamics. Furthermore, MIN inference is hampered by the limited number of observations and the nonlinearity of the regulatory mechanisms. Therefore, novel estimation techniques are needed to address these challenges. This work proposes SgLV-EKF: a stochastic gLV model that adopts the extended Kalman filter (EKF) algorithm to model the MIN dynamics. In particular, SgLV-EKF employs a stochastic modeling of the MIN by adding a noise term to the dynamical model to compensate for modeling uncertainties. This stochastic modeling is more realistic than the conventional gLV model, which assumes that the MIN dynamics are perfectly governed by the gLV equations. After specifying the stochastic model structure, we propose the EKF to estimate the MIN. SgLV-EKF was compared with two similarity-based algorithms, one algorithm from the integral-based family and two regression-based algorithms, in terms of the achieved performance on two synthetic datasets and two real datasets. The first dataset models the randomness in measurement data, whereas the second dataset incorporates uncertainties in the underlying dynamics. The real datasets are provided by a recent study pertaining to an antibiotic-mediated Clostridium difficile infection. The experimental results demonstrate that SgLV-EKF outperforms the alternative methods in terms of robustness to measurement noise, modeling errors, and tracking the dynamics of the MIN.
Performance analysis demonstrates that the proposed SgLV-EKF algorithm represents a powerful and reliable tool to infer MINs and track their dynamics.
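The core idea of combining a stochastic gLV model with an EKF can be sketched in the scalar (one-species, logistic) case; the paper's model is multivariate, replacing these scalars with vectors and matrices, and all parameter values below are invented for illustration.

```python
import math
import random

# Scalar sketch of the SgLV-EKF idea: a one-species stochastic
# Lotka-Volterra (logistic) model with process noise, tracked by an
# extended Kalman filter. Invented parameters; not the paper's setup.

def simulate_and_filter(steps=300, dt=0.05, r=1.0, a=-1.0,
                        q=1e-4, meas_var=0.01, seed=0):
    rng = random.Random(seed)
    x_true = 0.1                 # true abundance
    x_est, p = 0.5, 1.0          # filter state estimate and covariance
    errs = []
    for _ in range(steps):
        # truth: gLV dynamics plus process noise (the "S" in SgLV)
        x_true += dt * x_true * (r + a * x_true) + rng.gauss(0, math.sqrt(q))
        y = x_true + rng.gauss(0, math.sqrt(meas_var))  # noisy measurement
        # EKF predict: linearize f(x) = x + dt*x*(r + a*x) around x_est
        F = 1 + dt * (r + 2 * a * x_est)
        x_est = x_est + dt * x_est * (r + a * x_est)
        p = F * p * F + q
        # EKF update
        k = p / (p + meas_var)
        x_est += k * (y - x_est)
        p *= 1 - k
        errs.append(abs(x_est - x_true))
    return errs
```

The process-noise term q plays the role the abstract describes: it lets the filter admit that the gLV equations govern the dynamics only approximately.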
Johnstone, Stuart J; Roodenrys, Steven J; Johnson, Kirsten; Bonfield, Rebecca; Bennett, Susan J
2017-06-01
Previous studies report reductions in symptom severity after combined working memory (WM) and inhibitory control (IC) training in children with AD/HD. Based on theoretical accounts of the role of arousal/attention modulation problems in AD/HD, the current study examined the efficacy of combined WM, IC, and neurofeedback training in children with AD/HD and subclinical AD/HD. Using a randomized waitlist control design, 85 children were randomly allocated to a training or waitlist condition and completed pre- and post-training assessments of overt behavior, trained and untrained cognitive task performance, and resting and task-related EEG activity. The training group completed twenty-five sessions of training using Focus Pocus software at home over a 7 to 8-week period. Trainees improved at the trained tasks, while enjoyment and engagement declined across sessions. After training, AD/HD symptom severity was reduced in the AD/HD and subclinical groups according to parents, and in the former group only according to blinded teachers and significant-others. There were minor improvements in two of six near-transfer tasks, and evidence of far-transfer of training effects in four of five far-transfer tasks. Frontal region changes indicated normalization of atypical EEG features with reduced delta and increased alpha activity. It is concluded that technology developments provide an interesting vehicle for delivering interventions and that, while further research is needed, combined WM, IC, and neurofeedback training can reduce AD/HD symptom severity in children with AD/HD and may also be beneficial to children with subclinical AD/HD. Copyright © 2017 Elsevier B.V. All rights reserved.
Cost drivers and resource allocation in military health care systems.
Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R
2007-03-01
This study illustrates the feasibility of incorporating technical efficiency considerations in the funding of military hospitals and identifies the primary drivers for hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R(2) = 0.98). This model also proved reliable in forecasting (R(2) = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the performance of the U.S.-based Army hospitals evaluated in this study.
Some Work and Some Play: Microscopic and Macroscopic Approaches to Labor and Leisure
Niyogi, Ritwik K.; Shizgal, Peter; Dayan, Peter
2014-01-01
Given the option, humans and other animals elect to distribute their time between work and leisure, rather than choosing all of one and none of the other. Traditional accounts of partial allocation have characterised behavior on a macroscopic timescale, reporting and studying the mean times spent in work or leisure. However, averaging over the more microscopic processes that govern choices is known to pose tricky theoretical problems, and also eschews any possibility of direct contact with the neural computations involved. We develop a microscopic framework, formalized as a semi-Markov decision process with possibly stochastic choices, in which subjects approximately maximise their expected returns by making momentary commitments to one or other activity. We show how macroscopic utilities arise from microscopic ones, and demonstrate how facets such as imperfect substitutability can arise in a straightforward manner from the microscopic description. PMID:25474151
Reservoirs performances under climate variability: a case study
NASA Astrophysics Data System (ADS)
Longobardi, A.; Mautone, M.; de Luca, C.
2014-09-01
A case study, the Piano della Rocca dam (southern Italy), is discussed here in order to quantify the system performances under climate variability conditions. Different climate scenarios have been stochastically generated according to the tendencies in precipitation and air temperature observed during recent decades for the studied area. Climate variables have then been filtered through an ARMA model to generate, at the monthly scale, time series of reservoir inflow volumes. Controlled release has been computed considering the reservoir is operated following the standard linear operating policy (SLOP) and reservoir performances have been assessed through the calculation of reliability, resilience and vulnerability indices (Hashimoto et al. 1982), comparing current and future scenarios of climate variability. The proposed approach can be suggested as a valuable tool to mitigate the effects of moderate to severe and persistent drought periods, through the allocation of new water resources or the planning of appropriate operational rules.
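The standard linear operating policy (SLOP) mentioned above has a simple closed form, sketched here in pure Python; the storage, inflow, demand, and capacity values in the assertions are invented for illustration.

```python
def slop_release(storage, inflow, demand, capacity):
    """Standard linear operating policy (SLOP) sketch: meet demand when
    water permits, release everything available during shortage, and
    spill any water above reservoir capacity. Illustrative only."""
    available = storage + inflow
    if available <= demand:
        release = available              # shortage: release all water
    elif available - demand <= capacity:
        release = demand                 # normal operation: meet demand
    else:
        release = available - capacity   # demand plus forced spill
    new_storage = min(available - release, capacity)
    return release, new_storage
```

Applying this rule month by month to the ARMA-generated inflow series yields the release record from which reliability, resilience, and vulnerability indices can then be computed.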
Plug-in hybrid electric vehicles in smart grid
NASA Astrophysics Data System (ADS)
Yao, Yin
In this thesis, in order to investigate the impact of charging load from plug-in hybrid electric vehicles (PHEVs), a stochastic model is developed in Matlab. In this model, two main types of PHEVs are defined: public transportation vehicles and private vehicles. Different charging time schedules, charging speeds and battery capacities are considered for each type of vehicle. The simulation results reveal that there will be two load peaks (at noon and in the evening) when the penetration level of PHEVs increases continuously to 30% in 2030. Therefore, an optimization tool is utilized to shift the load peaks. This optimization process is based on real-time pricing and wind power output data. With the help of the smart grid, the power allocated to each vehicle can be controlled. As a result, this optimization could fulfill the goal of shifting load peaks to valley hours where the real-time price is low or the wind output is high.
Sohl, Terry L.; Sayler, Kristi L.; Drummond, Mark A.; Loveland, Thomas R.
2007-01-01
A wide variety of ecological applications require spatially explicit, historic, current, and projected land use and land cover data. The U.S. Land Cover Trends project is analyzing contemporary (1973–2000) land-cover change in the conterminous United States. The newly developed FORE-SCE model used Land Cover Trends data and theoretical, statistical, and deterministic modeling techniques to project future land cover change through 2020 for multiple plausible scenarios. Projected proportions of future land use were initially developed, and then sited on the lands with the highest potential for supporting that land use and land cover using a statistically based stochastic allocation procedure. Three scenarios of 2020 land cover were mapped for the western Great Plains in the US. The model provided realistic, high-resolution, scenario-based land-cover products suitable for multiple applications, including studies of climate and weather variability, carbon dynamics, and regional hydrology.
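A statistically based stochastic allocation of the kind used by FORE-SCE can be sketched as a roulette-wheel draw: parcels with higher suitability scores are more likely to receive the projected land use, but lower-scored parcels keep a nonzero chance, so repeated runs yield different plausible maps. Parcel names and scores below are invented.

```python
import random

# Roulette-wheel sketch of stochastic land-use allocation: selection
# probability is proportional to each parcel's suitability score, and
# chosen parcels are removed so they are allocated only once.

def allocate(suitability, n_parcels, seed=0):
    rng = random.Random(seed)
    remaining = dict(suitability)            # parcel id -> suitability score
    chosen = []
    for _ in range(n_parcels):
        total = sum(remaining.values())
        pick = rng.uniform(0.0, total)       # roulette-wheel draw
        acc = 0.0
        for parcel, score in remaining.items():
            acc += score
            if pick <= acc:
                chosen.append(parcel)
                del remaining[parcel]
                break
    return chosen
```

Averaged over many seeds, a parcel with ten times the suitability of its neighbors is allocated roughly ten times as often, while no parcel is ever excluded outright.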
A multi-assets artificial stock market with zero-intelligence traders
NASA Astrophysics Data System (ADS)
Ponta, L.; Raberto, M.; Cincotti, S.
2011-01-01
In this paper, a multi-assets artificial financial market populated by zero-intelligence traders with finite financial resources is presented. The market is characterized by different types of stocks representing firms operating in different sectors of the economy. Zero-intelligence traders follow a random allocation strategy which is constrained by finite resources, past market volatility and allocation universe. Within this framework, stock price processes exhibit volatility clustering, fat-tailed distribution of returns and reversion to the mean. Moreover, the cross-correlations between returns of different stocks are studied using methods of random matrix theory. The probability distribution of eigenvalues of the cross-correlation matrix shows the presence of outliers, similar to those recently observed on real data for business sectors. It is worth noting that business sectors have been recovered in our framework without dividends, solely as a consequence of random restrictions on the allocation universe of zero-intelligence traders. Furthermore, in the presence of dividend-paying stocks and in the case of cash inflow added to the market, the artificial stock market exhibits the same structural results obtained in the simulation without dividends. These results suggest a significant structural influence on the statistical properties of the multi-asset stock market.
Accommodating the ecological fallacy in disease mapping in the absence of individual exposures.
Wang, Feifei; Wang, Jian; Gelfand, Alan; Li, Fan
2017-12-30
In health exposure modeling, in particular, disease mapping, the ecological fallacy arises because the relationship between aggregated disease incidence on areal units and average exposure on those units differs from the relationship between the event of individual incidence and the associated individual exposure. This article presents a novel modeling approach to address the ecological fallacy in the least informative data setting. We assume the known population at risk with an observed incidence for a collection of areal units and, separately, environmental exposure recorded during the period of incidence at a collection of monitoring stations. We do not assume any partial individual level information or random allocation of individuals to observed exposures. We specify a conceptual incidence surface over the study region as a function of an exposure surface resulting in a stochastic integral of the block average disease incidence. The true block level incidence is an unavailable Monte Carlo integration for this stochastic integral. We propose an alternative manageable Monte Carlo integration for the integral. Modeling in this setting is immediately hierarchical, and we fit our model within a Bayesian framework. To alleviate the resulting computational burden, we offer 2 strategies for efficient model fitting: one is through modularization, the other is through sparse or dimension-reduced Gaussian processes. We illustrate the performance of our model with simulations based on a heat-related mortality dataset in Ohio and then analyze associated real data. Copyright © 2017 John Wiley & Sons, Ltd.
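The gap the ecological fallacy opens up can be made concrete with a small Monte Carlo sketch: the true block incidence is the spatial average of a nonlinear individual risk curve applied to the exposure surface, which differs from that curve applied to the block-average exposure. Both surfaces below are invented toy functions, not the paper's model.

```python
import math
import random

# Monte Carlo sketch of block averaging: compare the spatial average of
# risk(exposure) against risk(average exposure) over one areal unit.
# Toy exposure surface and convex risk curve; invented for illustration.

def exposure(s):
    """Toy exposure surface over a unit-length block."""
    return 2.0 + math.sin(math.tau * s)

def risk(x):
    """Toy convex individual-level risk curve."""
    return x * x / 10.0

def block_incidence(n=100000, seed=1):
    rng = random.Random(seed)
    pts = [rng.random() for _ in range(n)]
    avg_of_risk = sum(risk(exposure(s)) for s in pts) / n  # true block incidence
    risk_of_avg = risk(sum(exposure(s) for s in pts) / n)  # naive ecological value
    return avg_of_risk, risk_of_avg
```

By Jensen's inequality the true block incidence exceeds the naive value here (0.45 versus 0.40 in expectation), which is exactly why modeling at the individual level and then integrating, as the paper does, matters.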
Gao, Xueping; Liu, Yinzhu; Sun, Bowen
2018-06-05
The risk of water shortage caused by uncertainties, such as frequent drought, varied precipitation, multiple water resources, and different water demands, brings new challenges to the water transfer projects. Uncertainties exist for transferring water and local surface water; therefore, the relationship between them should be thoroughly studied to prevent water shortage. For more effective water management, an uncertainty-based water shortage risk assessment model (UWSRAM) is developed to study the combined effect of multiple water resources and analyze the shortage degree under uncertainty. The UWSRAM combines copula-based Monte Carlo stochastic simulation and the chance-constrained programming-stochastic multiobjective optimization model, using the Lunan water-receiving area in China as an example. Statistical copula functions are employed to estimate the joint probability of available transferring water and local surface water and sampling from the multivariate probability distribution, which are used as inputs for the optimization model. The approach reveals the distribution of water shortage and is able to emphasize the importance of improving and updating transferring water and local surface water management, and examine their combined influence on water shortage risk assessment. The possible available water and shortages can be calculated by applying the UWSRAM, along with the corresponding allocation measures under different water-availability levels and violation probabilities. The UWSRAM is valuable for assessing the overall multi-source water availability and degree of shortage, adapting to the uncertainty surrounding water resources, establishing effective water resource planning policies for managers and achieving sustainable development.
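The copula-based sampling step can be sketched with a Gaussian copula: draw correlated standard normals, map them through the normal CDF to uniforms, then through each marginal's inverse CDF. The exponential marginals, means, and correlation below are invented stand-ins, not the paper's fitted distributions.

```python
import math
import random

# Gaussian-copula sketch of joint sampling for two water sources
# (transferred water and local surface water). Marginals and the
# correlation value are illustrative only.

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def sample_joint(n=20000, rho=0.7, mean_transfer=50.0, mean_local=80.0, seed=2):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)                      # correlated standard normals
        z2 = rho * z1 + math.sqrt(1 - rho**2) * rng.gauss(0, 1)
        u1, u2 = normal_cdf(z1), normal_cdf(z2)   # copula uniforms
        transfer = -mean_transfer * math.log(1 - u1)  # exponential inverse CDF
        local = -mean_local * math.log(1 - u2)
        out.append((transfer, local))
    return out
```

The resulting joint samples preserve both the chosen marginals and the dependence between the two sources, and can feed directly into a downstream optimization model.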
Accidental Outcomes Guide Punishment in a “Trembling Hand” Game
Cushman, Fiery; Dreber, Anna; Wang, Ying; Costa, Jay
2009-01-01
How do people respond to others' accidental behaviors? Reward and punishment for an accident might depend on the actor's intentions, or instead on the unintended outcomes she brings about. Yet, existing paradigms in experimental economics do not include the possibility of accidental monetary allocations. We explore the balance of outcomes and intentions in a two-player economic game where monetary allocations are made with a “trembling hand”: that is, intentions and outcomes are sometimes mismatched. Player 1 allocates $10 between herself and Player 2 by rolling one of three dice. One die has a high probability of a selfish outcome, another has a high probability of a fair outcome, and the third has a high probability of a generous outcome. Based on Player 1's choice of die, Player 2 can infer her intentions. However, any of the three dice can yield any of the three possible outcomes. Player 2 is given the opportunity to respond to Player 1's allocation by adding to or subtracting from Player 1's payoff. We find that Player 2's responses are influenced substantially by the accidental outcome of Player 1's roll of the die. Comparison to control conditions suggests that in contexts where the allocation is at least partially under the control of Player 1, Player 2 will hold Player 1 accountable for unintentional negative outcomes. In addition, Player 2's responses are influenced by Player 1's intention. However, Player 2 tends to modulate his responses substantially more for selfish intentions than for generous intentions. This novel economic game provides new insight into the psychological mechanisms underlying social preferences for fairness and retribution. PMID:19707578
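The trembling-hand mechanism is easy to simulate: each die mostly yields the outcome its name suggests, but every outcome remains possible, so an observer must weigh inferred intention against the accidental result. The 0.8/0.1 probabilities below are invented for illustration; the paper does not report the dice's exact odds.

```python
import random

# Toy simulation of the "trembling hand" dice: intention (choice of die)
# and outcome (roll result) are correlated but can mismatch.
DICE = {
    'selfish':  {'selfish': 0.8, 'fair': 0.1, 'generous': 0.1},
    'fair':     {'selfish': 0.1, 'fair': 0.8, 'generous': 0.1},
    'generous': {'selfish': 0.1, 'fair': 0.1, 'generous': 0.8},
}

def roll(die_name, rng):
    """Sample an allocation outcome from the chosen die."""
    u, acc = rng.random(), 0.0
    for outcome, p in DICE[die_name].items():
        acc += p
        if u <= acc:
            return outcome
    return outcome  # guard against float round-off

def outcome_frequencies(die_name, n=10000, seed=4):
    rng = random.Random(seed)
    counts = {'selfish': 0, 'fair': 0, 'generous': 0}
    for _ in range(n):
        counts[roll(die_name, rng)] += 1
    return {k: v / n for k, v in counts.items()}
```

Under these assumed odds, a "fair" die still produces a selfish or generous outcome about one roll in five, which is the mismatch the design exploits.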
Allocation of spectral and spatial modes in multidimensional metro-access optical networks
NASA Astrophysics Data System (ADS)
Gao, Wenbo; Cvijetic, Milorad
2018-04-01
Introduction of spatial division multiplexing (SDM) has added a new dimension in an effort to increase optical fiber channel capacity. At the same time, it can also be explored as an advanced optical networking tool. In this paper, we have investigated the resource allocation to end-users in a multidimensional networking structure with a plurality of spectral and spatial modes actively deployed in different networking segments. This is a more comprehensive method than the common practice in which the segments of an optical network are analyzed independently, since the interaction between network hierarchies is taken into consideration. We explored the possible transparency from the metro/core network to the optical access network, analyzed the potential bottlenecks from the network architecture perspective, and identified an optimized network structure. In our considerations, the viability of optical grooming through the entire hierarchical all-optical network is investigated by evaluating the effective utilization and spectral efficiency of the network architecture.
Cross-Layer Resource Allocation for Wireless Visual Sensor Networks and Mobile Ad Hoc Networks
2014-10-01
...(MMD), minimizes the maximum distortion among all nodes of the network, promoting a rather unbiased treatment of the nodes. We employed the Particle... achieve the ideal tradeoff between the transmitted video quality and energy consumption. Each sensor node has a bit rate that can be used for both... For both criteria...
Campanella, N
1999-01-01
A three-month relief operation for the 25,303 people living in the municipal area of Villanueva, Nicaragua, hit by Hurricane Mitch, was carried out jointly by the staff of an international non-government organization and an Italian Regional Hospital's staff. Health Mobile Teams joined the local health facilities (Health Centers and Health Posts) in responding to the people's urgent health problems. From their files the thirty-day post-disaster incidences of acute diarrhea (AD), respiratory tract infectious diseases (ARD), and malaria were estimated and compared with off-crisis data. New cases of leptospirosis were searched for, but no control group was available. The incidence of AD and ARD increased significantly in comparison with pre-disaster data (6,798 vs. 2,849 per 100,000 inhabitants (p < 0.01) and 1,205 vs. 295 per 100,000 inhabitants (p < 0.01)). The increase in the incidence of malaria was not clear-cut. Only three cases of leptospirosis were ascertained. The relief operators used the gathered data to make decisions to allocate the scarce available resources. The feasibility of the infectious disease surveillance and the reliability of the results under such conditions may change according to the setting. In this case study the infectious disease surveillance was feasible, and the gathered data were reliable and of some help to the relief operators in order to allocate the resources efficiently.
The What and Where of Adding Channel Noise to the Hodgkin-Huxley Equations
Goldwyn, Joshua H.; Shea-Brown, Eric
2011-01-01
Conductance-based equations for electrically active cells form one of the most widely studied mathematical frameworks in computational biology. This framework, as expressed through a set of differential equations by Hodgkin and Huxley, synthesizes the impact of ionic currents on a cell's voltage—and the highly nonlinear impact of that voltage back on the currents themselves—into the rapid push and pull of the action potential. Later studies confirmed that these cellular dynamics are orchestrated by individual ion channels, whose conformational changes regulate the conductance of each ionic current. Thus, kinetic equations familiar from physical chemistry are the natural setting for describing conductances; for small-to-moderate numbers of channels, these will predict fluctuations in conductances and stochasticity in the resulting action potentials. At first glance, the kinetic equations provide a far more complex (and higher-dimensional) description than the original Hodgkin-Huxley equations or their counterparts. This has prompted more than a decade of efforts to capture channel fluctuations with noise terms added to the equations of Hodgkin-Huxley type. Many of these approaches, while intuitively appealing, produce quantitative errors when compared to kinetic equations; others, as only very recently demonstrated, are both accurate and relatively simple. We review what works, what doesn't, and why, seeking to build a bridge to well-established results for the deterministic equations of Hodgkin-Huxley type as well as to more modern models of ion channel dynamics. As such, we hope that this review will speed emerging studies of how channel noise modulates electrophysiological dynamics and function. We supply user-friendly MATLAB simulation code of these stochastic versions of the Hodgkin-Huxley equations on the ModelDB website (accession number 138950) and http://www.amath.washington.edu/~etsb/tutorials.html. PMID:22125479
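The channel-noise phenomenon the review discusses can be sketched with the simplest case: a population of two-state (open/closed) channels under voltage clamp, with opening rate alpha and closing rate beta. The open fraction fluctuates around the deterministic Hodgkin-Huxley-type steady state alpha/(alpha+beta), with fluctuations shrinking as the number of channels grows. The rate values are illustrative, not fitted to any channel.

```python
import random

# Markov-chain sketch of channel noise: each closed channel opens with
# probability alpha*dt per step and each open channel closes with
# probability beta*dt (forward-Euler approximation of the kinetics).

def open_fraction(n_channels, steps=1000, dt=0.02, alpha=1.0, beta=1.0, seed=3):
    rng = random.Random(seed)
    n_open = 0
    for _ in range(steps):
        opened = sum(1 for _ in range(n_channels - n_open)
                     if rng.random() < alpha * dt)
        closed = sum(1 for _ in range(n_open)
                     if rng.random() < beta * dt)
        n_open += opened - closed
    return n_open / n_channels
```

For small channel counts the open fraction wanders far from the deterministic value, which is the fluctuation that the noise-augmented Hodgkin-Huxley equations reviewed above attempt to capture with far fewer state variables.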
The bioethics of separating conjoined twins in plastic surgery.
Lee, Michelle; Gosain, Arun K; Becker, Devra
2011-10-01
The incidence of craniopagus twins approximates four to six per 10 million births. Although rare, surgical separation of conjoined twins poses significant technical and ethical challenges. The present report uses the case of craniopagus twins AD and TD to examine the bioethical issues faced by a multidisciplinary medical team in planning the separation of craniopagus twins. AD and TD are craniopagus twins conjoined at the head. TD's head is conjoined to the back of AD's head. Neurologically, AD has the dominant cerebral circulation. TD has two normal kidneys, whereas AD has none. AD depends on TD's renal function and, on separation, will require either a kidney transplant or lifelong dialysis. This case report reviews one approach to analyzing and solving complex ethical dilemmas in pediatric plastic surgery. The principles reviewed are (1) autonomy and informed consent, focusing especially on the role of children in the informed consent process; (2) beneficence and nonmaleficence, two intricately intertwined principles because separation could potentially cause irreversible harm to one twin while improving the quality of life for the other (as separation is not a life-saving procedure, is it ethical to perform a procedure with unknown surgical risk to improve children's quality of life?); and (3) justice (is it fair to allocate excessive medical resources for the twins' separation?). The present report explores the ethics behind such decisions with respect to the separation of conjoined twins.
Modeling ecological traps for the control of feral pigs
Dexter, Nick; McLeod, Steven R
2015-01-01
Ecological traps are habitat sinks that are preferred by dispersing animals but have higher mortality or reduced fecundity compared to source habitats. Theory suggests that if mortality rates are sufficiently high, then ecological traps can result in extinction. An ecological trap may be created when pest animals are controlled in one area, but not in another area of equal habitat quality, and when there is density-dependent immigration from the high-density uncontrolled area to the low-density controlled area. We used a logistic population model to explore how varying the proportion of habitat controlled, control mortality rate, and strength of density-dependent immigration for feral pigs could affect the long-term population abundance and time to extinction. Increasing control mortality, the proportion of habitat controlled and the strength of density-dependent immigration decreased abundance both within and outside the area controlled. At higher levels of these parameters, extinction was achieved for feral pigs. We extended the analysis with a more complex stochastic, interactive model of feral pig dynamics in the Australian rangelands to examine how the same variables as the logistic model affected long-term abundance in the controlled and uncontrolled area and time to extinction. Compared to the logistic model of feral pig dynamics, the stochastic interactive model predicted lower abundances and extinction at lower control mortalities and proportions of habitat controlled. To improve the realism of the stochastic interactive model, we substituted fixed mortality rates with a density-dependent control mortality function, empirically derived from helicopter shooting exercises in Australia. Compared to the stochastic interactive model with fixed mortality rates, the model with the density-dependent control mortality function did not predict as substantial a decline in abundance in controlled or uncontrolled areas, nor extinction, for any combination of variables.
These models demonstrate that pest eradication is theoretically possible without the pest being controlled throughout its range because of density-dependent immigration into the area controlled. The stronger the density-dependent immigration, the better the overall control in controlled and uncontrolled habitat combined. However, the stronger the density-dependent immigration, the poorer the control in the area controlled. For feral pigs, incorporating environmental stochasticity improves the prospects for eradication, but adding a realistic density-dependent control function eliminates these prospects. PMID:26045954
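The logistic mechanism described above can be sketched as a two-patch model: patch A receives control (extra mortality h), patch B does not, and dispersal flows toward the lower-density patch in proportion to the density difference (immigration strength m). All parameter values are invented for illustration, not the paper's calibrated ones.

```python
# Two-patch logistic sketch of the ecological-trap mechanism: control in
# patch A creates a density gradient that continually draws immigrants
# out of the uncontrolled patch B. Parameters are illustrative.

def two_patch(h=0.4, m=0.3, r=0.5, K=100.0, years=200):
    a = b = K                      # both patches start at carrying capacity
    for _ in range(years):
        flow = m * (b - a)         # density-dependent immigration into A
        a = max(0.0, a + r * a * (1 - a / K) - h * a + flow)
        b = max(0.0, b + r * b * (1 - b / K) - flow)
    return a, b
```

Stronger control (larger h) depresses equilibrium abundance in both patches, because immigration continually drains the uncontrolled patch toward the controlled one, which is the trap effect the models exploit.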
NASA Astrophysics Data System (ADS)
Sembiring, M. T.; Wahyuni, D.; Sinaga, T. S.; Silaban, A.
2018-02-01
Cost allocation in the manufacturing industry, particularly in palm oil mills, is still widely practiced based on estimation, which leads to cost distortion. Moreover, the processing times assumed by the company do not correspond to the actual processing times at each work station. Hence, the purpose of this study is to eliminate non-value-added activities so that processing time can be shortened and production cost reduced. The Activity Based Costing method is used in this research to calculate production cost, taking value-added and non-value-added activities into consideration. The results show processing-time reductions of 35.75% at the Weighing Bridge Station, 29.77% at the Sorting Station, 5.05% at the Loading Ramp Station, and 0.79% at the Sterilizer Station. The cost of manufacturing crude palm oil is IDR 5.236,81/kg calculated by the traditional method, IDR 4.583,37/kg calculated by the Activity Based Costing method before implementation of the activity improvement, and IDR 4.581,71/kg after implementation of the activity improvement. Meanwhile, the cost of manufacturing palm kernel is IDR 2.159,50/kg calculated by the traditional method, IDR 4.584,63/kg calculated by the Activity Based Costing method before implementation of the activity improvement, and IDR 4.582,97/kg after implementation of the activity improvement.
Tchalla, Achille E; Lachal, Florent; Cardinaud, Noëlle; Saulnier, Isabelle; Rialle, Vincent; Preux, Pierre-Marie; Dantoine, Thierry
2013-01-01
Alzheimer's disease (AD) is known to increase the risk of falls. We aim to determine the effectiveness of home-based technologies coupled with teleassistance service (HBTec-TS) in older people with AD. A study of falls and the HBTec-TS system (with a light path combined with a teleassistance service) was conducted in the community. The 96 subjects, drawn from a random population of frail elderly people registered as receiving an allocation for lost autonomy from the county, were aged 65 or more and had mild-to-moderate AD with 1 year of follow-up; 49 were in the intervention group and 47 in the control group. A total of 16 (32.7%) elderly people fell in the group with HBTec-TS versus 30 (63.8%) in the group without HBTec-TS. The use of HBTec-TS was significantly associated with a reduction in the number of indoor falls among elderly people with mild-to-moderate AD (OR = 0.37, 95% CI = 0.15-0.88, p = 0.0245). The use of the HBTec-TS significantly reduced the incidence of primary indoor falls requiring GP intervention or attendance at an emergency room among elderly people with AD and mild-to-moderate dementia. © 2013 S. Karger AG, Basel.
Pedroso, Renata Valle; Coelho, Flávia Gomes de Melo; Santos-Galduróz, Ruth Ferreira; Costa, José Luiz Riani; Gobbi, Sebastião; Stella, Florindo
2012-01-01
Elderly individuals with AD are more susceptible to falls, which might be associated with decrements in their executive functions and balance, among other factors. We aimed to analyze the effects of a program of dual-task physical activity on falls, executive functions and balance of elderly individuals with AD. We studied 21 elderly individuals with probable AD, allocated to two groups: the training group (TG), with 10 participants who took part in a program of dual-task physical activity; and the control group (CG), with 11 participants who were not engaged in regular practice of physical activity. The Clock Drawing Test (CDT) and the Frontal Assessment Battery (FAB) were used in the assessment of the executive functions, while the Berg Balance Scale (BBS) and the Timed Up-and-Go (TUG) test evaluated balance. The number of falls was obtained by means of a questionnaire. We observed a better performance of the TG as regards balance and executive functions. Moreover, the lower the number of steps in the TUG test, the higher the scores in the CDT and the FAB. The practice of regular dual-task physical activity seems to have contributed to the maintenance and improvement of the motor and cognitive functions of the elderly with AD. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Contribution of tropical instability waves to ENSO irregularity
NASA Astrophysics Data System (ADS)
Holmes, Ryan M.; McGregor, Shayne; Santoso, Agus; England, Matthew H.
2018-05-01
Tropical instability waves (TIWs) are a major source of internally generated oceanic variability in the equatorial Pacific Ocean. These non-linear phenomena play an important role in the sea surface temperature (SST) budget in a region critical for low-frequency modes of variability such as the El Niño-Southern Oscillation (ENSO). However, the direct contribution of TIW-driven stochastic variability to ENSO has received little attention. Here, we investigate the influence of TIWs on ENSO using a 1/4° ocean model coupled to a simple atmosphere. The use of a simple atmosphere removes complex intrinsic atmospheric variability while allowing the dominant mode of air-sea coupling to be represented as a statistical relationship between SST and wind stress anomalies. Using this hybrid coupled model, we perform a suite of coupled ensemble forecast experiments initiated with westerly wind bursts (WWBs) in the western Pacific, where individual ensemble members differ only due to internal oceanic variability. We find that TIWs can induce a spread in the forecast amplitude of the Niño 3 SST anomaly 6 months after a given sequence of WWBs of approximately ±45% of the size of the ensemble mean anomaly. Further, when various estimates of stochastic atmospheric forcing are added, oceanic internal variability is found to contribute between about 20% and 70% of the ensemble forecast spread, with the remainder attributable to the atmospheric variability. While the oceanic contribution to ENSO stochastic forcing requires further quantification beyond the idealized approach used here, our results nevertheless suggest that TIWs may impact ENSO irregularity and predictability. This has implications for ENSO representation in low-resolution coupled models.
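The two quantitative statements above (the ensemble spread, and the roughly 20-70% oceanic share of that spread) rest on simple variance bookkeeping across ensemble members, which can be sketched as follows; the independence assumption and the numbers in the usage are illustrative, not the paper's:

```python
def ensemble_spread(members):
    """Standard deviation of a forecast quantity (e.g. a Niño 3 SST
    anomaly) across ensemble members."""
    mean = sum(members) / len(members)
    return (sum((m - mean) ** 2 for m in members) / len(members)) ** 0.5

def spread_fraction(var_ocean, var_atmos):
    """Share of total forecast variance attributable to oceanic internal
    variability, assuming the oceanic and atmospheric noise sources are
    independent so their variances add."""
    return var_ocean / (var_ocean + var_atmos)
```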
Xu, Yifang; Collins, Leslie M
2007-08-01
Two approaches have been proposed to reduce the synchrony of the neural response to electrical stimuli in cochlear implants. One approach involves adding noise to the pulse-train stimulus, and the other is based on using a high-rate pulse-train carrier. Hypotheses regarding the efficacy of the two approaches can be tested using computational models of neural responsiveness prior to time-intensive psychophysical studies. In our previous work, we have used such models to examine the effects of noise on several psychophysical measures important to speech recognition. However, to date there has been no parallel analytic solution investigating the neural response to high-rate pulse-train stimuli and their effect on psychophysical measures. This work investigates the properties of the neural response to high-rate pulse-train stimuli with amplitude-modulated envelopes using a stochastic auditory nerve model. The statistics governing the neural response to each pulse are derived using a recursive method. The agreement between the theoretical predictions and model simulations is demonstrated for sinusoidal amplitude-modulated (SAM) high-rate pulse-train stimuli. With our approach, predicting the neural response in modern implant devices becomes tractable. Psychophysical measurements are also predicted using the stochastic auditory nerve model for SAM high-rate pulse-train stimuli. Changes in dynamic range (DR) and intensity discrimination are compared with those observed for noise-modulated pulse-train stimuli. Modulation frequency discrimination is also studied as a function of stimulus level and pulse rate. Results suggest that high-rate carriers may positively impact such psychophysical measures.
Dixon, Jennifer; Smith, Peter; Gravelle, Hugh; Martin, Steve; Bardsley, Martin; Rice, Nigel; Georghiou, Theo; Dusheiko, Mark; Billings, John; Lorenzo, Michael De; Sanderson, Colin
2011-11-22
To develop a formula for allocating resources for commissioning hospital care to all general practices in England, based on the health needs of the people registered in each practice. Multivariate prospective statistical models were developed in which routinely collected electronic information from 2005-6 and 2006-7 on individuals and the areas in which they lived was used to predict their costs of hospital care in the next year, 2007-8. Data on individuals included all diagnoses recorded at any inpatient admission. Models were developed on a random sample of 5 million people and validated on a second random sample of 5 million people and a third sample of 5 million people drawn from a random sample of practices. All general practices in England as of 1 April 2007. All NHS inpatient admissions and outpatient attendances for individuals registered with a general practice on that date. All individuals registered with a general practice in England at 1 April 2007. The power of the statistical models to predict the costs of an individual patient or of each practice's registered population for 2007-8 was tested with a range of metrics (R² reported here). Comparisons of predicted costs in 2007-8 with actual costs incurred in the same year were calculated by individual and by practice. Models including person-level information (age, sex, and recorded ICD-10 diagnostic codes) and a range of area-level information (such as socioeconomic deprivation and supply of health facilities) were the most predictive of costs. After accounting for person-level variables, area-level variables added little explanatory power. The best models for resource allocation could predict upwards of 77% of the variation in costs at practice level, and about 12% at the person level. With these models, the predicted costs of about a third of practices would exceed or undershoot the actual costs by 10% or more. Smaller practices were more likely to be in these groups.
A model was developed that performed well by international standards, and could be used for allocations to practices for commissioning. The best formulas, however, could predict only about 12% of the variation in next year's costs of most inpatient and outpatient NHS care for each individual. Person-based diagnostic data significantly added to the predictive power of the models.
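The gap between practice-level (about 77%) and person-level (about 12%) R² largely reflects aggregation: averaging costs within a practice removes much of the person-level noise. A minimal sketch of both computations, with made-up data:

```python
def r_squared(actual, predicted):
    """Fraction of variance in `actual` explained by `predicted`."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

def practice_level(costs, practice_of):
    """Aggregate person-level costs to per-practice means. Computing R²
    on these aggregates, instead of on individuals, is what lifts the
    reported predictive power from the person to the practice level."""
    totals, counts = {}, {}
    for c, p in zip(costs, practice_of):
        totals[p] = totals.get(p, 0.0) + c
        counts[p] = counts.get(p, 0) + 1
    return {p: totals[p] / counts[p] for p in totals}
```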
Spatio-Temporal History of HIV-1 CRF35_AD in Afghanistan and Iran.
Eybpoosh, Sana; Bahrampour, Abbas; Karamouzian, Mohammad; Azadmanesh, Kayhan; Jahanbakhsh, Fatemeh; Mostafavi, Ehsan; Zolala, Farzaneh; Haghdoost, Ali Akbar
2016-01-01
HIV-1 Circulating Recombinant Form 35_AD (CRF35_AD) has an important position in the epidemiological profile of Afghanistan and Iran. Despite the presence of this clade in Afghanistan and Iran for over a decade, our understanding of its origin and dissemination patterns is limited. In this study, we performed a Bayesian phylogeographic analysis to reconstruct the spatio-temporal dispersion pattern of this clade using eligible CRF35_AD gag and pol sequences available in the Los Alamos HIV database (432 sequences from Iran, 16 sequences from Afghanistan, and a single CRF35_AD-like pol sequence from the USA). A Bayesian Markov chain Monte Carlo algorithm was implemented in BEAST v1.8.1. Between-country dispersion rates were tested with the Bayesian stochastic search variable selection method and were considered significant where Bayes factor values were greater than three. The findings suggested that CRF35_AD sequences were genetically similar to parental sequences from Kenya and Uganda, and to a set of subtype A1 sequences available from Afghan refugees living in Pakistan. Our results also showed that across all phylogenies, Afghan and Iranian CRF35_AD sequences formed a monophyletic cluster (posterior clade credibility > 0.7). The divergence date of this cluster was estimated to be between 1990 and 1992. Within this cluster, a bidirectional dispersion of the virus was observed across Afghanistan and Iran. We could not clearly identify whether Afghanistan or Iran first established or received this epidemic, as the root location of this cluster could not be robustly estimated. Three CRF35_AD sequences from Afghan refugees living in Pakistan nested among Afghan and Iranian CRF35_AD branches. However, the CRF35_AD-like sequence from the USA diverged independently from Kenyan subtype A1 sequences, suggesting that it is not a true CRF35_AD lineage.
Potential factors contributing to viral exchange between Afghanistan and Iran could be injection drug networks and the mass migration of Afghan refugees and labourers to Iran, which calls for extensive preventive efforts.
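The Bayes factor test mentioned above (BF > 3 for a significant between-country dispersion rate) is, in the BSSVS setting, a ratio of posterior odds to prior odds for a rate indicator being switched on. A minimal sketch, assuming only the standard formula and not any implementation detail of BEAST:

```python
def bayes_factor(posterior_prob, prior_prob):
    """Bayes factor for a dispersal rate being 'on' in BSSVS: the
    posterior odds of the indicator variable divided by its prior odds.
    BF > 3 is the significance cutoff used in the study."""
    post_odds = posterior_prob / (1.0 - posterior_prob)
    prior_odds = prior_prob / (1.0 - prior_prob)
    return post_odds / prior_odds
```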
Wakefield, Melanie; Durrant, Russil
2006-01-01
Television advertising for nicotine replacement therapy (NRT) and Zyban exposes the entire population, including adolescents, to persuasive messages about these smoking-cessation products. There is a risk that adolescents exposed to the advertising might underestimate addictiveness or perceive an unintended message that it is easy to quit smoking. This is of concern because optimism about quitting is a major predictor of trial and progression to heavier smoking among youths. We randomly allocated 492 youths age 12 to 14 years to one of three viewing conditions in which they viewed either (a) 4 NRT ads, (b) 4 Zyban ads, or (c) 4 ads promoting nonpharmacologic cessation services, such as telephone quitlines. After viewing each ad twice, participants completed a 1-page rating form. After all ads had been viewed, youths completed a questionnaire that measured intentions to smoke in the future, perceived addictiveness of smoking, perceived risks and benefits of smoking, and perceived need for pharmaceutical products and services. There were no differences in the composition of groups by age, gender, or smoking uptake. Youths were more likely to agree that the NRT and Zyban ads, compared with the quitline ads, made it seem easy to quit smoking (p < .001). However, there were no systematic differences between groups in perceived addictiveness of smoking, intentions to smoke, or other outcomes. This study suggests that although ads for NRT and Zyban may create "face value" impressions that it is easier to quit, at least in an experimental context in which exposure to ads for telephone quitlines is equal, such appraisals do not undermine more enduring perceptions about smoking. Field research taking into account the relatively high volume of pharmaceutical cessation product advertising is needed.
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Ponomarev, Artem
2009-01-01
A concern for long-term space travel outside the Earth's magnetic field is the late effects to the central nervous system (CNS) from galactic cosmic rays (GCR) or solar particle events (SPE). Human epidemiology data are severely limited for making CNS risk estimates, and it is not clear such effects occur following low-LET exposures. We are developing systems biology models based on biological information on specific diseases, and on experimental data for proton and heavy ion radiation. A two-hit model of Alzheimer's disease (AD) has been proposed by Zhu et al. (1), which is the framework of our model. Of importance is that over 50% of the US population over the age of 75 y have mild to severe forms of AD. Therefore we recommend that risk assessment for a potential AD risk from space radiation should focus on the projection of an earlier age of onset of AD and the prevention of this possible acceleration through countermeasures. In the two-hit model, oxidative stress and aberrant cell cycle-related abnormalities leading to amyloid-beta plaques and neurofibrillary tangles are necessary and invariant steps in AD. We have formulated a stochastic cell kinetics version of the two-hit AD model. In our model a population of neuronal cells is allowed to undergo renewal through neurogenesis and is susceptible to oxidative stress or cell cycle abnormalities, with age-specific accumulation of damage. Baseline rates are fitted to AD population data for specific ages, gender, and persons with an apolipoprotein E4 allele. We then explore how low-LET radiation or heavy ions may increase either of the two hits or alter neurogenesis, whether through persistent oxidative stress, direct mutation, or changes to the micro-environment, and suggest possible ways to develop accurate quantitative estimates of these processes for predicting AD risks following long-term space travel.
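The two-hit structure can be caricatured with a toy simulation in which each neuron waits for two sequential random hits, and "onset" is the age at which a small fraction of neurons carries both. All rates and the threshold below are illustrative placeholders, not the fitted values of the model described above:

```python
import random

def onset_age(n_cells=1000, rate1=0.002, rate2=0.002, frac=0.01, seed=0):
    """Toy two-hit caricature: each neuron independently waits an
    exponential time for hit 1 (oxidative stress) and then for hit 2
    (cell-cycle abnormality). Onset is the age at which a fraction
    `frac` of the population carries both hits. Raising `rate1`
    (e.g. persistent radiation-induced oxidative stress) moves the
    onset age earlier, which is the acceleration the abstract targets."""
    rng = random.Random(seed)
    both = sorted(rng.expovariate(rate1) + rng.expovariate(rate2)
                  for _ in range(n_cells))
    return both[int(frac * n_cells)]
```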
Hospital cost accounting: implementing the system successfully.
Burik, D; Duvall, T J
1985-05-01
To successfully implement a cost accounting system, certain key steps should be undertaken. These steps include developing and installing software; developing cost center budgets and inter-cost center allocations; developing service item standard costs; generating cost center level and patient level standard cost reports and reconciling these costs to actual costs; generating product line profitability reports and reconciling these reports to the financial statements; and providing ad hoc reporting capabilities. By following these steps, potential problems in the implementation process can be anticipated and avoided.
Multi-objective possibilistic model for portfolio selection with transaction cost
NASA Astrophysics Data System (ADS)
Jana, P.; Roy, T. K.; Mazumder, S. K.
2009-06-01
In this paper, we introduce the possibilistic mean value and variance of continuous possibility distributions, rather than probability distributions. We propose a multi-objective portfolio model and add an entropy objective function to generate a well-diversified asset portfolio within an optimal asset allocation. To quantify potential return and risk, portfolio liquidity is taken into account, and a multi-objective non-linear programming model for portfolio rebalancing with transaction cost is proposed. The models are illustrated with numerical examples.
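For a triangular fuzzy number the possibilistic mean and variance have simple closed forms (M = a + (β − α)/6 and Var = (α + β)²/24 under the widely used Carlsson-Fullér definitions; the abstract does not spell out which definitions it adopts, so this is an assumption). A numerical sketch via level-cut integration:

```python
def level_cut_triangular(a, alpha, beta, g):
    """gamma-level cut [a1(g), a2(g)] of the triangular fuzzy number
    with centre a, left width alpha and right width beta."""
    return a - (1 - g) * alpha, a + (1 - g) * beta

def possibilistic_mean(cut, n=10000):
    # M(A) = integral_0^1 g * (a1(g) + a2(g)) dg, midpoint rule
    h = 1.0 / n
    return sum(g * sum(cut(g)) * h
               for g in ((i + 0.5) * h for i in range(n)))

def possibilistic_variance(cut, n=10000):
    # Var(A) = 1/2 * integral_0^1 g * (a2(g) - a1(g))**2 dg
    h = 1.0 / n
    return 0.5 * sum(g * (cut(g)[1] - cut(g)[0]) ** 2 * h
                     for g in ((i + 0.5) * h for i in range(n)))
```

A portfolio's possibilistic return and risk would then combine these asset-level quantities, which is where the multi-objective programme of the paper takes over.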
NASA Astrophysics Data System (ADS)
Morales, Marco A.; Fernández-Cervantes, Irving; Agustín-Serrano, Ricardo; Anzo, Andrés; Sampedro, Mercedes P.
2016-08-01
A coarse-grained functional with short-range and long-range interactions is proposed. This functional recovers models with dissipative dynamics of types A and B as well as the stochastic Swift-Hohenberg equation. Furthermore, terms associated with a multiplicative noise source are added to these models. The models are solved numerically using the fast Fourier transform method. The resulting spatio-temporal dynamics resemble the pattern behaviour of ferrofluid phases subject to external fields (magnetic, electric and temperature), as well as the nucleation and growth phenomena present in some solid dissolutions. The multiplicative noise effect on the dynamics reproduces microstructures formed during solid-phase transformation in binary alloys of Pb-Sn, Fe-C and Cu-Ni, as well as in a NiAl-Cr(Mo) eutectic composite material. The model A for active particles, with a non-potential term in the form of a quadratic gradient, explains the formation of nanostructured particles of silver phosphate. These models show that the underlying mechanisms of pattern formation in all these systems depend on: (a) the dissipative dynamics; (b) the short-range and long-range interactions; and (c) the appropriate combination of quadratic gradient and multiplicative noise terms.
Kernel methods and flexible inference for complex stochastic dynamics
NASA Astrophysics Data System (ADS)
Capobianco, Enrico
2008-07-01
Approximation theory suggests that series expansions and projections represent standard tools for random process applications from both numerical and statistical standpoints. Such instruments emphasize the role of both sparsity and smoothness for compression purposes, the decorrelation power achieved in the expansion coefficients space compared to the signal space, and the reproducing kernel property when some special conditions are met. We consider these three aspects central to the discussion in this paper, and attempt to analyze the characteristics of some known approximation instruments employed in a complex application domain such as financial market time series. Volatility models are often built ad hoc, parametrically and through very sophisticated methodologies. But they can hardly deal with stochastic processes with regard to non-Gaussianity, covariance non-stationarity or complex dependence without paying a big price in terms of either model mis-specification or computational efficiency. It is thus a good idea to look at other more flexible inference tools; hence the strategy of combining greedy approximation and space dimensionality reduction techniques, which are less dependent on distributional assumptions and more targeted to achieve computationally efficient performances. Advantages and limitations of their use will be evaluated by looking at algorithmic and model building strategies, and by reporting statistical diagnostics.
Hui, Cang; Richardson, David M.; Pyšek, Petr; Le Roux, Johannes J.; Kučera, Tomáš; Jarošík, Vojtěch
2013-01-01
Species gain membership of regional assemblages by passing through multiple ecological and environmental filters. To capture the potential trajectory of structural changes in regional meta-communities driven by biological invasions, one can categorize species pools into assemblages of different residence times. Older assemblages, having passed through more environmental filters, should become more functionally ordered and structured. Here we calculate the level of compartmentalization (modularity) for three different-aged assemblages (neophytes, introduced after 1500 AD; archaeophytes, introduced before 1500 AD, and natives), including 2,054 species of vascular plants in 302 reserves in central Europe. Older assemblages are more compartmentalized than younger ones, with species composition, phylogenetic structure and habitat characteristics of the modules becoming increasingly distinctive. This sheds light on two mechanisms of how alien species are functionally incorporated into regional species pools: the settling-down hypothesis of diminishing stochasticity with residence time, and the niche-mosaic hypothesis of inlaid neutral modules in regional meta-communities. PMID:24045305
NASA Astrophysics Data System (ADS)
Hossler, T. H. H. H.; Caers, J.; Lakshmi, V.; Harris, J. M.
2016-12-01
Changing weather patterns, such as shorter duration of rainfall, have made water sources unreliable for local farmers in the Nagobo basin located in Northern Ghana. Farmers are therefore starting to use groundwater as a secondary source (and sometimes primary source) of water for their needs. Groundwater will therefore most likely come under considerable stress in the near future, with longer dry spells and increasing water demand from users with different interests. Strategies must be adopted to optimally allocate water between the various stakeholders in an uncertain environment. Game theory (GT) provides a framework for analyzing water management in the Nagobo basin. GT has recently gained attention for analyzing the impact and role of stakeholders in water resources management, but the associated hydrological and hydrogeological models fail to account for the numerous data sources and the leading uncertainties of the hydrogeological cycle. In this work, we describe by means of a synthetic model a situation in the Nagobo basin as a 2-player game, considering both cooperation and non-cooperation. A hydrological model of the basin is built using the different data available (surface and subsurface). We are interested in quantifying the impact of the uncertainty of the model parameters on the game, affecting both players' strategies and the equilibrium. In particular, the stochastic nature of supply (recharge of the aquifer) and the uncertain nature of the subsurface (externalities) are areas of focus. A sensitivity analysis has been carried out, and these results will be presented, as well as the outcomes of the different games.
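The cooperation/non-cooperation comparison can be illustrated with a pure-strategy Nash equilibrium search over a 2-player payoff table. The prisoner's-dilemma-style payoffs in the usage (heavy pumping dominating moderate pumping for each farmer) are hypothetical, not outputs of the basin model:

```python
def pure_nash(payoff_a, payoff_b):
    """All pure-strategy Nash equilibria of a two-player game. Payoffs
    are matrices indexed [row_strategy][col_strategy]; a cell is an
    equilibrium when neither player can gain by deviating alone."""
    n_rows, n_cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i in range(n_rows):
        for j in range(n_cols):
            best_row = all(payoff_a[i][j] >= payoff_a[k][j]
                           for k in range(n_rows))
            best_col = all(payoff_b[i][j] >= payoff_b[i][k]
                           for k in range(n_cols))
            if best_row and best_col:
                equilibria.append((i, j))
    return equilibria
```

Uncertain recharge would make each payoff entry a random variable, so the equilibrium itself can shift across recharge scenarios, which is the sensitivity the abstract sets out to quantify.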
NASA Astrophysics Data System (ADS)
Moulds, S.; Buytaert, W.; Mijic, A.
2015-04-01
Land use change has important consequences for biodiversity and the sustainability of ecosystem services, as well as for global environmental change. Spatially explicit land use change models improve our understanding of the processes driving change and make predictions about the quantity and location of future and past change. Here we present the lulccR package, an object-oriented framework for land use change modelling written in the R programming language. The contribution of the work is to resolve the following limitations associated with the current land use change modelling paradigm: (1) the source code for model implementations is frequently unavailable, severely compromising the reproducibility of scientific results and making it impossible for members of the community to improve or adapt models for their own purposes; (2) ensemble experiments to capture model structural uncertainty are difficult because of fundamental differences between implementations of different models; (3) different aspects of the modelling procedure must be performed in different environments because existing applications usually only perform the spatial allocation of change. The package includes a stochastic ordered allocation procedure as well as an implementation of the widely used CLUE-S algorithm. We demonstrate its functionality by simulating land use change at the Plum Island Ecosystems site, using a dataset included with the package. It is envisaged that lulccR will enable future model development and comparison within an open environment.
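The spatial allocation step that such models share can be sketched in a few lines. lulccR itself is written in R and its ordered procedure is stochastic, so the following deterministic Python version, with made-up suitability scores, only illustrates the core idea of ranking cells and satisfying a per-class demand:

```python
def ordered_allocation(suitability, demand):
    """Allocate `demand[lu]` cells to each land-use class in turn,
    giving each class its most suitable still-unallocated cells.
    `suitability[lu]` is a list of scores, one per grid cell; classes
    are processed in the (ordered) iteration order of `demand`."""
    n_cells = len(next(iter(suitability.values())))
    allocated = {}  # cell index -> land-use class
    for lu, n_wanted in demand.items():
        order = sorted(range(n_cells), key=lambda c: -suitability[lu][c])
        taken = 0
        for c in order:
            if taken == n_wanted:
                break
            if c not in allocated:
                allocated[c] = lu
                taken += 1
    return allocated
```

The stochastic variant would perturb the ranking with noise so that repeated runs produce an ensemble of plausible change maps rather than a single deterministic one.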
Vertical Object Layout and Compression for Fixed Heaps
NASA Astrophysics Data System (ADS)
Titzer, Ben L.; Palsberg, Jens
Research into embedded sensor networks has placed increased focus on the problem of developing reliable and flexible software for microcontroller-class devices. Languages such as nesC [10] and Virgil [20] have brought higher-level programming idioms to this lowest layer of software, thereby adding expressiveness. Both languages are marked by the absence of dynamic memory allocation, which removes the need for a runtime system to manage memory. While nesC offers code modules with statically allocated fields, arrays and structs, Virgil allows the application to allocate and initialize arbitrary objects during compilation, producing a fixed object heap for runtime. This paper explores techniques for compressing fixed object heaps with the goal of reducing the RAM footprint of a program. We explore table-based compression and introduce a novel form of object layout called vertical object layout. We provide experimental results that measure the impact on RAM size, code size, and execution time for a set of Virgil programs. Our results show that compressed vertical layout has better execution time and code size than table-based compression while achieving more than 20% heap reduction on 6 of 12 benchmark programs and 2-17% heap reduction on the remaining 6. We also present a formalization of vertical object layout and prove tight relationships between three styles of object layout.
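Stripped of compilation details, the vertical layout idea is a struct-of-arrays transposition: an object reference becomes an index into one array per field, and each field array can then be sized and compressed for its own value range. A Python sketch with illustrative helpers (Virgil's actual implementation operates on compiled object heaps, not Python dicts):

```python
def to_vertical(objects, fields):
    """Transpose a list of homogeneous records ('horizontal' objects)
    into one array per field ('vertical' layout)."""
    return {f: [obj[f] for obj in objects] for f in fields}

def read_field(vertical, oid, field):
    # an object reference becomes an index into every field array
    return vertical[field][oid]

def bytes_needed(values):
    """Smallest uniform byte width for one field array: the source of
    the RAM savings, since each field is sized for its own value range
    instead of the width of the whole object."""
    worst = max(abs(v) for v in values) if values else 0
    width = 1
    while worst >= 256 ** width // 2:
        width += 1
    return width * len(values)
```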
Two new algorithms to combine kriging with stochastic modelling
NASA Astrophysics Data System (ADS)
Venema, Victor; Lindau, Ralf; Varnai, Tamas; Simmer, Clemens
2010-05-01
Two main groups of statistical methods used in the Earth sciences are geostatistics and stochastic modelling. Geostatistical methods, such as various kriging algorithms, aim at estimating the mean value for every point as well as possible. In case of sparse measurements, such fields have less variability at small scales and a narrower distribution than the true field. This can lead to biases if a nonlinear process driven by such a kriged field is simulated. Stochastic modelling aims at reproducing the statistical structure of the data in space and time. One of the stochastic modelling methods, the so-called surrogate data approach, replicates the value distribution and power spectrum of a certain data set. While stochastic methods reproduce the statistical properties of the data, the location of the measurement is not considered. This requires the use of so-called constrained stochastic models. Because radiative transfer through clouds is a highly nonlinear process, it is essential to model the distribution (e.g. of optical depth, extinction, liquid water content or liquid water path) accurately. In addition, the correlations within the cloud field are important, especially because of horizontal photon transport. This explains the success of surrogate cloud fields for use in 3D radiative transfer studies. Up to now, however, we could only achieve good results for the radiative properties averaged over the field, but not for a radiation measurement located at a certain position. Therefore we have developed a new algorithm that combines the accuracy of stochastic (surrogate) modelling with the positioning capabilities of kriging. In this way, we can automatically profit from the large geostatistical literature and software. This algorithm is similar to the standard iterative amplitude adjusted Fourier transform (IAAFT) algorithm, but has an additional iterative step in which the surrogate field is nudged towards the kriged field.
The nudging strength is gradually reduced to zero during successive iterations. A second algorithm, which we call step-wise kriging, pursues the same aim. Each time the kriging algorithm estimates a value, noise is added to it, after which this new point is accounted for in the estimation of all the later points. In this way, the autocorrelation of the step-wise kriged field is close to that found in the pseudo-measurements. The amount of noise is determined by the kriging uncertainty. The algorithms are tested on cloud fields from large eddy simulations (LES). On these clouds, a measurement is simulated. From these pseudo-measurements, we estimated the power spectrum for the surrogates, the semi-variogram for the (step-wise) kriging, and the distribution. Furthermore, the pseudo-measurement is kriged. Because we work with LES clouds and the truth is known, we can validate the algorithm by performing 3D radiative transfer calculations on the original LES clouds and on the two new types of stochastic clouds. For comparison, the radiative properties of the kriged fields and standard surrogate fields are also computed. Preliminary results show that both algorithms reproduce the structure of the original clouds well, and the minima and maxima are located where the pseudo-measurements see them. The main problem for the quality of the structure and the root mean square error is the amount of data, which is especially limited in the case of just one zenith-pointing measurement.
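The standard IAAFT core that both paragraphs build on alternates two projections: impose the original Fourier amplitudes, then impose the original value distribution by rank-order remapping. A minimal 1-D sketch (the kriging-nudging step that the abstract adds between these two projections is omitted, and the iteration count is illustrative):

```python
import numpy as np

def iaaft(measured, n_iter=100, seed=0):
    """Iterative Amplitude Adjusted Fourier Transform surrogate:
    reproduces the value distribution of `measured` exactly and its
    Fourier amplitude spectrum approximately."""
    rng = np.random.default_rng(seed)
    sorted_vals = np.sort(measured)              # target distribution
    target_amp = np.abs(np.fft.rfft(measured))   # target spectrum
    x = rng.permutation(measured)                # random initial shuffle
    for _ in range(n_iter):
        # impose the target Fourier amplitudes, keeping current phases
        phases = np.angle(np.fft.rfft(x))
        x = np.fft.irfft(target_amp * np.exp(1j * phases),
                         n=len(measured))
        # impose the target value distribution by rank-order remapping
        ranks = np.argsort(np.argsort(x))
        x = sorted_vals[ranks]
    return x
```

Ending on the distribution step makes the value histogram exact, while the spectrum is only matched approximately, which is why the algorithm iterates.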
2018-01-01
The cell division rate, size and gene expression programmes change in response to external conditions. These global changes impact average concentrations of biomolecules and their variability, or noise. Gene expression is inherently stochastic, and noise levels of individual proteins depend on synthesis and degradation rates as well as on cell-cycle dynamics. We have modelled stochastic gene expression inside growing and dividing cells to study the effect of division rates on noise in mRNA and protein expression. We use assumptions and parameters relevant to Escherichia coli, for which abundant quantitative data are available. We find that coupling of transcription, but not translation, rates to the rate of cell division can result in protein concentration and noise homeostasis across conditions. Interestingly, we find that the increased cell size at fast division rates, observed in E. coli and other unicellular organisms, buffers noise levels even for proteins with decreased expression at faster growth. We then investigate the functional importance of these regulations using gene regulatory networks that exhibit bi-stability and oscillations. We find that network topology affects robustness to changes in division rate in complex and unexpected ways. In particular, a simple model of persistence, based on global physiological feedback, predicts an increased proportion of persister cells at slow division rates. Altogether, our study reveals how cell size regulation in response to cell division rate could help control gene expression noise. It also highlights that understanding circuits' robustness across growth conditions is key for the effective design of synthetic biological systems. PMID:29657814
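The coupling of expression noise to division can be caricatured with a Gillespie-style birth-death simulation in which molecule counts are halved binomially at each division. The rates and division interval below are illustrative placeholders, not the E. coli parameters the study uses:

```python
import random

def simulate_mrna(k=10.0, gamma=0.1, t_div=30.0, n_divisions=200, seed=1):
    """Gillespie simulation of mRNA birth (rate k) and degradation
    (rate gamma per molecule) inside a dividing cell. At each division,
    every molecule is kept with probability 1/2 (binomial partitioning).
    Returns copy numbers sampled just before each division."""
    rng = random.Random(seed)
    m, t = 0, 0.0
    samples = []
    for _ in range(n_divisions):
        t_next_div = t + t_div
        while True:
            total = k + gamma * m           # total reaction propensity
            dt = rng.expovariate(total)
            if t + dt >= t_next_div:        # division fires first
                break
            t += dt
            if rng.random() < k / total:    # birth
                m += 1
            else:                           # degradation (m > 0 here)
                m -= 1
        samples.append(m)
        m = sum(1 for _ in range(m) if rng.random() < 0.5)  # split
        t = t_next_div
    return samples
```

Varying `t_div` (the division rate) and coupling `k` to it is the kind of numerical experiment the abstract describes for probing noise homeostasis.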
NASA Astrophysics Data System (ADS)
Angileri, Silvia Eleonora; Conoscenti, Christian; Hochschild, Volker; Märker, Michael; Rotigliano, Edoardo; Agnesi, Valerio
2016-06-01
Soil erosion by water constitutes a serious problem affecting various countries. In the last few years, a number of studies have adopted statistical approaches for erosion susceptibility zonation. In this study, the Stochastic Gradient Treeboost (SGT) was tested as a multivariate statistical tool for exploring, analyzing and predicting the spatial occurrence of rill-interrill erosion and gully erosion. This technique implements the stochastic gradient boosting algorithm with a tree-based method. The study area is a 9.5 km2 river catchment located in central-northern Sicily (Italy), where water erosion processes are prevalent and affect the agricultural productivity of local communities. In order to model soil erosion by water, the spatial distribution of landforms due to rill-interrill and gully erosion was mapped and 12 environmental variables were selected as predictors. Four calibration and four validation subsets were obtained by randomly extracting sets of negative cases, both for the rill-interrill erosion and gully erosion models. The results of validation, based on receiver operating characteristic (ROC) curves, showed excellent to outstanding accuracies of the models, and thus a high prediction skill. Moreover, SGT allowed us to explore the relationships between erosion landforms and predictors. A different suite of predictor variables was found to be important for the two models. Elevation, aspect, landform classification and land-use are the main controlling factors for rill-interrill erosion, whilst the stream power index, plan curvature and the topographic wetness index were the most important independent variables for gullies. Finally, an ROC plot analysis made it possible to define a threshold value to classify cells according to the presence/absence of the two erosion processes. Hence, by heuristically combining the resulting rill-interrill erosion and gully erosion susceptibility maps, an integrated water erosion susceptibility map was created.
The adopted method offers the advantages of an objective and repeatable procedure, whose result is useful for local administrators to identify the areas that are most susceptible to water erosion and best allocate resources for soil conservation strategies.
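Deriving a presence/absence threshold from an ROC analysis, as done here for the susceptibility maps, is commonly based on maximizing Youden's J statistic (sensitivity + specificity - 1); the exact criterion used in the paper is an assumption. A minimal numpy sketch:

```python
import numpy as np

def youden_threshold(scores, labels):
    """Pick the cut-off on a continuous susceptibility score that
    maximizes Youden's J along the ROC curve. `scores` and `labels`
    are numpy arrays; labels are 1 (presence) or 0 (absence)."""
    order = np.argsort(-scores)        # sort by descending score
    s, y = scores[order], labels[order]
    P = y.sum()
    N = len(y) - P
    tpr = np.cumsum(y) / P             # sensitivity at each cut
    fpr = np.cumsum(1 - y) / N         # 1 - specificity at each cut
    j = tpr - fpr                      # Youden's J
    return s[np.argmax(j)]
```

Cells with scores at or above the returned threshold would be classified as susceptible; other criteria (e.g., closest-to-(0,1) distance) are equally common choices.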
Adaptive Urban Stormwater Management Using a Two-stage Stochastic Optimization Model
NASA Astrophysics Data System (ADS)
Hung, F.; Hobbs, B. F.; McGarity, A. E.
2014-12-01
In many older cities, stormwater results in combined sewer overflows (CSOs) and consequent water quality impairments. Because of the expense of traditional approaches for controlling CSOs, cities are considering the use of green infrastructure (GI) to reduce runoff and pollutants. Examples of GI include tree trenches, rain gardens, green roofs, and rain barrels. However, the cost and effectiveness of GI are uncertain, especially at the watershed scale. We present a two-stage stochastic extension of the Stormwater Investment Strategy Evaluation (StormWISE) model (A. McGarity, JWRPM, 2012, 111-24) to explicitly model and optimize these uncertainties in an adaptive management framework. A two-stage model represents the immediate commitment of resources ("here & now") followed by later investment and adaptation decisions ("wait & see"). A case study is presented for Philadelphia, which intends to extensively deploy GI over the next two decades (PWD, "Green City, Clean Water - Implementation and Adaptive Management Plan," 2011). After first-stage decisions are made, the model updates the stochastic objective and constraints (learning). We model two types of "learning" about GI cost and performance. One assumes that learning occurs over time, is automatic, and does not depend on what has been done in stage one (basic model). The other considers learning resulting from active experimentation and learning-by-doing (advanced model). Both require expert probability elicitations, and learning from research and monitoring is modelled by Bayesian updating (as in S. Jacobi et al., JWRPM, 2013, 534-43). The model allocates limited financial resources to GI investments over time to achieve multiple objectives with a given reliability. Objectives include minimizing construction and O&M costs; achieving nutrient, sediment, and runoff volume targets; and community concerns, such as aesthetics, CO2 emissions, heat islands, and recreational values. 
CVaR (Conditional Value at Risk) and chance constraints are placed on the objectives to achieve desired confidence levels. By varying the budgets, reliability constraints, and priorities among other objectives, we generate a range of GI deployment strategies that represent tradeoffs among objectives as well as the confidence in achieving them.
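For scenario-based models like this one, CVaR has a simple discrete form: with equally likely scenarios, CVaR at level alpha is the average cost of the worst (1 - alpha) fraction of scenarios. A minimal sketch of that computation (not the paper's optimization code):

```python
import numpy as np

def cvar(costs, alpha=0.9):
    """Conditional Value at Risk of a set of equally likely scenario
    costs: the mean of the worst (1 - alpha) fraction of outcomes."""
    c = np.sort(np.asarray(costs, dtype=float))[::-1]  # worst first
    m = max(1, int(round((1 - alpha) * len(c))))       # tail size
    return c[:m].mean()
```

In a two-stage model, a constraint of the form `cvar(scenario_costs, alpha) <= budget` limits tail risk rather than just expected cost, which is why it pairs naturally with chance constraints on the other objectives.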
Wang, Feng; Kang, Mengzhen; Lu, Qi; Letort, Véronique; Han, Hui; Guo, Yan; de Reffye, Philippe; Li, Baoguo
2011-01-01
Background and Aims Mongolian Scots pine (Pinus sylvestris var. mongolica) is one of the principal species used for windbreak and sand stabilization in arid and semi-arid areas in northern China. A model-assisted analysis of its canopy architectural development and functions is valuable for better understanding its behaviour and roles in fragile ecosystems. However, due to the intrinsic complexity and variability of trees, the parametric identification of such models is currently a major obstacle to their evaluation and their validation with respect to real data. The aim of this paper was to present the mathematical framework of a stochastic functional–structural model (GL2) and its parameterization for Mongolian Scots pines, taking into account inter-plant variability in terms of topological development and biomass partitioning. Methods In GL2, plant organogenesis is determined by the realization of random variables representing the behaviour of axillary or apical buds. The associated probabilities are calibrated for Mongolian Scots pines using experimental data including means and variances of the numbers of organs per plant in each order-based class. The functional part of the model relies on the principles of source–sink regulation and is parameterized by direct observations of living trees and the inversion method using measured data for organ mass and dimensions. Key Results The final calibration accuracy satisfies both organogenetic and morphogenetic processes. Our hypothesis for the number of organs following a binomial distribution is found to be consistent with the real data. Based on the calibrated parameters, stochastic simulations of the growth of Mongolian Scots pines in plantations are generated by the Monte Carlo method, allowing analysis of the inter-individual variability of the number of organs and biomass partitioning. Three-dimensional (3D) architectures of young Mongolian Scots pines were simulated for 4-, 6- and 8-year-old trees. 
Conclusions This work provides a new method for characterizing tree structures and biomass allocation that can be used to build a 3D virtual Mongolian Scots pine forest. The work paves the way for bridging the gap between a single-plant model and a stand model. PMID:21062760
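The binomial hypothesis for organ counts can be checked by the method of moments: estimate p from the sample mean and compare the observed variance with the binomial prediction n·p·(1 - p). This is an illustrative check, not the paper's calibration procedure; `n_max` (the number of Bernoulli trials per bud class) is an assumed input.

```python
import numpy as np

def binomial_fit_check(counts, n_max):
    """Method-of-moments check of a Binomial(n_max, p) hypothesis.
    Returns (p_hat, observed variance, predicted variance)."""
    counts = np.asarray(counts, dtype=float)
    p_hat = counts.mean() / n_max            # moment estimate of p
    var_pred = n_max * p_hat * (1 - p_hat)   # binomial variance
    return p_hat, counts.var(ddof=1), var_pred
```

Close agreement between the observed and predicted variances (as the authors report for their organ-count data) supports the binomial assumption; a large excess variance would instead suggest over-dispersion.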
Wang, Feng; Kang, Mengzhen; Lu, Qi; Letort, Véronique; Han, Hui; Guo, Yan; de Reffye, Philippe; Li, Baoguo
2011-04-01
Mongolian Scots pine (Pinus sylvestris var. mongolica) is one of the principal species used for windbreak and sand stabilization in arid and semi-arid areas in northern China. A model-assisted analysis of its canopy architectural development and functions is valuable for better understanding its behaviour and roles in fragile ecosystems. However, due to the intrinsic complexity and variability of trees, the parametric identification of such models is currently a major obstacle to their evaluation and their validation with respect to real data. The aim of this paper was to present the mathematical framework of a stochastic functional-structural model (GL2) and its parameterization for Mongolian Scots pines, taking into account inter-plant variability in terms of topological development and biomass partitioning. In GL2, plant organogenesis is determined by the realization of random variables representing the behaviour of axillary or apical buds. The associated probabilities are calibrated for Mongolian Scots pines using experimental data including means and variances of the numbers of organs per plant in each order-based class. The functional part of the model relies on the principles of source-sink regulation and is parameterized by direct observations of living trees and the inversion method using measured data for organ mass and dimensions. The final calibration accuracy satisfies both organogenetic and morphogenetic processes. Our hypothesis for the number of organs following a binomial distribution is found to be consistent with the real data. Based on the calibrated parameters, stochastic simulations of the growth of Mongolian Scots pines in plantations are generated by the Monte Carlo method, allowing analysis of the inter-individual variability of the number of organs and biomass partitioning. Three-dimensional (3D) architectures of young Mongolian Scots pines were simulated for 4-, 6- and 8-year-old trees. 
This work provides a new method for characterizing tree structures and biomass allocation that can be used to build a 3D virtual Mongolian Scots pine forest. The work paves the way for bridging the gap between a single-plant model and a stand model.
Stochastic simulation of karst conduit networks
NASA Astrophysics Data System (ADS)
Pardo-Igúzquiza, Eulogio; Dowd, Peter A.; Xu, Chaoshui; Durán-Valsero, Juan José
2012-01-01
Karst aquifers have very high spatial heterogeneity. Essentially, they comprise a system of pipes (i.e., the network of conduits) superimposed on rock porosity and on a network of stratigraphic surfaces and fractures. This heterogeneity strongly influences the hydraulic behavior of the karst and it must be reproduced in any realistic numerical model of the karst system that is used as input to flow and transport modeling. However, the directly observed karst conduits are only a small part of the complete karst conduit system, and knowledge of the complete conduit geometry and topology remains spatially limited and uncertain. Thus, there is a special interest in the stochastic simulation of networks of conduits that can be combined with fracture and rock porosity models to provide a realistic numerical model of the karst system. Furthermore, the simulated model may be of interest per se and other uses could be envisaged. The purpose of this paper is to present an efficient method for conditional and non-conditional stochastic simulation of karst conduit networks. The method comprises two stages: generation of conduit geometry and generation of topology. The approach adopted is a combination of a resampling method for generating conduit geometries from templates and a modified diffusion-limited aggregation method for generating the network topology. The authors show that the 3D karst conduit networks generated by the proposed method are statistically similar to observed karst conduit networks or to a hypothesized network model. The statistical similarity is in the sense of reproducing the tortuosity index of the conduits, the fractal dimension of the network, the direction rose, the Z-histogram and Ripley's K-function of the bifurcation points (which differs from a random allocation of those bifurcation points).
The proposed method (1) is very flexible, (2) incorporates any experimental data (conditioning information) and (3) can easily be modified when implemented in a hydraulic inverse modeling procedure. Several synthetic examples are given to illustrate the methodology and real conduit network data are used to generate simulated networks that mimic real geometries and topology.
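The topology-generation stage builds on diffusion-limited aggregation (DLA). The sketch below is the textbook on-lattice 2D variant, shown only to illustrate the growth mechanism the paper modifies: random walkers launched from outside the cluster stick when they first touch it, producing branched, dendritic structures.

```python
import numpy as np

def dla_cluster(n_particles=60, size=61, seed=0):
    """Textbook on-lattice diffusion-limited aggregation (the paper
    uses a *modified* DLA for conduit networks; this is the plain
    variant). Returns a boolean grid marking the aggregated cluster."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=bool)
    c = size // 2
    grid[c, c] = True                      # seed node at the center
    r_max = 1                              # current cluster radius bound
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_particles):
        while True:                        # relaunch until this walker sticks
            ang = rng.uniform(0, 2 * np.pi)
            r0 = r_max + 2                 # launch just outside the cluster
            x = c + int(round(r0 * np.cos(ang)))
            y = c + int(round(r0 * np.sin(ang)))
            stuck = False
            while True:                    # random walk
                dx, dy = steps[rng.integers(4)]
                x, y = x + dx, y + dy
                if (x - c) ** 2 + (y - c) ** 2 > (r_max + 10) ** 2:
                    break                  # wandered too far: relaunch
                if not (0 < x < size - 1 and 0 < y < size - 1):
                    break                  # left the grid: relaunch
                if any(grid[x + ddx, y + ddy] for ddx, ddy in steps):
                    grid[x, y] = True      # touched the cluster: stick
                    r_max = max(r_max, int(np.hypot(x - c, y - c)) + 1)
                    stuck = True
                    break
            if stuck:
                break
    return grid
```

Modified variants (as in the paper) bias the walk or the sticking rule so that the simulated networks match observed statistics such as tortuosity and fractal dimension.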
NASA Astrophysics Data System (ADS)
Kaplan, J.; Howitt, R. E.; Kroll, S.
2016-12-01
Public financing of public projects is becoming more difficult with growing political and financial pressure to reduce the size and scope of government action. Private provision is possible but is often doomed by under-provision. If, however, market-like mechanisms could be incorporated into the solicitation of funds to finance the provision of the good (because, for example, the good is supplied stochastically and is divisible), then we would expect fewer incentives to free ride and greater efficiency in providing the public good. In a controlled computer-based economic experiment, we evaluate two market-like conditions (reliability pricing allocation and self-sizing of the good) that are designed to reduce under-provision. The results suggest that financing an infrastructure project when delivery is allocated based on reliability pricing rather than historical allocation results in significantly greater price-formation efficiency and less free riding, whether the project is of a fixed size determined by external policy makers or determined endogenously by the sum of private contributions. When the reliability pricing and self-sizing (endogenous) mechanisms are used in combination, free-riding is reduced the most among the tested treatments. Furthermore, and as expected, self-sizing combined with historical allocations results in the worst level of free-riding. The setting for this treatment creates an incentive to undervalue willingness to pay, since very low contributions still return positive earnings as long as enough contributions are raised for a single unit. If everyone perceives that everyone else is undervaluing their contributions, the incentive grows stronger, and we see the greatest degree of free riding among the treatments.
Lastly, the results from the analysis suggested that the rebate rule may have encouraged those with willingness to pay values less than the cost of the project to feel confident when contributing more than their willingness to pay and to do so when they faced the endogenously-sized, reliability pricing solicitation since a rebate would likely return them positive earnings. In subsequent research we would like to explore the role of the rebate rule in the effectiveness of reliability pricing and self-sizing in increasing price-formation efficiency and reduce free riding.
NASA Astrophysics Data System (ADS)
Prada, Jose Fernando
Keeping a contingency reserve in power systems is necessary to preserve the security of real-time operations. This work studies two different approaches to the optimal allocation of energy and reserves in the day-ahead generation scheduling process. Part I presents a stochastic security-constrained unit commitment model to co-optimize energy and the locational reserves required to respond to a set of uncertain generation contingencies, using a novel state-based formulation. The model is applied in an offer-based electricity market to allocate contingency reserves throughout the power grid, in order to comply with the N-1 security criterion under transmission congestion. The objective is to minimize expected dispatch and reserve costs, together with post-contingency corrective redispatch costs, modeling the probability of generation failure and the associated post-contingency states. The characteristics of the scheduling problem are exploited to formulate a computationally efficient method, consistent with established operational practices. We simulated the distribution of locational contingency reserves on the IEEE RTS96 system and compared the results with the conventional deterministic method. We found that assigning locational spinning reserves can guarantee an N-1 secure dispatch accounting for transmission congestion at a reasonable extra cost. The simulations also showed little value in allocating downward reserves but sizable operating savings from co-optimizing locational nonspinning reserves. Overall, the results indicate the computational tractability of the proposed method. Part II presents a distributed generation scheduling model to optimally allocate energy and spinning reserves among competing generators in a day-ahead market. The model is based on the coordination between individual generators and a market entity.
The proposed method uses forecasting, augmented pricing and locational signals to induce efficient commitment of generators based on firm posted prices. It is price-based but does not rely on multiple iterations, minimizes information exchange and simplifies the market clearing process. Simulations of the distributed method performed on a six-bus test system showed that, using an appropriate set of prices, it is possible to emulate the results of a conventional centralized solution, without need of providing make-whole payments to generators. Likewise, they showed that the distributed method can accommodate transactions with different products and complex security constraints.
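The essential coupling in energy-reserve co-optimization is that a unit's energy output plus its reserve commitment cannot exceed its capacity. The toy sketch below makes that coupling concrete by brute-force search over a two-generator system; the capacities, costs, demand, and reserve requirement are all made-up numbers, and the paper's actual model is a full stochastic unit-commitment optimization, not this enumeration.

```python
from itertools import product

def co_optimize(demand=100.0, reserve_req=20.0, step=5.0):
    """Brute-force co-optimization of energy (e) and spinning reserve
    (r) for two generators. Each unit: (capacity MW, energy $/MWh,
    reserve $/MW) - illustrative values only."""
    gens = [
        (80.0, 20.0, 2.0),   # cheap energy, expensive reserve
        (60.0, 30.0, 1.0),   # expensive energy, cheap reserve
    ]
    grid = [i * step for i in range(int(80 / step) + 1)]
    best = None
    for e1, e2, r1, r2 in product(grid, repeat=4):
        # capacity coupling: energy + reserve bounded by capacity
        if e1 + r1 > gens[0][0] or e2 + r2 > gens[1][0]:
            continue
        # balance and reserve adequacy
        if e1 + e2 != demand or r1 + r2 < reserve_req:
            continue
        cost = (e1 * gens[0][1] + e2 * gens[1][1]
                + r1 * gens[0][2] + r2 * gens[1][2])
        if best is None or cost < best[0]:
            best = (cost, (e1, e2, r1, r2))
    return best
```

Here the cheap-energy unit runs at full capacity and therefore carries no reserve, so all reserve shifts to the second unit; locational constraints and contingency scenarios, as in the paper, layer on top of exactly this trade-off.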
Eckerström, Marie; Berg, Anne Ingeborg; Nordlund, Arto; Rolstad, Sindre; Sacuiu, Simona; Wallin, Anders
2016-01-01
Subjective cognitive impairment (SCI) is a trigger for seeking health care in a possible preclinical phase of Alzheimer's disease (AD), although the characteristics of SCI need clarification. We investigated the prevalence of psychosocial stress, depressive symptoms and CSF AD biomarkers in SCI and MCI (mild cognitive impairment). Memory clinic patients (SCI: n = 90; age: 59.8 ± 7.6 years; MCI: n = 160; age: 63.7 ± 7.0 years) included in the Gothenburg MCI study were examined at baseline. Variables were analyzed using logistic regression with SCI as the dependent variable. Stress was more prevalent in SCI (51.1%) than in MCI (23.1%); p < 0.0005. SCI patients had more previous depressive symptoms (p = 0.006), but did not differ from MCI patients with regard to current depressive symptoms. A positive CSF AD profile was present in 14.4% of SCI patients and 35.0% of MCI patients (p = 0.001). Stress (p = 0.002), previous stress/depressive symptoms (p = 0.006) and a negative CSF AD profile (p = 0.036) predicted allocation to the SCI group. Psychosocial stress is more prevalent in SCI than previously acknowledged. The high prevalence and long-term occurrence of stress/depressive symptoms in SCI, in combination with a low prevalence of altered CSF AD biomarkers, strengthens the notion that AD is not the most likely etiology of SCI. © 2016 S. Karger AG, Basel.
Comparison of model microbial allocation parameters in soils of varying texture
NASA Astrophysics Data System (ADS)
Hagerty, S. B.; Slessarev, E.; Schimel, J.
2017-12-01
The soil microbial community decomposes the majority of carbon (C) inputs to the soil. However, not all of this C is respired; rather, a substantial portion of the carbon processed by microbes may remain stored in the soil. The balance between C storage and respiration is controlled by microbial turnover rates and C allocation strategies. These microbial community properties may depend on soil texture, which has the potential to influence both the nature and the fate of microbial necromass and extracellular products. To evaluate the role of texture in microbial turnover and C allocation, we sampled four soils from the University of California's Hastings Reserve that varied in texture (one silt loam, two sandy loams, and one clay soil) but support similar grassland plant communities. We added 14C-glucose to the soil and measured the concentration of the label in the carbon dioxide (CO2), microbial biomass, and extractable C pools over 7 weeks. The labeled biomass turned over most slowly in the clay soil; the concentration of labeled biomass was more than 1.5 times that of the other soils after 8 weeks. The clay soil also had the lowest mineralization rate of the label, and mineralization slowed after two weeks. In contrast, in the sandier soils mineralization rates were higher and did not plateau until 5 weeks into the incubation period. We fit the 14C data to a microbial allocation model and estimated microbial parameters: assimilation efficiency, exudation, and biomass-specific respiration and turnover for each soil. We compare these parameters across the soil texture gradient to assess the extent to which models may need to account for variability in microbial C allocation across soils of different texture. Our results suggest that microbial C turns over more slowly in high-clay soils than in sandy soils, and that C lost from microbial biomass is retained at higher rates in high-clay soils.
Accounting for these differences in microbial allocation and carbon stabilization could improve model representations of C cycling across a range of soil types.
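A one-pool caricature of the turnover estimation: if labeled biomass decays as B(t) = B0·exp(-k·t), the turnover rate k can be recovered by log-linear least squares. The paper's model has more pools and parameters (assimilation, exudation, respiration); this sketch only illustrates the fitting step.

```python
import numpy as np

def fit_turnover_rate(t, biomass_label):
    """Estimate a first-order turnover rate k and initial label B0
    from labeled-biomass time series, assuming B(t) = B0 * exp(-k t).
    Fit is log-linear least squares via a degree-1 polynomial."""
    t = np.asarray(t, dtype=float)
    y = np.log(np.asarray(biomass_label, dtype=float))
    slope, intercept = np.polyfit(t, y, 1)
    return -slope, np.exp(intercept)   # (k, B0)
```

Comparing fitted k values across the texture gradient is the kind of contrast the abstract reports: a smaller k in the clay soil corresponds to slower microbial C turnover.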
Husain, Sara; Kadir, Masood; Fatmi, Zafar
2007-01-23
Limited resources, whether public or private, demand prioritisation among competing needs to maximise productivity. With a substantial increase in the number of reported HIV cases, little work has been done to understand how resources have been distributed and what factors may have influenced allocation within the newly introduced Enhanced National AIDS Control Program of Pakistan. The objective of this study was to identify perceptions of decision makers about the process of resource allocation within Pakistan's Enhanced National AIDS Control Program. A qualitative study was undertaken and in-depth interviews were conducted with decision makers at provincial and federal levels responsible for allocating resources within the program. HIV was not considered a priority issue by all study participants and external funding for the program was thought to have been accepted because of poor foreign currency reserves and donor agency influence rather than local need. Political influences from the federal government and donor agencies were thought to manipulate distribution of funds within the program. These influences were thought to occur despite the existence of a well-laid-out procedure to determine allocation of public resources. Lack of collaboration among departments involved in decision making, a pervasive lack of technical expertise, paucity of information and an atmosphere of ad hoc decision making were thought to reduce resistance to external pressures. Development of a unified program vision through a consultative process and advocacy is necessary to understand the goals to be achieved, to enhance program ownership and to develop consensus about how money and effort should be directed. Enhancing public sector expertise in planning and budgeting is essential not just for the program, but also to reduce reliance on external agencies for technical support.
Strengthening available databases for effective decision making is required to make financial allocations based on real, rather than perceived needs. With a large part of HIV program funding dedicated to public-private partnerships, it becomes imperative to develop public sector capacity to administer contracts, coordinate and monitor activities of the non-governmental sector.
Husain, Sara; Kadir, Masood; Fatmi, Zafar
2007-01-01
Background Limited resources, whether public or private, demand prioritisation among competing needs to maximise productivity. With a substantial increase in the number of reported HIV cases, little work has been done to understand how resources have been distributed and what factors may have influenced allocation within the newly introduced Enhanced National AIDS Control Program of Pakistan. The objective of this study was to identify perceptions of decision makers about the process of resource allocation within Pakistan's Enhanced National AIDS Control Program. Methods A qualitative study was undertaken and in-depth interviews were conducted with decision makers at provincial and federal levels responsible for allocating resources within the program. Results HIV was not considered a priority issue by all study participants and external funding for the program was thought to have been accepted because of poor foreign currency reserves and donor agency influence rather than local need. Political influences from the federal government and donor agencies were thought to manipulate distribution of funds within the program. These influences were thought to occur despite the existence of a well-laid-out procedure to determine allocation of public resources. Lack of collaboration among departments involved in decision making, a pervasive lack of technical expertise, paucity of information and an atmosphere of ad hoc decision making were thought to reduce resistance to external pressures. Conclusion Development of a unified program vision through a consultative process and advocacy is necessary to understand the goals to be achieved, to enhance program ownership and to develop consensus about how money and effort should be directed. Enhancing public sector expertise in planning and budgeting is essential not just for the program, but also to reduce reliance on external agencies for technical support.
Strengthening available databases for effective decision making is required to make financial allocations based on real, rather than perceived needs. With a large part of HIV program funding dedicated to public-private partnerships, it becomes imperative to develop public sector capacity to administer contracts, coordinate and monitor activities of the non-governmental sector. PMID:17244371
The Link Between Physical Activity and Cognitive Dysfunction in Alzheimer Disease.
Phillips, Cristy; Baktir, Mehmet Akif; Das, Devsmita; Lin, Bill; Salehi, Ahmad
2015-07-01
Alzheimer disease (AD) is a primary cause of cognitive dysfunction in the elderly population worldwide. Despite the allocation of enormous amounts of funding and resources to studying this brain disorder, there are no effective pharmacological treatments for reducing the severity of pathology and restoring cognitive function in affected people. Recent reports on the failure of multiple clinical trials for AD have highlighted the need to diversify further the search for new therapeutic strategies for cognitive dysfunction. Thus, studies detailing the neuroprotective effects of physical activity (PA) on the brain in AD were reviewed, and mechanisms by which PA might mitigate AD-related cognitive decline were explored. A MEDLINE database search was used to generate a list of studies conducted between January 2007 and September 2014 (n=394). These studies, along with key references, were screened to identify those that assessed the effects of PA on AD-related biomarkers and cognitive function. The search was not limited on the basis of intensity, frequency, duration, or mode of activity. However, studies in which PA was combined with another intervention (eg, diet, pharmacotherapeutics, ovariectomy, cognitive training, behavioral therapy), and studies not written in English were excluded. Thirty-eight animal and human studies met entry criteria. Most of the studies suggested that PA attenuates neuropathology and positively affects cognitive function in AD. Although the literature lacked sufficient evidence to support precise PA guidelines, convergent evidence does suggest that the incorporation of regular PA into daily routines mitigates AD-related symptoms, especially when deployed earlier in the disease process. Here the protocols used to alter the progression of AD-related neuropathology and cognitive decline are highlighted, and the implications for physical therapist practice are discussed. © 2015 American Physical Therapy Association.
NASA Astrophysics Data System (ADS)
Wibowo, Agus Tri; Handayani, Naniek Utami
2017-11-01
Petrokimia Gresik is one of the largest fertilizer producers in Indonesia, with a cross-country supply chain and distribution network spanning the archipelago for both bulk and bagged fertilizer. This research was conducted at the PT PG port, the main node of the firm's logistics activities for both loading and unloading, and focuses on the loading of bagged fertilizer. Problems in this process arise from inefficiency in the supply chain flow, caused by the presence of waste and non-value-added (NVA) activities. The purpose of this study was to determine which kinds of waste occur during the process, to propose improvements using the Lean Supply Chain concept and Value Stream Mapping, and to trace root causes using the 5 Whys method. The most influential type of waste in the process stream is waiting time (20.42%), with non-value-added activities accounting for 51.9%. Using 5 Whys, the largest causes of waste were found to be trucks waiting a long time for cargo, an inadequate number of cranes, and the absence of scheduling and load allocation. Recommended solutions are scheduling and allocation, the creation of a dedicated lane in the warehouse, and supplying cranes with an appropriate load speed. Based on these improvement suggestions, total NVA is predicted to be reduced to 59.8%.
Bichescu-Burian, D; Cerisier, C; Czekaj, A; Grempler, J; Hund, S; Jaeger, S; Schmid, P; Weithmann, G; Steinert, T
2017-01-01
In Germany, in-patient treatment of patients with depressive, neurotic, anxiety, and somatoform disorders (ICD-10 F3, F4) is carried out in different settings in psychiatry and psychosomatics. Which patient characteristics determine referral to one or the other specialty is a crucial question in mental health policy and a matter of ongoing controversy. However, comparative data on patient populations are widely lacking. In the study of Treatment Pathways of Patients with Anxiety and Depression (PfAD study), a total of 320 patients with ICD-10 F3/F4 clinical diagnoses were consecutively recruited from four treatment settings (psychiatric depression ward, psychiatric crisis intervention ward, psychiatric day hospitals, or psychosomatic hospital units; 80 participants per setting) and investigated. In all treatment settings, patients with considerable severity of illness and chronicity were treated. Female gender, higher education, and higher income predicted referral to psychosomatic units; male gender, transfer from another hospital or emergency hospitalization, co-morbidity with a personality disorder, higher general psychiatric co-morbidity, and danger to self at admission predicted referral to psychiatric units. Patients in psychosomatic units had neither more psychosomatic disorders nor more somatic problems. There is considerable overlap between the clientele of psychiatric and psychosomatic units. Referral and allocation appear to be determined by aspects of severity and social status.
Dynamically orthogonal field equations for stochastic flows and particle dynamics
2011-02-01
where uncertainty 'lives' as well as a system of Stochastic Differential Equations that defines how the uncertainty evolves in the time-varying stochastic ... stochastic dynamical component that are both time and space dependent, we derive a system of field equations consisting of a Partial Differential Equation ... a system of Stochastic Differential Equations that defines how the stochasticity evolves in the time-varying stochastic subspace. These new
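For context, the dynamically orthogonal (DO) representation described in this record is commonly written as a truncated expansion of the stochastic field; the notation below follows the DO literature and is supplied here as an assumption about the report's formulation, since the snippet is fragmentary:

```latex
u(x, t; \omega) = \bar{u}(x, t) + \sum_{i=1}^{s} Y_i(t; \omega)\, u_i(x, t),
\qquad
\left\langle \frac{\partial u_i}{\partial t},\, u_j \right\rangle = 0 \quad \forall\, i, j,
```

where $\bar{u}$ is the mean field governed by a Partial Differential Equation, the modes $u_i$ span the time-varying stochastic subspace and evolve by a family of PDEs, and the coefficients $Y_i$ obey the Stochastic Differential Equations mentioned in the snippet. The dynamically orthogonal condition (right) removes the redundancy of the decomposition and closes the system.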
NASA Astrophysics Data System (ADS)
Lu, M.; Lall, U.
2013-12-01
In order to mitigate the impacts of climate change, proactive management strategies to operate reservoirs and dams are needed. A multi-time scale climate-informed stochastic model is developed to optimize the operations for a multi-purpose single reservoir by simulating decadal, interannual, seasonal and sub-seasonal variability. We apply the model to a setting motivated by the largest multi-purpose dam in northern India, the Bhakhra reservoir on the Sutlej River, a tributary of the Indus. This leads to a focus on the timing and amplitude of the flows for the monsoon and snowmelt periods. The flow simulations are constrained by multiple sources of historical data and GCM future projections that are being developed through an NSF-funded project titled 'Decadal Prediction and Stochastic Simulation of Hydroclimate Over Monsoon Asia'. The model presented is a multilevel, nonlinear programming model that aims to optimize the reservoir operating policy on a decadal horizon and the operation strategy on an updated annual basis. The model is hierarchical, with two optimization models designated for different time scales nested like matryoshka dolls. The two optimization models have similar mathematical formulations, with some modifications to meet the constraints within each time frame. The first level of the model provides an optimization solution for policy makers to determine contracted annual releases to different uses with a prescribed reliability; the second level is a within-the-period (e.g., year) operation optimization scheme that allocates the contracted annual releases on a subperiod (e.g., monthly) basis, with additional benefit for extra release and a penalty for failure. The model maximizes the net benefit of irrigation, hydropower generation and flood control in each of the periods. The model design thus facilitates the consistent application of weather and climate forecasts to improve the operation of reservoir systems.
The decadal flow simulations are re-initialized every year with updated climate projections to improve the reliability of the operation rules for the next year, within which the seasonal operation strategies are nested. The multi-level structure can be repeated for monthly operation with weekly subperiods to take advantage of evolving weather forecasts and seasonal climate forecasts. As a result of the hierarchical structure, updates and adjustments can be made at sub-seasonal and even weather time scales. Given an ensemble of these scenarios, the McISH reservoir simulation-optimization model is able to derive the desired reservoir storage levels, including minimum and maximum, as a function of calendar date, and the associated release patterns. The multi-time-scale approach allows adaptive management of water supplies that acknowledges changing risks, meeting the objectives over the decade in expected value while controlling near-term and planning-period risk through probabilistic reliability constraints. For the applications presented, the target season is the monsoon season from June to September. The model also includes a monthly flood volume forecast model, based on a Copula density fit to the monthly flow and the flood volume. This is used to guide dynamic allocation of the flood control volume given the forecasts.
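The first-level decision, a contracted annual release with a prescribed reliability, can be illustrated with a toy Monte Carlo sketch. This is not the McISH model itself; the inflow ensemble and the 90% reliability target below are hypothetical.

```python
import random

def max_reliable_release(inflows, reliability):
    """Largest contracted annual release met in at least `reliability`
    fraction of simulated years: the empirical low quantile of inflow."""
    ordered = sorted(inflows)
    k = int((1.0 - reliability) * len(ordered))
    return ordered[k]

random.seed(42)
# hypothetical ensemble of simulated annual inflows (volume units)
inflows = [random.gauss(100.0, 20.0) for _ in range(1000)]
release_90 = max_reliable_release(inflows, 0.90)
met = sum(1 for q in inflows if q >= release_90) / len(inflows)
print(round(met, 2))  # 0.9
```

A second-stage step would then allocate `release_90` across months with rewards for extra release and penalties for shortfall, mirroring the paper's within-year level.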
Effects of weight training on cognitive functions in elderly with Alzheimer's disease
Vital, Thays Martins; Hernández, Salma S. Soleman; Pedroso, Renata Valle; Teixeira, Camila Vieira Ligo; Garuffi, Marcelo; Stein, Angelica Miki; Costa, José Luiz Riani; Stella, Florindo
2012-01-01
Deterioration in cognitive functions is characteristic of Alzheimer's disease (AD) and may be associated with decline in activities of daily living, with a consequent reduction in quality of life. Objective: To analyze the effects of weight training on cognitive functions in elderly people with AD. Subjects: 34 elderly people with AD were allocated into two groups: Training Group (TG) and Social Gathering Group (SGG). Methods: Global cognitive status was determined using the Mini-Mental State Exam. Specific cognitive functions were measured using the Brief Cognitive Battery, Clock Drawing Test and Verbal Fluency Test. The protocols were performed three times a week, one hour per session. The weight training protocol consisted of three sets of 20 repetitions, with two minutes of rest between sets and exercises. The activities proposed for the SGG were not systematized and aimed at promoting social interaction among patients. The statistical analyses were performed with the Mann-Whitney U and Wilcoxon tests for group comparisons. All analyses were considered statistically significant at a p-value of 0.05. Results: There were no significant differences associated with the effects of weight training on cognition in AD patients. Conclusion: In this study, no improvement in cognitive functions was evident in elderly people with AD who followed a low-intensity resistance exercise protocol. Thus, future studies could evaluate the effect of more intense exercise programs. PMID:29213805
NASA Astrophysics Data System (ADS)
Liou, K. N.; Takano, Y.; He, C.; Yang, P.; Leung, L. R.; Gu, Y.; Lee, W. L.
2014-06-01
A stochastic approach has been developed to model the positions of BC (black carbon)/dust internally mixed with two snow grain types: hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine BC/dust single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs substantially more light than external mixing. The snow grain shape effect on absorption is relatively small, but its effect on asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo substantially more than external mixing and that the snow grain shape plays a critical role in snow albedo calculations through its forward scattering strength. Also, multiple inclusions of BC/dust significantly reduce snow albedo as compared to an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization involving contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.
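The stochastic placement of multiple inclusions can be illustrated with a minimal rejection-sampling sketch; a sphere stands in here for the hexagonal and Koch-snowflake grain shapes treated in the paper, and the counts and radius are hypothetical.

```python
import random

def random_inclusion_positions(n, radius, rng):
    """Place n inclusion centres uniformly inside a spherical grain by
    rejection sampling (a stand-in for convex/concave grain shapes;
    for a non-spherical grain, only the inside test would change)."""
    points = []
    while len(points) < n:
        x, y, z = (rng.uniform(-radius, radius) for _ in range(3))
        if x * x + y * y + z * z <= radius * radius:
            points.append((x, y, z))
    return points

rng = random.Random(7)
pts = random_inclusion_positions(10, 1.0, rng)
print(len(pts))  # 10
print(all(x * x + y * y + z * z <= 1.0 for x, y, z in pts))  # True
```

Photon-tracing code would then test each ray segment against these inclusion positions to accumulate absorption.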
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liou, K. N.; Takano, Y.; He, Cenlin
2014-06-27
A stochastic approach to model the positions of BC/dust internally mixed with two snow-grain types has been developed, including hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine their single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs more light than external mixing. The snow-grain shape effect on absorption is relatively small, but its effect on the asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo more than external mixing and that the snow-grain shape plays a critical role in snow albedo calculations through the asymmetry factor. Also, snow albedo reduces more in the case of multiple inclusion of BC/dust compared to that of an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization containing contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.
A Statistical Approach Reveals Designs for the Most Robust Stochastic Gene Oscillators
2016-01-01
The engineering of transcriptional networks presents many challenges due to the inherent uncertainty in the system structure, changing cellular context, and stochasticity in the governing dynamics. One approach to address these problems is to design and build systems that can function across a range of conditions; that is, they are robust to uncertainty in their constituent components. Here we examine the parametric robustness landscape of transcriptional oscillators, which underlie many important processes such as circadian rhythms and the cell cycle, and also serve as a model for the engineering of complex and emergent phenomena. The central questions that we address are: Can we build genetic oscillators that are more robust than those already constructed? Can we make genetic oscillators arbitrarily robust? These questions are technically challenging due to the large model and parameter spaces that must be efficiently explored. Here we use a measure of robustness that coincides with the Bayesian model evidence, combined with an efficient Monte Carlo method to traverse model space and concentrate on regions of high robustness, which enables the accurate evaluation of the relative robustness of gene network models governed by stochastic dynamics. We report the most robust two- and three-gene oscillator systems, and examine how the number of interactions, the presence of autoregulation, and degradation of mRNA and protein affect the frequency, amplitude, and robustness of transcriptional oscillators. We also find that there is a limit to parametric robustness, beyond which there is nothing to be gained by adding additional feedback. Importantly, we provide predictions on new oscillator systems that can be constructed to verify the theory and advance design and modeling approaches to systems and synthetic biology. PMID:26835539
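The robustness measure, the prior-weighted fraction of parameter space in which a design oscillates, can be sketched with a toy Monte Carlo estimate. The two-parameter "model" and its oscillation condition below are hypothetical stand-ins, not the gene network models of the paper.

```python
import random

def robustness(oscillates, sample_prior, n, rng):
    """Monte Carlo estimate of the prior-weighted fraction of parameter
    space in which the design oscillates; this fraction is proportional
    to the Bayesian model evidence used as the robustness measure."""
    hits = sum(1 for _ in range(n) if oscillates(sample_prior(rng)))
    return hits / n

# hypothetical two-parameter stand-in: 'oscillation' iff a * b > 1
rng = random.Random(0)
prior = lambda g: (g.uniform(0.0, 2.0), g.uniform(0.0, 2.0))
rob = robustness(lambda p: p[0] * p[1] > 1.0, prior, 20000, rng)
# exact fraction for this toy prior is (3 - ln 4) / 4, about 0.40
print(0.38 < rob < 0.43)  # True
```

For a real gene circuit the `oscillates` predicate would run a stochastic simulation and test for sustained oscillation, which is where the efficient model-space Monte Carlo of the paper becomes essential.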
Metallic-thin-film instability with spatially correlated thermal noise.
Diez, Javier A; González, Alejandro G; Fernández, Roberto
2016-01-01
We study the effects of stochastic thermal fluctuations on the instability of the free surface of a flat liquid metallic film on a solid substrate. These fluctuations are represented by a stochastic noise term added to the deterministic equation for the film thickness within the long-wave approximation. Unlike the case of polymeric films, we find that this noise, while remaining white in time, must be colored in space, at least in some regimes. The corresponding noise term is characterized by a nonzero correlation length, ℓ_{c}, which, combined with the size of the system, leads to a dimensionless parameter β that accounts for the relative importance of the spatial correlation (β∼ℓ_{c}^{-1}). We perform the linear stability analysis (LSA) of the film both with and without the noise term and find that for ℓ_{c} larger than some critical value (depending on the system size), the wavelength of the peak of the spectrum is larger than that corresponding to the deterministic case, while for smaller ℓ_{c} this peak corresponds to smaller wavelength than the latter. Interestingly, whatever the value of ℓ_{c}, the peak always approaches the deterministic one for larger times. We compare LSA results with the numerical simulations of the complete nonlinear problem and find a good agreement in the power spectra for early times at different values of β. For late times, we find that the stochastic LSA predicts well the position of the dominant wavelength, showing that nonlinear interactions do not modify the trends of the early linear stages. Finally, we fit the theoretical spectra to experimental data from a nanometric laser-melted copper film and find that at later times, the adjustment requires smaller values of β (larger space correlations).
Metallic-thin-film instability with spatially correlated thermal noise
NASA Astrophysics Data System (ADS)
Diez, Javier A.; González, Alejandro G.; Fernández, Roberto
2016-01-01
We study the effects of stochastic thermal fluctuations on the instability of the free surface of a flat liquid metallic film on a solid substrate. These fluctuations are represented by a stochastic noise term added to the deterministic equation for the film thickness within the long-wave approximation. Unlike the case of polymeric films, we find that this noise, while remaining white in time, must be colored in space, at least in some regimes. The corresponding noise term is characterized by a nonzero correlation length, ℓc, which, combined with the size of the system, leads to a dimensionless parameter β that accounts for the relative importance of the spatial correlation (β ˜ℓc-1 ). We perform the linear stability analysis (LSA) of the film both with and without the noise term and find that for ℓc larger than some critical value (depending on the system size), the wavelength of the peak of the spectrum is larger than that corresponding to the deterministic case, while for smaller ℓc this peak corresponds to smaller wavelength than the latter. Interestingly, whatever the value of ℓc, the peak always approaches the deterministic one for larger times. We compare LSA results with the numerical simulations of the complete nonlinear problem and find a good agreement in the power spectra for early times at different values of β . For late times, we find that the stochastic LSA predicts well the position of the dominant wavelength, showing that nonlinear interactions do not modify the trends of the early linear stages. Finally, we fit the theoretical spectra to experimental data from a nanometric laser-melted copper film and find that at later times, the adjustment requires smaller values of β (larger space correlations).
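A minimal way to generate noise that is white in time but colored in space, as required here, is to smooth white noise with a Gaussian kernel of width comparable to ℓc. The sketch below is a generic construction on a periodic 1-D grid with hypothetical sizes, not the paper's implementation.

```python
import math
import random

def correlated_noise(n, corr_len, rng):
    """White Gaussian noise smoothed with a Gaussian kernel of width
    corr_len grid cells: white in time, colored in space. The kernel
    is normalized so the output keeps unit variance."""
    white = [rng.gauss(0.0, 1.0) for _ in range(n)]
    half = int(3 * corr_len)
    kernel = [math.exp(-0.5 * (k / corr_len) ** 2) for k in range(-half, half + 1)]
    norm = math.sqrt(sum(w * w for w in kernel))
    return [sum(kernel[j] * white[(i + j - half) % n] for j in range(len(kernel))) / norm
            for i in range(n)]

rng = random.Random(1)
eta = correlated_noise(256, 4.0, rng)
# lag-1 autocorrelation is near exp(-1 / (4 * corr_len**2)), ~0.98 here
c1 = (sum(eta[i] * eta[(i + 1) % 256] for i in range(256))
      / sum(e * e for e in eta))
print(c1 > 0.8)  # True: neighbouring cells are strongly correlated
```

Regenerating `white` at every time step while keeping the spatial kernel fixed gives the "white in time, colored in space" forcing described in the abstract.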
NASA Astrophysics Data System (ADS)
Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.
2016-01-01
The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e., the observation that large-scale rainfall structures are more persistent and predictable than small-scale convective cells. This paper presents the development, adaptation and verification of the STEPS system for Belgium (STEPS-BE). STEPS-BE provides in real-time 20-member ensemble precipitation nowcasts at 1 km and 5 min resolutions up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h-1 are reliable up to 60-90 min lead time, while the ones of exceeding 5.0 mm h-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 75-90 % of the forecast errors.
NASA Astrophysics Data System (ADS)
Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.
2015-07-01
The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e. the observation that large-scale rainfall structures are more persistent and predictable than small-scale convective cells. This paper presents the development, adaptation and verification of the STEPS system for Belgium (STEPS-BE). STEPS-BE provides in real-time 20-member ensemble precipitation nowcasts at 1 km and 5 min resolution up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h-1 are reliable up to 60-90 min lead time, while those of exceeding 5.0 mm h-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 80-90 % of the forecast errors.
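The core STEPS idea, Lagrangian extrapolation plus a stochastic perturbation per ensemble member, can be caricatured in a few lines. The 1-D shift, the unstructured Gaussian noise, and all numbers below are hypothetical; real STEPS uses scale-dependent noise cascades and radar-derived motion fields.

```python
import random

def ensemble_nowcast(field, shift, n_members, noise_std, rng):
    """Toy nowcast: each member is the Lagrangian extrapolation (here a
    plain shift of a periodic 1-D transect) plus a stochastic
    perturbation, clipped at zero rain rate."""
    n = len(field)
    members = []
    for _ in range(n_members):
        advected = [field[(i - shift) % n] for i in range(n)]
        members.append([max(0.0, v + rng.gauss(0.0, noise_std)) for v in advected])
    return members

rng = random.Random(3)
rain = [0, 0, 2, 5, 3, 0, 0, 0]            # mm/h along the transect
ens = ensemble_nowcast(rain, 2, 20, 0.5, rng)
means = [sum(m[i] for m in ens) / len(ens) for i in range(len(rain))]
# the deterministic peak moved from index 3 to index 5
print(max(range(len(means)), key=lambda i: means[i]))  # 5
```

Exceedance probabilities of the kind verified in the paper are then just member counts: the fraction of members above 0.5 mm/h at each cell.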
The stochastic dance of early HIV infection
NASA Astrophysics Data System (ADS)
Merrill, Stephen J.
2005-12-01
The stochastic nature of early HIV infection is described in a series of models, each of which captures aspects of the dance of HIV during the early stages of infection. It is to this highly variable target that the immune response must respond. The adaptability of the various components of the immune response is an important aspect of the system's operation, as the nature of the pathogens to which the response must respond, and the order in which those responses must be made, cannot be known beforehand. As HIV infection has direct influence over cells responsible for the immune response, the dance predicts that the immune response will also be in a variable state of readiness and capability for this task of adaptation. The description of the stochastic dance of HIV here uses the tools of stochastic models and, for the most part, simulation. The justification for this approach is that the early stages and the development of HIV diversity require that the model be able to describe both individual sample paths and patient-to-patient variability. In addition, as early viral dynamics are best described using branching processes, the explosive growth of these models predicts both high variability and rapid response of HIV to changes in system parameters. In this paper, a basic viral growth model based on a time-dependent continuous-time branching process is used to describe the growth of HIV-infected cells in the macrophage and lymphocyte populations. Immigration from the reservoir population is added to the basic model to describe the incubation time distribution. This distribution is deduced directly from the modeling assumptions and the model of viral growth. A system of two branching processes, one in the infected macrophage population and one in the infected lymphocyte population, is used to describe the relationship between the development of HIV diversity and tropism (host cell preference).
The role of the immune response to HIV and HIV infected cells is used to describe the movement of the infection from a few infected macrophages to a disease of infected CD4+ T lymphocytes.
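The high-variability, explosive-growth behavior of branching-process infection models can be illustrated with a discrete Galton-Watson sketch. The paper uses time-dependent continuous-time branching processes; the Poisson offspring law and mean below are hypothetical simplifications.

```python
import math
import random

def dies_out(offspring_mean, rng, max_gen=50, escape=200):
    """One sample path of a Galton-Watson process with Poisson offspring,
    a discrete-time stand-in for continuous-time models of infected-cell
    growth; returns True if the lineage goes extinct."""
    def poisson(lam):
        # Knuth's method; adequate for small lam
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1
    n = 1
    for _ in range(max_gen):
        if n == 0:
            return True
        if n >= escape:          # effectively past the extinction zone
            return False
        n = sum(poisson(offspring_mean) for _ in range(n))
    return n == 0

rng = random.Random(11)
trials = 2000
p_ext = sum(dies_out(1.5, rng) for _ in range(trials)) / trials
# supercritical case: extinction probability solves q = exp(1.5*(q - 1)),
# about 0.417, so a large minority of lineages die despite mean growth
print(0.37 < p_ext < 0.47)  # True
```

The coexistence of frequent early extinction with explosive growth in surviving paths is exactly the sample-path variability the abstract emphasizes.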
NASA Astrophysics Data System (ADS)
Li, Linlin; Switzer, Adam D.; Wang, Yu; Chan, Chung-Han; Qiu, Qiang; Weiss, Robert
2017-04-01
Current tsunami inundation maps are commonly generated using deterministic scenarios, either for real-time forecasting or based on hypothetical "worst-case" events. Such maps are mainly used for emergency response and evacuation planning and do not include return period information. In practice, however, probabilistic tsunami inundation maps are required in a wide variety of applications, such as land-use planning, engineering design and insurance. In this study, we present a method to develop probabilistic tsunami inundation maps using a stochastic earthquake source model. To demonstrate the methodology, we take Macau, a coastal city on the South China Sea, as an example. Two major advances of this method are: it incorporates the most up-to-date information on seismic tsunamigenic sources along the Manila megathrust; and it integrates a stochastic source model into a Monte Carlo-type simulation in which a broad range of slip distribution patterns is generated for large numbers of synthetic earthquake events. By aggregating the large number of inundation simulation results, we analyze the uncertainties associated with variability in earthquake rupture location and slip distribution. We also explore how the tsunami hazard in Macau evolves in the context of sea level rise. Our results suggest Macau faces moderate tsunami risk due to its low-lying elevation, extensive land reclamation, high coastal population and major infrastructure density. Macau consists of four districts: Macau Peninsula, Taipa Island, Coloane Island and the Cotai Strip. Of these, Macau Peninsula is the most vulnerable to tsunami due to its low elevation and its exposure to direct waves, refracted waves from the offshore region and reflected waves from the mainland. Earthquakes with magnitude larger than Mw 8.0 in the northern Manila trench would likely cause hazardous inundation in Macau.
Using a stochastic source model, we are able to derive a spread of potential tsunami impacts for earthquakes of the same magnitude. This diversity is caused by both the random rupture locations and the heterogeneous slip distributions. When the sea level rise component is added, the inundation depth caused by a 1 m sea level rise is equivalent to that caused by the 90th percentile of an ensemble of Mw 8.4 earthquakes.
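A stochastic source model of this kind can be sketched by drawing a random rupture location and slip pattern and rescaling to a target seismic moment. The 1-D fault, Gaussian slip shape, and material constants below are hypothetical simplifications of the 2-D heterogeneous slip models used in the paper.

```python
import math
import random

def stochastic_slip(n_patches, target_moment, rigidity, patch_area, rng):
    """Toy stochastic source: a Gaussian-shaped slip patch with a random
    rupture centre and width, rescaled so the summed moment
    (rigidity * area * slip) matches the target seismic moment."""
    centre = rng.uniform(0, n_patches - 1)
    width = rng.uniform(n_patches / 10, n_patches / 3)
    shape = [math.exp(-0.5 * ((i - centre) / width) ** 2) for i in range(n_patches)]
    scale = target_moment / (rigidity * patch_area * sum(shape))
    return [scale * s for s in shape]

rng = random.Random(5)
mu, area = 3.0e10, 1.0e8           # Pa, m^2 (hypothetical patch size)
m0 = 10 ** (1.5 * 8.4 + 9.1)       # moment of an Mw 8.4 event
slip = stochastic_slip(50, m0, mu, area, rng)
moment = mu * area * sum(slip)
print(abs(moment / m0 - 1.0) < 1e-9)  # True: moment is preserved
```

Drawing many such realizations and running an inundation model on each is the Monte Carlo step; the ensemble of inundation depths then yields percentiles like the 90th cited above.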
Simple map in action-angle coordinates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerwin, Olivia; Punjabi, Alkesh; Ali, Halima
A simple map [A. Punjabi, A. Verma, and A. Boozer, Phys. Rev. Lett. 69, 3322 (1992)] is the simplest map that has the topology of divertor tokamaks [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140 (2007)]. Here, action-angle coordinates, the safety factor, and the equilibrium generating function for the simple map are calculated analytically. The simple map in action-angle coordinates is derived from canonical transformations. This map cannot be integrated across the separatrix surface because of the singularity in the safety factor there. The stochastic broadening of the ideal separatrix surface in action-angle representation is calculated by adding a perturbation to the simple map equilibrium generating function. This perturbation represents the spatial noise and field errors typical of the DIII-D [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)] tokamak. The stationary Fourier modes of the perturbation have poloidal and toroidal mode numbers (m,n)={(3,1),(4,1),(6,2),(7,2),(8,2),(9,3),(10,3),(11,3)} with amplitude δ=0.8×10-5. Near the X-point, about 0.12% of toroidal magnetic flux inside the separatrix, and about 0.06% of the poloidal flux inside the separatrix is lost. When the distance from the O-point to the X-point is 1 m, the width of stochastic layer near the X-point is about 1.4 cm. The average value of the action on the last good surface is 0.19072 compared to the action value of 3/5π on the separatrix. The average width of stochastic layer in action coordinate is 2.7×10-4, while the average area of the stochastic layer in action-angle phase space is 1.69017×10-3. On average, about 0.14% of action or toroidal flux inside the ideal separatrix is lost due to broadening. Roughly five times more toroidal flux is lost in the simple map than in DIII-D for the same perturbation [A. Punjabi, H. Ali, A. Boozer, and T. Evans, Bull. Amer. Phys. Soc. 52, 124 (2007)].
Simple map in action-angle coordinates
NASA Astrophysics Data System (ADS)
Kerwin, Olivia; Punjabi, Alkesh; Ali, Halima
2008-07-01
A simple map [A. Punjabi, A. Verma, and A. Boozer, Phys. Rev. Lett. 69, 3322 (1992)] is the simplest map that has the topology of divertor tokamaks [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140 (2007)]. Here, action-angle coordinates, the safety factor, and the equilibrium generating function for the simple map are calculated analytically. The simple map in action-angle coordinates is derived from canonical transformations. This map cannot be integrated across the separatrix surface because of the singularity in the safety factor there. The stochastic broadening of the ideal separatrix surface in action-angle representation is calculated by adding a perturbation to the simple map equilibrium generating function. This perturbation represents the spatial noise and field errors typical of the DIII-D [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)] tokamak. The stationary Fourier modes of the perturbation have poloidal and toroidal mode numbers (m,n)={(3,1),(4,1),(6,2),(7,2),(8,2),(9,3),(10,3),(11,3)} with amplitude δ=0.8×10-5. Near the X-point, about 0.12% of toroidal magnetic flux inside the separatrix, and about 0.06% of the poloidal flux inside the separatrix is lost. When the distance from the O-point to the X-point is 1 m, the width of stochastic layer near the X-point is about 1.4 cm. The average value of the action on the last good surface is 0.19072 compared to the action value of 3/5π on the separatrix. The average width of stochastic layer in action coordinate is 2.7×10-4, while the average area of the stochastic layer in action-angle phase space is 1.69017×10-3. On average, about 0.14% of action or toroidal flux inside the ideal separatrix is lost due to broadening. Roughly five times more toroidal flux is lost in the simple map than in DIII-D for the same perturbation [A. Punjabi, H. Ali, A. Boozer, and T. Evans, Bull. Amer. Phys. Soc. 52, 124 (2007)].
Accurate segmentation framework for the left ventricle wall from cardiac cine MRI
NASA Astrophysics Data System (ADS)
Sliman, H.; Khalifa, F.; Elnakib, A.; Soliman, A.; Beache, G. M.; Gimel'farb, G.; Emam, A.; Elmaghraby, A.; El-Baz, A.
2013-10-01
We propose a novel, fast, robust, bi-directional coupled parametric deformable model to segment the left ventricle (LV) wall borders using first- and second-order visual appearance features. These features are embedded in a new stochastic external force that preserves the topology of the LV wall while tracking the evolution of the parametric deformable model's control points. To accurately estimate the marginal density of each deformable model control point, the empirical marginal grey level distributions (first-order appearance) inside and outside the boundary of the deformable model are modeled with adaptive linear combinations of discrete Gaussians (LCDG). The second-order visual appearance of the LV wall is accurately modeled with a new rotationally invariant second-order Markov-Gibbs random field (MGRF). We tested the proposed segmentation approach on 15 data sets from 6 infarction patients using the Dice similarity coefficient (DSC) and the average distance (AD) between the ground truth and automated segmentation contours. Our approach achieves a mean DSC value of 0.926±0.022 and an AD value of 2.16±0.60, compared to two other level set methods that achieve 0.904±0.033 and 0.885±0.02 for DSC, and 2.86±1.35 and 5.72±4.70 for AD, respectively.
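The Dice similarity coefficient used for validation is straightforward to compute from binary masks; a minimal sketch follows (the masks below are toy examples, not cardiac data).

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks given as
    flattened 0/1 lists: 2|A intersect B| / (|A| + |B|)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0

truth = [0, 1, 1, 1, 0, 0, 1, 0]
auto  = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice(truth, auto))  # 0.75
```

A DSC of 1.0 means perfect overlap with the ground truth contour, so the reported 0.926 indicates close agreement.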
Stucke, Kathrin; Kieser, Meinhard
2012-12-10
In the three-arm 'gold standard' non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.
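As a building block, the normal-approximation sample size for a single pairwise comparison can be sketched as follows. The margin, standard deviation, and error rates are hypothetical, and the paper's method goes further by exploiting the joint (correlated) distribution of the three-arm test statistics and optimizing the allocation ratio.

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha, power, ratio=1.0):
    """Normal-approximation size of one group when the other group is
    ratio times as large; a classical building block only, not the
    paper's joint three-arm calculation."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil((1 + 1 / ratio) * (sigma * (z_a + z_b) / delta) ** 2)

# hypothetical: margin 2, sd 5, one-sided alpha 2.5%, 80% power
print(n_per_group(2.0, 5.0, 0.025, 0.80))  # 99
```

Unequal allocation (`ratio != 1`) changes the group sizes, which is the lever the paper optimizes to shrink the total sample size while keeping the placebo group smallest.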
Interrelation Between Safety Factors and Reliability
NASA Technical Reports Server (NTRS)
Elishakoff, Isaac; Chamis, Christos C. (Technical Monitor)
2001-01-01
An evaluation was performed to establish the relationships between safety factors and reliability. Results obtained show that the use of safety factors is not contradictory to the employment of probabilistic methods. In many cases the safety factors can be directly expressed in terms of the required reliability levels. However, there is a major difference that must be emphasized: whereas safety factors are allocated in an ad hoc manner, the probabilistic approach offers a unified mathematical framework. The establishment of the interrelation between the concepts opens an avenue to specify safety factors based on reliability. In cases where there are several forms of failure, the allocation of safety factors should be based on having the same reliability associated with each failure mode. This immediately suggests that by the probabilistic methods the existing over-design or under-design can be eliminated. The report includes three parts: Part 1: Random Actual Stress and Deterministic Yield Stress; Part 2: Deterministic Actual Stress and Random Yield Stress; Part 3: Both Actual Stress and Yield Stress Are Random.
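For the Part 1 setting (random actual stress, deterministic yield stress), the safety factor implied by a required reliability has a closed form under a normal stress model; a sketch with hypothetical numbers:

```python
from statistics import NormalDist

def safety_factor(reliability, cov):
    """Part 1 setting: actual stress ~ Normal(mu, cov * mu), deterministic
    yield stress. R = P(stress < yield) gives yield/mu = 1 + cov * z_R,
    the central safety factor implied by a required reliability."""
    return 1.0 + cov * NormalDist().inv_cdf(reliability)

sf = safety_factor(0.999, 0.10)   # 10% coefficient of variation on stress
print(round(sf, 3))               # 1.309
# consistency check: designing to this factor recovers R = 0.999
mu = 100.0
print(round(NormalDist(mu, 0.10 * mu).cdf(sf * mu), 3))  # 0.999
```

This is the sense in which safety factors "can be directly expressed by the required reliability levels": the factor is a deterministic function of R once the stress scatter is specified.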
Skinner, James E; Anchin, Jerry M; Weiss, Daniel N
2008-01-01
Heart rate variability (HRV) reflects both cardiac autonomic function and risk of arrhythmic death (AD). Reduced indices of HRV based on linear stochastic models are independent risk factors for AD in post-myocardial infarct cohorts. Indices based on nonlinear deterministic models have a significantly higher sensitivity and specificity for predicting AD in retrospective data. A need exists for nonlinear analytic software easily used by a medical technician. In the current study, an automated nonlinear algorithm, the time-dependent point correlation dimension (PD2i), was evaluated. The electrocardiogram (ECG) data were provided through a National Institutes of Health-sponsored internet archive (PhysioBank) and consisted of all 22 malignant arrhythmia ECG files (VF/VT) and 22 randomly selected arrhythmia files as the controls. The results were blindly calculated by automated software (Vicor 2.0, Vicor Technologies, Inc., Boca Raton, FL) and showed that all analyzable VF/VT files had PD2i < 1.4 and all analyzable controls had PD2i > 1.4. Five VF/VT files and six controls were excluded because surrogate testing showed the RR-intervals to contain noise, possibly resulting from the low digitization rate of the ECGs. The sensitivity was 100%, specificity 85%, relative risk > 100; p < 0.01, power > 90%. Thus, automated heartbeat analysis by the time-dependent nonlinear PD2i algorithm can accurately stratify risk of AD in public data made available for competitive testing of algorithms. PMID:18728829
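The PD2i itself is proprietary, but its classical ancestor, the Grassberger-Procaccia correlation dimension, conveys the idea: embed the RR-interval series in delay coordinates and measure how the number of close pairs scales with radius. A toy sketch on a periodic signal (whose dimension should be near 1); the radii and embedding dimension are illustrative choices.

```python
import math

def correlation_dimension(series, m, r1, r2):
    """Classical Grassberger-Procaccia estimate (the time-dependent
    PD2i refines this for nonstationary data): embed in m dimensions
    and take the log-log slope of the correlation sum between radii."""
    vecs = [series[i:i + m] for i in range(len(series) - m + 1)]
    def corr_sum(r):
        n, count = len(vecs), 0
        for i in range(n):
            for j in range(i + 1, n):
                if max(abs(a - b) for a, b in zip(vecs[i], vecs[j])) < r:
                    count += 1
        return 2.0 * count / (n * (n - 1))
    return ((math.log(corr_sum(r2)) - math.log(corr_sum(r1)))
            / (math.log(r2) - math.log(r1)))

# a periodic signal lies on a closed 1-D curve, so D2 should be near 1
series = [math.sin(0.3 * k) for k in range(400)]
d2 = correlation_dimension(series, 3, 0.1, 0.4)
print(0.5 < d2 < 1.6)  # True
```

Low estimated dimension in RR-interval data (as in the PD2i < 1.4 criterion above) indicates reduced dynamical complexity of the heartbeat.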
Incorporating nurse absenteeism into staffing with demand uncertainty.
Maass, Kayse Lee; Liu, Boying; Daskin, Mark S; Duck, Mary; Wang, Zhehui; Mwenesi, Rama; Schapiro, Hannah
2017-03-01
Increased nurse-to-patient ratios are associated with increased costs, but also with improved patient care and reduced nurse burnout rates. Thus, it is critical from a cost, patient safety, and nurse satisfaction perspective that nurses be utilized efficiently and effectively. To address this, we propose a stochastic programming formulation for nurse staffing that accounts for variability in the patient census and nurse absenteeism, day-to-day correlations among the patient census levels, and costs associated with three different classes of nursing personnel: unit, pool, and temporary nurses. The decisions to be made include: how many unit nurses to employ, how large a pool of cross-trained nurses to maintain, how to allocate the pool nurses on a daily basis, and how many temporary nurses to utilize daily. A genetic algorithm is developed to solve the resulting model. Preliminary results using data from a large university hospital suggest that the proposed model can save a four-unit pool hundreds of thousands of dollars annually compared with the crude heuristics the hospital currently employs.
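The flavor of the stochastic program can be conveyed by Monte Carlo evaluation of one first-stage staffing decision. All costs, distributions, and the 1:4 nurse-to-patient rule below are hypothetical, pool retainer costs are omitted, and the single unit here stands in for the multi-unit model solved by the paper's genetic algorithm.

```python
import random

def expected_cost(unit_nurses, pool_nurses, rng, n_scenarios=4000):
    """Sample the patient census and nurse absenteeism; cover any
    shortfall with pool nurses first and temporary nurses last."""
    c_unit, c_pool, c_temp = 1.0, 1.2, 2.0      # relative daily costs
    total = 0.0
    for _ in range(n_scenarios):
        census = max(0, round(rng.gauss(30, 5)))    # patients today
        required = -(-census // 4)                  # 1 nurse per 4 patients
        present = sum(1 for _ in range(unit_nurses) if rng.random() > 0.07)
        short = max(0, required - present)
        pool_used = min(short, pool_nurses)
        temp_used = short - pool_used
        total += c_unit * unit_nurses + c_pool * pool_used + c_temp * temp_used
    return total / n_scenarios

rng = random.Random(9)
lean = expected_cost(7, 0, rng)     # no pool: shortfalls met by temps
pooled = expected_cost(7, 3, rng)   # cross-trained pool absorbs shortfalls
print(pooled < lean)  # True in this toy: the pool's cheaper coverage wins
```

Wrapping this evaluation inside a search over `(unit_nurses, pool_nurses)` is, in miniature, the two-stage structure the genetic algorithm optimizes.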
Coordinating a supply chain with a loss-averse retailer and effort dependent demand.
Li, Liying; Wang, Yong
2014-01-01
This study investigates the channel coordination issue of a supply chain with a risk-neutral manufacturer and a loss-averse retailer facing stochastic demand that is sensitive to sales effort. Under the loss-averse newsvendor setting, a distribution-free gain/loss-sharing-and-buyback (GLB) contract has been shown to be able to coordinate the supply chain. However, we find that a GLB contract remains ineffective in managing the supply chain when retailer sales efforts influence the demand. To effectively coordinate the channel, we propose to combine a GLB contract with a sales rebate and penalty (SRP) contract. In addition, we discover a special class of gain/loss contracts that can coordinate the supply chain and arbitrarily allocate the expected supply chain profit between the manufacturer and the retailer. We then analyze the effect of loss aversion on the retailer's decision-making behavior and supply chain performance. Finally, we perform a numerical study to illustrate the findings and gain additional insights.
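The loss-averse newsvendor objective underlying this setting can be sketched as follows. The piecewise-linear utility, the price/cost parameters, and the discrete grid search are illustrative assumptions; this does not model the paper's GLB or SRP contract terms.

```python
import random

def expected_utility(q, price=10.0, cost=6.0, salvage=2.0,
                     loss_aversion=2.0, demand_mean=100, demand_sd=20,
                     n_sims=5000, seed=1):
    """Expected piecewise-linear loss-averse utility of order quantity q.

    Profit pi = price*sales + salvage*leftover - cost*q; realized losses
    (pi < 0) are weighted by loss_aversion > 1. A fixed seed gives common
    random numbers across candidate order quantities.
    """
    rng = random.Random(seed)
    u = 0.0
    for _ in range(n_sims):
        d = max(0.0, rng.gauss(demand_mean, demand_sd))
        sales = min(q, d)
        pi = price * sales + salvage * (q - sales) - cost * q
        u += pi if pi >= 0 else loss_aversion * pi
    return u / n_sims

# A loss-averse retailer orders no more than a risk-neutral one:
grid = range(60, 141, 5)
q_neutral = max(grid, key=lambda q: expected_utility(q, loss_aversion=1.0))
q_averse = max(grid, key=lambda q: expected_utility(q, loss_aversion=3.0))
```

This downward distortion of the order quantity is the behavior that coordinating contracts such as GLB are designed to correct.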
Dynamics of Human Motion: The Case Study of an Examination Hall
NASA Astrophysics Data System (ADS)
Ogunjo, Samuel; Ajayi, Oluwaseyi; Fuwape, Ibiyinka; Dansu, Emmanuel
Human behaviour is difficult to characterize and generalize due to its complex nature. Advances in mathematical modelling have enabled human systems such as love interactions, alcohol abuse, and admission problems to be described using models. This study investigates one such problem: the dynamics of human motion in an examination hall with limited computer systems, such that students write their examination in batches. The examination is characterized by the time (t) allocated to each student and the difficulty level (dl) associated with the examination. A stochastic model based on the difficulty level of the examination was developed to predict students' motion around the examination hall. Good agreement was obtained between theoretical predictions and numerical simulation. The results obtained will help in better planning of examination sessions to maximize available resources. Furthermore, the results can be extended to other settings, such as banking halls and customer service points, where available resources are shared amongst many users.
NASA Astrophysics Data System (ADS)
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2017-06-01
Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values themselves follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which makes it convenient for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
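The bi-random chance-constraint idea can be sketched by nested Monte Carlo sampling: the outer loop draws the (itself random) mean, the inner loop draws the emission. The distribution, cap, and all parameter values below are illustrative assumptions, not the paper's CCUS model.

```python
import random

def chance_constraint_ok(cap, alpha, mu0=50.0, tau=3.0, sigma=5.0,
                         n_outer=2000, n_inner=50, seed=7):
    """Check P(CO2 emission <= cap) >= alpha by nested Monte Carlo.

    The emission is 'bi-random': Normal(mu, sigma) where the mean mu is
    itself drawn from Normal(mu0, tau), mirroring the ECCP model's
    normally distributed mean values.
    """
    rng = random.Random(seed)
    hits, n = 0, 0
    for _ in range(n_outer):
        mu = rng.gauss(mu0, tau)            # outer draw: random mean
        for _ in range(n_inner):
            hits += rng.gauss(mu, sigma) <= cap   # inner draw: emission
            n += 1
    return hits / n >= alpha
```

With these parameters the marginal emission is roughly Normal(50, √34), so a cap of 60 satisfies a 90% chance constraint while a cap of 50 does not.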
NASA Astrophysics Data System (ADS)
Del Rio Amador, Lenin; Lovejoy, Shaun
2017-04-01
Over the past ten years, a key advance in our understanding of atmospheric variability has been the discovery that between the weather and climate regimes lies an intermediate "macroweather" regime, spanning the range of scales from ≈10 days to ≈30 years. Macroweather statistics are characterized by two fundamental symmetries: scaling and the factorization of the joint space-time statistics. In the time domain, the scaling has low intermittency, with the additional property that successive fluctuations tend to cancel. In space, on the contrary, the scaling has high (multifractal) intermittency, corresponding to the existence of different climate zones. These properties have fundamental implications for macroweather forecasting: a) the temporal scaling implies that the system has a long-range memory that can be exploited for forecasting; b) the low temporal intermittency implies that mathematically well-established (Gaussian) forecasting techniques can be used; and c) the statistical factorization property implies that although spatial correlations (including teleconnections) may be large, if long enough time series are available they are not necessarily useful in improving forecasts. Theoretically, these conditions imply the existence of stochastic predictability limits; in this talk, we show that these limits apply to GCMs. Based on these statistical implications, we developed the Stochastic Seasonal and Interannual Prediction System (StocSIPS) for the prediction of temperature from regional to global scales and at horizons from one month to many years. One of the main components of StocSIPS is the separation and prediction of both the internal and the externally forced variability. In order to test the theoretical assumptions and their consequences for predictability and prediction, we use the outputs of 41 different CMIP5 models from preindustrial control runs with fixed external forcings, whose variability is therefore purely internally generated.
We first show that these statistical assumptions hold with relatively good accuracy, and we then perform hindcasts at global and regional scales, from monthly to annual time resolutions, using StocSIPS. We obtain excellent agreement between the hindcast Mean Square Skill Score (MSSS) and the theoretical stochastic limits. We also show the application of StocSIPS to the prediction of average global temperature and compare our results with those obtained using multi-model ensemble approaches. StocSIPS has numerous advantages, including a) higher MSSS for large time horizons, b) convergence to the real (not model) climate, c) much higher computational speed, d) no need for data assimilation, e) no ad hoc post-processing, and f) no need for downscaling.
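The Mean Square Skill Score referred to here is a standard forecast-verification metric; a minimal sketch, using the climatological mean of the observations as the default reference forecast, is:

```python
def msss(forecast, observed, reference=None):
    """Mean Square Skill Score: 1 - MSE(forecast) / MSE(reference).

    If no reference forecast is supplied, the climatological mean of the
    observations is used. MSSS = 1 for a perfect forecast, 0 for no skill
    beyond the reference, and negative values for worse-than-reference.
    """
    n = len(observed)
    if reference is None:
        clim = sum(observed) / n
        reference = [clim] * n
    mse_f = sum((f - o) ** 2 for f, o in zip(forecast, observed)) / n
    mse_r = sum((r - o) ** 2 for r, o in zip(reference, observed)) / n
    return 1.0 - mse_f / mse_r
```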
Analytic continuation of quantum Monte Carlo data by stochastic analytical inference.
Fuchs, Sebastian; Pruschke, Thomas; Jarrell, Mark
2010-05-01
We present an algorithm for the analytic continuation of imaginary-time quantum Monte Carlo data which is strictly based on principles of Bayesian statistical inference. Within this framework we obtain an explicit expression for a weighted average over possible energy spectra, which can be evaluated by standard Monte Carlo simulations, yielding as a by-product the distribution function of the regularization parameter. Our algorithm thus avoids the ad hoc assumptions introduced in similar algorithms to fix the regularization parameter. We apply the algorithm to imaginary-time quantum Monte Carlo data and compare the resulting energy spectra with those from a standard maximum-entropy calculation.
FLBEIA: A simulation model to conduct bio-economic evaluation of fisheries management strategies
NASA Astrophysics Data System (ADS)
Garcia, Dorleta; Sánchez, Sonia; Prellezo, Raúl; Urtizberea, Agurtzane; Andrés, Marga
Fishery systems are complex systems that need to be managed in order to ensure a sustainable and efficient exploitation of marine resources. Traditionally, fisheries management has relied on biological models. In recent years, however, the focus on mathematical models that incorporate economic and social aspects has increased. Here, we present FLBEIA, flexible software to conduct bio-economic evaluation of fisheries management strategies. The model is multi-stock, multi-fleet, stochastic and seasonal. The fishery system is described as a sum of processes, which are internally assembled in a predetermined way. Several functions are available to describe the dynamics of each process, and new functions can be added to satisfy specific requirements.
STEPS: Modeling and Simulating Complex Reaction-Diffusion Systems with Python
Wils, Stefan; Schutter, Erik De
2008-01-01
We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation and becoming increasingly complex as new features and new simulation algorithms were added. We solved all of these problems by tightly integrating STEPS with Python, using SWIG to expose our existing simulation code. PMID:19623245
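The stochastic kinetics that STEPS simulates (with 3-D diffusion on complex geometries layered on top) follow Gillespie-style stochastic simulation. A minimal well-mixed sketch for a single decay reaction is shown below; this is a generic illustration, not the STEPS API.

```python
import random

def gillespie_decay(n0=100, k=1.0, t_end=5.0, seed=42):
    """Minimal Gillespie SSA for the single reaction A -> 0 at rate k.

    Each event time is drawn from an exponential distribution with rate
    equal to the total propensity k*n, and one molecule decays per event.
    """
    rng = random.Random(seed)
    t, n = 0.0, n0
    history = [(t, n)]
    while n > 0:
        a = k * n                     # total propensity of the system
        t += rng.expovariate(a)       # waiting time to the next event
        if t > t_end:
            break
        n -= 1                        # one A molecule decays
        history.append((t, n))
    return history
```

A full reaction-diffusion simulator such as STEPS extends this scheme with multiple reaction channels and diffusive jumps between mesh tetrahedra.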
Bosonic Loop Diagrams as Perturbative Solutions of the Classical Field Equations in ϕ4-Theory
NASA Astrophysics Data System (ADS)
Finster, Felix; Tolksdorf, Jürgen
2012-05-01
Solutions of the classical ϕ4-theory in Minkowski space-time are analyzed in a perturbation expansion in the nonlinearity. Using the language of Feynman diagrams, the solution of the Cauchy problem is expressed in terms of tree diagrams which involve the retarded Green's function and have one outgoing leg. In order to obtain general tree diagrams, we set up a "classical measurement process" in which a virtual observer of a scattering experiment modifies the field and detects suitable energy differences. By adding a classical stochastic background field, we even obtain all loop diagrams. The expansions are compared with the standard Feynman diagrams of the corresponding quantum field theory.
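The iteration that generates the tree diagrams can be sketched schematically as follows; the signs, normalization, and placement of the coupling are convention-dependent and are not taken from the paper.

```latex
% Perturbative (tree) expansion of the classical \phi^4 Cauchy problem;
% schematic conventions: source term \lambda\phi^3, retarded propagator G_ret.
\begin{align}
(\Box + m^2)\,\phi &= \lambda\,\phi^3, \qquad \phi = \sum_{n\ge 0} \lambda^n \phi_n,\\
(\Box + m^2)\,\phi_0 &= 0 \quad \text{(free field carrying the Cauchy data)},\\
\phi_1(x) &= \int d^4y\; G_{\mathrm{ret}}(x,y)\,\phi_0(y)^3,\\
\phi_2(x) &= 3\int d^4y\; G_{\mathrm{ret}}(x,y)\,\phi_0(y)^2\,\phi_1(y), \quad \dots
\end{align}
```

Each order \(\phi_n\) is a sum of tree diagrams with \(n\) retarded propagators and one outgoing leg, which is the structure the paper then extends to loop diagrams via the stochastic background field.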
Alzheimer's disease: cost cuts call for novel drugs development and national strategy.
Marešová, Petra; Klímová, Blanka; Kuča, Kamil
Mental health affects the quality of life of a large number of individuals and family members. Currently, the global costs for people with dementia amount to more than 1% of gross domestic product (GDP). In the future, expenditure is expected to grow, given that the populations of developed countries are aging and dementia is closely associated with increasing age. It is evident that governments have to allocate adequate financial, material and human resources to address a health problem on this scale. The purpose of this article is to explore the current state of treatment and care of patients suffering from Alzheimer's disease (AD) and to analyze the direct and indirect health care costs resulting from this disease. In addition, the authors draw attention to the need for a strategic plan that would handle all aspects of AD, including research on drug development, since there are still few drugs that improve the state of AD patients, particularly in the early phases, and no well-functioning national strategic plan exists in the Czech Republic that would bring a radical improvement in reducing the effects of AD. Key words: Alzheimer's disease, costs, treatment, strategic plan.
East Europe Report, Economic and Industrial Affairs
1984-09-11
[OCR fragment] The categories include value added (UW), return on production assets (RVF), profit and exports; these characterize the results of the work of the whole organization. A partially recoverable table compares VHJs against prescribed goals: UW (mil. of Kcs): 9 VHJs increased (147.5), 6 decreased (184.8); Profit (mil. of Kcs): 6 increased (66.0), 8 decreased (289.0); RVF (return on production assets): 7 ... Further columns list RVF (percentage), number of workers, ZSMP (mil. of Kcs), PSMP (mil. of Kcs), average wage (Kcs), and allocation from profits to FR (mil. of Kcs).
Dynamic Hierarchical Sleep Scheduling for Wireless Ad-Hoc Sensor Networks
Wen, Chih-Yu; Chen, Ying-Chih
2009-01-01
This paper presents two scheduling management schemes for wireless sensor networks, which manage the sensors by utilizing the hierarchical network structure and allocate network resources efficiently. A local criterion is used to simultaneously establish the sensing coverage and connectivity such that dynamic cluster-based sleep scheduling can be achieved. The proposed schemes are simulated and analyzed to abstract the network behaviors in a number of settings. The experimental results show that the proposed algorithms provide efficient network power control and can achieve high scalability in wireless sensor networks. PMID:22412343
Bayesian Exploratory Factor Analysis
Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi
2014-01-01
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements. PMID:25431517
Smith, Peter; Gravelle, Hugh; Martin, Steve; Bardsley, Martin; Rice, Nigel; Georghiou, Theo; Dusheiko, Mark; Billings, John; Lorenzo, Michael De; Sanderson, Colin
2011-01-01
Objectives: To develop a formula for allocating resources for commissioning hospital care to all general practices in England based on the health needs of the people registered in each practice. Design: Multivariate prospective statistical models were developed in which routinely collected electronic information from 2005-6 and 2006-7 on individuals and the areas in which they lived was used to predict their costs of hospital care in the next year, 2007-8. Data on individuals included all diagnoses recorded at any inpatient admission. Models were developed on a random sample of 5 million people and validated on a second random sample of 5 million people and a third sample of 5 million people drawn from a random sample of practices. Setting: All general practices in England as of 1 April 2007; all NHS inpatient admissions and outpatient attendances for individuals registered with a general practice on that date. Subjects: All individuals registered with a general practice in England at 1 April 2007. Main outcome measures: Power of the statistical models to predict the costs of the individual patient or each practice's registered population for 2007-8, tested with a range of metrics (R2 reported here). Comparisons of predicted costs in 2007-8 with actual costs incurred in the same year were calculated by individual and by practice. Results: Models including person-level information (age, sex, and recorded ICD-10 diagnostic codes) and a range of area-level information (such as socioeconomic deprivation and supply of health facilities) were most predictive of costs. After accounting for person-level variables, area-level variables added little explanatory power. The best models for resource allocation could predict upwards of 77% of the variation in costs at practice level, and about 12% at the person level. With these models, the predicted costs of about a third of practices would exceed or undershoot the actual costs by 10% or more.
Smaller practices were more likely to be in these groups. Conclusions: A model was developed that performed well by international standards and could be used for allocations to practices for commissioning. The best formulas, however, could predict only about 12% of the variation in next year's costs of most inpatient and outpatient NHS care for each individual. Person-based diagnostic data significantly added to the predictive power of the models. PMID:22110252
Quantum stochastic calculus associated with quadratic quantum noises
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Un Cig, E-mail: uncigji@chungbuk.ac.kr; Sinha, Kalyan B., E-mail: kbs-jaya@yahoo.co.in
2016-02-15
We first study a class of fundamental quantum stochastic processes induced by the generators of a six-dimensional non-solvable Lie †-algebra consisting of all linear combinations of the generalized Gross Laplacian and its adjoint, the annihilation, creation, and conservation operators, and time. We then study the quantum stochastic integrals associated with this class of fundamental processes, and the quantum Itô formula is revisited. The existence and uniqueness of the solution of a quantum stochastic differential equation are proved. The unitarity conditions for solutions of quantum stochastic differential equations associated with the fundamental processes are examined. The resulting quantum stochastic calculus extends the Hudson-Parthasarathy quantum stochastic calculus.
Salminen, Antero; Haapasalo, Annakaisa; Kauppinen, Anu; Kaarniranta, Kai; Soininen, Hilkka; Hiltunen, Mikko
2015-08-01
The amyloid cascade hypothesis for the pathogenesis of Alzheimer's disease (AD) was proposed over twenty years ago. However, the mechanisms of neurodegeneration and synaptic loss have remained elusive, delaying effective drug discovery. Recent studies have revealed that amyloid-β peptides as well as phosphorylated and fragmented tau proteins accumulate within mitochondria. This process triggers mitochondrial fission (fragmentation) and disturbs Krebs cycle function, e.g. by inhibiting the activity of 2-oxoglutarate dehydrogenase. Oxidative stress, hypoxia and calcium imbalance also disrupt the function of the Krebs cycle in AD brains. Recent studies on epigenetic regulation have revealed that Krebs cycle intermediates control DNA and histone methylation as well as histone acetylation, and thus they have fundamental roles in gene expression. DNA demethylases (TET1-3) and histone lysine demethylases (KDM2-7) belong to the family of 2-oxoglutarate-dependent oxygenases (2-OGDO). Interestingly, 2-oxoglutarate is the obligatory substrate of 2-OGDO enzymes, whereas succinate and fumarate are inhibitors of these enzymes. Moreover, citrate can stimulate histone acetylation via acetyl-CoA production. Epigenetic studies have revealed that AD is associated with changes in DNA methylation and histone acetylation patterns. The epigenetic results of different studies are inconsistent, however; one possibility is that they represent both coordinated adaptive responses and uncontrolled stochastic changes, which provoke pathogenesis in affected neurons. Here, we will review the changes observed in mitochondrial dynamics and Krebs cycle function associated with AD, and then clarify the mechanisms through which mitochondrial metabolites can control the epigenetic landscape of chromatin and induce pathological changes in AD. Copyright © 2015 Elsevier Ltd. All rights reserved.
Stochastic models for inferring genetic regulation from microarray gene expression data.
Tian, Tianhai
2010-03-01
Microarray expression profiles are inherently noisy and many different sources of variation exist in microarray experiments. It is still a significant challenge to develop stochastic models to realize noise in microarray expression profiles, which has profound influence on the reverse engineering of genetic regulation. Using the target genes of the tumour suppressor gene p53 as the test problem, we developed stochastic differential equation models and established the relationship between the noise strength of stochastic models and parameters of an error model for describing the distribution of the microarray measurements. Numerical results indicate that the simulated variance from stochastic models with a stochastic degradation process can be represented by a monomial in terms of the hybridization intensity and the order of the monomial depends on the type of stochastic process. The developed stochastic models with multiple stochastic processes generated simulations whose variance is consistent with the prediction of the error model. This work also established a general method to develop stochastic models from experimental information. 2009 Elsevier Ireland Ltd. All rights reserved.
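A stochastic differential equation of the general kind this abstract describes (synthesis plus a stochastic degradation process) can be sketched with an Euler-Maruyama scheme. The drift, the square-root noise term, and all parameter values below are illustrative assumptions, not the fitted p53-target models from the paper.

```python
import random

def simulate_expression(k=10.0, d=0.5, sigma=0.8, x0=0.0,
                        dt=0.01, t_end=20.0, seed=3):
    """Euler-Maruyama path of a gene-expression SDE sketch:

        dx = (k - d*x) dt + sigma*sqrt(x) dW

    i.e. constant synthesis at rate k, first-order degradation at rate d,
    and state-dependent (square-root) noise as in a chemical Langevin
    equation; the state is clipped at zero to keep expression nonnegative.
    """
    rng = random.Random(seed)
    x, path = x0, [x0]
    steps = round(t_end / dt)
    sqrt_dt = dt ** 0.5
    for _ in range(steps):
        drift = (k - d * x) * dt
        diff = sigma * (max(x, 0.0) ** 0.5) * sqrt_dt * rng.gauss(0, 1)
        x = max(0.0, x + drift + diff)
        path.append(x)
    return path
```

The path relaxes toward the deterministic steady state k/d = 20 and fluctuates around it, with variance controlled by the noise strength, which is the relationship the paper relates to the microarray error model.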
Zhou, Xiaobing; Zhang, Yuanming; Niklas, Karl J.
2014-01-01
Background and Aims Biomass accumulation and allocation patterns are critical to quantifying ecosystem dynamics. However, these patterns differ among species, and they can change in response to nutrient availability even among genetically related individuals. In order to understand this complexity further, this study examined three ephemeral species (with very short vegetative growth periods) and three annual species (with significantly longer vegetative growth periods) in the Gurbantunggut Desert, north-western China, to determine their responses to different nitrogen (N) supplements under natural conditions. Methods Nitrogen was added to the soil at rates of 0, 0·5, 1·0, 3·0, 6·0 and 24·0 g N m−2 year−1. Plants were sampled at various intervals to measure relative growth rate and shoot and root dry mass. Key Results Compared with annuals, ephemerals grew more rapidly, increased shoot and root biomass with increasing N application rates and significantly decreased root/shoot ratios. Nevertheless, changes in the biomass allocation of some species (i.e. Erodium oxyrrhynchum) in response to the N treatment were largely a consequence of changes in overall plant size, which was inconsistent with an optimal partitioning model. An isometric log shoot vs. log root scaling relationship for the final biomass harvest was observed for each species and all annuals, while pooled data of three ephemerals showed an allometric scaling relationship. Conclusions These results indicate that ephemerals and annuals differ observably in their biomass allocation patterns in response to soil N supplements, although an isometric log shoot vs. log root scaling relationship was maintained across all species. These findings highlight that different life history strategies behave differently in response to N application even when interspecific scaling relationships remain nearly isometric. PMID:24287812
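The log shoot vs. log root scaling test described above reduces to estimating a slope on log-transformed masses. The sketch below uses ordinary least squares as a simple stand-in; allometry studies often use standardized (reduced) major axis regression instead, and the toy data are hypothetical.

```python
import math

def scaling_exponent(shoot_mass, root_mass):
    """Ordinary least-squares slope of log(shoot) on log(root).

    A slope near 1 indicates isometric shoot-root scaling; a slope
    different from 1 indicates allometric scaling.
    """
    xs = [math.log(r) for r in root_mass]
    ys = [math.log(s) for s in shoot_mass]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Hypothetical data: shoot = 3*root is isometric (slope 1);
# shoot = root**2 is allometric (slope 2).
```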
Choi, Young-Ho; Ross, Pablo; Velez, Isabel C; Macías-García, B; Riera, Fernando L; Hinrichs, Katrin
2015-07-01
Equine embryos develop in vitro in the presence of high glucose concentrations, but little is known about their requirements for development. We evaluated the effect of glucose concentrations in medium on blastocyst development after ICSI. In experiment 1, there were no significant differences in rates of blastocyst formation among embryos cultured in our standard medium (DMEM/F-12), which contained >16 mM glucose, and those cultured in a minimal-glucose embryo culture medium (<1 mM; Global medium, GB), with either 0 added glucose for the first 5 days, then 20 mM (0-20) or 20 mM for the entire culture period (20-20). In experiment 2, there were no significant differences in the rates of blastocyst development (31-46%) for embryos cultured in four glucose treatments in GB (0-10, 0-20, 5-10, or 5-20). Blastocysts were evaluated by immunofluorescence for lineage-specific markers. All cells stained positively for POU5F1. An inner cluster of cells was identified that included presumptive primitive endoderm cells (GATA6-positive) and presumptive epiblast (EPI) cells. The 5-20 treatment resulted in a significantly lower number of presumptive EPI-lineage cells than the 0-20 treatment did. GATA6-positive cells appeared to be allocated to the primitive endoderm independent of the formation of an inner cell mass, as was previously hypothesized for equine embryos. These data demonstrate that equine blastocyst development is not dependent on high glucose concentrations during early culture; rather, environmental glucose may affect cell allocation. They also present the first analysis of cell lineage allocation in in vitro-fertilized equine blastocysts. These findings expand our understanding of the factors that affect embryo development in the horse. © 2015 Society for Reproduction and Fertility.
Castelnuovo, Gianluca; Manzoni, Gian Mauro; Cuzziol, Paola; Cesa, Gian Luca; Corti, Stefania; Tuzzi, Cristina; Villa, Valentina; Liuzzi, Antonio; Petroni, Maria Letizia; Molinari, Enrico
2011-03-04
Obesity increases the risk of many health complications, such as hypertension, coronary heart disease and type 2 diabetes; it requires long-lasting treatment for effective results and involves high public and private care costs. Therefore, it is imperative that enduring and low-cost clinical programs for obesity and related co-morbidities are developed and evaluated. Information and communication technologies (ICT) can help clinicians to deliver treatment in a cost-effective and time-saving manner to a large number of obese individuals with co-morbidities. To examine the ad interim effectiveness of a 12-month multidisciplinary telecare intervention for weight loss provided to obese patients with type 2 diabetes, a single-center randomized controlled trial (TECNOB study) was started in December 2008. At present, 72 obese patients with type 2 diabetes have been recruited and randomly allocated to the TECNOB program (n=37) or to a control condition (n=39). However, only 34 participants have completed at least the 3-month follow-up and have been included in this ad interim analysis; 21 of them have also reached the 6-month follow-up, and 13 have reached the end of the program. The study is still ongoing. All participants attended a 1-month intensive inpatient program that involved individualized medical care, diet therapy, physical training and brief psychological counseling. At discharge, participants allocated to the TECNOB program were instructed to use a weight-loss website, a web-based videoconference tool, dietary software installed on their cellular phones, and an electronic armband measuring daily steps and energy expenditure. Outcomes were weight and disordered eating-related behaviors and cognitions (EDI-2) at entry to hospital, at discharge from hospital, and at 3, 6 and 12 months. Ad interim analysis of data from 34 participants showed no statistically significant difference between groups in weight change at any time point.
However, within-group analysis revealed significant reductions from initial weight at discharge from hospital, at 3 months and at 6 months, but not at 12 months. The control group had higher scores on Interpersonal Distrust at 12 months. These ad interim findings reveal that the effect of the inpatient treatment was large and probably overwhelmed the effect of the TECNOB intervention. Greater statistical power and longer-term follow-up may enhance the probability of detecting a TECNOB effect over and above the large one exerted by the inpatient program.
Effects of environmental sounds on the guessability of animated graphic symbols.
Harmon, Ashley C; Schlosser, Ralf W; Gygi, Brian; Shane, Howard C; Kong, Ying-Yee; Book, Lorraine; Macduff, Kelly; Hearn, Emilia
2014-12-01
Graphic symbols are a necessity for pre-literate children who use aided augmentative and alternative communication (AAC) systems (including non-electronic communication boards and speech generating devices), as well as for mobile technologies using AAC applications. Recently, developers of the Autism Language Program (ALP) Animated Graphics Set have added environmental sounds to animated symbols representing verbs in an attempt to enhance their iconicity. The purpose of this study was to examine the effects of environmental sounds (added to animated graphic symbols representing verbs) in terms of naming. Participants included 46 children with typical development between the ages of 3;0 to 3;11 (years;months). The participants were randomly allocated to a condition of symbols with environmental sounds or a condition without environmental sounds. Results indicated that environmental sounds significantly enhanced the naming accuracy of animated symbols for verbs. Implications in terms of symbol selection, symbol refinement, and future symbol development will be discussed.
Tajima, K; Zheng, F; Collange, O; Barthel, G; Thornton, S N; Longrois, D; Levy, B; Audibert, G; Malinovsky, J M; Mertes, P M
2013-11-01
Anaphylactic shock is a rare, but potentially lethal complication, combining life-threatening circulatory failure and massive fluid shifts. Treatment guidelines rely on adrenaline and volume expansion by intravenous fluids, but there is no solid evidence for the choice of one specific type of fluid over another. Our purpose was to compare the time to achieve target mean arterial pressure upon resuscitation using adrenaline alone versus adrenaline with different resuscitation fluids in an animal model and to compare the tissue oxygen pressures (PtiO2) with the various strategies. Twenty-five ovalbumin-sensitised Brown Norway rats were allocated to five groups after anaphylactic shock induction: vehicle (CON), adrenaline alone (AD), or adrenaline with isotonic saline (AD+IS), hydroxyethyl starch (AD+HES) or hypertonic saline (AD+HS). Time to reach a target mean arterial pressure value of 75 mmHg, cardiac output, skeletal muscle PtiO2, lactate/pyruvate ratio and cumulative doses of adrenaline were recorded. Non-treated rats died within 15 minutes. The target mean arterial pressure value was reached faster with AD+HES (median: 10 minutes, range: 7.5 to 12.5 minutes) and AD+IS (median: 17.5 minutes, range: 5 to 25 minutes) versus adrenaline alone (median: 25 minutes, range: 20-30 minutes). There were also reduced adrenaline requirements in these groups. The skeletal muscle PtiO2 was restored only in the AD+HES group. Although direct extrapolation to humans should be made with caution, our results support the combined use of adrenaline and volume expansion for resuscitation from anaphylactic shock. When used with adrenaline the most effective fluid was hydroxyethyl starch, whereas hypertonic saline was the least effective.
Tapia, Milagritos D; Sow, Samba O; Lyke, Kirsten E; Haidara, Fadima Cheick; Diallo, Fatoumata; Doumbia, Moussa; Traore, Awa; Coulibaly, Flanon; Kodio, Mamoudou; Onwuchekwa, Uma; Sztein, Marcelo B; Wahid, Rezwanul; Campbell, James D; Kieny, Marie-Paule; Moorthy, Vasee; Imoukhuede, Egeruan B; Rampling, Tommy; Roman, Francois; De Ryck, Iris; Bellamy, Abbie R; Dally, Len; Mbaya, Olivier Tshiani; Ploquin, Aurélie; Zhou, Yan; Stanley, Daphne A; Bailer, Robert; Koup, Richard A; Roederer, Mario; Ledgerwood, Julie; Hill, Adrian V S; Ballou, W Ripley; Sullivan, Nancy; Graham, Barney; Levine, Myron M
2016-01-01
The 2014 west African Zaire Ebola virus epidemic prompted worldwide partners to accelerate clinical development of a replication-defective chimpanzee adenovirus 3 vector vaccine expressing Zaire Ebola virus glycoprotein (ChAd3-EBO-Z). We aimed to investigate the safety, tolerability, and immunogenicity of ChAd3-EBO-Z in Malian and US adults, and assess the effect of boosting of Malians with modified vaccinia Ankara expressing Zaire Ebola virus glycoprotein and other filovirus antigens (MVA-BN-Filo). In the phase 1, single-blind, randomised trial of ChAd3-EBO-Z in the USA, we recruited adults aged 18-65 years from the University of Maryland medical community and the Baltimore community. In the phase 1b, open-label and double-blind, dose-escalation trial of ChAd3-EBO-Z in Mali, we recruited adults 18-50 years of age from six hospitals and health centres in Bamako (Mali), some of whom were also eligible for a nested, randomised, double-blind, placebo-controlled trial of MVA-BN-Filo. For randomised segments of the Malian trial and for the US trial, we randomly allocated participants (1:1; block size of six [Malian] or four [US]; ARB produced computer-generated randomisation lists; clinical staff did randomisation) to different single doses of intramuscular immunisation with ChAd3-EBO-Z: Malians received 1 × 10^10 viral particle units (pu), 2·5 × 10^10 pu, 5 × 10^10 pu, or 1 × 10^11 pu; US participants received 1 × 10^10 pu or 1 × 10^11 pu. We randomly allocated Malians in the nested trial (1:1) to receive a single dose of 2 × 10^8 plaque-forming units of MVA-BN-Filo or saline placebo. In the double-blind segments of the Malian trial, investigators, clinical staff, participants, and immunology laboratory staff were masked, but the study pharmacist (MK), vaccine administrator, and study statistician (ARB) were unmasked. In the US trial, investigators were not masked, but participants were. Analyses were per protocol. 
The primary outcome was safety, measured by the occurrence of adverse events for 7 days after vaccination. Both trials are registered with ClinicalTrials.gov, numbers NCT02231866 (US) and NCT02267109 (Malian). Between Oct 8, 2014, and Feb 16, 2015, we randomly allocated 91 participants in Mali (ten [11%] to 1 × 10^10 pu, 35 [38%] to 2·5 × 10^10 pu, 35 [38%] to 5 × 10^10 pu, and 11 [12%] to 1 × 10^11 pu) and 20 in the USA (ten [50%] to 1 × 10^10 pu and ten [50%] to 1 × 10^11 pu), and boosted 52 Malians with MVA-BN-Filo (27 [52%]) or saline (25 [48%]). We identified no safety concerns with either vaccine: seven (8%) of 91 participants in Mali (five [5%] received 5 × 10^10 and two [2%] received 1 × 10^11 pu) and four (20%) of 20 in the USA (all received 1 × 10^11 pu) given ChAd3-EBO-Z had fever lasting for less than 24 h, and 15 (56%) of 27 Malians boosted with MVA-BN-Filo had injection-site pain or tenderness. A single 1 × 10^11 pu dose of ChAd3-EBO-Z could suffice for phase 3 efficacy trials of ring-vaccination containment needing short-term, high-level protection to interrupt transmission. MVA-BN-Filo boosting, although a complex regimen, could confer long-lived protection if needed (eg, for health-care workers). Wellcome Trust, Medical Research Council UK, Department for International Development UK, National Cancer Institute, Frederick National Laboratory for Cancer Research, Federal Funds from National Institute of Allergy and Infectious Diseases. Copyright © 2016 Tapia et al. Open Access article distributed under the terms of CC BY. Published by Elsevier Ltd. All rights reserved.
Xiao, Zhu; Liu, Hongjing; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-11-04
In this paper, we investigate the coverage performance and energy efficiency of multi-tier heterogeneous cellular networks (HetNets) which are composed of macrocells and different types of small cells, i.e., picocells and femtocells. By virtue of stochastic geometry tools, we model the multi-tier HetNets based on a Poisson point process (PPP) and analyze the Signal to Interference Ratio (SIR) via studying the cumulative interference from pico-tier and femto-tier. We then derive the analytical expressions of coverage probabilities in order to evaluate coverage performance in different tiers and investigate how it varies with the small cells' deployment density. Taking fairness and user experience into consideration, we propose a disjoint channel allocation scheme and derive the system channel throughput for various tiers. Further, we formulate the energy efficiency optimization problem for multi-tier HetNets in terms of throughput performance and resource allocation fairness. To solve this problem, we devise a linear programming based approach to obtain the available area of the feasible solutions. System-level simulations demonstrate that the small cells' deployment density has a significant effect on the coverage performance and energy efficiency. Simulation results also reveal that there exists an optimal small cell base station (SBS) density ratio between pico-tier and femto-tier which can be applied to maximize the energy efficiency and at the same time enhance the system performance. Our findings provide guidance for the design of multi-tier HetNets for improving the coverage performance as well as the energy efficiency.
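The SIR coverage analysis above rests on evaluating interference from a PPP deployment. As a hedged illustration (a single-tier sketch, not the paper's multi-tier model), the following Monte Carlo estimate assumes unit-mean Rayleigh power fading, nearest-base-station association, and an illustrative path-loss exponent of 4; all parameter values are made up.

```python
import numpy as np

def coverage_probability(density, sir_threshold_db, radius=10.0,
                         trials=2000, path_loss=4.0, seed=0):
    """Monte Carlo SIR coverage for a single-tier PPP network.

    Base stations form a Poisson point process of the given density in a
    disc around the typical user at the origin; the nearest base station
    serves, every link sees unit-mean Rayleigh (exponential) power fading,
    and power decays as distance**-path_loss.
    """
    rng = np.random.default_rng(seed)
    threshold = 10.0 ** (sir_threshold_db / 10.0)
    covered = 0
    for _ in range(trials):
        n = rng.poisson(density * np.pi * radius ** 2)
        if n == 0:
            continue                          # no base station: not covered
        r = radius * np.sqrt(rng.random(n))   # uniform points in a disc
        power = rng.exponential(1.0, n) * r ** -path_loss
        serving = int(np.argmin(r))           # nearest base station serves
        interference = power.sum() - power[serving]
        if interference == 0 or power[serving] / interference > threshold:
            covered += 1
    return covered / trials
```

Sweeping the threshold traces out the coverage curve; for path-loss exponent 4 the well-known closed-form PPP coverage expression can be used as a sanity check.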
Evolution and cell physiology. 2. The evolution of cell signaling: from mitochondria to Metazoa.
Blackstone, Neil W
2013-11-01
The history of life is a history of levels-of-selection transitions. Each transition requires mechanisms that mediate conflict among the lower-level units. In the origins of multicellular eukaryotes, cell signaling is one such mechanism. The roots of cell signaling, however, may extend to the previous major transition, the origin of eukaryotes. Energy-converting protomitochondria within a larger cell allowed eukaryotes to transcend the surface-to-volume constraints inherent in the design of prokaryotes. At the same time, however, protomitochondria can selfishly allocate energy to their own replication. Metabolic signaling may have mediated this principal conflict in several ways. Variation of the protomitochondria was constrained by stoichiometry and strong metabolic demand (state 3) exerted by the protoeukaryote. Variation among protoeukaryotes was increased by the sexual stage of the life cycle, triggered by weak metabolic demand (state 4), resulting in stochastic allocation of protomitochondria to daughter cells. Coupled with selection, many selfish protomitochondria could thus be removed from the population. Hence, regulation of states 3 and 4, as, for instance, provided by the CO2/soluble adenylyl cyclase/cAMP pathway in mitochondria, was critical for conflict mediation. Subsequently, as multicellular eukaryotes evolved, metabolic signaling pathways employed by eukaryotes to mediate conflict within cells could now be co-opted into conflict mediation between cells. For example, in some fungi, the CO2/soluble adenylyl cyclase/cAMP pathway regulates the transition from yeast to forms with hyphae. In animals, this pathway regulates the maturation of sperm. While the particular features (sperm and hyphae) are distinct, both may involve between-cell conflicts that required mediation.
Fox, Laurel R
2007-12-01
Species with known demographies may be used as proxies, or approximate models, to predict vital rates and ecological properties of target species that either have not been studied or are species for which data may be difficult to obtain. These extrapolations assume that model and target species with similar properties respond in the same ways to the same ecological factors, that they have similar population dynamics, and that the similarity of vital rates reflects analogous responses to the same factors. I used two rare, sympatric annual plants (sand gilia [Gilia tenuiflora arenaria] and Monterey spineflower [Chorizanthe pungens pungens]) to test these assumptions experimentally. The vital rates of these species are similar and strongly correlated with rainfall, and I added water and/or prevented herbivore access to experimental plots. Their survival and reproduction were driven by different, largely stochastic factors and processes: sand gilia by herbivory and Monterey spineflower by rainfall. Because the causal agents and processes generating similar demographic patterns were species specific, these results demonstrate, both theoretically and empirically, that it is critical to identify the ecological processes generating observed effects and that experimental manipulations are usually needed to determine causal mechanisms. Without such evidence to identify mechanisms, extrapolations among species may lead to counterproductive management and conservation practices.
Burgener, Matthias; Aboulfadl, Hanane; Labat, Gaël Charles; Bonin, Michel; Sommer, Martin; Sankolli, Ravish; Wübbenhorst, Michael; Hulliger, Jürg
2016-05-01
180° orientational disorder of molecular building blocks can lead to a peculiar spatial distribution of polar properties in molecular crystals. Here we present two examples [4-bromo-4'-nitrobiphenyl (BNBP) and 4-bromo-4'-cyanobiphenyl (BCNBP)] which develop into a bipolar final growth state. This means orientational disorder taking place at the crystal/nutrient interface produces domains of opposite average polarity for as-grown crystals. The spatially inhomogeneous distribution of polarity was investigated by scanning pyroelectric microscopy (SPEM), phase-sensitive second harmonic microscopy (PS-SHM) and selected volume X-ray diffraction (SVXD). As a result, the acceptor groups (NO2 or CN) are predominantly present at crystal surfaces. However, the stochastic process of polarity formation can be influenced by adding a symmetrical biphenyl to a growing system. For this case, Monte Carlo simulations predict an inverted net polarity compared with the growth of pure BNBP and BCNBP. SPEM results clearly demonstrate that 4,4'-dibromobiphenyl (DBBP) can invert the polarity for both crystals. The phenomena reported in this paper are among the most striking processes seen in molecular crystals: a stochastic process gives rise to symmetry breaking. These are further examples supporting the general thesis that, for fundamental reasons, monodomain polar molecular crystals cannot exist.
Application of TDCR-Geant4 modeling to standardization of 63Ni.
Thiam, C; Bobin, C; Chauvenet, B; Bouchard, J
2012-09-01
As an alternative to the classical TDCR model applied to liquid scintillation (LS) counting, a stochastic approach based on the Geant4 toolkit is presented for the simulation of light emission inside the dedicated three-photomultiplier detection system. To this end, the Geant4 modeling includes a comprehensive description of optical properties associated with each material constituting the optical chamber. The objective is to simulate the propagation of optical photons from their creation in the LS cocktail to the production of photoelectrons in the photomultipliers. First validated for the case of radionuclide standardization based on Cerenkov emission, the scintillation process has been added to a TDCR-Geant4 modeling using the Birks expression in order to account for the light-emission nonlinearity owing to ionization quenching. The scintillation yield of the commercial Ultima Gold LS cocktail has been determined from double-coincidence detection efficiencies obtained for ^60Co and ^54Mn with the 4π(LS)β-γ coincidence method. In this paper, the stochastic TDCR modeling is applied for the case of the standardization of ^63Ni (pure β^- emitter; E_max = 66.98 keV) and the activity concentration is compared with the result given by the classical model. Copyright © 2012 Elsevier Ltd. All rights reserved.
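The Birks expression mentioned above relates emitted light to deposited energy through the stopping power. As a rough sketch (not the TDCR-Geant4 implementation), the following integrates L = S ∫ dE / (1 + kB·dE/dx) for an electron slowing down from its initial energy; the stopping-power function and all numbers are illustrative placeholders, not evaluated data.

```python
import numpy as np

def birks_light_yield(e0_kev, yield_per_kev, kb, stopping_power, n=4000):
    """Quenched scintillation light for an electron of initial energy
    e0_kev, via Birks' law: L = S * integral dE / (1 + kB * dE/dx).
    `stopping_power(E)` returns dE/dx (keV/cm) at energy E (keV)."""
    e = np.linspace(1e-3, e0_kev, n)
    integrand = 1.0 / (1.0 + kb * stopping_power(e))
    # trapezoidal rule, written out to avoid version-specific numpy names
    integral = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(e)))
    return yield_per_kev * integral

# Illustrative stopping power, roughly falling as 1/E (placeholder only).
toy_dedx = lambda e_kev: 1.3e3 / np.maximum(e_kev, 1.0)
```

With kB = 0 the quenching factor is 1 and the yield reduces to the unquenched S·E0; any positive kB lowers the light output, most strongly at low energies where dE/dx is large.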
The role of backward cell migration in two-hit mutants' production in the stem cell niche.
Bollas, Audrey; Shahriyari, Leili
2017-01-01
It has been discovered that there are two stem cell groups in the intestinal crypts: central stem cells (CeSCs), which are at the very bottom of the crypt, and border stem cells (BSCs), which are located between CeSCs and transit amplifying cells (TAs). Moreover, backward cell migration from BSCs to CeSCs has been observed. Recently, a bi-compartmental stochastic model, which includes CeSCs and BSCs, has been developed to investigate the probability of two-hit mutant production in the stem cell niche. In this project, we improve this stochastic model by adding the probability of backward cell migration to the model. The model suggests that the probability of two-hit mutant production increases when the frequency of backward cell migration increases. Furthermore, a small non-zero probability of backward cell migration leads to the largest range of optimal values for the frequency of symmetric divisions and the portion of divisions at each stem cell compartment in terms of delaying two-hit mutant production. Moreover, the probability of two-hit mutant production is more sensitive to the probability of symmetric divisions than to the rate of backward cell migrations. The highest probability of two-hit mutant production corresponds to the case when all stem cell divisions are asymmetric.
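The kind of bi-compartmental stochastic model described above can be sketched with a toy Moran-style simulation. Everything here is invented for illustration (compartment sizes, mutation rate, migration probabilities); it only shows how a backward-migration probability enters such a model, not the authors' parameterization.

```python
import random

def two_hit_fraction(n_ce=4, n_b=12, steps=3000, mu=0.003,
                     p_back=0.1, p_forward=0.05, trials=400, seed=1):
    """Toy bi-compartmental stem cell simulation (all parameters invented).

    Each step a random border stem cell (BSC) divides; the daughter gains
    a mutation with probability mu.  With probability p_back the daughter
    migrates backward and replaces a random central stem cell (CeSC);
    otherwise it replaces a random BSC.  With probability p_forward a
    random CeSC daughter reseeds the border compartment.  Returns the
    fraction of trials in which a two-hit cell appeared within `steps`.
    """
    rng = random.Random(seed)
    two_hit = 0
    for _ in range(trials):
        ce, b = [0] * n_ce, [0] * n_b       # mutation counts per cell
        for _ in range(steps):
            daughter = rng.choice(b)
            if rng.random() < mu:
                daughter += 1
            if daughter >= 2:               # two-hit mutant produced
                two_hit += 1
                break
            if rng.random() < p_back:
                ce[rng.randrange(n_ce)] = daughter   # backward migration
            else:
                b[rng.randrange(n_b)] = daughter
            if rng.random() < p_forward:
                b[rng.randrange(n_b)] = rng.choice(ce)
    return two_hit / trials
```

Varying `p_back` in such a toy model is the kind of experiment the abstract describes: measuring how the two-hit production probability responds to the backward-migration frequency.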
Chen, Po-Yu
2014-01-01
The validity of the expiration dates (validity period) that manufacturers provide on food product labels is a crucial food safety problem. Governments must study how to use their authority, through fair awards and punishments, to prompt manufacturers to consider rigorously how adopting new storage methods to extend product validity periods affects expected costs. Assuming that a manufacturer sells fresh food or drugs, this manufacturer must respond to current stochastic demands at each unit of time to determine the purchase amount of products for sale. If this decision maker is capable and an opportunity arises, new packaging methods (e.g., aluminum foil packaging, vacuum packaging, high-temperature sterilization after glass packaging, or packaging with various degrees of dryness) or storage methods (i.e., adding desiccants or various antioxidants) can be chosen to extend the validity periods of products. To minimize expected costs, this decision maker must be aware of the processing costs of new storage methods, inventory standards, inventory cycle lengths, and changes in the relationships between factors such as the stochastic demand functions within a cycle. Based on these changed relationships, this study established a mathematical model as a basis for discussing the aforementioned topics.
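The purchase decision under stochastic per-period demand for a perishable product is in the spirit of the classic newsvendor model. As a hedged illustration (not the paper's model), the sketch below computes the critical-fractile order quantity for normally distributed demand; all figures are made up.

```python
from statistics import NormalDist

def newsvendor_quantity(mean_demand, sd_demand, unit_cost, price, salvage=0.0):
    """Order quantity minimizing expected one-cycle cost for a perishable
    product under Normal(mean_demand, sd_demand) demand:
    q* = F^-1((price - unit_cost) / (price - salvage))."""
    critical_fractile = (price - unit_cost) / (price - salvage)
    return NormalDist(mean_demand, sd_demand).inv_cdf(critical_fractile)
```

One way to read the abstract's trade-off in this frame: a storage method that extends the validity period effectively raises the salvage value of leftover stock, which raises the critical fractile and hence the optimal purchase quantity.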
NASA Astrophysics Data System (ADS)
Omidvarborna, Hamid; Kumar, Ashok; Kim, Dong-Shik
2017-03-01
A stochastic simulation algorithm (SSA) approach is implemented with the components of a simplified biodiesel surrogate to predict NOx (NO and NO2) emission concentrations from the combustion of biodiesel. The main reaction pathways were obtained by simplifying the previously derived skeletal mechanisms, including saturated methyl decenoate (MD), unsaturated methyl 5-decanoate (MD5D), and n-decane (ND). ND was added to match the energy content and the C/H/O ratio of actual biodiesel fuel. The MD/MD5D/ND surrogate model was also equipped with H2/CO/C1 formation mechanisms and a simplified NOx formation mechanism. The predicted model results are in good agreement with a limited number of experimental data at low-temperature combustion (LTC) conditions for three different biodiesel fuels consisting of various ratios of unsaturated and saturated methyl esters. The root mean square errors (RMSEs) of predicted values are 0.0020, 0.0018, and 0.0025 for soybean methyl ester (SME), waste cooking oil (WCO), and tallow oil (TO), respectively. The SSA model showed the potential to predict NOx emission concentrations, when the peak combustion temperature increased through the addition of ultra-low sulphur diesel (ULSD) to biodiesel. The SSA method used in this study demonstrates the possibility of reducing the computational complexity in biodiesel emissions modelling.
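A stochastic simulation algorithm in the Gillespie direct-method sense draws an exponential waiting time from the total propensity and picks the next reaction in proportion to its rate. The sketch below is a generic SSA applied to a toy reversible isomerization, not the MD/MD5D/ND combustion mechanism; the rate constants are illustrative.

```python
import math
import random

def gillespie(x0, stoich, propensity, t_end, seed=0):
    """Gillespie direct-method SSA.  x0: initial copy numbers;
    stoich[j]: state-change vector of reaction j; propensity(x, j):
    rate of reaction j in state x.  Returns the state at t_end."""
    rng = random.Random(seed)
    x, t = list(x0), 0.0
    while t < t_end:
        a = [propensity(x, j) for j in range(len(stoich))]
        a0 = sum(a)
        if a0 == 0.0:
            break                                  # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a0    # exponential waiting time
        r, j, acc = rng.random() * a0, 0, a[0]
        while acc < r:                             # pick reaction j ~ a_j/a0
            j += 1
            acc += a[j]
        for i, d in enumerate(stoich[j]):
            x[i] += d
    return x

# Toy reversible isomerization A <-> B (illustrative rate constants).
k1, k2 = 1.0, 0.5
stoich = [(-1, +1), (+1, -1)]
prop = lambda x, j: k1 * x[0] if j == 0 else k2 * x[1]
```

Running `gillespie([100, 0], stoich, prop, 50.0)` relaxes the toy system toward its stochastic equilibrium around B/A = k1/k2; a real mechanism simply supplies larger `stoich` and `propensity` tables.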
Zhang, Lijuan; Wang, Lina; Wang, Run; Gao, Yuan; Che, Haoyue; Pan, Yonghua; Fu, Peng
2017-02-14
BACKGROUND This study was designed to compare the efficacy and safety of GTM-1, Rapamycin (Rap), and Carbamazepine (CBZ) in managing Alzheimer disease (AD). The impact of the above mentioned therapeutic drugs on autophagy was also investigated in our study. MATERIAL AND METHODS Firstly, 3×Tg AD mice were randomly allocated into 4 groups (each group with 10 mice), in which AD mice were separately treated with dimethylsulfoxide (DMSO, vehicle group), GTM-1 (6 mg/kg), Rap (1 mg/kg), and CBZ (100 mg/kg). Then, the spatial memory and learning ability of the mice were tested using the Morris water maze. Routine blood tests were performed to evaluate the toxicity of these drugs. Amyloid-β42 (Aβ42) concentration was detected by ELISA and immunohistochemistry. Proteins related to autophagy were detected by Western blot. RESULTS GTM-1, Rap, and CBZ significantly improved the spatial memory of 3×Tg AD mice compared to that in the vehicle group (all P<0.05). Moreover, this study revealed that CBZ dosage was related to toxicity in mice. All of the above drugs significantly increased the expression of LC3-II and reduced Aβ42 levels in hippocampi of 3×Tg AD mice (all P<0.05). On the other hand, neither GTM-1 nor CBZ had significant influence on the expression of proteins on the mTOR pathway. CONCLUSIONS GTM-1 can alleviate the AD syndrome by activating autophagy in a manner independent of the mTOR pathway, and it therefore can be considered an alternative to Rap.
Prescription drug advertising: trends and implications.
Krupka, L R; Vener, A M
1985-01-01
Prescription drug advertisements which appeared in two leading American medical journals in 1972, 1977 and 1982 were analyzed to discover possible trends in advertising. The 5016 ads examined showed that ads for the diuretic-cardiovasculars, especially the beta-adrenergic blocking agents and the slow channel inhibitors, as well as the analgesics, had increased, while ads for the anti-infectives and tranquilizers had diminished. The average amount of space allocated for each ad had increased. On the average, most ads (69%) depicted neither male nor female patients in their graphics, and a trend of increased neutrality was observed. When the hormones were excluded, an average of 21% of the ads showed male patients and 10% showed females. Since a relationship was discerned between the leading drugs advertised and the leading prescriptions filled, it was concluded that advertising does have some effect on the prescribing behavior of practitioners. The findings suggest that great investment in advertising is necessary in order to achieve high levels of sales for such drugs as Valium (diazepam) which do not have a clear-cut ameliorative effect on a specific physiological condition. On the other hand, it was suggested that saturation advertising would not significantly enhance the sales of such drugs as Dyazide (triamterene and hydrochlorothiazide) because of its well established therapeutic value in the control of hypertension. Ten advertising companies, on the average, had purchased 67% of all advertising space and five had purchased almost half (47%). The same two pharmaceutical companies were among the top five advertisers and the same five were among the top ten for the three years studied.
Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan
2015-05-19
The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.
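For a single Gaussian random input, stochastic collocation reduces to evaluating the quantity of interest at Gauss-Hermite nodes and combining with the quadrature weights. A minimal one-dimensional sketch (illustrative only, not any of the paper's examples):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def collocation_mean(g, n_nodes):
    """E[g(X)] for X ~ N(0, 1) by Gauss-Hermite stochastic collocation.

    hermgauss gives nodes/weights for weight exp(-x^2), so
    E[g(X)] = pi**-0.5 * sum_i w_i * g(sqrt(2) * x_i).
    """
    nodes, weights = hermgauss(n_nodes)
    return float(np.dot(weights, g(np.sqrt(2.0) * nodes)) / np.sqrt(np.pi))
```

A handful of collocation nodes matches what plain Monte Carlo needs thousands of samples for; this rapid convergence in low stochastic dimension is the usual motivation for SC.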
Al Jaaly, Emad; Fiorentino, Francesca; Reeves, Barnaby C; Ind, Philip W; Angelini, Gianni D; Kemp, Scott; Shiner, Robert J
2013-10-01
We compared the efficacy of noninvasive ventilation with bilevel positive airway pressure added to usual care versus usual care alone in patients undergoing coronary artery bypass grafting. We performed a 2-group, parallel, randomized controlled trial. The primary outcome was time until fit for discharge. Secondary outcomes were partial pressure of carbon dioxide, forced expiratory volume in 1 second, atelectasis, adverse events, duration of intensive care stay, and actual postoperative stay. A total of 129 patients were randomly allocated to bilevel positive airway pressure (66) or usual care (63). Three patients allocated to bilevel positive airway pressure withdrew. The median duration of bilevel positive airway pressure was 16 hours (interquartile range, 11-19). The median duration of hospital stay until fit for discharge was 5 days for the bilevel positive airway pressure group (interquartile range, 4-6) and 6 days for the usual care group (interquartile range, 5-7; hazard ratio, 1.68; 95% confidence interval, 1.08-2.31; P = .019). There was no significant difference in duration of intensive care, actual postoperative stay, and mean percentage of predicted forced expiratory volume in 1 second on day 3. Mean partial pressure of carbon dioxide was significantly reduced 1 hour after bilevel positive airway pressure application, but there was no overall difference between the groups up to 24 hours. Basal atelectasis occurred in 15 patients (24%) in the usual care group and 2 patients (3%) in the bilevel positive airway pressure group. Overall, 30% of patients in the bilevel positive airway pressure group experienced an adverse event compared with 59% in the usual care group. Among patients undergoing elective coronary artery bypass grafting, the use of bilevel positive airway pressure at extubation reduced the recovery time. Supported by trained staff, more than 75% of all patients allocated to bilevel positive airway pressure tolerated it for more than 10 hours. 
Copyright © 2013 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.
Unification Theory of Optimal Life Histories and Linear Demographic Models in Internal Stochasticity
Oizumi, Ryo
2014-01-01
Life history of organisms is exposed to uncertainty generated by internal and external stochasticities. Internal stochasticity is generated by the randomness in each individual life history, such as randomness in food intake, genetic character and size growth rate, whereas external stochasticity is due to the environment. For instance, it is known that external stochasticity tends to affect population growth rate negatively. A recent theoretical study using a path-integral formulation in structured linear demographic models has shown that internal stochasticity can affect population growth rate positively or negatively. However, internal stochasticity has not been the main subject of research. Taking the effect of internal stochasticity on population growth rate into account, the fittest organism exerts optimal control of its life history under the stochasticity of its habitat. The study of this control is known as the optimal life schedule problem. In order to analyze optimal control under internal stochasticity, we need to make use of “Stochastic Control Theory” in the optimal life schedule problem. There is, however, no theory unifying optimal life history and internal stochasticity. This study focuses on an extension of optimal life schedule problems that unifies control theory of internal stochasticity with linear demographic models. First, we show the relationship between general age-state linear demographic models and stochastic control theory via several mathematical formulations, such as path-integral, integral equation, and transition matrix. Secondly, we apply our theory to a two-resource utilization model for two different breeding systems: semelparity and iteroparity. Finally, we show that the diversity of resources is important for species in a specific case. Our study shows that this unification theory can address risk hedges of life history in general age-state linear demographic models. PMID:24945258
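In the linear demographic models referred to above, the long-run population growth rate is the dominant (Perron) eigenvalue of the projection matrix. A minimal sketch with an invented 3-age-class Leslie matrix (the numbers are illustrative, not from the paper):

```python
import numpy as np

# Illustrative Leslie matrix: fecundities in row 0, survival
# probabilities on the subdiagonal (all numbers invented).
leslie = np.array([[0.0, 1.5, 1.0],
                   [0.6, 0.0, 0.0],
                   [0.0, 0.8, 0.0]])

# Long-run growth rate = spectral radius (dominant eigenvalue).
growth_rate = float(np.max(np.abs(np.linalg.eigvals(leslie))))
```

A growth rate above 1 means geometric growth; internal stochasticity in traits such as size growth rate can be thought of as perturbing the effective entries of such a matrix, which is where the paper's transition-matrix formulation connects to control theory.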
Wang, Shui-Hua; Phillips, Preetha; Sui, Yuxiu; Liu, Bin; Yang, Ming; Cheng, Hong
2018-03-26
Alzheimer's disease (AD) is a progressive brain disease. The goal of this study is to provide a new computer-vision based technique to detect it in an efficient way. Brain-imaging data of 98 AD patients and 98 healthy controls were collected and expanded using a data augmentation method. Then, a convolutional neural network (CNN), the most successful tool in deep learning, was used. An 8-layer CNN was created, with the optimal structure obtained empirically. Three activation functions (AFs) were tested: sigmoid, rectified linear unit (ReLU), and leaky ReLU. Three pooling functions were also tested: average pooling, max pooling, and stochastic pooling. The numerical experiments demonstrated that leaky ReLU and max pooling gave the best performance, achieving a sensitivity of 97.96%, a specificity of 97.35%, and an accuracy of 97.65%. In addition, the proposed approach was compared with eight state-of-the-art approaches, increasing the classification accuracy by approximately 5%.
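For reference, leaky ReLU and stochastic pooling (two of the operations compared above) can be sketched in a few lines of NumPy; this is a generic illustration, not the paper's 8-layer network.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: pass positives through, scale negatives by a small slope."""
    return np.where(x > 0, x, alpha * x)

def stochastic_pool(patch, rng):
    """Stochastic pooling over one patch of non-negative activations:
    pick an activation with probability proportional to its value
    (falling back to the max when the patch is all zeros)."""
    flat = patch.ravel()
    total = flat.sum()
    if total == 0:
        return flat.max()
    return rng.choice(flat, p=flat / total)
```

Unlike max pooling, stochastic pooling samples among the activations during training, which acts as a regularizer; at test time implementations typically replace the sample with the probability-weighted average.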
Gamma Rhythm Simulations in Alzheimer's Disease
NASA Astrophysics Data System (ADS)
Montgomery, Samuel; Perez, Carlos; Ullah, Ghanim
The different neural rhythms that occur during the sleep-wake cycle regulate the brain's multiple functions. Memory acquisition occurs during fast gamma rhythms during consciousness, while slow oscillations mediate memory consolidation and erasure during sleep. At the neural network level, these rhythms are generated by finely timed activity within excitatory and inhibitory neurons. In Alzheimer's Disease (AD), the function of inhibitory neurons is compromised: an increase in amyloid beta (Aβ) leads to elevated sodium leakage from the extracellular space in the hippocampus. Using a Hodgkin-Huxley formalism, we observe that a heightened sodium leakage current into inhibitory neurons compromises their function. In a simple two-neuron system, increasing the conductance of the sodium leakage current in inhibitory neurons significantly decreases their spiking frequency. This triggers a significant increase in excitatory spiking, leading to aberrant network behavior similar to that seen in AD patients. The next step is to extend this model to a larger neuronal system with varying synaptic densities and conductance strengths as well as deterministic and stochastic drives.
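A single Hodgkin-Huxley neuron with an adjustable leak conductance can be put together in a few dozen lines. The sketch below uses the standard squid-axon parameters with forward-Euler integration and simply counts spikes; it is a generic starting point, not the authors' two-neuron network, and the leak term here lumps all leak currents together rather than isolating a sodium leak.

```python
import numpy as np

def hh_spike_count(g_leak=0.3, i_ext=10.0, t_ms=100.0, dt=0.01):
    """Spike count of a single Hodgkin-Huxley neuron (standard squid-axon
    parameters) under constant current i_ext (uA/cm^2), with an
    adjustable leak conductance g_leak (mS/cm^2)."""
    c_m, g_na, g_k = 1.0, 120.0, 36.0
    e_na, e_k, e_l = 50.0, -77.0, -54.4
    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    spikes, above = 0, False
    for _ in range(int(t_ms / dt)):
        # rate functions (voltage in mV, rates in 1/ms)
        am = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        bm = 4.0 * np.exp(-(v + 65.0) / 18.0)
        ah = 0.07 * np.exp(-(v + 65.0) / 20.0)
        bh = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        an = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        bn = 0.125 * np.exp(-(v + 65.0) / 80.0)
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_leak * (v - e_l))
        v += dt * (i_ext - i_ion) / c_m
        if v > 0.0 and not above:        # upward crossing of 0 mV = spike
            spikes, above = spikes + 1, True
        elif v < -30.0:
            above = False
    return spikes
```

Sweeping `g_leak` (or splitting the leak into separate sodium and chloride terms with their own reversal potentials) reproduces the kind of frequency-versus-leak experiment the abstract describes.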
Algorithms for optimizing cross-overs in DNA shuffling.
He, Lu; Friedman, Alan M; Bailey-Kellogg, Chris
2012-03-21
DNA shuffling generates combinatorial libraries of chimeric genes by stochastically recombining parent genes. The resulting libraries are subjected to large-scale genetic selection or screening to identify those chimeras with favorable properties (e.g., enhanced stability or enzymatic activity). While DNA shuffling has been applied quite successfully, it is limited by its homology-dependent, stochastic nature. Consequently, it is used only with parents of sufficient overall sequence identity, and provides no control over the resulting chimeric library. This paper presents efficient methods to extend the scope of DNA shuffling to handle significantly more diverse parents and to generate more predictable, optimized libraries. Our CODNS (cross-over optimization for DNA shuffling) approach employs polynomial-time dynamic programming algorithms to select codons for the parental amino acids, allowing for zero or a fixed number of conservative substitutions. We first present efficient algorithms to optimize the local sequence identity or the nearest-neighbor approximation of the change in free energy upon annealing, objectives that were previously optimized by computationally-expensive integer programming methods. We then present efficient algorithms for more powerful objectives that seek to localize and enhance the frequency of recombination by producing "runs" of common nucleotides either overall or according to the sequence diversity of the resulting chimeras. We demonstrate the effectiveness of CODNS in choosing codons and allocating substitutions to promote recombination between parents targeted in earlier studies: two GAR transformylases (41% amino acid sequence identity), two very distantly related DNA polymerases, Pol X and β (15%), and beta-lactamases of varying identity (26-47%). 
Our methods provide the protein engineer with a new approach to DNA shuffling that supports substantially more diverse parents, is more deterministic, and generates more predictable and more diverse chimeric libraries.
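A drastically simplified sketch of the codon-selection idea: for a pair of aligned residues, choose synonymous codons that maximize nucleotide identity. CODNS optimizes runs of common nucleotides with polynomial-time dynamic programming over whole sequences; the per-position search below, with a tiny hand-picked subset of the genetic code, is only meant to show the degree of freedom being exploited:

```python
# Tiny, illustrative subset of the genetic code (synonymous codons only).
CODONS = {
    "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],   # leucine
    "R": ["CGT", "CGC", "CGA", "CGG", "AGA", "AGG"],   # arginine
    "S": ["TCT", "TCC", "TCA", "TCG", "AGT", "AGC"],   # serine
}

def matches(a, b):
    """Number of identical nucleotides between two codons."""
    return sum(x == y for x, y in zip(a, b))

def best_codon_pair(aa1, aa2):
    """Pick codons for two aligned residues maximizing nucleotide identity."""
    return max(((c1, c2) for c1 in CODONS[aa1] for c2 in CODONS[aa2]),
               key=lambda p: matches(*p))

c1, c2 = best_codon_pair("L", "R")   # best Leu/Arg pair shares 2 of 3 bases
```

Even for non-identical amino acids, synonymous codon choice can recover substantial local identity, which is exactly what raises cross-over frequency between diverse parents.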
Stochastic agent-based modeling of tuberculosis in Canadian Indigenous communities.
Tuite, Ashleigh R; Gallant, Victor; Randell, Elaine; Bourgeois, Annie-Claude; Greer, Amy L
2017-01-13
In Canada, active tuberculosis (TB) disease rates remain disproportionately higher among the Indigenous population, especially among the Inuit in the north. We used mathematical modeling to evaluate how interventions might enhance existing TB control efforts in a region of Nunavut. We developed a stochastic, agent-based model of TB transmission that captured the unique household and community structure. Evaluated interventions included: (i) rapid treatment of active cases; (ii) rapid contact tracing; (iii) expanded screening programs for latent TB infection (LTBI); and (iv) reduced household density. The outcomes of interest were incident TB infections and total diagnosed active TB disease over a 10-year time period. Model-projected incidence in the absence of additional interventions was highly variable (range: 33-369 cases) over 10 years. Compared to the 'no additional intervention' scenario, reducing the time between onset of active TB disease and initiation of treatment reduced both the number of new TB infections (47% reduction, relative risk of TB = 0.53) and diagnoses of active TB disease (19% reduction, relative risk of TB = 0.81). Expanding general population screening was also projected to reduce the burden of TB, although these findings were sensitive to assumptions around the relative amount of transmission occurring outside of households. Other potential interventions examined in the model (school-based screening, rapid contact tracing, and reduced household density) were found to have limited effectiveness. In a region of northern Canada experiencing a significant TB burden, more rapid treatment initiation in active TB cases was the most impactful intervention evaluated. Mathematical modeling can provide guidance for allocation of limited resources in a way that minimizes disease transmission and protects population health.
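The headline result, that shortening the delay to treatment shrinks outbreaks, can be reproduced qualitatively with a toy stochastic branching process. This is not the paper's household-structured agent-based model; the transmission rate, infectious-period lengths, and generation cap below are hypothetical, chosen only so that faster treatment pushes the process from super- to subcritical:

```python
import math
import random

def poisson(rng, lam):
    """Knuth's multiplication algorithm for one Poisson(lam) draw."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def outbreak_size(mean_infectious_days, beta=0.08, n_gen=6, seed=0):
    """Toy branching process: each case infects a Poisson number of
    secondary cases with mean beta * (days until treatment removes it)."""
    rng = random.Random(seed)
    mean_offspring = beta * mean_infectious_days
    active, total = 1, 1
    for _ in range(n_gen):
        new = sum(poisson(rng, mean_offspring) for _ in range(active))
        total += new
        active = new
        if active == 0:
            break
    return total

def mean_outbreak(mean_days, reps=300):
    return sum(outbreak_size(mean_days, seed=s) for s in range(reps)) / reps

slow_treatment = mean_outbreak(25.0)    # mean offspring 2.0: supercritical
rapid_treatment = mean_outbreak(12.0)   # mean offspring 0.96: subcritical
```

The high run-to-run variability of such stochastic models is also visible here, mirroring the wide projected incidence range (33-369 cases) reported in the abstract.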
Hénaux, Viviane; Calavas, Didier
2017-01-01
Surveillance systems of exotic infectious diseases aim to ensure transparency about the country-specific animal disease situation (i.e. demonstrate disease freedom) and to identify any introductions. In a context of decreasing resources, evaluation of surveillance efficiency is essential to help stakeholders make relevant decisions about prioritization of measures and funding allocation. This study evaluated the efficiency (sensitivity related to cost) of the French bovine brucellosis surveillance system using stochastic scenario tree models. Cattle herds were categorized into three risk groups based on the annual number of purchases, given that trading is considered as the main route of brucellosis introduction in cattle herds. The sensitivity in detecting the disease and the costs of the current surveillance system, which includes clinical (abortion) surveillance, programmed serological testing and introduction controls, were compared to those of 19 alternative surveillance scenarios. Surveillance costs included veterinary fees and laboratory analyses. The sensitivity over a year of the current surveillance system was predicted to be 91±7% at a design prevalence of 0.01% for a total cost of 14.9±1.8 million €. Several alternative surveillance scenarios, based on clinical surveillance and random or risk-based serological screening in a sample (20%) of the population, were predicted to be at least as sensitive but for a lower cost. Such changes would reduce whole surveillance costs by 20 to 61% annually, and the costs for farmers only would be decreased from about 12.0 million € presently to 5.3-9.0 million € (i.e. 25-56% decrease). Besides, fostering the evolution of the surveillance system in one of these directions would be in agreement with the European regulations and farmers perceptions on brucellosis risk and surveillance.
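The scenario-tree calculation behind a figure like "91% sensitivity at a 0.01% design prevalence" can be sketched as follows. The herd counts, relative risks, and component sensitivities below are hypothetical placeholders (the paper's actual inputs are not in the abstract); the structure, risk-adjusted infection probabilities combined into a system-level detection probability, is the standard one:

```python
def surveillance_sensitivity(groups, design_prevalence=1e-4):
    """Scenario-tree style surveillance sensitivity: probability that an
    infected population (at the design prevalence) is detected by at
    least one herd. groups = [(n_herds, relative_risk, herd_sensitivity)].
    Herd-level infection risk is proportional to relative risk, scaled so
    the population average equals the design prevalence."""
    n_total = sum(n for n, _, _ in groups)
    mean_rr = sum(n * rr for n, rr, _ in groups) / n_total
    p_miss = 1.0
    for n, rr, se in groups:
        p_inf = design_prevalence * rr / mean_rr   # adjusted infection risk
        p_miss *= (1.0 - p_inf * se) ** n
    return 1.0 - p_miss

# Hypothetical purchase-based risk groups (not the paper's inputs):
groups = [(40000, 1.0, 0.3), (15000, 2.0, 0.3), (5000, 4.0, 0.3)]
sse = surveillance_sensitivity(groups)
```

Risk-based designs gain efficiency precisely because higher-risk groups contribute more detection probability per herd surveyed, which is why targeted sampling can match the full system's sensitivity at lower cost.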
NASA Astrophysics Data System (ADS)
Brunsell, N. A.; Nippert, J. B.
2011-12-01
As the climate warms, it is generally acknowledged that the number and magnitude of extreme weather events will increase. We examined an ecophysiological model's responses to precipitation and temperature anomalies in relation to the mean and variance of annual precipitation along a pronounced precipitation gradient from eastern to western Kansas. This natural gradient creates a template of potential responses for both the mean and variance of annual precipitation to compare the timescales of carbon and water fluxes. Using data from several Ameriflux sites (KZU and KFS) and a third eddy covariance tower (K4B) along the gradient, BIOME-BGC was used to characterize water and carbon cycle responses to extreme weather events. Changes in the extreme value distributions were based on SRES A1B and A2 scenarios using an ensemble mean of 21 GCMs for the region, downscaled using a stochastic weather generator. We focused on changing the timing and magnitude of precipitation and altering the diurnal and seasonal temperature ranges. Biome-BGC was then forced with daily output from the stochastic weather generator, and we examined how potential changes in these extreme value distributions impact carbon and water cycling at the sites across the Kansas precipitation gradient at time scales ranging from daily to interannual. To decompose the time scales of response, we applied a wavelet-based information theory analysis approach. Results indicate impacts on soil moisture memory and carbon allocation processes, which vary in response to both the mean and variance of precipitation along the precipitation gradient. These results suggest the need for a more pronounced focus on ecosystem responses to extreme events across a range of temporal scales in order to fully characterize the water and carbon cycle responses to global climate change.
Demand driven decision support for efficient water resources allocation in irrigated agriculture
NASA Astrophysics Data System (ADS)
Schuetze, Niels; Grießbach, Ulrike; Röhm, Patric; Stange, Peter; Wagner, Michael; Seidel, Sabine; Werisch, Stefan; Barfus, Klemens
2014-05-01
Due to climate change, extreme weather conditions such as longer dry spells in the summer months may have an increasing impact on agriculture in Saxony (Eastern Germany). For this reason, and because of declining rainfall during the growing season, irrigation will become more important in Eastern Germany in the future. To cope with this higher demand for water, a new decision support framework is being developed that focuses on integrated management of both irrigation water supply and demand. For modeling the regional water demand, local (site-specific) water demand functions are used, derived from the optimized agronomic response at farm scale. To account for climate variability, the agronomic response is represented by stochastic crop water production functions (SCWPFs), which estimate yield as a function of the minimum amount of irrigation water. These functions take into account the different soil types, crops and stochastically generated climate scenarios. By applying mathematical interpolation and optimization techniques, the SCWPFs are used to compute the water demand under different constraints, for instance variable and fixed costs or the producer price. This generic approach enables computation both for multiple crops at farm scale and of the aggregated response to water pricing at regional scale, for full and deficit irrigation systems. Within the SAPHIR (SAxonian Platform for High Performance Irrigation) project a prototype decision support system is being developed that helps to evaluate combined water supply and demand management policies for an effective and efficient utilization of water in order to meet future demands. The prototype is implemented as a web-based decision support system built on a service-oriented geo-database architecture.
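The step from a crop water production function to a water demand function can be sketched deterministically (the stochastic version would average over the climate scenarios mentioned above). The saturating yield curve, crop price, and diminishing-returns constant below are all illustrative assumptions, not values from the project:

```python
import math

def irrigation_demand(water_price, crop_price=180.0, y_max=10.0, k=0.004):
    """Profit-maximizing irrigation depth for an illustrative saturating
    crop water production function y(w) = y_max * (1 - exp(-k*w)).
    At the optimum, the marginal value product of water equals its price:
        crop_price * y_max * k * exp(-k*w) = water_price."""
    mvp0 = crop_price * y_max * k        # marginal value of the first unit
    if water_price >= mvp0:
        return 0.0                       # water too expensive: no irrigation
    return -math.log(water_price / mvp0) / k

cheap = irrigation_demand(0.5)       # low water price: high demand
expensive = irrigation_demand(3.0)   # high water price: lower demand
```

Inverting profit-maximizing behavior in this way is what lets the framework aggregate site-specific responses into a regional demand curve for water-pricing policies.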
Logistical constraints lead to an intermediate optimum in outbreak response vaccination
Shea, Katriona; Ferrari, Matthew
2018-01-01
Dynamic models in disease ecology have historically evaluated vaccination strategies under the assumption that they are implemented homogeneously in space and time. However, this approach fails to formally account for operational and logistical constraints inherent in the distribution of vaccination to the population at risk. Thus, feedback between the dynamic processes of vaccine distribution and transmission might be overlooked. Here, we present a spatially explicit, stochastic Susceptible-Infected-Recovered-Vaccinated model that highlights the density-dependence and spatial constraints of various diffusive strategies of vaccination during an outbreak. The model integrates an agent-based process of disease spread with a partial differential process of vaccination deployment. We characterize the vaccination response in terms of a diffusion rate that describes the distribution of vaccination to the population at risk from a central location. This generates an explicit trade-off between slow diffusion, which concentrates effort near the central location, and fast diffusion, which spreads a fixed vaccination effort thinly over a large area. We use stochastic simulation to identify the optimum vaccination diffusion rate as a function of population density, interaction scale, transmissibility, and vaccine intensity. Our results show that, conditional on a timely response, the optimal strategy for minimizing outbreak size is to distribute vaccination resource at an intermediate rate: fast enough to outpace the epidemic, but slow enough to achieve local herd immunity. If the response is delayed, however, the optimal strategy for minimizing outbreak size changes to a rapidly diffusive distribution of vaccination effort. The latter may also result in significantly larger outbreaks, thus suggesting a benefit of allocating resources to timely outbreak detection and response. PMID:29791432
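The diffusion half of the trade-off, that spreading a fixed vaccination effort faster necessarily makes it locally thinner, can be shown with a one-dimensional finite-difference sketch. The coupling to the agent-based SIR process, which is what produces the intermediate optimum, is omitted; grid size, diffusivities, and time step are illustrative:

```python
import numpy as np

def diffuse(effort, d, steps, dx=1.0, dt=0.1):
    """Explicit finite-difference diffusion of a vaccination-effort
    profile on a 1-D line with zero-flux (mass-conserving) boundaries."""
    u = effort.astype(float).copy()
    for _ in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        lap[0], lap[-1] = (u[1] - u[0]) / dx**2, (u[-2] - u[-1]) / dx**2
        u += d * dt * lap
    return u

bolus = np.zeros(101)
bolus[50] = 100.0                          # all effort starts at the depot
slow = diffuse(bolus, d=0.5, steps=200)    # concentrated near the depot
fast = diffuse(bolus, d=4.0, steps=200)    # thin but wide coverage
```

Total effort is conserved in both runs; only its spatial allocation differs, which is exactly the dial the optimization in the paper turns.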
Effects of Extrinsic Mortality on the Evolution of Aging: A Stochastic Modeling Approach
Shokhirev, Maxim Nikolaievich; Johnson, Adiv Adam
2014-01-01
The evolutionary theories of aging are useful for gaining insights into the complex mechanisms underlying senescence. Classical theories argue that high levels of extrinsic mortality should select for the evolution of shorter lifespans and earlier peak fertility. Non-classical theories, in contrast, posit that an increase in extrinsic mortality could select for the evolution of longer lifespans. Although numerous studies support the classical paradigm, recent data challenge classical predictions, finding that high extrinsic mortality can select for the evolution of longer lifespans. To further elucidate the role of extrinsic mortality in the evolution of aging, we implemented a stochastic, agent-based, computational model. We used a simulated annealing optimization approach to predict which model parameters predispose populations to evolve longer or shorter lifespans in response to increased levels of predation. We report that longer lifespans evolved in the presence of rising predation if the cost of mating is relatively high and if energy is available in excess. Conversely, we found that dramatically shorter lifespans evolved when mating costs were relatively low and food was relatively scarce. We also analyzed the effects of increased predation on various parameters related to density dependence and energy allocation. Longer and shorter lifespans were accompanied by increased and decreased investments of energy into somatic maintenance, respectively. Similarly, earlier and later maturation ages were accompanied by increased and decreased energetic investments into early fecundity, respectively. Higher predation significantly decreased the total population size, enlarged the shared resource pool, and redistributed energy reserves for mature individuals. 
These results both corroborate and refine classical predictions, demonstrating a population-level trade-off between longevity and fecundity and identifying conditions that produce both classical and non-classical lifespan effects. PMID:24466165
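The simulated annealing optimization used to search the model's parameter space can be sketched generically. The toy one-dimensional objective, step size, cooling schedule, and iteration count below are all hypothetical; the paper applies the same idea to high-dimensional life-history parameters:

```python
import math
import random

def anneal(objective, x0, steps=4000, t0=2.0, seed=7):
    """Minimal simulated annealing: Gaussian perturbations accepted by the
    Metropolis criterion under a geometrically cooling temperature."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)
        fc = objective(cand)
        # Always accept improvements; accept worse moves with prob e^(-df/t)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand, fc
        t *= 0.999
    return best

# Toy objective standing in for "mismatch with the target evolutionary outcome"
minimum = anneal(lambda x: (x - 3.0) ** 2, x0=-10.0)
```

Accepting occasional uphill moves early on lets the search escape local optima, which matters when the fitness landscape of an agent-based evolutionary model is rugged.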
Velasco, Vera Marjorie Elauria; Mansbridge, John; Bremner, Samantha; Carruthers, Kimberley; Summers, Peter S; Sung, Wilson W L; Champigny, Marc J; Weretilnyk, Elizabeth A
2016-08-01
Eutrema salsugineum, a halophytic relative of Arabidopsis thaliana, was subjected to varying phosphate (Pi) treatments. Arabidopsis seedlings grown on 0.05 mM Pi displayed shortened primary roots, higher lateral root density and reduced shoot biomass allocation relative to those on 0.5 mM Pi, whereas Eutrema seedlings showed no difference in lateral root density and shoot biomass allocation. While a low Fe concentration mitigated the Pi deficiency response for Arabidopsis, Eutrema root architecture was unaltered, but adding NaCl increased Eutrema lateral root density almost 2-fold. Eutrema and Arabidopsis plants grown on soil without added Pi for 4 weeks had low shoot and root Pi content. Pi-deprived, soil-grown Arabidopsis plants were stunted with senescing older leaves, whereas Eutrema plants were visually indistinguishable from 2.5 mM Pi-supplemented plants. Genes associated with Pi starvation were analysed by RT-qPCR. EsIPS2, EsPHT1;4 and EsPAP17 showed up-regulated expression in Pi-deprived Eutrema plants, while EsPHR1, EsWRKY75 and EsRNS1 showed no induction. Absolute quantification of transcripts indicated that PHR1, WRKY75 and RNS1 were expressed at higher levels in Eutrema plants relative to those in Arabidopsis regardless of external Pi. The low phenotypic plasticity Eutrema displays to Pi supply is consistent with adaptation to chronic Pi deprivation in its extreme natural habitat. © 2016 The Authors. Plant, Cell & Environment published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Boll, Matias G.
The objective of this study is to obtain a realistic evaluation of the potential role of microalgae as a biodiesel feedstock in a tropical setting. First, microalgae economics are estimated, including the detailed design of a 400 ha microalgae open pond production farm together with calculations of the microalgae biomass and crude oil production costs. Sensitivity analysis and a stochastic evaluation of the microalgae venture's chances for profit are also included. Next, microalgae potential for biodiesel production is compared to traditional oil crops such as soybeans and African palm. This comparison is performed using the Northeast Region (NER) of Brazil as background. Six potential biodiesel feedstock sources produced in the NER and microalgae are compared considering selected environmental, economic and social sustainability indicators. Finally, in the third chapter, the study proposes a cropland allocation model for the NER. The model aims to offer insights to the decision maker concerning biofuel development strategies and their impact on regional agricultural feedstock production. In the model, cropland allocation among three agricultural feedstock sectors, namely staple food, commodity export and biofuel, is optimized through the multiple-objective technique referred to as compromise programming (CP). Our results indicate a projected microalgae total production cost of R$ 78,359 ha-1 (US$ 43,533), with the following breakdown: R$ 34,133 ha-1 (US$ 18,963) for operating costs and R$ 44,226 ha-1 (US$ 24,570) for overhead (ownership) costs. Our stochastic analysis indicates that microalgae production under the conditions assumed in the baseline scenario of this study has a 0% chance of presenting a positive NPV at a microalgae crude oil price of R$ 1.86. This price corresponds to an international oil price of around US$ 77 bbl-1.
To obtain a reasonable investment return (IRR = 12%) from the microalgae farm, an international oil price as high as US$ 461 bbl-1 is required. Despite the advantage of using about 14 times less cropland area (0.13 ha boe-1), microalgae presented significant disadvantages compared to some of the traditional oil crops. Among these are the significant amounts of N fertilizer and water demanded by microalgae production, namely 205 kg and 4,990 boe-1, about 132% and 30% higher, respectively, than the second-highest values among the crops compared in this study. Optimized CP scenarios expanded annual cropland allocation to 14.58 million ha in the NER in year 2017, compared to 11.04 and 12.81 million ha in the current (2007) and baseline (2017) scenarios, respectively. In comparison to the baseline scenario, the cropland expansions, allied to the shift of commodity-export cropland to the biofuel production sector in the CP scenarios, significantly increased the NER fuel autonomy (95%) and reduced its baseline comprehensive feedstock trade balance deficit of R$ 5,126 million by 79%. Contrary to the concerns usually raised about biofuel development, our model indicates that in the NER case it is the commodity export sector, rather than the staple food agricultural feedstock production sector, that is most affected by the biofuel cropland allocation demand. When compared to traditional oil crops, microalgae-based biodiesel scenarios could not significantly improve regional staple food autonomy, increasing this objective by only 1%. The NER fuel autonomy, in turn, is positively impacted in the microalgae scenarios, but the increment compared to the traditional oil crops is rather small, namely 2% and 7% at the B5 and B10 levels, respectively. These results indicate that the potential advantages expected from the introduction of microalgae-based biodiesel did not materialize for the NER.
It is concluded that the adoption of microalgae-based biodiesel is not an interesting biofuel alternative for the NER of Brazil for the next ten years.
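A "chance of a positive NPV" figure of the kind quoted above comes from a Monte Carlo exercise. The sketch below is a deliberately crude stand-in: the capital and operating figures loosely echo the study's per-hectare totals but are repurposed as illustrative cash flows, and the nominal yield, yield variability, horizon, and discount rate are all hypothetical:

```python
import random

def npv(cash_flows, rate=0.12):
    """Net present value of a list of cash flows, one per year (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def chance_of_profit(oil_price, capex=78359.0, opex=34133.0,
                     yield_boe=100.0, years=10, trials=2000, seed=3):
    """Monte Carlo share of trials with NPV > 0 for one producing hectare;
    annual yields vary log-normally around the nominal value."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        flows = [-capex]
        for _ in range(years):
            q = yield_boe * rng.lognormvariate(0.0, 0.2)
            flows.append(q * oil_price - opex)
        wins += npv(flows) > 0
    return wins / trials
```

Under these toy assumptions a US$ 77-per-barrel price never covers operating costs, reproducing the qualitative "0% chance" result, while a sufficiently high price makes profit nearly certain.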
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Qichun; Zhou, Jinglin; Wang, Hong
In this paper, stochastic coupling attenuation is investigated for a class of multi-variable bilinear stochastic systems and a novel output feedback m-block backstepping controller with linear estimator is designed, where gradient descent optimization is used to tune the design parameters of the controller. It has been shown that the trajectories of the closed-loop stochastic systems are bounded in probability sense and the stochastic coupling of the system outputs can be effectively attenuated by the proposed control algorithm. Moreover, the stability of the stochastic systems is analyzed and the effectiveness of the proposed method has been demonstrated using a simulated example.
Optimal Control for Stochastic Delay Evolution Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Qingxin, E-mail: mqx@hutc.zj.cn; Shen, Yang, E-mail: skyshen87@gmail.com
2016-08-15
In this paper, we investigate a class of infinite-dimensional optimal control problems, where the state equation is given by a stochastic delay evolution equation with random coefficients, and the corresponding adjoint equation is given by an anticipated backward stochastic evolution equation. We first prove the continuous dependence theorems for stochastic delay evolution equations and anticipated backward stochastic evolution equations, and show the existence and uniqueness of solutions to anticipated backward stochastic evolution equations. Then we establish necessary and sufficient conditions for optimality of the control problem in the form of Pontryagin’s maximum principles. To illustrate the theoretical results, we apply stochastic maximum principles to study two examples: an infinite-dimensional linear-quadratic control problem with delay and an optimal control of a Dirichlet problem for a stochastic partial differential equation with delay. Further applications of the two examples to a Cauchy problem for a controlled linear stochastic partial differential equation and an optimal harvesting problem are also considered.
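The shape of such a maximum principle can be recalled schematically. The display below is a finite-dimensional caricature (the paper's setting is infinite-dimensional with random coefficients, which is suppressed here); $\delta$ is the delay, $(p,q)$ the adjoint pair solving the anticipated backward equation, and the conditional expectation is what makes the adjoint equation "anticipated":

```latex
\begin{aligned}
dX(t) &= b\bigl(t, X(t), X(t-\delta), u(t)\bigr)\,dt
        + \sigma\bigl(t, X(t), X(t-\delta), u(t)\bigr)\,dW(t),\\
-dp(t) &= \Bigl[\,\partial_x H(t)
        + \mathbb{E}\bigl[\partial_{x_\delta} H(t+\delta)\,\big|\,\mathcal{F}_t\bigr]\Bigr]\,dt
        - q(t)\,dW(t),\\
H(t,x,x_\delta,u,p,q) &= \langle p,\, b(t,x,x_\delta,u)\rangle
        + \operatorname{tr}\!\bigl(q^{\top}\sigma(t,x,x_\delta,u)\bigr),\\
H\bigl(t, X(t), X(t-\delta), \bar u(t), p(t), q(t)\bigr)
 &= \max_{u\in U} H\bigl(t, X(t), X(t-\delta), u, p(t), q(t)\bigr)
        \quad \text{a.e., a.s.}
\end{aligned}
```

Here $H(t)$ abbreviates the Hamiltonian evaluated along the optimal trajectory; the future-time term $\partial_{x_\delta}H(t+\delta)$ is what the anticipated backward stochastic evolution equation must accommodate.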
Stochastic Community Assembly: Does It Matter in Microbial Ecology?
Zhou, Jizhong; Ning, Daliang
2017-12-01
Understanding the mechanisms controlling community diversity, functions, succession, and biogeography is a central, but poorly understood, topic in ecology, particularly in microbial ecology. Although stochastic processes are believed to play nonnegligible roles in shaping community structure, their importance relative to deterministic processes is hotly debated. The importance of ecological stochasticity in shaping microbial community structure is far less appreciated. Some of the main reasons for such heavy debates are the difficulty in defining stochasticity and the diverse methods used for delineating stochasticity. Here, we provide a critical review and synthesis of data from the most recent studies on stochastic community assembly in microbial ecology. We then describe both stochastic and deterministic components embedded in various ecological processes, including selection, dispersal, diversification, and drift. We also describe different approaches for inferring stochasticity from observational diversity patterns and highlight experimental approaches for delineating ecological stochasticity in microbial communities. In addition, we highlight research challenges, gaps, and future directions for microbial community assembly research. Copyright © 2017 American Society for Microbiology.
Liu, Meng; Wang, Ke
2010-12-07
This is a continuation of our paper [Liu, M., Wang, K., 2010. Persistence and extinction of a stochastic single-species model under regime switching in a polluted environment, J. Theor. Biol. 264, 934-944]. Taking both white noise and colored noise into account, a stochastic single-species model under regime switching in a polluted environment is studied. Sufficient conditions for extinction, stochastic nonpersistence in the mean, stochastic weak persistence and stochastic permanence are established. The threshold between stochastic weak persistence and extinction is obtained. The results show that a different type of noise has a different effect on the survival results. Copyright © 2010 Elsevier Ltd. All rights reserved.
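The class of models studied can be sketched as a stochastic logistic equation modulated by a Markov chain. The display is schematic (symbols follow common usage, not necessarily the paper's exact notation): $\xi(t)$ is a continuous-time Markov chain representing the colored noise (regime switching), $B(t)$ a Brownian motion representing the white noise:

```latex
% Single-species model under regime switching (schematic):
dx(t) = x(t)\Bigl[\bigl(r(\xi(t)) - a(\xi(t))\,x(t)\bigr)\,dt
        + \sigma(\xi(t))\,dB(t)\Bigr]
```

In models of this type, the persistence-extinction threshold typically involves the stationary-distribution average $\sum_i \pi_i \bigl(r(i) - \sigma^2(i)/2\bigr)$ across regimes, so a regime that is benign in its growth rate $r(i)$ can still push the population toward extinction through its noise intensity $\sigma(i)$.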
Bryson, Steve; Thomson, Christy A; Risnes, Louise F; Dasgupta, Somnath; Smith, Kenneth; Schrader, John W; Pai, Emil F
2016-06-01
The human Ab response to certain pathogens is oligoclonal, with preferred IgV genes being used more frequently than others. A pair of such preferred genes, IGVK3-11 and IGVH3-30, contributes to the generation of protective Abs directed against the 23F serotype of the pneumococcal capsular polysaccharide of Streptococcus pneumoniae and against the AD-2S1 peptide of the gB membrane protein of human CMV. Structural analyses of Fab fragments of mAbs 023.102 and pn132p2C05 in complex with portions of the 23F polysaccharide revealed five germline-encoded residues in contact with the key component, l-rhamnose. In the case of the AD-2S1 peptide, the KE5 Fab fragment complex identified nine germline-encoded contact residues. Two of these germline-encoded residues, Arg91L and Trp94L, contact both the l-rhamnose and the AD-2S1 peptide. Comparison of the respective paratopes that bind to carbohydrate and protein reveals that stochastic diversity in both CDR3 loops alone almost exclusively accounts for their divergent specificity. Combined evolutionary pressure by human CMV and the 23F serotype of S. pneumoniae acted on the IGVK3-11 and IGVH3-30 genes as demonstrated by the multiple germline-encoded amino acids that contact both l-rhamnose and AD-2S1 peptide. Copyright © 2016 by The American Association of Immunologists, Inc.
Maximum principle for a stochastic delayed system involving terminal state constraints.
Wen, Jiaqiang; Shi, Yufeng
2017-01-01
We investigate a stochastic optimal control problem where the controlled system is described by a stochastic differential delayed equation and, at the terminal time, the state is constrained to a convex set. We first introduce an equivalent backward delayed system described by a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state-constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main result.
Momentum Maps and Stochastic Clebsch Action Principles
NASA Astrophysics Data System (ADS)
Cruzeiro, Ana Bela; Holm, Darryl D.; Ratiu, Tudor S.
2018-01-01
We derive stochastic differential equations whose solutions follow the flow of a stochastic nonlinear Lie algebra operation on a configuration manifold. For this purpose, we develop a stochastic Clebsch action principle, in which the noise couples to the phase space variables through a momentum map. This special coupling simplifies the structure of the resulting stochastic Hamilton equations for the momentum map. In particular, these stochastic Hamilton equations collectivize for Hamiltonians that depend only on the momentum map variable. The Stratonovich equations are derived from the Clebsch variational principle and then converted into Itô form. In comparing the Stratonovich and Itô forms of the stochastic dynamical equations governing the components of the momentum map, we find that the Itô contraction term turns out to be a double Poisson bracket. Finally, we present the stochastic Hamiltonian formulation of the collectivized momentum map dynamics and derive the corresponding Kolmogorov forward and backward equations.
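The Stratonovich-to-Itô conversion the authors perform can be recalled in its generic finite-dimensional form (schematic; in the paper this is applied to the Hamiltonian vector fields of the momentum map, which is how the correction term becomes a double Poisson bracket):

```latex
% Generic Stratonovich SDE and its Ito form: the drift acquires a
% correction built from the noise vector fields \sigma_k.
dx = a(x)\,dt + \sum_k \sigma_k(x)\circ dW^k_t
\quad\Longleftrightarrow\quad
dx = \Bigl[a(x) + \tfrac{1}{2}\sum_k \bigl(\sigma_k\!\cdot\!\nabla\bigr)\sigma_k(x)\Bigr]dt
     + \sum_k \sigma_k(x)\,dW^k_t
```

The correction $\tfrac{1}{2}(\sigma_k\!\cdot\!\nabla)\sigma_k$ is exactly the term whose geometric interpretation, as a double bracket, the paper identifies for momentum-map dynamics.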
Dynamics of non-holonomic systems with stochastic transport
NASA Astrophysics Data System (ADS)
Holm, D. D.; Putkaradze, V.
2018-01-01
This paper formulates a variational approach for treating observational uncertainty and/or computational model errors as stochastic transport in dynamical systems governed by action principles under non-holonomic constraints. For this purpose, we derive, analyse and numerically study the example of an unbalanced spherical ball rolling under gravity along a stochastic path. Our approach uses the Hamilton-Pontryagin variational principle, constrained by a stochastic rolling condition, which we show is equivalent to the corresponding stochastic Lagrange-d'Alembert principle. In the example of the rolling ball, the stochasticity represents uncertainty in the observation and/or error in the computational simulation of the angular velocity of rolling. The influence of the stochasticity on the deterministically conserved quantities is investigated both analytically and numerically. Our approach applies to a wide variety of stochastic, non-holonomically constrained systems, because it preserves the mathematical properties inherited from the variational principle.
Non-exponential kinetics of unfolding under a constant force.
Bell, Samuel; Terentjev, Eugene M
2016-11-14
We examine the population dynamics of naturally folded globular polymers, with a super-hydrophobic "core" inserted at a prescribed point in the polymer chain, unfolding under an application of external force, as in AFM force-clamp spectroscopy. This acts as a crude model for a large class of folded biomolecules with hydrophobic or hydrogen-bonded cores. We find that the introduction of super-hydrophobic units leads to a stochastic variation in the unfolding rate, even when the positions of the added monomers are fixed. This leads to the average non-exponential population dynamics, which is consistent with a variety of experimental data and does not require any intrinsic quenched disorder that was traditionally thought to be at the origin of non-exponential relaxation laws.
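The qualitative claim, that stochastic variation in the unfolding rate yields non-exponential population kinetics, follows from Jensen's inequality: averaging $e^{-kt}$ over a distribution of rates always exceeds $e^{-\langle k\rangle t}$, producing a slower-than-exponential tail. The log-normal rate disorder below is illustrative, not the distribution derived in the paper:

```python
import math
import random

def survival(t, rates):
    """Population-averaged survival for a mixture of exponential decays."""
    return sum(math.exp(-k * t) for k in rates) / len(rates)

rng = random.Random(42)
# Static disorder in the unfolding rate (log-normal spread, illustrative):
rates = [0.5 * rng.lognormvariate(0.0, 1.0) for _ in range(5000)]
k_mean = sum(rates) / len(rates)

t = 3.0
mixture = survival(t, rates)       # disordered population
single = math.exp(-k_mean * t)     # single exponential, same mean rate
```

The gap between `mixture` and `single` grows with the spread of the rate distribution, which is why rate heterogeneity alone, without intrinsic quenched disorder, can reproduce the observed relaxation laws.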
Baden, Lindsey R; Karita, Etienne; Mutua, Gaudensia; Bekker, Linda-Gail; Gray, Glenda; Page-Shipp, Liesl; Walsh, Stephen R; Nyombayire, Julien; Anzala, Omu; Roux, Surita; Laher, Fatima; Innes, Craig; Seaman, Michael S; Cohen, Yehuda Z; Peter, Lauren; Frahm, Nicole; McElrath, M Juliana; Hayes, Peter; Swann, Edith; Grunenberg, Nicole; Grazia-Pau, Maria; Weijtens, Mo; Sadoff, Jerry; Dally, Len; Lombardo, Angela; Gilmour, Jill; Cox, Josephine; Dolin, Raphael; Fast, Patricia; Barouch, Dan H; Laufer, Dagna S
2016-03-01
A prophylactic HIV-1 vaccine is a global health priority. To assess a novel vaccine platform as a prophylactic HIV-1 regimen. Randomized, double-blind, placebo-controlled trial. Both participants and study personnel were blinded to treatment allocation. (ClinicalTrials.gov: NCT01215149). United States, East Africa, and South Africa. Healthy adults without HIV infection. 2 HIV-1 vaccines (adenovirus serotype 26 with an HIV-1 envelope A insert [Ad26.EnvA] and adenovirus serotype 35 with an HIV-1 envelope A insert [Ad35.Env], both administered at a dose of 5 × 10^10 viral particles) in homologous and heterologous combinations. Safety and immunogenicity and the effect of baseline vector immunity. 217 participants received at least 1 vaccination, and 210 (>96%) completed follow-up. No vaccine-associated serious adverse events occurred. All regimens were generally well-tolerated. All regimens elicited humoral and cellular immune responses in nearly all participants. Preexisting Ad26- or Ad35-neutralizing antibody titers had no effect on vaccine safety and little effect on immunogenicity. In both homologous and heterologous regimens, the second vaccination significantly increased EnvA antibody titers (approximately 20-fold from the median enzyme-linked immunosorbent assay titers of 30-300 to 3000). The heterologous regimen of Ad26-Ad35 elicited significantly higher EnvA antibody titers than Ad35-Ad26. T-cell responses were modest and lower in East Africa than in South Africa and the United States. Because the 2 envelope inserts were not identical, the boosting responses were complex to interpret. Durability of the immune responses elicited beyond 1 year is unknown. Both vaccines elicited significant immune responses in all populations. Baseline vector immunity did not significantly affect responses. Second vaccinations in all regimens significantly boosted EnvA antibody titers, although vaccine order in the heterologous regimen had a modest effect on the immune response.
International AIDS Vaccine Initiative, National Institutes of Health, Ragon Institute, Crucell Holland.
Time-ordered product expansions for computational stochastic system biology.
Mjolsness, Eric
2013-06-01
The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie's stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems.
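The SSA derived above is compact in practice. The following is a minimal, generic sketch of Gillespie's direct method, applied to a hypothetical one-reaction isomerization network A → B (the rate constant and molecule counts are illustrative, not taken from the paper):

```python
import random

def gillespie_ssa(x0, reactions, t_max, seed=0):
    """Minimal Gillespie direct-method SSA.

    x0        : dict of initial species counts
    reactions : list of (propensity_fn, stoichiometry) pairs; each
                propensity_fn(x) returns the firing rate in state x, and
                stoichiometry maps species names to integer changes.
    Returns the trajectory as a list of (time, state) pairs.
    """
    rng = random.Random(seed)
    x, t = dict(x0), 0.0
    trajectory = [(t, dict(x))]
    while t < t_max:
        props = [rate(x) for rate, _ in reactions]
        a0 = sum(props)
        if a0 <= 0:                       # no reaction can fire any more
            break
        t += rng.expovariate(a0)          # exponential waiting time
        r = rng.uniform(0.0, a0)          # choose a reaction proportionally
        cum = 0.0
        for p, (_, stoich) in zip(props, reactions):
            cum += p
            if r <= cum:
                for species, change in stoich.items():
                    x[species] += change
                break
        trajectory.append((t, dict(x)))
    return trajectory

# Hypothetical network: A -> B with propensity k * [A]
k = 0.5
traj = gillespie_ssa(
    {"A": 100, "B": 0},
    [(lambda x: k * x["A"], {"A": -1, "B": +1})],
    t_max=50.0,
)
final = traj[-1][1]
print(final["A"] + final["B"])  # total molecule count is conserved: 100
```

Each iteration samples one Feynman-diagram "event" in the time-ordered expansion: an exponentially distributed waiting time followed by a reaction chosen with probability proportional to its propensity.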
NGST Science Instruments and Process
NASA Technical Reports Server (NTRS)
Mather, John
1999-01-01
Possible NGST (Next Generation Space Telescope) instruments have been studied by NASA, ESA (European Space Agency), and CSA (Canadian Space Agency) teams, and their reports were presented at this meeting and published on the NGST web sites. The instrument capabilities will be evaluated by the Ad Hoc Science Working Group and the technical readiness will be reviewed by a technical panel. Recommendations will be made to the NASA Project Scientist, who will present a report for public comment. NASA, ESA, and CSA will then allocate instrument responsibilities in early 2000. NASA will choose its scientific investigations with instruments in 2002.
Variational principles for stochastic fluid dynamics
Holm, Darryl D.
2015-01-01
This paper derives stochastic partial differential equations (SPDEs) for fluid dynamics from a stochastic variational principle (SVP). The paper proceeds by taking variations in the SVP to derive stochastic Stratonovich fluid equations; writing their Itô representation; and then investigating the properties of these stochastic fluid models in comparison with each other, and with the corresponding deterministic fluid models. The circulation properties of the stochastic Stratonovich fluid equations are found to closely mimic those of the deterministic ideal fluid models. As with deterministic ideal flows, motion along the stochastic Stratonovich paths also preserves the helicity of the vortex field lines in incompressible stochastic flows. However, these Stratonovich properties are not apparent in the equivalent Itô representation, because they are disguised by the quadratic covariation drift term arising in the Stratonovich to Itô transformation. This term is a geometric generalization of the quadratic covariation drift term already found for scalar densities in Stratonovich's famous 1966 paper. The paper also derives motion equations for two examples of stochastic geophysical fluid dynamics; namely, the Euler–Boussinesq and quasi-geostrophic approximations. PMID:27547083
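The Stratonovich-to-Itô drift correction mentioned above can be checked numerically on the simplest multiplicative-noise example, dX = σX ∘ dW, whose Stratonovich solution is X(t) = X(0) exp(σW(t)). The sketch below is illustrative only (parameters are made up; a midpoint/Heun rule stands in for the Stratonovich integral, and Euler-Maruyama for the Itô form with the added (σ²/2)X dt drift):

```python
import math
import random

def paths(sigma=0.4, T=1.0, n=20000, seed=1):
    """Integrate the same driving noise two ways and compare with the
    closed-form Stratonovich solution exp(sigma * W_T)."""
    rng = random.Random(seed)
    dt = T / n
    x_strat, x_ito, w = 1.0, 1.0, 0.0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        # Stratonovich dX = sigma X o dW, midpoint (Heun) rule
        x_pred = x_strat + sigma * x_strat * dw
        x_strat += 0.5 * sigma * (x_strat + x_pred) * dw
        # Equivalent Ito SDE carries the quadratic-covariation drift:
        # dX = (sigma^2 / 2) X dt + sigma X dW   (Euler-Maruyama)
        x_ito += 0.5 * sigma**2 * x_ito * dt + sigma * x_ito * dw
        w += dw
    return x_strat, x_ito, math.exp(sigma * w)

xs, xi, exact = paths()
rel = lambda a: abs(a - exact) / exact
# both schemes track the same solution only because the Ito form
# includes the correction drift; dropping it gives a biased path
print(rel(xs), rel(xi))
```

Removing the `0.5 * sigma**2 * x_ito * dt` term makes the Itô path drift systematically below the Stratonovich one, which is exactly the "disguised" correction term the abstract refers to.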
Universal fuzzy integral sliding-mode controllers for stochastic nonlinear systems.
Gao, Qing; Liu, Lu; Feng, Gang; Wang, Yong
2014-12-01
In this paper, the universal integral sliding-mode controller problem for the general stochastic nonlinear systems modeled by Itô type stochastic differential equations is investigated. One of the main contributions is that a novel dynamic integral sliding mode control (DISMC) scheme is developed for stochastic nonlinear systems based on their stochastic T-S fuzzy approximation models. The key advantage of the proposed DISMC scheme is that two very restrictive assumptions in most existing ISMC approaches to stochastic fuzzy systems have been removed. Based on the stochastic Lyapunov theory, it is shown that the closed-loop control system trajectories are kept on the integral sliding surface almost surely from the initial time onward, and moreover, the stochastic stability of the sliding motion can be guaranteed in terms of linear matrix inequalities. Another main contribution is that the results of universal fuzzy integral sliding-mode controllers for two classes of stochastic nonlinear systems, along with constructive procedures to obtain the universal fuzzy integral sliding-mode controllers, are provided, respectively. Simulation results from an inverted pendulum example are presented to illustrate the advantages and effectiveness of the proposed approaches.
Bitsika, Vicki; Sharpley, Christopher F; Sweeney, John A; McFarlane, James R
2014-03-29
Anxiety and Autistic Disorder (AD) are both neurological conditions, and the two disorders share some features that make it difficult to precisely allocate specific symptoms to each. HPA and SAM axis activities have been conclusively associated with anxiety, and may provide a method of validating anxiety rating scale assessments given by parents and by children with AD about those children. Data on HPA axis (salivary cortisol) and SAM axis (salivary alpha-amylase) responses were collected from a sample of 32 high-functioning boys (M age = 11 yr) with an Autistic Disorder (AD) and were compared with the boys' and their mothers' ratings of the boys' anxiety. There was a significant difference between the self-ratings given by the boys and the ratings given about them by their mothers. Further, only the boys' self-ratings of their anxiety significantly predicted the HPA axis responses, and neither set of ratings was significantly related to SAM axis responses. Some boys showed cortisol responses similar to those previously reported in children who had suffered chronic and severe anxiety arising from stressful social interactions. As well as suggesting that some boys with an AD can provide valid self-assessments of their anxiety, these data also point to the presence of very high levels of chronic HPA-axis arousal, and consequent chronic anxiety, in these boys. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi
2015-01-01
We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo scheme, we explore the global latency as the resource assignment at a given time instant moves from optimal to suboptimal. By computing the two-point correlation function of the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from a peaked to a spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease in performance is found above Tc independently of the workload. The globally optimized computational resource allocation and network routing defines a baseline for a future comparison of the transition behavior with existing routing strategies [3,4] for different network topologies.
Credit allocation for research institutes
NASA Astrophysics Data System (ADS)
Wang, J.-P.; Guo, Q.; Yang, K.; Han, J.-T.; Liu, J.-G.
2017-05-01
Assessing the research performance of multiple institutes is a challenging task. Since it is unfair to divide a paper's credit equally among institutes that appear in different positions in its author list, in this paper we present a credit allocation method (CAM) with a weighted order coefficient for multiple institutes. The results for the APS dataset with 18987 institutes show that the top-ranked institutes obtained by the CAM method correspond to well-known universities and research labs with high reputations in physics. Moreover, we evaluate the performance of the CAM method when citation links are added or rewired randomly, quantified by Kendall's tau and the Jaccard index. The experimental results indicate that the CAM method is more robust than the total number of citations (TC) method and Shen's method. Finally, we report the top 20 Chinese universities in physics obtained by the CAM method. The method is, however, valid for any other branch of science, not just physics. The proposed method also provides universities and policy makers with an effective tool to quantify and balance the academic performance of universities.
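Order-weighted credit allocation of the kind described above can be sketched in a few lines. The linear weights below are a hypothetical choice for illustration (the CAM paper derives its own order coefficients), and the papers and citation counts are invented:

```python
def allocate_credit(papers):
    """Toy order-weighted credit allocation across institutes.

    papers: list of (institute_list, citations) pairs. The j-th of m
    institutes (1-based) receives weight (m - j + 1) / (m*(m+1)/2),
    so earlier positions get larger shares and weights sum to 1.
    """
    credit = {}
    for institutes, citations in papers:
        m = len(institutes)
        total = m * (m + 1) / 2          # 1 + 2 + ... + m
        for k, inst in enumerate(institutes):
            w = (m - k) / total          # k is 0-based position
            credit[inst] = credit.get(inst, 0.0) + w * citations
    return credit

# invented example: two papers, two institutes
papers = [(["MIT", "Tsinghua"], 30), (["Tsinghua"], 10)]
c = allocate_credit(papers)
print(round(c["MIT"], 2), round(c["Tsinghua"], 2))  # 20.0 20.0
```

On the first paper MIT (first position) gets 2/3 of 30 citations and Tsinghua 1/3; the single-institute paper gives Tsinghua full credit, so both end up with 20.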
Essock, Susan M; Drake, Robert E; Frank, Richard G; McGuire, Thomas G
2003-01-01
The purpose of clinical research is to answer this question: Would a new treatment, when added to the existing range of treatment options available in practice, help patients? Randomized controlled trials (RCTs)--in particular, double-blind RCTs--have important methodological advantages over observational studies for addressing this question. These advantages, however, come at a price. RCTs compare treatments using a particular allocation rule for assigning patients to treatments (random assignment) that does not mimic real-world practice. "Favorable" results from an RCT indicating that a new treatment is superior to existing treatments are neither necessary nor sufficient for establishing a "yes" answer to the question posed above. Modeled on an experimental design, RCTs are expensive in time and money and must compare simple differences in treatments. Findings have a high internal validity but may not address the needs of the field, particularly where treatment is complex and rapidly evolving. Design of clinical research needs to take account of the way treatments are allocated in actual practice and include flexible designs to answer important questions most effectively.
Meloche, K J; Fancher, B I; Emmerson, D A; Bilgili, S F; Dozier, W A
2018-05-01
An experiment was conducted to determine if myopathies of the Pectoralis major muscles are influenced by differences in growth trajectory achieved through a controlled feeding program. Male Yield Plus × Ross 708 broiler chicks were placed into 28 pens (25 birds/pen) equipped with plastic slats to prevent coprophagy. All birds received identical starter (1 to 10 d), grower (11 to 32 d), finisher (33 to 42 d), and withdrawal (43 to 50 d) diets that were formulated to meet or exceed nutrient recommendations of the primary breeder. Each pen of birds was randomly assigned to one of 4 pair-feeding programs (TRT 1: ad libitum; TRT 2: 95% of TRT 1 intake; TRT 3: 90% of TRT 1 intake; and TRT 4: 85% of TRT 1 intake) with 7 replicate pens per treatment. Feed intake and mortality were recorded daily. Individual BW was recorded at 31, 42, and 49 d of age. Blood samples were collected from 4 birds per pen at 31, 41, and 48 d of age and subsequently analyzed for plasma creatine kinase (CK) and lactate dehydrogenase (LDH). At 32, 43, and 50 d of age, 4 birds per pen were euthanized for necropsy. The right breast fillet of each bird was visually scored for white striping (WS) and wooden breast (WB). Linear decreases (P ≤ 0.01) in feed intake, BW gain, feed conversion ratio, and mortality were observed with decreasing feed allocation. Linear decreases (P ≤ 0.01) in severity were observed for WS and WB at 33, 43, and 50 d with decreasing feed allocation. Severity of WB at 33 and 43 d, as well as that of WS at 43 and 50 d, decreased (P ≤ 0.05) quadratically with decreasing feed allocation. Reduced feed allocation produced quadratic decreases (P ≤ 0.05) in CK and LDH concentrations at 31, 41, and 48 days. These results indicate that the incidence of breast fillet myopathies in broilers may be reduced through controlled feeding programs.
NASA Astrophysics Data System (ADS)
Ma, Y.; Xu, W.; Zhao, X.; Qin, L.
2016-12-01
Accurate location and allocation of earthquake emergency shelters is a key component of effective urban planning and emergency management. A number of models have been developed to solve the complex location-allocation problem with diverse and strict constraints, but there still remains a big gap between the models and the actual situation, because the uncertainty of earthquakes, the damage rate of buildings and evacuee behaviors have been neglected or excessively simplified in the existing models. An innovative model was first developed to estimate the hourly dynamic changes in the number of evacuees under two earthquake damage scenarios by considering these factors at the community level based on location-based service data, followed by a multi-objective model for the allocation of residents to earthquake shelters using the central area of Beijing, China as a case study. The two objectives of this shelter allocation model were to minimize the total evacuation distance from communities to a specified shelter and to minimize the total area of all the shelters, subject to constraints on shelter capacity and service radius. A modified particle swarm optimization algorithm was used to solve this model. The results show that increasing the shelter area results in a large decrease in the total evacuation distance in all of the schemes of the four scenarios (i.e., Scenarios A and B, in daytime and nighttime respectively). According to the minimum-distance schemes, some communities in the downtown area needed to be reallocated due to the insufficient capacity of the nearest shelters, and the number of these communities decreased sequentially in scenarios Ad, An, Bd and Bn with the decreasing population. According to the minimum-area schemes in each scenario, 27 or 28 shelters, covering a total area of approximately 37 km², were selected, and the communities evacuated along almost the same routes in the different scenarios.
The results can be used as a scientific reference for the planning of shelters in Beijing.
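The capacity-and-distance trade-off at the heart of this allocation problem can be illustrated with a simple greedy baseline: send each community to its nearest shelter that still has room. This is only a sketch of the problem structure, not the paper's multi-objective PSO model, and all names, positions and capacities are invented:

```python
def assign_to_shelters(communities, shelters):
    """Greedy capacity-constrained allocation baseline.

    communities: list of (name, population, (x, y)), assigned in
                 decreasing order of population
    shelters:    dict name -> {"cap": persons, "pos": (x, y)}
    Returns a dict community -> shelter name (or None if unserved).
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    remaining = {s: v["cap"] for s, v in shelters.items()}
    plan = {}
    for name, pop, pos in sorted(communities, key=lambda c: -c[1]):
        feasible = [s for s in shelters if remaining[s] >= pop]
        if not feasible:
            plan[name] = None            # capacity exhausted
            continue
        best = min(feasible, key=lambda s: dist(pos, shelters[s]["pos"]))
        remaining[best] -= pop
        plan[name] = best
    return plan

# invented toy instance
shelters = {"S1": {"cap": 500, "pos": (0, 0)},
            "S2": {"cap": 2000, "pos": (5, 0)}}
communities = [("C1", 400, (1, 0)), ("C2", 450, (0, 1)), ("C3", 1200, (2, 0))]
print(assign_to_shelters(communities, shelters))
```

Here the large community C3 exceeds the nearest shelter's capacity and is pushed to the farther S2, and C1 is in turn displaced because C2 fills S1 first; the paper's optimization model trades off exactly such reallocations against total shelter area.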
Meinzer, Caitlyn; Martin, Renee; Suarez, Jose I
2017-09-08
In phase II trials, the most efficacious dose is usually not known. Moreover, given limited resources, it is difficult to robustly identify a dose while also testing for a signal of efficacy that would support a phase III trial. Recent designs have sought to be more efficient by exploring multiple doses through the use of adaptive strategies. However, the added flexibility may potentially increase the risk of making incorrect assumptions and reduce the total amount of information available across the dose range as a function of imbalanced sample size. To balance these challenges, a novel placebo-controlled design is presented in which a restricted Bayesian response adaptive randomization (RAR) is used to allocate a majority of subjects to the optimal dose of active drug, defined as the dose with the lowest probability of poor outcome. However, the allocation between subjects who receive active drug or placebo is held constant to retain the maximum possible power for a hypothesis test of overall efficacy comparing the optimal dose to placebo. The design properties and optimization of the design are presented in the context of a phase II trial for subarachnoid hemorrhage. For a fixed total sample size, a trade-off exists between the ability to select the optimal dose and the probability of rejecting the null hypothesis. This relationship is modified by the allocation ratio between active and control subjects, the choice of RAR algorithm, and the number of subjects allocated to an initial fixed allocation period. While a responsive RAR algorithm improves the ability to select the correct dose, there is an increased risk of assigning more subjects to a worse arm as a function of ephemeral trends in the data. A subarachnoid treatment trial is used to illustrate how this design can be customized for specific objectives and available data. 
Bayesian adaptive designs are a flexible approach to addressing multiple questions surrounding the optimal dose for treatment efficacy within the context of limited resources. While the design is general enough to apply to many situations, future work is needed to address interim analyses and the incorporation of models for dose response.
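The "restricted" part of the design above, a fixed placebo share with the active share split adaptively across doses, can be sketched as follows. This is an illustrative implementation under assumed Beta(1, 1) priors and a Thompson-sampling-style win count; the trial's actual RAR algorithm, priors and tuning may differ, and the dose data are invented:

```python
import random

def rar_weights(failures, n, placebo_frac=0.5, draws=4000, seed=0):
    """Restricted response-adaptive randomization sketch.

    Keeps the placebo allocation fixed at `placebo_frac` (preserving
    power for the optimal-dose-vs-placebo test) and splits the active
    share across doses by the posterior probability, under independent
    Beta(1,1) priors, that each dose has the lowest poor-outcome rate.
    failures[d] / n[d] are poor outcomes / subjects so far on dose d.
    """
    rng = random.Random(seed)
    doses = list(n)
    wins = {d: 0 for d in doses}
    for _ in range(draws):
        # posterior sample of each dose's poor-outcome probability
        samples = {d: rng.betavariate(1 + failures[d], 1 + n[d] - failures[d])
                   for d in doses}
        wins[min(samples, key=samples.get)] += 1
    active = 1.0 - placebo_frac
    return {d: active * wins[d] / draws for d in doses}

# invented interim data: "mid" has the fewest poor outcomes
w = rar_weights(failures={"low": 6, "mid": 3, "high": 5},
                n={"low": 20, "mid": 20, "high": 20})
print(max(w, key=w.get))  # the best-performing dose gets the largest share
```

Note the dose weights always sum to exactly `1 - placebo_frac`, which is what holds the active-versus-placebo allocation ratio constant while the dose split adapts.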
NASA Technical Reports Server (NTRS)
Kushner, H. J.
1972-01-01
The field of stochastic stability is surveyed, with emphasis on the invariance theorems and their potential application to systems with randomly varying coefficients. Some of the basic ideas are reviewed, which underlie the stochastic Liapunov function approach to stochastic stability. The invariance theorems are discussed in detail.
Liu, Meng; Wang, Ke
2010-06-07
A new single-species model disturbed by both white noise and colored noise in a polluted environment is developed and analyzed. Sufficient criteria for extinction, stochastic nonpersistence in the mean, stochastic weak persistence in the mean, stochastic strong persistence in the mean and stochastic permanence of the species are established. The threshold between stochastic weak persistence in the mean and extinction is obtained. The results show that both white and colored environmental noise have a significant effect on the survival of the species. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Stochastic differential equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sobczyk, K.
1990-01-01
This book provides a unified treatment of both regular (or random) and Ito stochastic differential equations. It focuses on solution methods, including some developed only recently, and discusses applications, giving insight into both the mathematical structure and the most efficient solution methods (analytical as well as numerical). Starting from basic notions and results of the theory of stochastic processes and stochastic calculus (including Ito's stochastic integral), many principal mathematical problems and results related to stochastic differential equations are expounded here for the first time. Applications treated include those relating to road vehicles, earthquake excitations and offshore structures.
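Among the numerical solution methods such a text covers, the simplest is the Euler-Maruyama scheme for an Itô SDE dX = a(X)dt + b(X)dW. A self-contained sketch on an Ornstein-Uhlenbeck process (the coefficients and step counts below are illustrative choices, not from the book):

```python
import math
import random

def euler_maruyama(drift, diffusion, x0, T, n, seed=42):
    """Euler-Maruyama scheme for the Ito SDE dX = a(X) dt + b(X) dW:
    one Gaussian increment dW ~ N(0, dt) per step."""
    rng = random.Random(seed)
    dt = T / n
    x = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x = x + drift(x) * dt + diffusion(x) * dw
    return x

# Ornstein-Uhlenbeck process: dX = -theta * X dt + sigma dW
theta, sigma = 2.0, 0.3
end = [euler_maruyama(lambda x: -theta * x, lambda x: sigma,
                      x0=1.0, T=5.0, n=5000, seed=s)
       for s in range(200)]
mean = sum(end) / len(end)
print(abs(mean) < 0.1)  # sample mean reverts toward the OU mean of 0
```

The endpoint sample mean is close to zero because E[X_T] = X_0 e^{-theta T} ≈ 0 for theta T = 10, while the spread of `end` reflects the stationary variance sigma²/(2 theta).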
NASA Astrophysics Data System (ADS)
Wang, Qingyun; Zhang, Honghui; Chen, Guanrong
2012-12-01
We study the effect of heterogeneous neurons and information transmission delay on stochastic resonance in scale-free neuronal networks. For this purpose, we introduce the heterogeneity at the neuron with the highest degree. It is shown that in the absence of delay, an intermediate noise level can optimally assist spike firings of collective neurons so as to achieve stochastic resonance on scale-free neuronal networks for small and intermediate values of αh, which quantifies the heterogeneity. Maxima of the stochastic resonance measure are enhanced as αh increases, which implies that the heterogeneity can improve stochastic resonance. However, once αh exceeds a certain large value, no obvious stochastic resonance can be observed. If information transmission delay is introduced to the neuronal networks, stochastic resonance is dramatically affected. In particular, a tuned information transmission delay can induce multiple stochastic resonance, manifested as well-expressed maxima in the stochastic resonance measure appearing at every multiple of one half of the subthreshold stimulus period. Furthermore, we observe that stochastic resonance at odd multiples of one half of the subthreshold stimulus period is subharmonic, as opposed to the case of even multiples. More interestingly, multiple stochastic resonance can also be improved by a suitably heterogeneous neuron. The presented results provide insight into the roles of heterogeneous neurons and information transmission delay in realistic neuronal networks.
Ultimate open pit stochastic optimization
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Caron, Josiane
2013-02-01
Classical open pit optimization (the maximum closure problem) is performed on block estimates, without directly considering the uncertainty of block grades. We propose an alternative approach of stochastic optimization, in which the optimal pit is computed on the block expected profits, rather than on expected grades, obtained from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach and a simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than those of the classical or simulated pits. The main factor controlling the relative gain of the stochastic optimization over the classical approach and the simulated pit is shown to be the information level as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase with both treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
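The key reason expected profit (stochastic input) differs from the profit of the expected grade (classical input) is that the block profit function is convex, because of the ore/waste processing decision, so Jensen's inequality applies. A toy numerical check (the price, costs and grade distribution below are made up for illustration):

```python
import random

def block_profit(grade, price=10.0, mining_cost=2.0, processing_cost=3.0):
    """Profit of one mined block: process it only when the ore value
    covers the processing cost, otherwise send it to waste. The max()
    makes the profit a convex function of grade."""
    return max(grade * price - processing_cost, 0.0) - mining_cost

rng = random.Random(7)
# stand-in for conditional simulations of a single block's grade
sims = [max(rng.gauss(0.35, 0.15), 0.0) for _ in range(10000)]

expected_grade = sum(sims) / len(sims)
profit_of_expected = block_profit(expected_grade)                  # classical
expected_profit = sum(block_profit(g) for g in sims) / len(sims)   # stochastic

print(expected_profit >= profit_of_expected)  # True, by Jensen's inequality
```

Because the inequality is strict whenever the grade distribution straddles the cutoff, optimizing the pit on expected profits values uncertain blocks differently than optimizing on expected grades, which is exactly the gap the abstract attributes to the information level.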
Stochastic effects in a seasonally forced epidemic model
NASA Astrophysics Data System (ADS)
Rozhnova, G.; Nunes, A.
2010-10-01
The interplay of seasonality, the system’s nonlinearities and intrinsic stochasticity is studied for a seasonally forced susceptible-exposed-infective-recovered stochastic model. The model is explored in the parameter region that corresponds to childhood infectious diseases such as measles. The power spectrum of the stochastic fluctuations around the attractors of the deterministic system that describes the model in the thermodynamic limit is computed analytically and validated by stochastic simulations for large system sizes. Size effects are studied through additional simulations. Other effects, such as switching between coexisting attractors induced by stochasticity, often mentioned in the literature as playing an important role in the dynamics of childhood infectious diseases, are also investigated. The main conclusion is that stochastic amplification, rather than these effects, is the key ingredient to understand the observed incidence patterns.
The relationship between stochastic and deterministic quasi-steady state approximations.
Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R
2015-11-23
The quasi steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady-state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of QSSA, and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using their deterministic counterparts providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
Kobresia pygmaea pasture degradation and its response to increasing N deposition
NASA Astrophysics Data System (ADS)
Liu, Shibin; Schleuss, Per-Marten; Kuzyakov, Yakov
2016-04-01
Kobresia pygmaea is a dominant plant species on the Tibetan Plateau, covering ca. one fifth of the total area. Severe degradation of K. pygmaea pastures by overgrazing has been ongoing in recent decades, and nitrogen (N) deposition is also increasing across the Tibetan Plateau. Up to now, the response of K. pygmaea pastures at increasing degradation stages to N deposition has been unclear. We aimed at: (1) evaluating the effect of pasture degradation on carbon (C) and N contents of soil, roots, microbial biomass and leachate, (2) determining N allocation to plants, soil and microbial biomass after N addition and (3) estimating N storage and loss in Kobresia pastures. We used three Kobresia root mat types varying in their degradation stage: (1) living root mats, (2) dying root mats and (3) dead root mats. We also added two levels of ¹⁵NH₄¹⁵NO₃ solution to simulate N deposition (control: 2.5 kg N/ha; deposition: 50.9 kg N/ha) and traced the ¹⁵N in the soil-plant system. Leaching of NH₄⁺, NO₃⁻ and DON was measured by homogeneously adding distilled water to each sample and collecting the leachate afterwards. The total N content lost by leaching increased 6.5 times following the degradation from living to dead root mats, indicating that living Kobresia effectively decreased N loss from leaching due to N uptake by plants. The microbial biomass C to N (MBC/MBN) ratio narrowed from 10.2 to 7.5 and then to 5.0 for living, dying and dead root mats, respectively. This shows that the degradation of K. pygmaea shifts the ecosystem from an N-limited to a C-limited status for microbes. Nitrogen addition increased above-ground plant biomass (AGB) as well as its total N content in living root mats, while MBC and MBN were not affected, showing that K. pygmaea is more sensitive to N addition than microorganisms. N allocation (% of total N added) to AGB, below-ground plant biomass and soil in living root mats was 22.1%, 22.7% and 17.6%, respectively.
No significant effect of the N level on these parameters was identified, indicating that N allocation was independent of the amount of N added. Up to 1.86 Mg N/ha were stored in living root mats (0-5 cm); dead and dying root mats maintained about 2.0 Mg N/ha and 2.1 Mg N/ha, respectively. Assuming a precipitation of 355 mm during the growing season (equal to 85% of annual precipitation), N loss in the leachate of living root mats was estimated at around 3.6 kg N/ha (3.4 kg DON and 0.2 kg NH₄-N). This amount was up to 6.5 times higher in dead root mats (23.6 kg N/ha, with 19.1 kg NO₃-N, 4 kg DON and 0.5 kg NH₄-N). Therefore, degradation of K. pygmaea significantly increased N loss via leaching, especially NO₃-N loss. We conclude that N deposition facilitates the growth of K. pygmaea, which may positively affect plant productivity as well as C sequestration. In the absence of K. pygmaea, however, N deposition will lead to high N loss. Key words: nitrogen allocation, Kobresia pygmaea, above-ground biomass, microbial biomass carbon and nitrogen
Stochastic Multi-Timescale Power System Operations With Variable Wind Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hongyu; Krad, Ibrahim; Florita, Anthony
This paper describes a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at multiple operational timescales. The stochastic model includes four distinct stages: stochastic day-ahead security-constrained unit commitment (SCUC), stochastic real-time SCUC, stochastic real-time security-constrained economic dispatch (SCED), and deterministic automatic generation control (AGC). These sub-models are integrated together such that they are continually updated with decisions passed from one to another. The progressive hedging algorithm (PHA) is applied to solve the stochastic models to maintain the computational tractability of the proposed models. Comparative case studies with deterministic approaches are conducted in low wind and high wind penetration scenarios to highlight the advantages of the proposed methodology, one with perfect forecasts and the other with current state-of-the-art but imperfect deterministic forecasts. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.
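The progressive hedging mechanics can be shown on a toy problem far simpler than SCUC: a single day-ahead commitment x hedging three demand scenarios, with per-scenario cost (x - d_s)². PHA solves each scenario with a multiplier and a proximal term pulling it toward the scenario average, then prices out the disagreement. All numbers here are invented for illustration:

```python
def progressive_hedging(demands, probs, rho=1.0, iters=100):
    """Progressive hedging on a one-variable two-stage toy problem.

    Each scenario subproblem min_x (x - d)^2 + w*x + (rho/2)(x - xbar)^2
    has the closed-form minimizer x = (2d - w + rho*xbar) / (2 + rho).
    The multipliers w_s enforce non-anticipativity (all x_s agree).
    """
    scenarios = range(len(demands))
    x = {s: demands[s] for s in scenarios}   # scenario-wise first guesses
    w = {s: 0.0 for s in scenarios}
    xbar = sum(p * x[s] for s, p in zip(scenarios, probs))
    for _ in range(iters):
        for s in scenarios:                  # solve scenario subproblems
            x[s] = (2 * demands[s] - w[s] + rho * xbar) / (2 + rho)
        xbar = sum(p * x[s] for s, p in zip(scenarios, probs))
        for s in scenarios:                  # price out the disagreement
            w[s] += rho * (x[s] - xbar)
    return xbar

xbar = progressive_hedging([80.0, 100.0, 130.0], [0.3, 0.4, 0.3])
print(round(xbar, 2))  # 103.0, the commitment minimizing expected cost
```

For this quadratic the true hedged solution is the expected demand, 0.3·80 + 0.4·100 + 0.3·130 = 103, and the iteration converges to it for any rho > 0; real PHA applies the same loop to full unit-commitment subproblems per scenario.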
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
NASA Technical Reports Server (NTRS)
Chamis, Christos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system while taking into account uncertainties in the design variables; common results are estimates of a response density, which in turn imply estimates of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since their results can lead to different designs by quantifying the probability of observing safe responses in each proposed design. All of this comes at the expense of added computational time compared to a single deterministic analysis, which yields one value of the response out of the many that make up the response density. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use for estimating response density parameters. Both are among the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of the analyses possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS, and part of this work was to enhance NESSUS with the LHS method.
The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS, because it is important that a user can have confidence that estimates of the stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 99th percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when both were used to estimate the mean, standard deviation, and 99th percentile of the four stochastic responses. LHS also required fewer calculations than MC to obtain low-error answers with a high degree of confidence. It can therefore be stated that NESSUS is an important reliability tool offering a variety of sound probabilistic methods, and the new LHS module is a valuable enhancement of the program.
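The MC-versus-LHS comparison can be illustrated on a toy response. The response function (sum of two standard normal inputs), sample size, and seed are assumptions for the demo, not the SAE test cases studied in NESSUS:

```python
# Estimate mean, standard deviation, and 99th percentile of a toy response
# with plain Monte Carlo and with Latin hypercube sampling (LHS).
import random
import statistics

def lhs(n, dims, rng):
    """One LHS design on [0,1]^dims: n strata per dimension, one point each."""
    cols = []
    for _ in range(dims):
        pts = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(pts)
        cols.append(pts)
    return list(zip(*cols))

nd = statistics.NormalDist()

def response(u1, u2):
    # toy response: sum of two standard normals via inverse-CDF transform
    return nd.inv_cdf(u1) + nd.inv_cdf(u2)

rng = random.Random(0)
n = 2000
mc = [response(rng.random(), rng.random()) for _ in range(n)]
lh = [response(u1, u2) for u1, u2 in lhs(n, 2, rng)]

for name, ys in [("MC", mc), ("LHS", lh)]:
    ys = sorted(ys)
    print(name, round(statistics.fmean(ys), 3),
          round(statistics.stdev(ys), 3),
          round(ys[int(0.99 * n)], 3))   # mean, std, 99th percentile
```

For this response the true mean is 0 and the true standard deviation is sqrt(2); the stratification in LHS typically yields a noticeably smaller mean-estimation error than MC at the same sample count, which mirrors the comparison described in the abstract.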
Stochastic computing with biomolecular automata
Adar, Rivka; Benenson, Yaakov; Linshiz, Gregory; Rosner, Amit; Tishby, Naftali; Shapiro, Ehud
2004-01-01
Stochastic computing has a broad range of applications, yet electronic computers realize its basic step, stochastic choice between alternative computation paths, in a cumbersome way. Biomolecular computers use a different computational paradigm and hence afford novel designs. We constructed a stochastic molecular automaton in which stochastic choice is realized by means of competition between alternative biochemical pathways, and choice probabilities are programmed by the relative molar concentrations of the software molecules coding for the alternatives. Programmable and autonomous stochastic molecular automata have been shown to perform direct analysis of disease-related molecular indicators in vitro and may have the potential to provide in situ medical diagnosis and cure. PMID:15215499
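A software analogue of the stochastic-choice step described above can be sketched as follows; the automaton's states, input symbols, and "concentration" values are invented for illustration and are not the molecular encoding used in the paper:

```python
# Minimal analogue of the molecular automaton's stochastic step: among the
# rules competing for a (state, symbol) pair, the next transition is chosen
# with probability proportional to the "molar concentration" of each rule.
import random

def stochastic_step(state, symbol, rules, rng):
    """rules maps (state, symbol) -> list of (next_state, concentration)."""
    options = rules[(state, symbol)]
    total = sum(conc for _, conc in options)
    r = rng.uniform(0.0, total)
    for nxt, conc in options:
        r -= conc
        if r <= 0:
            return nxt
    return options[-1][0]

# Two-state automaton over symbols 'a'/'b'; rule concentrations set the
# branching probabilities (here 3:1 in favour of staying in S0 on 'a').
rules = {
    ("S0", "a"): [("S0", 3.0), ("S1", 1.0)],
    ("S0", "b"): [("S1", 1.0)],
    ("S1", "a"): [("S1", 1.0)],
    ("S1", "b"): [("S0", 1.0)],
}
rng = random.Random(1)
stays = sum(stochastic_step("S0", "a", rules, rng) == "S0"
            for _ in range(10000))
print(stays / 10000)   # ≈ 0.75, the programmed 3:1 choice probability
```

Changing a rule's concentration value reprograms the choice probability without touching the transition structure, which is the design freedom the abstract highlights.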
Stochastic Galerkin methods for the steady-state Navier–Stokes equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousedík, Bedřich, E-mail: sousedik@umbc.edu; Elman, Howard C., E-mail: elman@cs.umd.edu
2016-07-01
We study the steady-state Navier–Stokes equations in the context of stochastic finite element discretizations. Specifically, we assume that the viscosity is a random field given in the form of a generalized polynomial chaos expansion. For the resulting stochastic problem, we formulate the model and linearization schemes using Picard and Newton iterations in the framework of the stochastic Galerkin method, and we explore properties of the resulting stochastic solutions. We also propose a preconditioner for solving the linear systems of equations arising at each step of the stochastic (Galerkin) nonlinear iteration and demonstrate its effectiveness for solving a set of benchmark problems.
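The polynomial chaos representation of the viscosity can be illustrated with a minimal sketch; the Hermite coefficients and sample count below are arbitrary choices, not values from the paper:

```python
# Viscosity as a truncated generalized polynomial chaos (Hermite) expansion
# nu(xi) = sum_k nu_k * He_k(xi) with xi ~ N(0,1).  Since E[He_k(xi)] = 0
# for k >= 1, the sample mean of nu should approach the k=0 coefficient.
import random

def hermite(k, x):
    """Probabilists' Hermite polynomial He_k via the standard recurrence."""
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for n in range(1, k):
        h0, h1 = h1, x * h1 - n * h0   # He_{n+1} = x*He_n - n*He_{n-1}
    return h1

coeffs = [1.0, 0.3, 0.05]   # nu_0, nu_1, nu_2 (illustrative)
rng = random.Random(0)
samples = []
for _ in range(100000):
    xi = rng.gauss(0.0, 1.0)
    samples.append(sum(c * hermite(k, xi) for k, c in enumerate(coeffs)))
print(round(sum(samples) / len(samples), 2))   # ≈ nu_0 = 1.0
```

In a stochastic Galerkin discretization the same expansion enters the weak form, producing coupled deterministic systems indexed by the chaos basis rather than sampled realizations as here.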
Analysis of a novel stochastic SIRS epidemic model with two different saturated incidence rates
NASA Astrophysics Data System (ADS)
Chang, Zhengbo; Meng, Xinzhu; Lu, Xiao
2017-04-01
This paper presents a stochastic SIRS epidemic model with two different nonlinear incidence rates and a double-epidemic asymmetrical hypothesis, and we develop a mathematical method to obtain the threshold of the stochastic epidemic model. We first investigate the boundedness and extinction of the stochastic system. Furthermore, we use Itô's formula, the comparison theorem, and some new inequality techniques for stochastic differential systems to discuss persistence in mean of the two diseases in three cases. The results indicate that stochastic fluctuations can suppress disease outbreaks. Finally, numerical simulations with different noise disturbance coefficients are carried out to illustrate the obtained theoretical results.
Suzuki, Yasuo; Iida, Mitsuo; Ito, Hiroaki; Nishino, Haruo; Ohmori, Toshihide; Arai, Takehiro; Yokoyama, Tadashi; Okubo, Takanori; Hibi, Toshifumi
2017-05-01
The noninferiority of pH-dependent release mesalamine (Asacol) administered once daily (QD) versus 3 times daily (TID) was investigated. This was a phase 3, multicenter, randomized, double-blind, parallel-group, active-control study with dynamic and stochastic allocation using central registration. Patients with ulcerative colitis in remission (a bloody stool score of 0 and an ulcerative colitis disease activity index of ≤2) received the study drug (Asacol 2.4 g/d) for 48 weeks. The primary efficacy endpoint, the nonrecurrence rate, was assessed on the full analysis set. The noninferiority margin was 10%. Six hundred and four subjects were eligible and allocated; 603 subjects received the study drug. The full analysis set comprised 602 subjects (QD: 301, TID: 301). Nonrecurrence rates were 88.4% in the QD group and 89.6% in the TID group. The difference between nonrecurrence rates was -1.3% (95% confidence interval: -6.2, 3.7), confirming noninferiority. No differences in the safety profile were observed between the two treatment groups. On post hoc analysis pooling the QD and TID groups, the nonrecurrence rate in patients with a mucosal appearance score of 0 at determination of eligibility was significantly higher than in those with a score of 1. The mean compliance rates were 97.7% in the QD group and 98.1% in the TID group. QD dosing with Asacol is as effective and safe as TID dosing for maintenance of remission in patients with ulcerative colitis. Additionally, this study indicated that maintaining a good mucosal state is key to longer maintenance of remission.
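The reported interval can be approximately reproduced with a Wald confidence interval for a difference of two proportions. The integer event counts below (266/301 and 270/301) are inferred from the reported rates and may differ by one from the actual trial data, which plausibly explains why the recomputed lower bound (≈ -6.3%) differs slightly from the published -6.2%:

```python
# Wald 95% CI for the QD-minus-TID difference in nonrecurrence rates,
# with event counts back-calculated from the reported percentages.
import math

def wald_ci_diff(x1, n1, x2, n2, z=1.959964):
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

diff, lo, hi = wald_ci_diff(266, 301, 270, 301)   # QD vs TID (counts inferred)
print(round(100 * diff, 1), round(100 * lo, 1), round(100 * hi, 1))
print(lo > -0.10)   # noninferiority: lower bound above the -10% margin
```

The decisive quantity is the lower confidence bound: noninferiority at a 10% margin holds because it stays well above -10%.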
Suzuki, Yasuo; Iida, Mitsuo; Ito, Hiroaki; Nishino, Haruo; Ohmori, Toshihide; Arai, Takehiro; Yokoyama, Tadashi; Okubo, Takanori
2017-01-01
Background: The noninferiority of pH-dependent release mesalamine (Asacol) administered once daily (QD) versus 3 times daily (TID) was investigated. Methods: This was a phase 3, multicenter, randomized, double-blind, parallel-group, active-control study with dynamic and stochastic allocation using central registration. Patients with ulcerative colitis in remission (a bloody stool score of 0 and an ulcerative colitis disease activity index of ≤2) received the study drug (Asacol 2.4 g/d) for 48 weeks. The primary efficacy endpoint, the nonrecurrence rate, was assessed on the full analysis set. The noninferiority margin was 10%. Results: Six hundred and four subjects were eligible and allocated; 603 subjects received the study drug. The full analysis set comprised 602 subjects (QD: 301, TID: 301). Nonrecurrence rates were 88.4% in the QD group and 89.6% in the TID group. The difference between nonrecurrence rates was −1.3% (95% confidence interval: −6.2, 3.7), confirming noninferiority. No differences in the safety profile were observed between the two treatment groups. On post hoc analysis pooling the QD and TID groups, the nonrecurrence rate in patients with a mucosal appearance score of 0 at determination of eligibility was significantly higher than in those with a score of 1. The mean compliance rates were 97.7% in the QD group and 98.1% in the TID group. Conclusions: QD dosing with Asacol is as effective and safe as TID dosing for maintenance of remission in patients with ulcerative colitis. Additionally, this study indicated that maintaining a good mucosal state is key to longer maintenance of remission. PMID:28368909
NASA Astrophysics Data System (ADS)
Moulds, S.; Djordjevic, S.; Savic, D.
2017-12-01
The Global Change Assessment Model (GCAM), an integrated assessment model, provides insight into the interactions and feedbacks between physical and human systems. The land system component of GCAM, which simulates land use activities and the production of major crops, produces output at the subregional level, which must be spatially downscaled in order to be used with gridded impact assessment models. However, existing downscaling routines typically treat cropland as a homogeneous class and do not provide information about land use intensity or specific management practices such as irrigation and multiple cropping. This paper presents a spatial allocation procedure to downscale crop production data from GCAM to a spatial grid, producing a time series of maps that show the spatial distribution of specific crops (e.g. rice, wheat, maize) at four input levels (subsistence, low input rainfed, high input rainfed and high input irrigated). The model algorithm is constrained by the available cropland at each time point and therefore implicitly balances extensification and intensification processes in order to meet global food demand. It utilises a stochastic approach such that an increase in production of a particular crop is more likely to occur in grid cells with high biophysical suitability and neighbourhood influence, while a fall in production will occur more often in cells with lower suitability. User-supplied rules define the order in which specific crops are downscaled as well as allowable transitions. A regional case study demonstrates the ability of the model to reproduce historical trends in India by comparing the model output with district-level agricultural inventory data. Lastly, the model is used to predict the spatial distribution of crops globally under various GCAM scenarios.
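The suitability-weighted stochastic allocation step can be sketched on a toy grid. The cell names, suitability scores, and cropland capacities below are invented for the demo and do not correspond to GCAM data:

```python
# Toy version of the stochastic downscaling step: a production increase for
# one crop is spread over grid cells, each unit landing preferentially in
# cells of high suitability, with no cell exceeding its available cropland.
import random

def allocate(demand, suitability, capacity, used, rng):
    """Place `demand` production units one at a time, weighted by suitability."""
    for _ in range(demand):
        open_cells = [c for c in suitability if used[c] < capacity[c]]
        if not open_cells:
            break                                    # no cropland left anywhere
        weights = [suitability[c] for c in open_cells]
        cell = rng.choices(open_cells, weights=weights)[0]
        used[cell] += 1
    return used

suitability = {"c1": 0.9, "c2": 0.5, "c3": 0.1}   # biophysical suitability
capacity = {"c1": 40, "c2": 40, "c3": 40}          # available cropland
used = {"c1": 0, "c2": 0, "c3": 0}
rng = random.Random(42)
used = allocate(60, suitability, capacity, used, rng)
print(used)   # high-suitability cells receive most of the demand
```

A decrease in production would mirror this loop with weights inversely related to suitability, and the full model layers crop ordering and transition rules on top of this elementary step.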