DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Fei; Huang, Yongxi
2018-02-04
Here, we develop a multistage, stochastic mixed-integer model to support biofuel supply chain expansion under evolving uncertainties. By utilizing the block-separable recourse property, we reformulate the multistage program as an equivalent two-stage program and solve it using an enhanced nested decomposition method with maximal non-dominated cuts. We conduct extensive numerical experiments and demonstrate the application of the model and algorithm in a case study set in South Carolina. The value of the multistage stochastic programming method is also explored by comparing the model solution with the counterparts of an expected-value-based deterministic model and a two-stage stochastic model.
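The comparison described above, a stochastic model against an expected-value deterministic counterpart, can be illustrated on a deliberately tiny capacity example. The sketch below is not the paper's model; all numbers are invented, and the value of the stochastic solution (VSS) is computed by brute-force enumeration.

```python
# Toy capacity-expansion problem (hypothetical numbers, not from the paper):
# choose integer capacity x now; demand d is revealed later.
scenarios = [(0.5, 2), (0.5, 8)]          # (probability, demand)
build_cost, unit_revenue = 3.0, 5.0

def profit(x, d):
    # revenue on served demand minus capacity cost
    return unit_revenue * min(x, d) - build_cost * x

def expected_profit(x):
    return sum(p * profit(x, d) for p, d in scenarios)

X = range(0, 11)
# Stochastic solution: maximize expected profit over the scenarios directly.
x_sp = max(X, key=expected_profit)
# Expected-value solution: optimize against the mean demand, then
# evaluate that fixed decision under the true scenarios.
mean_d = sum(p * d for p, d in scenarios)
x_ev = max(X, key=lambda x: profit(x, mean_d))
vss = expected_profit(x_sp) - expected_profit(x_ev)
print(x_sp, x_ev, vss)
```

The expected-value model overbuilds for the mean demand, so the stochastic solution recovers a strictly positive VSS here.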
Fu, Zhenghui; Wang, Han; Lu, Wentao; Guo, Huaicheng; Li, Wei
2017-12-01
Electric power systems involve different fields and disciplines, spanning economic, energy, and environmental systems. Uncertainty is inherent in such a compound system. Therefore, an inexact multistage fuzzy-stochastic programming (IMFSP) model was developed for regional electric power system management constrained by environmental quality. A model combining interval-parameter programming, multistage stochastic programming, and fuzzy probability distributions was built to reflect the uncertain information and dynamic variation in the case study, and scenarios under different credibility degrees were considered. For all scenarios under consideration, corrective actions could be taken dynamically in accordance with pre-regulated policies and the uncertainties realized in practice. The results suggest that the methodology can handle the uncertainty of regional electric power management systems and help decision makers establish an effective development plan.
Dung Tuan Nguyen
2012-01-01
Forest harvest scheduling has been modeled using deterministic and stochastic programming models. Past models seldom address explicit spatial forest management concerns under the influence of natural disturbances. In this research study, we employ multistage full recourse stochastic programming models to explore the challenges and advantages of building spatial...
A spatial stochastic programming model for timber and core area management under risk of fires
Yu Wei; Michael Bevers; Dung Nguyen; Erin Belval
2014-01-01
Previous stochastic models in harvest scheduling seldom address explicit spatial management concerns under the influence of natural disturbances. We employ multistage stochastic programming models to explore the challenges and advantages of building spatial optimization models that account for the influences of random stand-replacing fires. Our exploratory test models...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Yiduo; Zheng, Qipeng P.; Wang, Jianhui
Power generation expansion planning needs to deal with future uncertainties carefully, given that the invested generation assets will be in operation for a long time. Many stochastic programming models have been proposed to tackle this challenge. However, most previous works assume predetermined future uncertainties (i.e., fixed random outcomes with given probabilities). In several recent studies of generation assets' planning (e.g., thermal versus renewable), new findings show that the investment decisions could affect the future uncertainties as well. To this end, this paper proposes a multistage decision-dependent stochastic optimization model for long-term large-scale generation expansion planning, where large amounts of wind power are involved. In the decision-dependent model, the future uncertainties not only affect but are also affected by the current decisions. In particular, the probability distribution function is determined by not only input parameters but also decision variables. To deal with the nonlinear constraints in our model, a quasi-exact solution approach is then introduced to reformulate the multistage stochastic investment model as a mixed-integer linear programming model. The wind penetration, investment decisions, and the optimality of the decision-dependent model are evaluated in a series of multistage case studies. The results show that the proposed decision-dependent model provides effective optimization solutions for long-term generation expansion planning.
Li, Yongping; Huang, Guohe
2009-03-01
In this study, a dynamic analysis approach based on an inexact multistage integer programming (IMIP) model is developed for supporting municipal solid waste (MSW) management under uncertainty. Techniques of interval-parameter programming and multistage stochastic programming are incorporated within an integer-programming framework. The developed IMIP can deal with uncertainties expressed as probability distributions and interval numbers, and can reflect the dynamics in terms of decisions for waste-flow allocation and facility-capacity expansion over a multistage context. Moreover, the IMIP can be used for analyzing various policy scenarios that are associated with different levels of economic consequences. The developed method is applied to a case study of long-term waste-management planning. The results indicate that reasonable solutions have been generated for binary and continuous variables. They can help generate desired decisions of system-capacity expansion and waste-flow allocation with a minimized system cost and maximized system reliability.
Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs
Gade, Dinakar; Hackebeil, Gabriel; Ryan, Sarah M.; ...
2016-04-02
We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm as it runs. The lower bounds can be computed in any iteration of the algorithm using dual prices that are calculated during execution of the standard PHA. Finally, we report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.
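The lower-bounding idea, a Lagrangian bound built from the PH dual prices (weights), can be sketched on a toy continuous problem whose subproblems solve in closed form. This is not the paper's mixed-integer setting; the scenarios, penalty rho, and quadratic objective are all invented for illustration.

```python
# Toy two-stage problem: min_x E[(x - d)^2] over scenarios for d.
# The exact optimum is x* = E[d] with value Var(d) = 1.0 here.
scenarios = [(0.5, 1.0), (0.5, 3.0)]      # (probability, d)
rho = 1.0
w = [0.0 for _ in scenarios]              # PH weights (dual prices)
xbar = sum(p * d for p, d in scenarios)   # initial consensus value

for _ in range(60):
    # Scenario subproblem: argmin (x-d)^2 + w*x + (rho/2)(x-xbar)^2,
    # available in closed form for this quadratic.
    xs = [(2*d - w[s] + rho*xbar) / (2 + rho)
          for s, (p, d) in enumerate(scenarios)]
    xbar = sum(p * x for (p, d), x in zip(scenarios, xs))
    for s in range(len(scenarios)):
        w[s] += rho * (xs[s] - xbar)      # keeps sum_s p_s w_s = 0

# Lagrangian lower bound from the weights:
# min_x (x-d)^2 + w*x is attained at x = d - w/2 with value w*d - w^2/4.
lb = sum(p * (w[s]*d - w[s]**2 / 4) for s, (p, d) in enumerate(scenarios))
# Upper bound: evaluate the implementable consensus decision xbar.
ub = sum(p * (xbar - d)**2 for p, d in scenarios)
print(lb, xbar, ub)
```

Because the problem is convex, the bound closes the gap; for SMIPs the same dual prices still yield a valid lower bound even when PH itself does not converge.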
Multistage Stochastic Programming and its Applications in Energy Systems Modeling and Optimization
NASA Astrophysics Data System (ADS)
Golari, Mehdi
Electric energy is crucial to almost every aspect of people's lives. Modern electric power systems face several challenges, including efficiency, economics, sustainability, and reliability. Increases in electrical energy demand, distributed generation, integration of uncertain renewable energy resources, and demand-side management are among the main underlying reasons for such growing complexity. Additionally, the elements of power systems are often vulnerable to failures for many reasons, such as system limits, weak conditions, unexpected events, hidden failures, human errors, terrorist attacks, and natural disasters. One common factor complicating the operation of electrical power systems is the underlying uncertainty in the demands, supplies, and failures of system components. Stochastic programming provides a mathematical framework for decision making under uncertainty. It enables a decision maker to incorporate some knowledge of the intrinsic uncertainty into the decision-making process. In this dissertation, we focus on the application of two-stage and multistage stochastic programming approaches to electric energy systems modeling and optimization. In particular, we develop models and algorithms addressing the sustainability and reliability issues in power systems. First, we consider how to improve the reliability of power systems under severe failures or contingencies prone to cascading blackouts through so-called islanding operations. We present a two-stage stochastic mixed-integer model to find optimal islanding operations as a powerful preventive action against cascading failures in case of extreme contingencies. Further, we study the properties of this problem and propose efficient solution methods for large-scale power systems. We present numerical results showing the effectiveness of the model and investigate the performance of the solution methods.
Next, we address the sustainability issue by considering the integration of renewable energy resources into the production planning of energy-intensive manufacturing industries. Recently, a growing number of manufacturing companies have been considering renewable energies to meet their energy requirements, both to move towards green manufacturing and to decrease their energy costs. However, the intermittent nature of renewable energies imposes several difficulties in long-term planning of how to efficiently exploit renewables. In this study, we propose a scheme for manufacturing companies to use onsite and grid renewable energies, provided by their own investments and energy utilities, as well as conventional grid energy to satisfy their energy requirements. We propose a multistage stochastic programming model and study an efficient solution method for this problem. We examine the proposed framework on a test case simulated based on a real-world semiconductor company. Moreover, we evaluate the long-term profitability of such a scheme via the so-called value of multistage stochastic programming.
Barnett, Jason; Watson, Jean -Paul; Woodruff, David L.
2016-11-27
Progressive hedging, though an effective heuristic for solving stochastic mixed-integer programs (SMIPs), is not guaranteed to converge in this case. Here, we describe BBPH, a branch-and-bound algorithm that uses PH at each node in the search tree such that, given sufficient time, it will always converge to a globally optimal solution. In addition to providing a theoretically convergent “wrapper” for PH applied to SMIPs, computational results demonstrate that for some difficult problem instances branch and bound can find improved solutions after exploring only a few nodes.
Detailed Maintenance Planning for Military Systems with Random Lead Times and Cannibalization
2014-12-01
with respect to maintenance systems. Making the best possible decisions here means striking a balance between operating costs and the... Multistage Stochastic Programming: A Scenario Tree Based Approach to Planning under Uncertainty, In Sucar, L. E., Morales, E. F., and Hoey, J
Strategic planning for disaster recovery with stochastic last mile distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Russell Whitford; Van Hentenryck, Pascal; Coffrin, Carleton
2010-01-01
This paper considers the single commodity allocation problem (SCAP) for disaster recovery, a fundamental problem faced by all populated areas. SCAPs are complex stochastic optimization problems that combine resource allocation, warehouse routing, and parallel fleet routing. Moreover, these problems must be solved under tight runtime constraints to be practical in real-world disaster situations. This paper formalizes the specification of SCAPs and introduces a novel multi-stage hybrid-optimization algorithm that utilizes the strengths of mixed integer programming, constraint programming, and large neighborhood search. The algorithm was validated on hurricane disaster scenarios generated by Los Alamos National Laboratory using state-of-the-art disaster simulation tools and is deployed to aid federal organizations in the US.
Solving multistage stochastic programming models of portfolio selection with outstanding liabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edirisinghe, C.
1994-12-31
Models for portfolio selection in the presence of an outstanding liability have received significant attention, for example, models for pricing options. The problem may be described briefly as follows: given a set of risky securities (and a riskless security such as a bond), and given a set of cash flows, i.e., an outstanding liability, to be met at some future date, determine an initial portfolio and a dynamic trading strategy for the underlying securities such that the initial cost of the portfolio is within a prescribed wealth level and the expected cash surplus arising from trading is maximized. While the trading strategy should be self-financing, there may also be other restrictions such as leverage and short-sale constraints. Usually the treatment is limited to binomial evolution of uncertainty (of stock price), with possible extensions for developing computational bounds for multinomial generalizations. Posing these as stochastic programming models of decision making, we investigate alternative efficient solution procedures under continuous evolution of uncertainty for discrete-time economies. We point out an important moment problem arising in the portfolio selection problem, the solution of (or bounds on) which provides the basis for developing efficient computational algorithms. While the underlying stochastic program may be computationally tedious even for a modest number of trading opportunities (i.e., time periods), the derived algorithms may be used to solve problems whose sizes are beyond those considered within stochastic optimization.
Linearly Adjustable International Portfolios
NASA Astrophysics Data System (ADS)
Fonseca, R. J.; Kuhn, D.; Rustem, B.
2010-09-01
We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario-tree-based solutions, however, can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.
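The conservatism of linear decision rules can be seen in a minimal sketch with invented scenario data: fully adjustable recourse can track any scenario-dependent target exactly, while a rule y(r) = a + b*r pays a least-squares residual.

```python
# Scenarios of an uncertain parameter r with equal probabilities, and an
# ideal recourse target y*(r) = r**2 (all toy data, not the paper's model).
rs = [-1.0, 0.0, 1.0]
p = 1.0 / len(rs)

# Fully adjustable recourse: choose y per scenario, so E[(y - r^2)^2] = 0.
adjustable_cost = 0.0

# Linear decision rule y(r) = a + b*r: best a, b by least squares
# (normal equations written out with the standard library only).
n = len(rs)
mean_r = sum(rs) / n
mean_t = sum(r * r for r in rs) / n
cov = sum((r - mean_r) * (r * r - mean_t) for r in rs) / n
var = sum((r - mean_r) ** 2 for r in rs) / n
b = cov / var
a = mean_t - b * mean_r
ldr_cost = sum(p * (a + b * r - r * r) ** 2 for r in rs)
print(a, b, ldr_cost)
```

The gap between ldr_cost and adjustable_cost is the price paid for tractability: the linear rule cannot reproduce the quadratic target, but the restricted problem stays a small deterministic program regardless of the number of stages.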
Cutting planes for the multistage stochastic unit commitment problem
Jiang, Ruiwei; Guan, Yongpei; Watson, Jean -Paul
2016-04-20
As renewable energy penetration rates continue to increase in power systems worldwide, new challenges arise for system operators in both regulated and deregulated electricity markets to solve the security-constrained coal-fired unit commitment problem with intermittent generation (due to renewables) and uncertain load, in order to ensure system reliability and maintain cost effectiveness. In this paper, we study a security-constrained coal-fired stochastic unit commitment model, which we use to enhance the reliability unit commitment process for day-ahead power system operations. In our approach, we first develop a deterministic equivalent formulation for the problem, which leads to a large-scale mixed-integer linear program. Then, we verify that the turn on/off inequalities provide a convex hull representation of the minimum-up/down time polytope under the stochastic setting. Next, we develop several families of strong valid inequalities mainly through lifting schemes. In particular, by exploring sequence-independent lifting and subadditive approximation lifting properties for the lifting schemes, we obtain strong valid inequalities for the ramping and general load balance polytopes. Lastly, branch-and-cut algorithms are developed to employ these valid inequalities as cutting planes to solve the problem. Our computational results verify the effectiveness of the proposed approach.
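The turn on/off inequalities mentioned above can be checked by brute force on a tiny horizon. This sketch uses illustrative parameters, not the paper's full stochastic formulation, and verifies that at binary points the inequalities coincide with minimum-up-time feasibility.

```python
from itertools import product

T, L = 5, 3   # horizon length and minimum-up time (illustrative values)

def feasible(x):
    """Unit assumed off before t=0; each startup must keep the unit on
    for L consecutive periods (or through the end of the horizon)."""
    prev = 0
    for t in range(T):
        if x[t] == 1 and prev == 0:                      # startup at t
            if any(x[tau] == 0 for tau in range(t, min(t + L, T))):
                return False
        prev = x[t]
    return True

def satisfies_turn_on_ineqs(x):
    """Turn-on inequalities: x_t - x_{t-1} <= x_tau for tau in [t, t+L-1]."""
    ext = [0] + list(x)                                  # x_{-1} = 0
    for t in range(T):
        for tau in range(t, min(t + L, T)):
            if ext[t + 1] - ext[t] > x[tau]:
                return False
    return True

schedules = list(product([0, 1], repeat=T))
agree = all(feasible(x) == satisfies_turn_on_ineqs(x) for x in schedules)
n_feasible = sum(feasible(x) for x in schedules)
print(agree, n_feasible, len(schedules))
```

The enumeration confirms the two characterizations agree on every binary schedule, which is the integer-point version of the convex-hull property the paper establishes.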
Economic and environmental costs of regulatory uncertainty for coal-fired power plants.
Patiño-Echeverri, Dalia; Fischbeck, Paul; Kriegler, Elmar
2009-02-01
Uncertainty about the extent and timing of CO2 emissions regulations for the electricity-generating sector exacerbates the difficulty of selecting investment strategies for retrofitting or alternatively replacing existent coal-fired power plants. This may result in inefficient investments imposing economic and environmental costs to society. In this paper, we construct a multiperiod decision model with an embedded multistage stochastic dynamic program minimizing the expected total costs of plant operation, installations, and pollution allowances. We use the model to forecast optimal sequential investment decisions of a power plant operator with and without uncertainty about future CO2 allowance prices. The comparison of the two cases demonstrates that uncertainty on future CO2 emissions regulations might cause significant economic costs and higher air emissions.
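The effect of allowance-price uncertainty on retrofit timing can be caricatured by a two-period example with invented numbers. A deterministic model built on the expected price would retrofit immediately, while the stochastic recourse policy waits and reacts to the realized price at a lower expected cost.

```python
# Two-period retrofit decision under uncertain CO2 allowance prices
# (all numbers are illustrative, not taken from the paper).
C = 50.0                                    # one-time retrofit cost; assume it
                                            # eliminates emission charges
p1 = 20.0                                   # known period-1 allowance bill
p2_scenarios = [(0.5, 5.0), (0.5, 60.0)]    # (probability, period-2 bill)

# Policy "now": retrofit immediately, no emission charges in either period.
cost_now = C
# Policy "never": pay the allowance bill in both periods.
cost_never = p1 + sum(p * q for p, q in p2_scenarios)
# Policy "wait": pay period 1, then retrofit in period 2 only if it is
# cheaper than that scenario's allowance bill (the recourse decision).
cost_wait = p1 + sum(p * min(C, q) for p, q in p2_scenarios)

costs = {"now": cost_now, "never": cost_never, "wait": cost_wait}
best = min(costs, key=costs.get)
print(best, costs[best])
```

At the expected period-2 bill of 32.5, waiting looks worse than retrofitting now (52.5 vs 50), so the deterministic model locks in the retrofit; the stochastic model keeps the option open and does better on average.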
A stochastic equilibrium model for the North American natural gas market
NASA Astrophysics Data System (ADS)
Zhuang, Jifang
This dissertation is an endeavor in the field of energy modeling for the North American natural gas market using a mixed complementarity formulation combined with stochastic programming. The genesis of the stochastic equilibrium model presented in this dissertation is the deterministic market equilibrium model developed in [Gabriel, Kiet and Zhuang, 2005]. Based on some improvements that we made to this model, including proving new existence and uniqueness results, we present a multistage stochastic equilibrium model with uncertain demand for the deregulated North American natural gas market using the recourse method of stochastic programming. The market participants considered by the model are pipeline operators, producers, storage operators, peak gas operators, marketers, and consumers. Pipeline operators are described with regulated tariffs but also involve "congestion pricing" as a mechanism to allocate scarce pipeline capacity. Marketers are modeled as Nash-Cournot players in sales to the residential and commercial sectors but price-takers in all other aspects. Consumers are represented by demand functions in the marketers' problem. Producers, storage operators, and peak gas operators are price-takers consistent with perfect competition. Also, two types of natural gas markets are included: the long-term and spot markets. Market participants make both high-level planning decisions (first-stage decisions) in the long-term market and daily operational decisions (recourse decisions) in the spot market, subject to their engineering, resource, and political constraints as well as market constraints on both the demand and the supply side, so as to simultaneously maximize their expected profits given others' decisions. The model is shown to be an instance of a mixed complementarity problem (MiCP) under minor conditions.
The MiCP formulation is derived from applying the Karush-Kuhn-Tucker optimality conditions of the optimization problems faced by the market participants. Some theoretical results regarding the market prices in both markets are shown. We also illustrate the model on a representative, sample network of two production nodes, two consumption nodes with discretely distributed end-user demand and three seasons using four cases.
Sustainable infrastructure system modeling under uncertainties and dynamics
NASA Astrophysics Data System (ADS)
Huang, Yongxi
Infrastructure systems support human activities in transportation, communication, water use, and energy supply. The dissertation research focuses on critical transportation infrastructure and renewable energy infrastructure systems. The goal of the research efforts is to improve the sustainability of the infrastructure systems, with an emphasis on economic viability, system reliability and robustness, and environmental impacts. The research efforts in critical transportation infrastructure concern the development of strategic robust resource allocation strategies in an uncertain decision-making environment, considering both uncertain service availability and accessibility. The study explores the performance of different modeling approaches (i.e., deterministic, stochastic programming, and robust optimization) to reflect various risk preferences. The models are evaluated in a case study of Singapore, and the results demonstrate that stochastic modeling methods in general offer more robust allocation strategies than deterministic approaches in achieving high coverage of critical infrastructures under risk. This general modeling framework can be applied to other emergency service applications, such as locating medical emergency services. The research on renewable energy infrastructure systems aims to answer the following key questions: (1) is renewable energy an economically viable solution? (2) what are the energy distribution and infrastructure system requirements to support such energy supply systems in hedging against potential risks? and (3) how does the energy system adapt to the dynamics of evolving technology and societal needs in the transition to a renewable-energy-based society? The study of Renewable Energy System Planning with Risk Management incorporates risk management into the strategic planning of the supply chains.
The physical design and operational management are integrated as a whole in seeking mitigations against the potential risks caused by feedstock seasonality and demand uncertainty. Facility spatiality, time variation of feedstock yields, and demand uncertainty are integrated into a two-stage stochastic programming (SP) framework. In the study of Transitional Energy System Modeling under Uncertainty, a multistage stochastic dynamic programming model is established to optimize the process of building and operating fuel production facilities during the transition. Dynamics due to evolving technologies and societal changes and uncertainty due to demand fluctuations are the major issues to be addressed.
Design of Multistage Axial-Flow Compressors
NASA Technical Reports Server (NTRS)
Crouse, J. E.; Gorrell, W. T.
1983-01-01
Program developed for computing aerodynamic design of multistage axial-flow compressor and associated blading geometry input for internal flow analysis. Aerodynamic solution gives velocity diagrams on selected streamlines of revolution at blade row edges. Program written in FORTRAN IV.
Optimizing Integrated Terminal Airspace Operations Under Uncertainty
NASA Technical Reports Server (NTRS)
Bosson, Christabelle; Xue, Min; Zelinski, Shannon
2014-01-01
In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.
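Sample average approximation with statistical optimality bounds, as used above, can be sketched on a newsvendor toy with invented costs and demand (not the airspace model): averaging the optimal values of independent SAA problems estimates a lower bound on the true optimum, and evaluating one candidate solution on a fresh sample estimates an upper bound.

```python
import random

# Toy newsvendor: order q, demand D ~ Uniform{1,...,100};
# cost = h*max(q-D,0) + b*max(D-q,0). The optimal q sits near the
# b/(b+h) = 0.8 demand quantile (all numbers illustrative).
h, b = 1.0, 4.0
random.seed(0)

def cost(q, d):
    return h * max(q - d, 0) + b * max(d - q, 0)

def saa_solve(n):
    """Solve one sample-average problem: its optimum is a sample quantile."""
    sample = [random.randint(1, 100) for _ in range(n)]
    q = sorted(sample)[int(0.8 * n) - 1]
    return q, sum(cost(q, d) for d in sample) / n

# Averaging optimal values of M independent SAA replications estimates a
# statistical LOWER bound on the true optimum (for minimization).
M, N = 20, 200
reps = [saa_solve(N) for _ in range(M)]
lower = sum(v for _, v in reps) / M
# Evaluating one candidate q on a large fresh sample estimates an UPPER bound.
q_hat = reps[0][0]
big = [random.randint(1, 100) for _ in range(10000)]
upper = sum(cost(q_hat, d) for d in big) / len(big)
print(q_hat, lower, upper)
```

The gap between the two estimates bounds the optimality error of the candidate q_hat, which is exactly the role the statistical bounds play in the scheduling paper.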
SSAIS: A Program to Assess Adverse Impact in Multistage Selection Decisions
ERIC Educational Resources Information Center
De Corte, Wilfried
2004-01-01
The article describes a Windows program to estimate the expected value and sampling distribution function of the adverse impact ratio for general multistage selections. The results of the program can also be used to predict the risk that a future selection decision will result in an outcome that reflects the presence of adverse impact. The method…
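A rough Monte Carlo analogue of such an adverse-impact computation can be sketched as follows. The score distributions, stage cutoffs, and group labels are hypothetical, and this is a simulation sketch rather than De Corte's analytic estimator.

```python
import random

# Two-stage top-down selection over a pooled applicant population.
random.seed(1)
N = 4000                    # applicants per group (hypothetical)
cut1, cut2 = 0.5, 0.4       # fraction of the pool retained at each stage

# (group, score) pairs; group mean difference of 0.5 SD is made up.
pool = ([("A", random.gauss(0.5, 1.0)) for _ in range(N)]
        + [("B", random.gauss(0.0, 1.0)) for _ in range(N)])

def top_down(applicants, frac, noise_sd):
    """Retain the top `frac` of the pool on score plus fresh stage noise."""
    rescored = [(g, s + random.gauss(0, noise_sd)) for g, s in applicants]
    rescored.sort(key=lambda gs: gs[1], reverse=True)
    return rescored[: int(frac * len(rescored))]

survivors = top_down(top_down(pool, cut1, 0.0), cut2, 1.0)
rate_A = sum(g == "A" for g, _ in survivors) / N
rate_B = sum(g == "B" for g, _ in survivors) / N
ai_ratio = rate_B / rate_A              # adverse impact ratio estimate
print(rate_A, rate_B, ai_ratio)
```

Repeating the simulation many times would give the sampling distribution of the ratio, which is what the program described above estimates for general multistage selections.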
Planning and processing multistage samples with a computer program, MUST.
John W. Hazard; Larry E. Stewart
1974-01-01
A computer program was written to handle multistage sampling designs in insect populations. It is, however, general enough to be used for any population where the number of stages does not exceed three. The program handles three types of sampling situations, all of which assume equal probability sampling. Option 1 takes estimates of sample variances, costs, and either...
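The expansion estimator behind such multistage designs can be demonstrated on a tiny two-stage population with made-up values: enumerating every equal-probability sample shows the estimator is unbiased for the population total.

```python
from itertools import combinations, product
from statistics import mean

# Tiny population: 3 primary units (PSUs), each with 2 secondary units.
psus = [[1, 2], [3, 4], [5, 6]]
true_total = sum(map(sum, psus))          # 21
N, n, m = len(psus), 2, 1                 # select 2 PSUs, 1 secondary in each

estimates = []
for chosen in combinations(range(N), n):                  # every PSU sample
    for picks in product(*(range(len(psus[i])) for i in chosen)):
        # expand each selected secondary by M_i/m, each PSU by N/n
        est = (N / n) * sum(
            (len(psus[i]) / m) * psus[i][k] for i, k in zip(chosen, picks)
        )
        estimates.append(est)

print(len(estimates), mean(estimates))
```

With equal-probability selection at both stages, the average of the estimator over all 12 possible samples equals the true total, which is the unbiasedness property such programs rely on when combining variance components across stages.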
ERIC Educational Resources Information Center
Torcasso, Gina; Hilt, Lori M.
2017-01-01
Background: Suicide is a leading cause of death among youth. Suicide screening programs aim to identify mental health issues and prevent death by suicide. Objective: The present study evaluated outcomes of a multi-stage screening program implemented over 3 school years in a moderately-sized Midwestern high school. Methods: One hundred ninety-three…
Modeling sustainability in renewable energy supply chain systems
NASA Astrophysics Data System (ADS)
Xie, Fei
This dissertation aims at modeling the sustainability of renewable fuel supply chain systems against emerging challenges. In particular, the dissertation focuses on biofuel supply chain system design, developing advanced modeling frameworks and corresponding solution methods to tackle challenges in sustaining biofuel supply chain systems. These challenges include: (1) integrating "environmental thinking" into long-term biofuel supply chain planning; (2) adopting multimodal transportation to mitigate seasonality in biofuel supply chain operations; (3) providing strategies for hedging against uncertainty in conversion technology; and (4) developing methodologies for long-term sequential planning of the biofuel supply chain under uncertainties. All models are mixed-integer programs, employing multi-objective programming and two-stage/multistage stochastic programming methods. In particular, for long-term sequential planning under uncertainties, to reduce the computational challenges due to the exponential expansion of the scenario tree, I also developed the ND-Max method, which is more efficient than CPLEX and the standard nested decomposition method. Through the result analysis of four independent studies, it is found that the proposed modeling frameworks can effectively improve economic performance, enhance environmental benefits, and reduce risks due to system uncertainties for biofuel supply chain systems.
Multistage Planetary Power Transmissions
NASA Technical Reports Server (NTRS)
Hadden, G. B.; Dyba, G. J.; Ragen, M. A.; Kleckner, R. J.; Sheynin, L.
1986-01-01
PLANETSYS simulates thermomechanical performance of multistage planetary power transmissions. Two versions of code developed, SKF version and NASA version. Major function of program: compute performance characteristics of planet bearing for any of six kinematic inversions. PLANETSYS solves heat-balance equations for either steady-state or transient thermal conditions, and produces temperature maps for mechanical system.
Multistage adsorption of diffusing macromolecules and viruses
NASA Astrophysics Data System (ADS)
Chou, Tom; D'Orsogna, Maria R.
2007-09-01
We derive the equations that describe adsorption of diffusing particles onto a surface followed by additional surface kinetic steps before being transported across the interface. Multistage surface kinetics occurs during membrane protein insertion, cell signaling, and the infection of cells by virus particles. For example, viral entry into healthy cells is possible only after a series of receptor and coreceptor binding events occurs at the cellular surface. We couple the diffusion of particles in the bulk phase with the multistage surface kinetics and derive an effective, integrodifferential boundary condition that contains a memory kernel embodying the delay induced by the surface reactions. This boundary condition takes the form of a singular perturbation problem in the limit where particle-surface interactions are short ranged. Moreover, depending on the surface kinetics, the delay kernel induces a nonmonotonic, transient replenishment of the bulk particle concentration near the interface. The approach generalizes that of Ward and Tordai [J. Chem. Phys. 14, 453 (1946)] and Diamant and Andelman [Colloids Surf. A 183-185, 259 (2001)] to include surface kinetics, giving rise to qualitatively new behaviors. Our analysis also suggests a simple scheme by which stochastic surface reactions may be coupled to deterministic bulk diffusion.
Billard, L; Dayananda, P W A
2014-03-01
Stochastic population processes have received a lot of attention over the years. One approach focuses on compartmental modeling. Billard and Dayananda (2012) developed one such multi-stage model for epidemic processes in which the possibility that individuals can die at any stage from non-disease related causes was also included. This extra feature is of particular interest to the insurance and health-care industries among others especially when the epidemic is HIV/AIDS. Rather than working with numbers of individuals in each stage, they obtained distributional results dealing with the waiting time any one individual spent in each stage given the initial stage. In this work, the impact of the HIV/AIDS epidemic on several functions relevant to these industries (such as adjustments to premiums) is investigated. Theoretical results are derived, followed by a numerical study.
Mars integrated transportation system multistage Mars mission
NASA Technical Reports Server (NTRS)
1991-01-01
In accordance with the objective of the Mars Integrated Transport System (MITS) program, the Multistage Mars Mission (MSMM) design team developed a profile for a manned mission to Mars. The purpose of the multistage mission is to send a crew of five astronauts to the Martian surface by the year 2019. The mission continues man's eternal quest for exploration of new frontiers. This mission has a scheduled duration of 426 days that includes experimentation en route as well as surface exploration and experimentation. The MSMM is also designed as a foundation for a continuing program leading to the colonization of the planet Mars.
Pelosse, Perrine; Kribs-Zaleta, Christopher M; Ginoux, Marine; Rabinovich, Jorge E; Gourbière, Sébastien; Menu, Frédéric
2013-01-01
Insects are known to display strategies that spread the risk of encountering unfavorable conditions, thereby decreasing the extinction probability of genetic lineages in unpredictable environments. To what extent these strategies influence the epidemiology and evolution of vector-borne diseases in stochastic environments is largely unknown. In triatomines, the vectors of the parasite Trypanosoma cruzi, the etiological agent of Chagas' disease, juvenile development time varies between individuals and such variation most likely decreases the extinction risk of vector populations in stochastic environments. We developed a simplified multi-stage vector-borne SI epidemiological model to investigate how vector risk-spreading strategies and environmental stochasticity influence the prevalence and evolution of a parasite. This model is based on available knowledge on triatomine biodemography, but its conceptual outcomes apply, to a certain extent, to other vector-borne diseases. Model comparisons between deterministic and stochastic settings led to the conclusion that environmental stochasticity, vector risk-spreading strategies (in particular an increase in the length and variability of development time) and their interaction have drastic consequences on vector population dynamics, disease prevalence, and the relative short-term evolution of parasite virulence. Our work shows that stochastic environments and associated risk-spreading strategies can increase the prevalence of vector-borne diseases and favor the invasion of more virulent parasite strains on relatively short evolutionary timescales. This study raises new questions and challenges in a context of increasingly unpredictable environmental variations as a result of global climate change and human interventions such as habitat destruction or vector control.
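As a loose illustration of the multi-stage vector idea only (this is not the published SI model, and all rates are invented), a discrete-time stochastic simulation with random juvenile development times might look like:

```python
import random

random.seed(1)

def simulate(steps=200, n0=100, p_mature=0.1, p_infect=0.05, p_death=0.02):
    # juveniles mature after a geometrically distributed development time,
    # adults acquire infection at a fixed per-step probability
    juveniles, susceptible, infected = n0, 0, 0
    for _ in range(steps):
        matured = sum(random.random() < p_mature for _ in range(juveniles))
        juveniles -= matured
        susceptible += matured
        new_inf = sum(random.random() < p_infect for _ in range(susceptible))
        susceptible -= new_inf
        infected += new_inf
        # non-disease mortality strikes infected adults
        infected -= sum(random.random() < p_death for _ in range(infected))
    total = susceptible + infected
    return infected / total if total else 0.0

prevalence = simulate()
```

Replacing the fixed maturation probability with a time-varying (environmentally stochastic) one is the natural next step toward the risk-spreading question studied above.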
ERIC Educational Resources Information Center
Arango, Lisa Lewis; Kurtines, William M.; Montgomery, Marilyn J.; Ritchie, Rachel
2008-01-01
The study reported in this article, a Multi-Stage Longitudinal Comparative Design Stage II evaluation conducted as a planned preliminary efficacy evaluation (psychometric evaluation of measures, short-term controlled outcome studies, etc.) of the Changing Lives Program (CLP), provided evidence for the reliability and validity of the qualitative…
NASA Astrophysics Data System (ADS)
Wang, Meng; Zhang, Huaiqiang; Zhang, Kan
2017-10-01
This work addresses weapons portfolio planning, in which short-term equipment usage demand and long-term development demand must be planned jointly, together with the practical problem that equipment capability demand is defined only fuzzily. Demand is expressed as an interval number or a discrete number. Using the epoch-era analysis method, a long planning cycle is broken into several short planning cycles with different demand values. A multi-stage stochastic programming model is built that maximizes demand satisfaction over the long planning cycle under constraints on budget, equipment development time, and short-planning-cycle demand. A scenario tree is used to discretize the interval values of demand, and a genetic algorithm is designed to solve the problem. Finally, a case study demonstrates the feasibility and effectiveness of the proposed model.
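A minimal sketch of the scenario-based selection problem described above (toy numbers throughout, solved by enumeration rather than the paper's genetic algorithm):

```python
from itertools import product

# choose which of three equipment programs to fund in stage 1, then observe
# one of two demand scenarios and measure how much demand the funded
# capability covers (all costs, capabilities, and demands are invented)
cost = [4, 3, 5]
capability = [6, 4, 7]
budget = 8
scenarios = [(0.5, 9), (0.5, 12)]   # (probability, demand)

best_value, best_plan = -1.0, None
for plan in product([0, 1], repeat=3):
    if sum(c * x for c, x in zip(cost, plan)) > budget:
        continue   # infeasible under the budget constraint
    cap = sum(v * x for v, x in zip(capability, plan))
    value = sum(p * min(cap, d) for p, d in scenarios)
    if value > best_value:
        best_value, best_plan = value, plan
```

The same structure, with decisions repeated at each node of a scenario tree, gives the multi-stage model; enumeration is then replaced by a solver or heuristic such as the genetic algorithm used above.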
SOX OUT ON A LIMB (LIMESTONE INJECTION MULTISTAGE BURNER)
The paper describes the most recent results from the Limestone Injection Multistage Burner (LIMB) program, covering results from the wall-fired demonstration. Tests were conducted to determine the efficacy of commercial calcium hydroxide (Ca(OH)2) and of calcium-lignosulfonate-mo...
DEVELOPMENTS IN LIMB (LIMESTONE INJECTION MULTISTAGE BURNER) TECHNOLOGY
The paper describes the most recent results from the Limestone Injection Multistage Burner (LIMB) program, results from the wall-fired demonstration. Tests were conducted to determine the efficacy of commercial calcium hydroxide--Ca(OH)2--supplied by Marblehead Lime Co. and of ca...
Automated Simultaneous Assembly for Multistage Testing
ERIC Educational Resources Information Center
Breithaupt, Krista; Ariel, Adelaide; Veldkamp, Bernard P.
2005-01-01
This article offers some solutions used in the assembly of the computerized Uniform Certified Public Accountancy (CPA) licensing examination as practical alternatives for operational programs producing large numbers of forms. The Uniform CPA examination was offered as an adaptive multistage test (MST) beginning in April of 2004. Examples of…
Test Information Targeting Strategies for Adaptive Multistage Testing Designs.
ERIC Educational Resources Information Center
Luecht, Richard M.; Burgin, William
Adaptive multistage testlet (MST) designs appear to be gaining popularity for many large-scale computer-based testing programs. These adaptive MST designs use a modularized configuration of preconstructed testlets and embedded score-routing schemes to prepackage different forms of an adaptive test. The conditional information targeting (CIT)…
NASA Astrophysics Data System (ADS)
Sakellariou, J. S.; Fassois, S. D.
2017-01-01
The identification of a single global model for a stochastic dynamical system operating under various conditions is considered. Each operating condition is assumed to have a pseudo-static effect on the dynamics and be characterized by a single measurable scheduling variable. Identification is accomplished within a recently introduced Functionally Pooled (FP) framework, which offers a number of advantages over Linear Parameter Varying (LPV) identification techniques. The focus of the work is on the extension of the framework to include the important FP-ARMAX model case. Compared to their simpler FP-ARX counterparts, FP-ARMAX models are much more general and offer improved flexibility in describing various types of stochastic noise, but at the same time lead to a more complicated, non-quadratic, estimation problem. Prediction Error (PE), Maximum Likelihood (ML), and multi-stage estimation methods are postulated, and the PE estimator optimality, in terms of consistency and asymptotic efficiency, is analytically established. The postulated estimators are numerically assessed via Monte Carlo experiments, while the effectiveness of the approach and its superiority over its FP-ARX counterpart are demonstrated via an application case study pertaining to simulated railway vehicle suspension dynamics under various mass loading conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glueckstern, P.; Wilson, J.V.; Reed, S.A.
1976-06-01
Design and cost modifications were made to ORNL's Computer Programs MSF-21 and VTE-21 originally developed for the rapid calculation and design optimization of multistage flash (MSF) and multieffect vertical tube evaporator (VTE) desalination plants. The modifications include additional design options to make possible the evaluation of desalting plants based on current technology (the original programs were based on conceptual designs applying advanced and not yet proven technological developments and design features) and new materials and equipment costs updated to mid-1975.
Modeling of Unsteady Three-dimensional Flows in Multistage Machines
NASA Technical Reports Server (NTRS)
Hall, Kenneth C.; Pratt, Edmund T., Jr.; Kurkov, Anatole (Technical Monitor)
2003-01-01
Despite many years of development, the accurate and reliable prediction of unsteady aerodynamic forces acting on turbomachinery blades remains less than satisfactory, especially when viewed next to the great success investigators have had in predicting steady flows. Hall and Silkowski (1997) have proposed that one of the main reasons for the discrepancy between theory and experiment and/or industrial experience is that many of the current unsteady aerodynamic theories model a single blade row in an infinitely long duct, ignoring potentially important multistage effects. However, unsteady flows are made up of acoustic, vortical, and entropic waves. These waves provide a mechanism for the rotors and stators of multistage machines to communicate with one another. In other words, wave behavior makes unsteady flows fundamentally a multistage (and three-dimensional) phenomenon. In this research program, we have as goals (1) the development of computationally efficient computer models of the unsteady aerodynamic response of blade rows embedded in a multistage machine (these models will ultimately be capable of analyzing three-dimensional viscous transonic flows), and (2) the use of these computer codes to study a number of important multistage phenomena.
Partial ASL extensions for stochastic programming.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gay, David
2010-03-31
Partially completed extensions to the AMPL/solver interface library (ASL) for stochastic programming, supporting modeling and experimentation with stochastic recourse problems. This software is not primarily for military applications.
Alex, Rani; Kunniyoor Cheemani, Raghavan; Thomas, Naicy
2013-11-01
A stochastic frontier production function was employed to measure technical efficiency and its determinants in smallholder Malabari goat production units in Kerala, India. Data were obtained from 100 goat farmers in northern Kerala, selected using multistage random sampling. The parameters of the stochastic frontier production function were estimated using the maximum likelihood method. Cost and return analysis showed that the major expenditure was feed and fodder, and veterinary expenses were secondary. The chief returns were the sale of live animals, milk and manure. Individual farm technical efficiency ranged from 0.34 to 0.97 with a mean of 0.88. The study found herd size (number of animal units) and centre (locality of farm) significantly affected technical efficiency, but sex of farmer, education, land size and family size did not. Technical efficiency decreased as herd size increased; half the units with five or more adult animals had technical efficiency below 60 %.
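A hedged sketch of the frontier idea on synthetic data; corrected OLS (COLS) is used here in place of the paper's maximum likelihood estimator, and all coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(1, 3, n)                 # log input (e.g., herd size)
u = np.abs(rng.normal(0, 0.2, n))        # one-sided inefficiency term
y = 1.0 + 0.8 * x - u + rng.normal(0, 0.05, n)   # log output with noise

# fit a log-linear production function by OLS, shift it up to envelop the
# data, and read technical efficiency off each unit's distance to the frontier
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
te = np.exp(resid - resid.max())         # COLS technical efficiency in (0, 1]

mean_te = te.mean()
```

The best-practice unit sits on the frontier with efficiency 1; all others fall below it, mirroring the 0.34 to 0.97 range reported above.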
NASA Technical Reports Server (NTRS)
Wisler, D. C.
1980-01-01
The objective of the program is to develop rear stage blading designs that have lower losses in their endwall boundary layer regions. The overall technical approach in this efficiency improvement program utilized General Electric's Low Speed Research Compressor as the principal investigative tool. Tests were conducted in two ways: using four identical stages of blading so that test data would be obtained in a true multistage environment and using a single stage of blading so that comparison with the multistage test results could be made.
FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.
Li, Pu; Chen, Bing
2011-04-01
Although many studies on municipal solid waste (MSW) management have been conducted under coexisting fuzzy, stochastic, and interval uncertainties, conventional linear programming approaches that integrate the fuzzy method with the other two have been inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming to support municipal solid waste management. Nguyen's method was used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs, by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method can also improve upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, with advantageous capabilities that are easily achieved with fewer constraints and significantly reduced computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions had been generated. The solution can help quantify the relationship between the change of system cost and the uncertainties, which could support further analysis of trade-offs between the waste management cost and the system failure risk. Copyright © 2010 Elsevier Ltd. All rights reserved.
Stochastic Semidefinite Programming: Applications and Algorithms
2012-03-03
Papers published in non-peer-reviewed journals: Baha M. Alzalg and K. A. Ariyawansa, Stochastic… symmetric programming over integers. International Conference on Scientific Computing, Las Vegas, Nevada, July 18-21, 2011. Baha M. Alzalg, On recent… Proceeding publications (other than abstracts): Baha M. Alzalg, K. A. Ariyawansa, Stochastic mixed integer second-order cone programming.
Biologically based multistage modeling of radiation effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
William Hazelton; Suresh Moolgavkar; E. Georg Luebeck
2005-08-30
This past year we have made substantial progress in modeling the contribution of homeostatic regulation to low-dose radiation effects and carcinogenesis. We have worked to refine and apply our multistage carcinogenesis models to explicitly incorporate cell cycle states, simple and complex damage, checkpoint delay, slow and fast repair, differentiation, and apoptosis to study the effects of low-dose ionizing radiation in mouse intestinal crypts, as well as in other tissues. We have one paper accepted for publication in ''Advances in Space Research'', and another manuscript in preparation describing this work. I also wrote a chapter describing our combined cell-cycle and multistage carcinogenesis model that will be published in a book on stochastic carcinogenesis models edited by Wei-Yuan Tan. In addition, we organized and held a workshop on ''Biologically Based Modeling of Human Health Effects of Low dose Ionizing Radiation'', July 28-29, 2005 at Fred Hutchinson Cancer Research Center in Seattle, Washington. We had over 20 participants, including Mary Helen Barcellos-Hoff as keynote speaker, talks by most of the low-dose modelers in the DOE low-dose program, experimentalists including Les Redpath (and Mary Helen), Noelle Metting from DOE, and Tony Brooks. It appears that homeostatic regulation may be central to understanding low-dose radiation phenomena. The primary effects of ionizing radiation (IR) are cell killing, delayed cell cycling, and induction of mutations. However, homeostatic regulation causes cells that are killed or damaged by IR to eventually be replaced. Cells with an initiating mutation may have a replacement advantage, leading to clonal expansion of these initiated cells. Thus we have focused particularly on modeling effects that disturb homeostatic regulation as early steps in the carcinogenic process. There are two primary considerations that support our focus on homeostatic regulation.
First, a number of epidemiologic studies using multistage carcinogenesis models that incorporate the ''initiation, promotion, and malignant conversion'' paradigm of carcinogenesis are indicating that promotion of initiated cells is the most important cellular mechanism driving the shape of the age-specific hazard for many types of cancer. Second, we have realized that many of the genes that are modified in early stages of the carcinogenic process contribute to one or more of four general cellular pathways that confer a promotional advantage to cells when these pathways are disrupted.
Particle swarm optimization of ascent trajectories of multistage launch vehicles
NASA Astrophysics Data System (ADS)
Pontani, Mauro
2014-02-01
Multistage launch vehicles are commonly employed to place spacecraft and satellites in their operational orbits. If the rocket characteristics are specified, the optimization of its ascending trajectory consists of determining the optimal control law that leads to maximizing the final mass at orbit injection. The numerical solution of a similar problem is not trivial and has been pursued with different methods, for decades. This paper is concerned with an original approach based on the joint use of swarming theory and the necessary conditions for optimality. The particle swarm optimization technique represents a heuristic population-based optimization method inspired by the natural motion of bird flocks. Each individual (or particle) that composes the swarm corresponds to a solution of the problem and is associated with a position and a velocity vector. The formula for velocity updating is the core of the method and is composed of three terms with stochastic weights. As a result, the population migrates toward different regions of the search space taking advantage of the mechanism of information sharing that affects the overall swarm dynamics. At the end of the process the best particle is selected and corresponds to the optimal solution to the problem of interest. In this work the three-dimensional trajectory of the multistage rocket is assumed to be composed of four arcs: (i) first stage propulsion, (ii) second stage propulsion, (iii) coast arc (after release of the second stage), and (iv) third stage propulsion. The Euler-Lagrange equations and the Pontryagin minimum principle, in conjunction with the Weierstrass-Erdmann corner conditions, are employed to express the thrust angles as functions of the adjoint variables conjugate to the dynamics equations. 
The use of these analytical conditions coming from the calculus of variations leads to obtaining the overall rocket dynamics as a function of seven parameters only, namely the unknown values of the initial state and costate components, the coast duration, and the upper stage thrust duration. In addition, a simple approach is introduced and successfully applied with the purpose of satisfying exactly the path constraint related to the maximum dynamical pressure in the atmospheric phase. The basic version of the swarming technique, which is used in this research, is extremely simple and easy to program. Nevertheless, the algorithm proves to be capable of yielding the optimal rocket trajectory with a very satisfactory numerical accuracy.
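The velocity-update rule at the core of the swarming method (inertia plus cognitive and social terms with stochastic weights) can be sketched on a toy one-dimensional problem; the gains below are conventional choices, not those of the rocket application:

```python
import random

random.seed(0)

def pso(f, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    xs = [random.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]                      # each particle's best known position
    gbest = min(xs, key=f)             # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            # inertia + cognitive + social terms with stochastic weights
            vs[i] = (w * vs[i]
                     + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=f)
    return gbest

best = pso(lambda x: (x - 3.0) ** 2, -10, 10)
```

Information sharing through `gbest` is what migrates the population toward promising regions of the search space.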
NASA Astrophysics Data System (ADS)
Carpentier, Pierre-Luc
In this thesis, we consider the midterm production planning problem (MTPP) of hydroelectricity generation under uncertainty. The aim of this problem is to manage a set of interconnected hydroelectric reservoirs over several months. We are particularly interested in high-dimensional reservoir systems that are operated by large hydroelectricity producers such as Hydro-Quebec. The aim of this thesis is to develop and evaluate different decomposition methods for solving the MTPP under uncertainty. This thesis is divided into three articles. The first article demonstrates the applicability of the progressive hedging algorithm (PHA), a scenario decomposition method, for managing hydroelectric reservoirs with multiannual storage capacity under highly variable operating conditions in Canada. The PHA is a classical stochastic optimization method designed to solve general multistage stochastic programs defined on a scenario tree. This method works by applying an augmented Lagrangian relaxation to the non-anticipativity constraints (NACs) of the stochastic program. At each iteration of the PHA, a sequence of subproblems must be solved. Each subproblem corresponds to a deterministic version of the original stochastic program for a particular scenario in the scenario tree. Linear and quadratic terms must be included in each subproblem's objective function to penalize any violation of the NACs. An important limitation of the PHA is that the number of subproblems to be solved and the number of penalty terms increase exponentially with the branching level in the tree. This phenomenon can make the application of the PHA particularly difficult when the scenario tree covers several tens of time periods. Another important limitation is that the difficulty of the NACs generally increases as the variability of the scenarios increases.
Consequently, applying the PHA becomes particularly challenging in hydroclimatic regions characterized by high seasonal and interannual variability. Both types of limitations can slow the algorithm's convergence and increase the running time per iteration. In this study, we apply the PHA to Hydro-Quebec's power system over a 92-week planning horizon. Hydrologic uncertainty is represented by a scenario tree containing 6 branching stages and 1,635 nodes. The PHA is especially well-suited for this particular application given that the company already possesses a deterministic optimization model for the MTPP. The second article presents a new approach that enhances the performance of the PHA for solving general multistage stochastic programs. The proposed method applies a multiscenario decomposition scheme to the stochastic program. Our heuristic aims to construct an optimal partition of the scenario set by minimizing the number of NACs on which an augmented Lagrangian relaxation must be applied. Each subproblem is a stochastic program defined on a group of scenarios. NACs linking scenarios that share a common group are represented implicitly in the subproblems by using a group-node index system instead of the traditional scenario-time index system. Only the NACs that link different scenario groups are represented explicitly and relaxed. The proposed method is evaluated numerically on a hydroelectric reservoir management problem in Quebec. The results of this experiment show that our method has several advantages. First, it reduces the running time per iteration of the PHA by reducing the number of penalty terms included in the objective function and the amount of duplicated constraints and variables.
Second, it increases the algorithm's convergence rate by reducing the variability of intermediate solutions at duplicated tree nodes. Third, our approach reduces the amount of random-access memory (RAM) required to store the Lagrange multipliers associated with relaxed NACs. The third article presents an extension of the L-shaped method designed specifically for managing hydroelectric reservoir systems with high storage capacity. The proposed method makes it possible to consider a higher branching level than conventional decomposition methods allow. To achieve this, we assume that the stochastic process driving the random parameters loses memory at time period t = tau. Under this assumption, the scenario tree possesses a special symmetrical structure in the second stage (t > tau). We exploit this feature using a two-stage Benders decomposition method in which each decomposition stage covers several consecutive time periods. The proposed method constructs a convex, piecewise-linear recourse function that represents the expected second-stage cost in the master problem. The subproblem and the master problem are stochastic programs defined on scenario subtrees and can be solved directly or with a conventional decomposition method. We test the proposed method on a hydroelectric power system in Quebec over a 104-week planning horizon. (Abstract shortened by UMI.)
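The progressive hedging iteration at the heart of the first article can be sketched on a two-scenario toy problem (invented numbers): each scenario copy of the first-stage decision is pulled toward its own target, while the multipliers and quadratic penalty enforce non-anticipativity.

```python
# two equally likely scenarios want the first-stage decision x at
# different targets; PHA drives the copies to a common value
targets = [2.0, 6.0]
prob = [0.5, 0.5]
rho = 1.0
xs = [0.0, 0.0]          # per-scenario copies of the first-stage decision
ws = [0.0, 0.0]          # multipliers on the non-anticipativity constraints

for _ in range(100):
    xbar = sum(p * x for p, x in zip(prob, xs))
    ws = [w + rho * (x - xbar) for w, x in zip(ws, xs)]
    # scenario subproblem: min (x - t)^2 + w*x + (rho/2)*(x - xbar)^2,
    # solved in closed form from the first-order condition
    xs = [(2 * t - w + rho * xbar) / (2 + rho) for t, w in zip(targets, ws)]

xbar = sum(p * x for p, x in zip(prob, xs))
```

The copies converge to the risk-neutral compromise (here the probability-weighted target), which is exactly the non-anticipative solution of the stochastic program.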
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pradhan, V.R.; Lee, L.K.; Stalzer, R.H.
1995-12-31
The development of Catalytic Multi-Stage Liquefaction (CMSL) at HTI has focused on both bituminous and sub-bituminous coals using laboratory, bench, and PDU scale operations. The crude oil equivalent cost of liquid fuels from coal has been reduced to about $30 per barrel, over 30% below the price estimated for the liquefaction technologies demonstrated in the late seventies and early eighties. Contrary to common belief, the new generation of catalytic multistage coal liquefaction processes is environmentally very benign and can produce clean, premium distillates with a very low (<10 ppm) heteroatom content. HTI staff have been involved in process development over the years and have made significant improvements in CMSL processing of coals. A 24-month program (extended to September 30, 1995) to study novel concepts, using a continuous bench-scale catalytic multi-stage unit (30 kg coal/day), has been underway since December 1992. This program consists of ten bench-scale operations supported by laboratory studies, modeling, process simulation, and economic assessments. The Catalytic Multi-Stage Liquefaction is a continuation of the second-generation yields using a low/high temperature approach. This paper covers work performed between October 1994 and August 1995, especially results obtained from the microautoclave support activities and the bench-scale operations for runs CMSL-08 and CMSL-09, during which coal was coprocessed with the plastic components of municipal solid waste (MSW) such as high-density polyethylene (HDPE), polypropylene (PP), polystyrene (PS), and polyethylene terephthalate (PET).
Stochastic Dynamic Mixed-Integer Programming (SD-MIP)
2015-05-05
stochastic linear programming (SLP) problems. By using a combination of ideas from cutting plane theory of deterministic MIP (especially disjunctive… developed to date. b) As part of this project, we have also developed tools for very large scale stochastic linear programming (SLP). There are several reasons for this. First, SLP models continue to challenge many of the fastest computers to date, and many applications within the DoD (e.g…
Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L
2015-02-01
Uncertainties exist in water resources systems, while traditional two-stage stochastic programming is risk-neutral and compares random variables (e.g., total benefit) to identify the best decisions. To deal with risk issues, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid of interval-parameter programming, the conditional value-at-risk measure, and a general two-stage stochastic programming framework. The method extends the traditional two-stage stochastic programming method by enabling uncertainties presented as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It can not only provide information on the benefits of the allocation plan to the decision makers but also measure the extreme expected loss in the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that the model could help managers generate feasible and balanced risk-aversion allocation plans, and analyze the trade-offs between system stability and economy.
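The risk-neutral versus risk-averse contrast can be sketched with a toy allocation problem (all numbers invented; CVaR is computed here by scenario enumeration rather than the paper's interval formulation):

```python
# a first-stage allocation promise x, random water availability, and a
# recourse penalty for unmet promises; risk aversion enters through CVaR
scenarios = [(0.3, 2.0), (0.4, 5.0), (0.3, 8.0)]   # (prob, available water)
benefit, penalty = 3.0, 5.0
alpha = 0.9                                         # CVaR confidence level

def costs(x):
    return [(p, penalty * max(x - avail, 0) - benefit * min(x, avail))
            for p, avail in scenarios]

def cvar(pc, alpha):
    # expected cost in the worst (1 - alpha) probability tail
    pc = sorted(pc, key=lambda t: -t[1])
    tail, acc = 0.0, 0.0
    for p, c in pc:
        take = min(p, (1 - alpha) - acc)
        if take <= 0:
            break
        tail += take * c
        acc += take
    return tail / (1 - alpha)

grid = [i * 0.5 for i in range(17)]   # candidate allocations 0..8
neutral = min(grid, key=lambda x: sum(p * c for p, c in costs(x)))
averse = min(grid, key=lambda x: cvar(costs(x), alpha))
```

The risk-neutral plan promises more water and accepts occasional penalties, while the CVaR-minimizing plan promises only what the driest scenario can supply, illustrating the stability-versus-economy trade-off noted above.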
An Adaptive Technique for a Redundant-Sensor Navigation System. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chien, T. T.
1972-01-01
An on-line adaptive technique is developed to provide a self-contained redundant-sensor navigation system with a capability to utilize its full potential in reliability and performance. The gyro navigation system is modeled as a Gauss-Markov process, with degradation modes defined as changes in characteristics specified by parameters associated with the model. The adaptive system is formulated as a multistage stochastic process: (1) a detection system, (2) an identification system, and (3) a compensation system. It is shown that the sufficient statistic for the partially observable process in the detection and identification systems is the posterior measure of the state of degradation, conditioned on the measurement history.
Computer programs for axial flow compressor design
NASA Technical Reports Server (NTRS)
Carmody, R. H.; Creveling, H. F.
1969-01-01
Four computer programs examine effects of design parameters and indicate areas for research of multistage axial flow compressors. The programs provide information on velocity diagrams and stage-by-stage performance calculation, radial equilibrium of flow, radial distribution of total pressure, and off-design performance calculation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendriks, R.V.; Nolan, P.S.
1987-01-01
The paper describes and discusses the key design features of the retrofit of EPA's Limestone Injection Multistage Burner (LIMB) system to an operating, wall-fired utility boiler at Ohio Edison's Edgewater Station. It further describes results of the pertinent projects in EPA's LIMB program and shows how these results were used as the basis for the design of the system. The full-scale demonstration is expected to prove the effectiveness and cost of the LIMB concept for use on large-scale utility boilers. The equipment is now being installed at Edgewater, with system start-up scheduled for May 1987.
NASA Technical Reports Server (NTRS)
Wisler, D. C.
1981-01-01
The core compressor exit stage study program develops rear-stage blading designs that have lower losses in their endwall boundary layer regions. The test data and performance results for the best stage configuration, consisting of Rotor-B running with Stator-B, are described. The technical approach in this efficiency-improvement program utilizes a low-speed research compressor. Tests were conducted in two ways: (1) using four identical stages of blading to obtain test data in a true multistage environment and (2) using a single stage of blading for comparison with the multistage test results. The effects of increased rotor tip clearances and circumferential groove casing treatment are evaluated.
Development of Effective Teacher Program: Teamwork Building Program for Thailand's Municipal Schools
ERIC Educational Resources Information Center
Chantathai, Pimpka; Tesaputa, Kowat; Somprach, Kanokorn
2015-01-01
This research aimed to formulate an effective teacher-teamwork program for municipal schools in Thailand. A primary survey of the current situation and problems was conducted to inform the development of potential programs. Samples were randomly selected from municipal schools using a multi-stage sampling method in order to investigate their…
Neural networks for continuous online learning and control.
Choy, Min Chee; Srinivasan, Dipti; Cheu, Ruey Long
2006-11-01
This paper proposes a new hybrid neural network (NN) model that employs a multistage online learning process to solve the distributed control problem with an infinite horizon. Various techniques such as reinforcement learning and evolutionary algorithm are used to design the multistage online learning process. For this paper, the infinite horizon distributed control problem is implemented in the form of real-time distributed traffic signal control for intersections in a large-scale traffic network. The hybrid neural network model is used to design each of the local traffic signal controllers at the respective intersections. As the state of the traffic network changes due to random fluctuation of traffic volumes, the NN-based local controllers will need to adapt to the changing dynamics in order to provide effective traffic signal control and to prevent the traffic network from becoming overcongested. Such a problem is especially challenging if the local controllers are used for an infinite horizon problem where online learning has to take place continuously once the controllers are implemented into the traffic network. A comprehensive simulation model of a section of the Central Business District (CBD) of Singapore has been developed using PARAMICS microscopic simulation program. As the complexity of the simulation increases, results show that the hybrid NN model provides significant improvement in traffic conditions when evaluated against an existing traffic signal control algorithm as well as a new, continuously updated simultaneous perturbation stochastic approximation-based neural network (SPSA-NN). Using the hybrid NN model, the total mean delay of each vehicle has been reduced by 78% and the total mean stoppage time of each vehicle has been reduced by 84% compared to the existing traffic signal control algorithm. This shows the efficacy of the hybrid NN model in solving large-scale traffic signal control problem in a distributed manner. 
Also, it indicates the possibility of using the hybrid NN model for other applications that are similar in nature as the infinite horizon distributed control problem.
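The SPSA technique mentioned above estimates a gradient from only two loss evaluations per step, regardless of the number of parameters. A minimal sketch on a generic loss function (the fixed gains below are illustrative; the paper's SPSA-NN uses its own gain schedule and network parameterization):

```python
import random

def spsa_step(theta, loss, c=0.1, a=0.1):
    """One simultaneous-perturbation step: a gradient estimate from just two
    loss evaluations, regardless of the number of parameters."""
    delta = [random.choice([-1.0, 1.0]) for _ in theta]  # Rademacher perturbations
    plus = [t + c * d for t, d in zip(theta, delta)]
    minus = [t - c * d for t, d in zip(theta, delta)]
    g = (loss(plus) - loss(minus)) / (2 * c)  # common directional estimate
    # per-coordinate gradient estimate is g / delta_i; then a descent update
    return [t - a * g / d for t, d in zip(theta, delta)]

def spsa_minimize(theta, loss, iters=200):
    for _ in range(iters):
        theta = spsa_step(theta, loss)
    return theta
```

Because only two evaluations are needed per iteration, the cost of a step does not grow with the controller's parameter count, which is what makes the method attractive for continuously updated traffic-signal controllers.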
Three-dimensional turbopump flowfield analysis
NASA Technical Reports Server (NTRS)
Sharma, O. P.; Belford, K. A.; Ni, R. H.
1992-01-01
A program was conducted to develop a flow prediction method applicable to rocket turbopumps. The complex nature of a flowfield in turbopumps is described and examples of flowfields are discussed to illustrate that physics based models and analytical calculation procedures based on computational fluid dynamics (CFD) are needed to develop reliable design procedures for turbopumps. A CFD code developed at NASA ARC was used as the base code. The turbulence model and boundary conditions in the base code were modified, respectively, to: (1) compute transitional flows and account for extra rates of strain, e.g., rotation; and (2) compute surface heat transfer coefficients and allow computation through multistage turbomachines. Benchmark quality data from two and three-dimensional cascades were used to verify the code. The predictive capabilities of the present CFD code were demonstrated by computing the flow through a radial impeller and a multistage axial flow turbine. Results of the program indicate that the present code operated in a two-dimensional mode is a cost effective alternative to full three-dimensional calculations, and that it permits realistic predictions of unsteady loadings and losses for multistage machines.
Automated Flight Routing Using Stochastic Dynamic Programming
NASA Technical Reports Server (NTRS)
Ng, Hok K.; Morando, Alex; Grabbe, Shon
2010-01-01
Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm that reroutes flights in the presence of winds, enroute convective weather, and congested airspace based on stochastic dynamic programming. A stochastic disturbance model incorporates capacity uncertainty into the reroute design process. A trajectory-based airspace demand model is employed for calculating current and future airspace demand. The optimal routes minimize the total expected traveling time, weather incursion, and induced congestion costs. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have a smaller deviation probability than their deterministic counterparts when both reroutes have similar total flight distance. The stochastic rerouting algorithm takes into account all convective weather fields at all severity levels, while the deterministic algorithm only accounts for convective weather systems exceeding a specified level of severity. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight time, and both have about 1% of travel time crossing congested enroute sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.
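The backward recursion underlying such a stochastic dynamic program can be illustrated on a toy route where each stage offers a risky direct leg or a safe detour (the costs and congestion penalty below are hypothetical, not the paper's airspace model):

```python
def stochastic_route_cost(stages, p_congested):
    """Backward Bellman recursion over a toy route: at each stage choose the
    direct leg (cost 1, plus a penalty of 4 with probability p_congested) or
    a deterministic detour (cost 2); minimize total expected cost."""
    cost_to_go = 0.0  # value at the destination
    plan = []
    for _ in range(stages):
        direct = 1.0 + p_congested * 4.0 + cost_to_go  # expected cost of risky leg
        detour = 2.0 + cost_to_go
        if direct <= detour:
            plan.append("direct")
            cost_to_go = direct
        else:
            plan.append("detour")
            cost_to_go = detour
    plan.reverse()  # the recursion ran from destination back to origin
    return cost_to_go, plan
```

With a 10% congestion probability the direct leg is preferred at every stage; at 50% the expected penalty makes the detour optimal.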
A chance-constrained stochastic approach to intermodal container routing problems.
Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony
2018-01-01
We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost.
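A chance constraint of the kind described can be checked by Monte Carlo sampling of the stochastic travel times. A sketch under an assumed travel-time sampler (the two-leg route and threshold values below are illustrative, not the paper's sea-rail network):

```python
import random

def satisfies_chance_constraint(sample_time, deadline, alpha, n=20000, seed=1):
    """Monte Carlo check of the chance constraint P(T <= deadline) >= alpha,
    where T is drawn from the route's stochastic travel-time model."""
    rng = random.Random(seed)
    on_time = sum(sample_time(rng) <= deadline for _ in range(n))
    return on_time / n >= alpha
```

With two uniform legs on [1, 3] hours, a 5.5-hour deadline is met with probability about 0.97, so a 0.9 on-time requirement holds, while a 4.0-hour deadline (probability 0.5) does not.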
Scenario Decomposition for 0-1 Stochastic Programs: Improvements and Asynchronous Implementation
Ryan, Kevin; Rajan, Deepak; Ahmed, Shabbir
2016-05-01
Our recently proposed scenario decomposition algorithm for stochastic 0-1 programs finds an optimal solution by evaluating and removing individual solutions that are discovered by solving scenario subproblems. In this work, we develop an asynchronous, distributed implementation of the algorithm that has computational advantages over existing synchronous implementations. Improvements to both the synchronous and asynchronous algorithms are proposed. We test the algorithms on well-known stochastic 0-1 programs from the SIPLIB test library and are able to solve one previously unsolved instance from the test set.
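The core of scenario decomposition, discovering candidates by solving single-scenario subproblems and evaluating them on the full expected objective, can be sketched on an unconstrained toy 0-1 program (brute force stands in for the MIP subproblem solver; the full algorithm also excludes evaluated solutions and maintains lower bounds until optimality is proved):

```python
from itertools import product

def scenario_decomposition(scenario_costs, probs):
    """Discover candidate 0-1 solutions by solving each scenario's subproblem
    alone, then evaluate every candidate on the full expected objective."""
    n = len(scenario_costs[0])

    def cost(x, c):
        return sum(ci * xi for ci, xi in zip(c, x))

    candidates = set()
    for c in scenario_costs:
        # brute force stands in for a MIP solve of the scenario subproblem
        candidates.add(min(product((0, 1), repeat=n), key=lambda x: cost(x, c)))

    def expected(x):
        return sum(p * cost(x, c) for p, c in zip(probs, scenario_costs))

    return min(candidates, key=expected)
```

Because the scenario subproblems are independent, they can be solved in parallel, which is what the asynchronous distributed implementation exploits.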
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques for solving stochastic optimization problems, and we describe improvements to current techniques in both areas. We studied different ways of using importance sampling in the context of stochastic programming by varying the choice of approximation functions used in the method. We concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient: it reduces the problem from finding the mean of a computationally expensive function to finding that of an inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, both to obtain a starting solution and to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
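The variance-reduction idea described, using a cheap approximation of the recourse function with known mean as a control variate, can be sketched generically (the quadratic "recourse" function, linear approximation, and uniform sampler below are illustrative stand-ins, not the dissertation's actual problem data):

```python
import random

def mean_with_control_variate(f, f_cheap, mean_cheap, sampler, n=4000, seed=7):
    """Estimate E[f] using a cheap approximation with known mean as a control
    variate: E[f] = E[f - f_cheap] + E[f_cheap]; the residual term has small
    variance whenever f_cheap tracks f closely."""
    rng = random.Random(seed)
    xs = [sampler(rng) for _ in range(n)]
    residual = sum(f(x) - f_cheap(x) for x in xs) / n
    return residual + mean_cheap
```

Only the low-variance residual is sampled, so far fewer evaluations of the expensive function are needed for a given accuracy.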
MSW Time to Tumor Model and Supporting Documentation
The multistage Weibull (MSW) time-to-tumor model and related documentation were developed principally (but not exclusively) for conducting time-to-tumor analyses to support risk assessments under the IRIS program. These programs and related docum...
Portfolio Optimization with Stochastic Dividends and Stochastic Volatility
ERIC Educational Resources Information Center
Varga, Katherine Yvonne
2015-01-01
We consider an optimal investment-consumption portfolio optimization model in which an investor receives stochastic dividends. As a first problem, we allow the drift of stock price to be a bounded function. Next, we consider a stochastic volatility model. In each problem, we use the dynamic programming method to derive the Hamilton-Jacobi-Bellman…
Multistage Schemes with Multigrid for Euler and Navier-Stokes Equations: Components and Analysis
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli
1997-01-01
A class of explicit multistage time-stepping schemes with centered spatial differencing and multigrids are considered for the compressible Euler and Navier-Stokes equations. These schemes are the basis for a family of computer programs (flow codes with multigrid (FLOMG) series) currently used to solve a wide range of fluid dynamics problems, including internal and external flows. In this paper, the components of these multistage time-stepping schemes are defined, discussed, and in many cases analyzed to provide additional insight into their behavior. Special emphasis is given to numerical dissipation, stability of Runge-Kutta schemes, and the convergence acceleration techniques of multigrid and implicit residual smoothing. Both the Baldwin and Lomax algebraic equilibrium model and the Johnson and King one-half equation nonequilibrium model are used to establish turbulence closure. Implementation of these models is described.
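A multistage time-stepping scheme of the kind analyzed can be sketched for a scalar model equation, using Jameson-style coefficients assumed here for illustration (the FLOMG codes' actual coefficients, residual operators, and dissipation terms differ):

```python
def multistage_step(u, dt, residual, alphas=(0.25, 1.0 / 6.0, 0.375, 0.5, 1.0)):
    """One explicit m-stage time step in the form
    u_k = u_0 + alpha_k * dt * R(u_{k-1}), k = 1..m."""
    u0 = u
    for a in alphas:
        u = u0 + a * dt * residual(u)
    return u
```

For the model equation du/dt = -u with dt = 0.1, one step reproduces exp(-0.1) to within a few parts in 10^5; the scheme's real appeal in flow codes is its large stability region and low storage when combined with multigrid and residual smoothing.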
NASA Astrophysics Data System (ADS)
Lali, Mehdi
2009-03-01
A comprehensive computer program is designed in MATLAB to analyze, design and optimize the propulsion, dynamics, thermodynamics, and kinematics of any serial multi-staging rocket for a set of given data. The program is quite user-friendly. It comprises two main sections: "analysis and design" and "optimization." Each section has a GUI (Graphical User Interface) in which the rocket's data are entered by the user and by which the program is run. The first section analyzes the performance of the rocket that is previously devised by the user. Numerous plots and subplots are provided to display the performance of the rocket. The second section of the program finds the "optimum trajectory" via billions of iterations and computations which are done through sophisticated algorithms using numerical methods and incremental integrations. Innovative techniques are applied to calculate the optimal parameters for the engine and designing the "optimal pitch program." This computer program is stand-alone in such a way that it calculates almost every design parameter in regards to rocket propulsion and dynamics. It is meant to be used for actual launch operations as well as educational and research purposes.
Engineered Resilient Systems: Knowledge Capture and Transfer
2014-08-29
development, but the work has not progressed significantly. 71 Peter Kall and Stein W. Wallace, Stochastic Programming, John Wiley & Sons, Chichester, 1994...John Wiley and Sons: Hoboken, 2008. Peter Kall and Stein W. Wallace, Stochastic Programming, John Wiley & Sons, Chichester, 1994. Rhodes, D.H., Lamb
Evaluation of the Alaska Native Science & Engineering Program (ANSEP). Research Report
ERIC Educational Resources Information Center
Bernstein, Hamutal; Martin, Carlos; Eyster, Lauren; Anderson, Theresa; Owen, Stephanie; Martin-Caughey, Amanda
2015-01-01
The Urban Institute conducted an implementation and participant-outcomes evaluation of the Alaska Native Science & Engineering Program (ANSEP). ANSEP is a multi-stage initiative designed to prepare and support Alaska Native students from middle school through graduate school to succeed in science, technology, engineering, and math (STEM)…
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and the two solution strategies are the fuzzy transformation via a ranking function and the stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
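The α-cut mentioned above maps a fuzzy number to a crisp interval at a given membership level. A minimal sketch for a triangular fuzzy number (a, b, c):

```python
def alpha_cut(tri, alpha):
    """Alpha-cut of a triangular fuzzy number (a, b, c): the crisp interval
    of values whose membership degree is at least alpha, 0 <= alpha <= 1."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))
```

At alpha = 1 the interval collapses to the modal value b; at alpha = 0 it is the full support [a, c]. Solving the crisp program over these intervals at several alpha levels is one standard way to handle fuzzy parameters.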
Stochastic Feedforward Control Technique
NASA Technical Reports Server (NTRS)
Halyo, Nesim
1990-01-01
Class of commanded trajectories modeled as stochastic process. Advanced Transport Operating Systems (ATOPS) research and development program conducted by NASA Langley Research Center aimed at developing capabilities for increases in capacities of airports, safe and accurate flight in adverse weather conditions including shear, winds, avoidance of wake vortexes, and reduced consumption of fuel. Advances in techniques for design of modern controls and increased capabilities of digital flight computers coupled with accurate guidance information from Microwave Landing System (MLS). Stochastic feedforward control technique developed within context of ATOPS program.
Approximate Dynamic Programming and Aerial Refueling
2007-06-01
by two Army Air Corps de Havilland DH-4Bs (9). While crude by modern standards, the passing of hoses between planes is effectively the same approach...incorporating stochastic data sets...Total Cost Stochastically Trained Simulations versus Deterministically Trained Simulations...incorporating stochastic data sets. To create meaningful results when testing stochastic data, the data sets are averaged so that conclusions are not
Approximation of Quantum Stochastic Differential Equations for Input-Output Model Reduction
2016-02-25
Approximation of Quantum Stochastic Differential Equations for Input-Output Model Reduction. We have completed a short program of theoretical research...on dimensional reduction and approximation of models based on quantum stochastic differential equations. Our primary results lie in the area of...quantum probability, quantum stochastic differential equations
NASA Technical Reports Server (NTRS)
Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.
1987-01-01
A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to are specific to the Cray X-MP line of computers and its associated SSD (Solid-State Disk). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.
Accelerating numerical solution of stochastic differential equations with CUDA
NASA Astrophysics Data System (ADS)
Januszewski, M.; Kostur, M.
2010-01-01
Numerical integration of stochastic differential equations is commonly used in many branches of science. In this paper we present how to accelerate this kind of numerical calculations with popular NVIDIA Graphics Processing Units using the CUDA programming environment. We address general aspects of numerical programming on stream processors and illustrate them by two examples: the noisy phase dynamics in a Josephson junction and the noisy Kuramoto model. In presented cases the measured speedup can be as high as 675× compared to a typical CPU, which corresponds to several billion integration steps per second. This means that calculations which took weeks can now be completed in less than one hour. This brings stochastic simulation to a completely new level, opening for research a whole new range of problems which can now be solved interactively.
Program summary
Program title: SDE
Catalogue identifier: AEFG_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFG_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPL v3
No. of lines in distributed program, including test data, etc.: 978
No. of bytes in distributed program, including test data, etc.: 5905
Distribution format: tar.gz
Programming language: CUDA C
Computer: any system with a CUDA-compatible GPU
Operating system: Linux
RAM: 64 MB of GPU memory
Classification: 4.3
External routines: The program requires the NVIDIA CUDA Toolkit Version 2.0 or newer and the GNU Scientific Library v1.0 or newer. Optionally gnuplot is recommended for quick visualization of the results.
Nature of problem: Direct numerical integration of stochastic differential equations is a computationally intensive problem, due to the necessity of calculating multiple independent realizations of the system. We exploit the inherent parallelism of this problem and perform the calculations on GPUs using the CUDA programming environment. The GPU's ability to execute hundreds of threads simultaneously makes it possible to speed up the computation by over two orders of magnitude, compared to a typical modern CPU.
Solution method: The stochastic Runge-Kutta method of the second order is applied to integrate the equation of motion. Ensemble-averaged quantities of interest are obtained through averaging over multiple independent realizations of the system.
Unusual features: The numerical solution of the stochastic differential equations in question is performed on a GPU using the CUDA environment.
Running time: < 1 minute
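The trajectory-ensemble approach summarized above can be sketched on a CPU in plain Python; Euler-Maruyama is shown for brevity in place of the program's second-order stochastic Runge-Kutta, and the Ornstein-Uhlenbeck drift is an illustrative stand-in for the Josephson phase dynamics:

```python
import math
import random

def euler_maruyama(drift, x0, D, dt, steps, rng):
    """Integrate dx = drift(x) dt + sqrt(2 D) dW for one noise realization."""
    x = x0
    for _ in range(steps):
        x += drift(x) * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
    return x
```

Ensemble averages come from many independent realizations; for the drift -x the stationary variance is D, which is a convenient sanity check. Because each trajectory is independent, the workload parallelizes trivially, which is exactly what the CUDA implementation exploits across GPU threads.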
The numerical simulation of a high-speed axial flow compressor
NASA Technical Reports Server (NTRS)
Mulac, Richard A.; Adamczyk, John J.
1991-01-01
The advancement of high-speed axial-flow multistage compressors is impeded by a lack of detailed flow-field information. Recent development in compressor flow modeling and numerical simulation have the potential to provide needed information in a timely manner. The development of a computer program is described to solve the viscous form of the average-passage equation system for multistage turbomachinery. Programming issues such as in-core versus out-of-core data storage and CPU utilization (parallelization, vectorization, and chaining) are addressed. Code performance is evaluated through the simulation of the first four stages of a five-stage, high-speed, axial-flow compressor. The second part addresses the flow physics which can be obtained from the numerical simulation. In particular, an examination of the endwall flow structure is made, and its impact on blockage distribution assessed.
Automated Simultaneous Assembly of Multistage Testlets for a High-Stakes Licensing Examination
ERIC Educational Resources Information Center
Breithaupt, Krista; Hare, Donovan R.
2007-01-01
Many challenges exist for high-stakes testing programs offering continuous computerized administration. The automated assembly of test questions to exactly meet content and other requirements, provide uniformity, and control item exposure can be modeled and solved by mixed-integer programming (MIP) methods. A case study of the computerized…
ERIC Educational Resources Information Center
Duwe, Grant; Kerschner, Deborah
2008-01-01
Using a retrospective, quasiexperimental design, this study evaluates Minnesota's Challenge Incarceration Program (CIP), examining whether it has lowered recidivism and saved money. In addition to utilizing a lengthy follow-up period and multiple measures of recidivism and participation, a multistage sampling design was employed to create a…
Kramer, Marlene; Halfer, Diana; Maguire, Pat; Schmalenberg, Claudia
2012-03-01
The objective of the study was to examine the effects of nurse-confirmed healthy unit work environments and multistage nurse residency programs (NRPs) on retention rates of newly licensed RNs (NLRNs). Establishing a culture of retention and healthy clinical nurse practice environments are two major challenges confronting nurse leaders today. Nurse residency programs are a major component of NLRN work environments and have been shown to be effective in abating nurse turnover. The sample for this study consisted of 5,316 new graduates in initial RN roles in 28 Magnet® hospitals. There were no differences in retention rates by education or by patient population on the clinical unit. The NLRN retention rate was higher in community than in academic hospitals. More than half of NLRNs were placed on units with very healthy work environments. NLRNs on units with work environments needing improvement resigned at a significantly higher rate than did other NLRNs. The quality of clinical unit work environments is the most important factor in NLRN retention.
Finding optimal vaccination strategies under parameter uncertainty using stochastic programming.
Tanner, Matthew W; Sattenspiel, Lisa; Ntaimo, Lewis
2008-10-01
We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum cost vaccination policy under a chance-constraint. The chance-constraint requires that the probability that R(*)
NASA Astrophysics Data System (ADS)
Feskov, Serguei V.; Ivanov, Anatoly I.
2018-03-01
An approach to the construction of diabatic free energy surfaces (FESs) for ultrafast electron transfer (ET) in a supramolecule with an arbitrary number of electron localization centers (redox sites) is developed, supposing that the reorganization energies for the charge transfers and shifts between all these centers are known. Dimensionality of the coordinate space required for the description of multistage ET in this supramolecular system is shown to be equal to N - 1, where N is the number of the molecular centers involved in the reaction. The proposed algorithm of FES construction employs metric properties of the coordinate space, namely, relation between the solvent reorganization energy and the distance between the two FES minima. In this space, the ET reaction coordinate z_nn' associated with electron transfer between the nth and n'th centers is calculated through the projection to the direction, connecting the FES minima. The energy-gap reaction coordinates z_nn' corresponding to different ET processes are not in general orthogonal so that ET between two molecular centers can create nonequilibrium distribution, not only along its own reaction coordinate but along other reaction coordinates too. This results in the influence of the preceding ET steps on the kinetics of the ensuing ET. It is important for the ensuing reaction to be ultrafast to proceed in parallel with relaxation along the ET reaction coordinates. Efficient algorithms for numerical simulation of multistage ET within the stochastic point-transition model are developed. The algorithms are based on the Brownian simulation technique with the recrossing-event detection procedure. The main advantages of the numerical method are (i) its computational complexity is linear with respect to the number of electronic states involved and (ii) calculations can be naturally parallelized up to the level of individual trajectories.
The efficiency of the proposed approach is demonstrated for a model supramolecular system involving four redox centers.
Using Probabilistic Information in Solving Resource Allocation Problems for a Decentralized Firm
1978-09-01
deterministic equivalent form of HIQ's problem (5) by an approach similar to the one used in stochastic programming with simple recourse. See Ziemba [38] or, in...(1964). 38. Ziemba, W.T., "Stochastic Programs with Simple Recourse," Technical Report 72-15, Stanford University, Department of Operations Research
Stochastic Optimization For Water Resources Allocation
NASA Astrophysics Data System (ADS)
Yamout, G.; Hatfield, K.
2003-12-01
For more than 40 years, water resources allocation problems have been addressed using deterministic mathematical optimization. When data uncertainties exist, these methods could lead to solutions that are sub-optimal or even infeasible. While optimization models have been proposed for water resources decision-making under uncertainty, no attempts have been made to address the uncertainties in water allocation problems in an integrated approach. This paper presents an Integrated Dynamic, Multi-stage, Feedback-controlled, Linear, Stochastic, and Distributed parameter optimization approach to solve a problem of water resources allocation. It attempts to capture (1) the conflict caused by competing objectives, (2) environmental degradation produced by resource consumption, and finally (3) the uncertainty and risk generated by the inherently random nature of state and decision parameters involved in such a problem. A theoretical system is defined throughout its different elements. These elements consisting mainly of water resource components and end-users are described in terms of quantity, quality, and present and future associated risks and uncertainties. Models are identified, modified, and interfaced together to constitute an integrated water allocation optimization framework. This effort is a novel approach to confront the water allocation optimization problem while accounting for uncertainties associated with all its elements; thus resulting in a solution that correctly reflects the physical problem in hand.
Stochastic computing with biomolecular automata
Adar, Rivka; Benenson, Yaakov; Linshiz, Gregory; Rosner, Amit; Tishby, Naftali; Shapiro, Ehud
2004-01-01
Stochastic computing has a broad range of applications, yet electronic computers realize its basic step, stochastic choice between alternative computation paths, in a cumbersome way. Biomolecular computers use a different computational paradigm and hence afford novel designs. We constructed a stochastic molecular automaton in which stochastic choice is realized by means of competition between alternative biochemical pathways, and choice probabilities are programmed by the relative molar concentrations of the software molecules coding for the alternatives. Programmable and autonomous stochastic molecular automata have been shown to perform direct analysis of disease-related molecular indicators in vitro and may have the potential to provide in situ medical diagnosis and cure. PMID:15215499
2012-03-01
HSSP), the In-College Scholarship Program (ICSP), and the Enlisted Commissioning Program (ECP) [1]. The entire scholarship program is managed by...and for which they are interested in volunteering. AFROTC is currently interested in developing techniques to better allocate scholarships and...institutions are also concerned with ensuring that they enroll the most qualified students into their programs. Camarena-Anthony [8] examines scholarship
AERODYNAMIC AND BLADING DESIGN OF MULTISTAGE AXIAL FLOW COMPRESSORS
NASA Technical Reports Server (NTRS)
Crouse, J. E.
1994-01-01
The axial-flow compressor is used for aircraft engines because it has distinct configuration and performance advantages over other compressor types. However, good potential performance is not easily obtained. The designer must be able to model the actual flows well enough to adequately predict aerodynamic performance. This computer program has been developed for computing the aerodynamic design of a multistage axial-flow compressor and, if desired, the associated blading geometry input for internal flow analysis. The aerodynamic solution gives velocity diagrams on selected streamlines of revolution at the blade row edges. The program yields aerodynamic and blading design results that can be directly used by flow and mechanical analysis codes. Two such codes are TSONIC, a blade-to-blade channel flow analysis code (COSMIC program LEW-10977), and MERIDL, a more detailed hub-to-shroud flow analysis code (COSMIC program LEW-12966). The aerodynamic and blading design program can reduce the time and effort required to obtain acceptable multistage axial-flow compressor configurations by generating good initial solutions and by being compatible with available analysis codes. The aerodynamic solution assumes steady, axisymmetric flow so that the problem is reduced to solving the two-dimensional flow field in the meridional plane. The streamline curvature method is used for the iterative aerodynamic solution at stations outside of the blade rows. If a blade design is desired, the blade elements are defined and stacked within the aerodynamic solution iteration. The blade element inlet and outlet angles are established by empirical incidence and deviation angles to the relative flow angles of the velocity diagrams. The blade element centerline is composed of two segments tangentially joined at a transition point. The local blade angle variation of each element can be specified as a fourth-degree polynomial function of path distance. 
Blade element thickness can also be specified with fourth-degree polynomial functions of path distance from the maximum thickness point. Input to the aerodynamic and blading design program includes the annulus profile, the overall compressor mass flow, the pressure ratio, and the rotative speed. A number of input parameters are also used to specify and control the blade row aerodynamics and geometry. The output from the aerodynamic solution is an overall blade row and compressor performance summary followed by blade element parameters for the individual blade rows. If desired, the blade coordinates in the streamwise direction for internal flow analysis codes and the coordinates on plane sections through blades for fabrication drawings may be stored and printed. The aerodynamic and blading design program for multistage axial-flow compressors is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 series computer with a central memory requirement of approximately 470K 8-bit bytes. This program was developed in 1981.
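The two-segment, fourth-degree polynomial blade-element description above can be sketched as follows; the function names, coefficient layout, and tolerance are illustrative assumptions, not taken from the NASA program.

```python
def blade_angle(s, coeffs):
    """Local blade angle as a fourth-degree polynomial of path distance s.
    coeffs = [a0, a1, a2, a3, a4]; this layout is an illustrative assumption."""
    return sum(a * s**k for k, a in enumerate(coeffs))

def blade_slope(s, coeffs):
    """Derivative of the blade-angle polynomial with respect to s."""
    return sum(k * a * s**(k - 1) for k, a in enumerate(coeffs) if k > 0)

def tangent_match(front, rear, s_t, tol=1e-9):
    """Check that two centerline segments join tangentially at the
    transition point s_t: equal angle and equal slope there."""
    return (abs(blade_angle(s_t, front) - blade_angle(s_t, rear)) < tol
            and abs(blade_slope(s_t, front) - blade_slope(s_t, rear)) < tol)
```

A front segment 10 + 2s and a rear segment 11 + s^2, for instance, join tangentially at s = 1 (both have value 12 and slope 2 there).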
Fitting of full Cobb-Douglas and full VRTS cost frontiers by solving goal programming problem
NASA Astrophysics Data System (ADS)
Venkateswarlu, B.; Mahaboob, B.; Subbarami Reddy, C.; Madhusudhana Rao, B.
2017-11-01
The present research article first defines two popular production functions, viz., the Cobb-Douglas and VRTS production frontiers, and their dual cost functions, and then derives their cost-limited maximal outputs. This paper shows that the cost-limited maximal output is cost efficient. A one-sided goal programming problem is proposed by which the full Cobb-Douglas cost frontier and the full VRTS frontier can be fitted. This paper also frames the goal programming problems by which the stochastic cost frontier and the stochastic VRTS frontier are fitted. Hasan et al. [1] used a parametric Stochastic Frontier Approach (SFA) to examine the technical efficiency of the Malaysian domestic banks listed on the Kuala Lumpur Stock Exchange (KLSE) over the period 2005-2010. Ashkan Hassani [2] applied Cobb-Douglas production functions to construction schedule crashing and project risk analysis related to the duration of construction projects. Nan Jiang [3] applied stochastic frontier analysis to a panel of New Zealand dairy farms over 1998/99-2006/07.
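For background on the functional form being fitted: a Cobb-Douglas function y = A·x1^b1·x2^b2 is linear in logarithms, so an ordinary least-squares fit on logs recovers its parameters. The sketch below (pure stdlib, synthetic noise-free data) is this plain log-linear fit, not the one-sided goal programming method of the paper.

```python
import math

def fit_cobb_douglas(y, x1, x2):
    """Log-linear least squares for y = A * x1**b1 * x2**b2: solve the
    normal equations (X^T X) beta = X^T t by Gaussian elimination."""
    rows = [[1.0, math.log(a), math.log(b)] for a, b in zip(x1, x2)]
    t = [math.log(v) for v in y]
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xtt = [sum(r[i] * ti for r, ti in zip(rows, t)) for i in range(3)]
    M = [xtx[i] + [xtt[i]] for i in range(3)]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    beta = [0.0, 0.0, 0.0]
    for i in reversed(range(3)):
        beta[i] = (M[i][3] - sum(M[i][j] * beta[j]
                                 for j in range(i + 1, 3))) / M[i][i]
    return math.exp(beta[0]), beta[1], beta[2]
```

On noise-free data generated with A = 2.0, b1 = 0.3, b2 = 0.6, the fit recovers those values to floating-point precision.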
Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak
2016-05-01
Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive-formulation mixed-integer programs, problem instances can exceed available memory on a single workstation. In order to overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating branch and bound (B&B) with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.
NASA Technical Reports Server (NTRS)
Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.
1987-01-01
A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to in this paper are specific to the Cray X-MP line of computers and its associated SSD (Solid-state Storage Device). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.
AESS: Accelerated Exact Stochastic Simulation
NASA Astrophysics Data System (ADS)
Jenkins, David D.; Peterson, Gregory D.
2011-12-01
The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results. Program summary: Program title: AESS. Catalogue identifier: AEJW_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: University of Tennessee copyright agreement. No. of lines in distributed program, including test data, etc.: 10 861. No. of bytes in distributed program, including test data, etc.: 394 631. Distribution format: tar.gz. Programming language: C for processors, CUDA for NVIDIA GPUs. Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs. The system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators. Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS. Classification: 3, 16.12. Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME.
Solution method: The Accelerated Exact Stochastic Simulation (AESS) tool provides implementations of a wide variety of popular variations on the Gillespie method. Users can select the specific algorithm considered most appropriate. Comparisons between the methods and with other available implementations indicate that AESS provides the fastest known implementation of Gillespie's method for a variety of test models. Users may wish to execute ensembles of simulations to sweep parameters or to obtain better statistical results, so AESS supports acceleration of ensembles of simulation using parallel processing with MPI, SSE vector units on x86 processors, and/or using NVIDIA GPUs with CUDA.
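As a minimal illustration of the direct method that AESS optimizes (this sketch is not AESS code; the one-reaction decay system and names are assumptions for illustration):

```python
import random

def gillespie_decay(n0, k, t_end, seed=1):
    """Gillespie direct method for the single decay reaction A -> 0 with
    rate constant k. Returns the trajectory as a list of (time, count)."""
    random.seed(seed)
    t, n = 0.0, n0
    traj = [(t, n)]
    while n > 0 and t < t_end:
        a0 = k * n                   # total propensity of the system
        t += random.expovariate(a0)  # waiting time to next event ~ Exp(a0)
        if t >= t_end:
            break
        n -= 1                       # the single reaction channel fires
        traj.append((t, n))
    return traj
```

With several reaction channels, the direct method additionally draws a uniform variate to pick which channel fires, weighted by each channel's propensity.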
NASA Astrophysics Data System (ADS)
Champion, Billy Ray
Energy Conservation Measure (ECM) project selection is made difficult given real-world constraints, limited resources to implement savings retrofits, various suppliers in the market and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at lowest cost. The most common methods of implementation planning are suboptimal relative to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Any public buildings in regions with similar energy conservation goals in the United States or internationally can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code for any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies concerned with making the most cost-effective selection leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2).
An additional model presented leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings strongly influence ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage. A comparative method using Conditional Value at Risk is analyzed, and time consistency is also addressed in this chapter. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach uses McCormick inequalities (McCormick, 1976) to re-express constraints that involve the product of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single congressional appropriations framework.
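For binary variables, the exact linearization mentioned above replaces a product z = x·y with three linear constraints whose only feasible binary z is x·y. A quick exhaustive check (illustrative, not code from the dissertation):

```python
def mccormick_feasible_z(x, y):
    """For binary x, y, the McCormick constraints
        z <= x,   z <= y,   z >= x + y - 1,   z >= 0
    admit exactly one binary value of z, namely z = x*y."""
    return [z for z in (0, 1)
            if z <= x and z <= y and z >= x + y - 1]
```

Enumerating all four binary combinations of (x, y) confirms the feasible set is always the singleton {x·y}, which is why the reformulated model is an exact, not approximate, linearization.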
Yu Wei; Michael Bevers; Erin J. Belval
2015-01-01
Initial attack dispatch rules can help shorten fire suppression response times by providing easy-to-follow recommendations based on fire weather, discovery time, location, and other factors that may influence fire behavior and the appropriate response. A new procedure is combined with a stochastic programming model and tested in this study for designing initial attack...
Using Multi-Objective Genetic Programming to Synthesize Stochastic Processes
NASA Astrophysics Data System (ADS)
Ross, Brian; Imada, Janine
Genetic programming is used to automatically construct stochastic processes written in the stochastic π-calculus. Grammar-guided genetic programming constrains search to useful process algebra structures. The time-series behaviour of a target process is denoted with a suitable selection of statistical feature tests. Feature tests can permit complex process behaviours to be effectively evaluated. However, they must be selected with care, in order to accurately characterize the desired process behaviour. Multi-objective evaluation is shown to be appropriate for this application, since it permits heterogeneous statistical feature tests to reside as independent objectives. Multiple undominated solutions can be saved and evaluated after a run, for determination of those that are most appropriate. Since there can be a vast number of candidate solutions, however, strategies for filtering and analyzing this set are required.
ERIC Educational Resources Information Center
Suvedi, Murari; Ghimire, Raju; Kaplowitz, Michael
2017-01-01
Purpose: This paper examines the factors affecting farmers' participation in extension programs and adoption of improved seed varieties in the hills of rural Nepal. Methodology/approach: Cross-sectional farm-level data were collected during July and August 2014. A sample of 198 farm households was selected for interviewing by using a multistage,…
A dynamic model of functioning of a bank
NASA Astrophysics Data System (ADS)
Malafeyev, Oleg; Awasthi, Achal; Zaitseva, Irina; Rezenkov, Denis; Bogdanova, Svetlana
2018-04-01
In this paper, we analyze dynamic programming as a novel approach to the problem of maximizing the profits of a bank. The mathematical model of the problem and a description of the bank's operation are presented. The problem is then solved using dynamic programming, which ensures that the solutions obtained are globally optimal and numerically stable. The optimization process is set up as a discrete multi-stage decision process.
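A discrete multi-stage decision process of this kind is solved by backward recursion; the capital-allocation example below is a generic sketch of that recursion, not the bank model of the paper (stage count, capital grid, and profit table are invented for illustration):

```python
def max_profit(stages, capital, profit):
    """Backward dynamic programming over a discrete multi-stage decision
    process: at each stage invest 0..capital units; profit[stage][units]
    is the return of that stage. Returns the maximal total profit."""
    # best[c] = maximal profit obtainable with c units over the remaining stages
    best = [0] * (capital + 1)
    for stage in reversed(range(stages)):
        best = [max(profit[stage][u] + best[c - u] for u in range(c + 1))
                for c in range(capital + 1)]
    return best[capital]
```

With two stages, two units of capital, and stage returns [[0, 3, 4], [0, 2, 5]], the recursion finds the optimum 5 (one unit in each stage or both in the second, depending on ties).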
Solution Methods for Stochastic Dynamic Linear Programs.
1980-12-01
FERN - a Java framework for stochastic simulation and evaluation of reaction networks.
Erhard, Florian; Friedel, Caroline C; Zimmer, Ralf
2008-08-29
Stochastic simulation can be used to illustrate the development of biological systems over time and the stochastic nature of these processes. Currently available programs for stochastic simulation, however, are limited in that they either a) do not provide the most efficient simulation algorithms and are difficult to extend, b) cannot be easily integrated into other applications, or c) do not allow one to monitor and intervene in the simulation process in an easy and intuitive way. Thus, in order to use stochastic simulation in innovative high-level modeling and analysis approaches, more flexible tools are necessary. In this article, we present FERN (Framework for Evaluation of Reaction Networks), a Java framework for the efficient simulation of chemical reaction networks. FERN is subdivided into three layers for network representation, simulation and visualization of the simulation results, each of which can be easily extended. It provides efficient and accurate state-of-the-art stochastic simulation algorithms for well-mixed chemical systems and a powerful observer system, which makes it possible to track and control the simulation progress on every level. To illustrate how FERN can be easily integrated into other systems biology applications, plugins to Cytoscape and CellDesigner are included. These plugins make it possible to run simulations and to observe the simulation progress in a reaction network in real time from within the Cytoscape or CellDesigner environment. FERN addresses shortcomings of currently available stochastic simulation programs in several ways. First, it provides a broad range of efficient and accurate algorithms both for exact and approximate stochastic simulation and a simple interface for extending to new algorithms. FERN's implementations are considerably faster than the C implementations of gillespie2 or the Java implementations of ISBJava.
Second, it can be used in a straightforward way both as a stand-alone program and within new systems biology applications. Finally, complex scenarios requiring intervention during the simulation progress can be modelled easily with FERN.
IMPLICIT DUAL CONTROL BASED ON PARTICLE FILTERING AND FORWARD DYNAMIC PROGRAMMING.
Bayard, David S; Schumitzky, Alan
2010-03-01
This paper develops a sampling-based approach to implicit dual control. Implicit dual control methods synthesize stochastic control policies by systematically approximating the stochastic dynamic programming equations of Bellman, in contrast to explicit dual control methods that artificially induce probing into the control law by modifying the cost function to include a term that rewards learning. The proposed implicit dual control approach is novel in that it combines a particle filter with a policy-iteration method for forward dynamic programming. The integration of the two methods provides a complete sampling-based approach to the problem. Implementation of the approach is simplified by making use of a specific architecture denoted as an H-block. Practical suggestions are given for reducing computational loads within the H-block for real-time applications. As an example, the method is applied to the control of a stochastic pendulum model having unknown mass, length, initial position and velocity, and unknown sign of its dc gain. Simulation results indicate that active controllers based on the described method can systematically improve closed-loop performance with respect to other more common stochastic control approaches.
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel; Tilmant, Amaury
2015-04-01
Stochastic programming methods are better suited to deal with the inherent uncertainty of inflow time series in water resource management. However, one of the most important hurdles to their use in practical implementations is the lack of generalized Decision Support System (DSS) shells, which are usually based on a deterministic approach. The purpose of this contribution is to present a general-purpose DSS shell, named Explicit Stochastic Programming Advanced Tool (ESPAT), able to build and solve stochastic programming problems for most water resource systems. It implements a hydro-economic approach, optimizing the total system benefits as the sum of the benefits obtained by each user. It has been coded using GAMS, and implements a Microsoft Excel interface with a GAMS-Excel link that allows the user to introduce the required data and recover the results. Therefore, no GAMS skills are required to run the program. The tool is divided into four modules according to its capabilities: 1) the ESPATR module, which performs stochastic optimization procedures in surface water systems using a Stochastic Dual Dynamic Programming (SDDP) approach; 2) the ESPAT_RA module, which optimizes coupled surface-groundwater systems using a modified SDDP approach; 3) the ESPAT_SDP module, capable of performing stochastic optimization procedures in small-size surface systems using a standard SDP approach; and 4) the ESPAT_DET module, which implements a deterministic programming procedure using non-linear programming, able to solve deterministic optimization problems in complex surface-groundwater river basins. The case study of the Mijares river basin (Spain) is used to illustrate the method. It consists of two reservoirs in series, one aquifer and four agricultural demand sites currently managed using historical (XIV century) rights, which give priority to the most traditional irrigation district over the XX century agricultural developments.
Its size makes it possible to use either the SDP or the SDDP methods. The independent use of surface water and groundwater can be examined with and without the aquifer. The ESPAT_DET, ESPATR and ESPAT_SDP modules were executed for the surface system, while the ESPAT_RA and ESPAT_DET modules were run for the surface-groundwater system. The surface system's results show a similar performance between the ESPAT_SDP and ESPATR modules, which outperform the current policies while being outperformed by the ESPAT_DET results, which have the advantage of perfect foresight. The surface-groundwater system's results show a robust situation in which the differences between the modules' results and the current policies are smaller, due to the use of pumped groundwater for the XX century crops when surface water is scarce. The results are realistic, with the deterministic optimization outperforming the stochastic one, which in turn outperforms the current policies, showing that the tool is able to stochastically optimize river-aquifer water resource systems. We are currently working on the application of these tools to the analysis of changes in system operation under global change conditions. ACKNOWLEDGEMENT: This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) funds.
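A standard SDP recursion of the kind the ESPAT_SDP module performs can be sketched for a single reservoir on a discretized storage grid; the grid, inflow distribution, and linear release benefit below are illustrative assumptions, not ESPAT's actual formulation:

```python
def sdp_policy(storages, inflows, probs, benefit, horizon, s_max):
    """Backward stochastic dynamic programming for one reservoir.
    States: discrete storages; inflows: stage-independent distribution
    (values, probabilities); returns the release policy and the
    expected-value function at the first stage."""
    value = {s: 0.0 for s in storages}   # terminal value: zero
    policy = {}
    for t in reversed(range(horizon)):
        new_value = {}
        for s in storages:
            best, best_r = float("-inf"), None
            for r in storages:           # candidate releases on the grid
                if r > s:
                    continue             # cannot release more than stored
                exp_val = 0.0
                for q, p in zip(inflows, probs):
                    carried = min(s - r + q, s_max)   # spill above s_max
                    s_next = min(storages, key=lambda x: abs(x - carried))
                    exp_val += p * (benefit(r) + value[s_next])
                if exp_val > best:
                    best, best_r = exp_val, r
            new_value[s] = best
            policy[(t, s)] = best_r
        value = new_value
    return policy, value
```

With a linear benefit and a two-point inflow distribution, the recursion reproduces the intuitive result that at the final stage a full reservoir releases everything.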
K-Minimax Stochastic Programming Problems
NASA Astrophysics Data System (ADS)
Nedeva, C.
2007-10-01
The purpose of this paper is to discuss a numerical procedure based on the simplex method for stochastic optimization problems with partially known distribution functions. The convergence of this procedure is proved under conditions on the dual problems.
Effects of Defensive Vehicle Handling on Novice Driver Safety : Phase 3. Data Analysis and Results
DOT National Transportation Integrated Search
2010-09-01
This project evaluates the effectiveness of a multistage driver education program for Montana's young drivers. A total of 347 teenaged drivers who had completed high school driver education agreed to participate. These drivers were randomly spl...
NASA Technical Reports Server (NTRS)
Farhat, Nabil H.
1987-01-01
Self-organization and learning are distinctive features of neural nets and processors that set them apart from conventional approaches to signal processing. They lead to self-programmability, which alleviates the problem of programming complexity in artificial neural nets. In this paper, architectures for partitioning an optoelectronic analog of a neural net into distinct layers with a prescribed interconnectivity pattern to enable stochastic learning by simulated annealing in the context of a Boltzmann machine are presented. Stochastic learning is of interest because of its relevance to the role of noise in biological neural nets. Practical considerations and methodologies for appreciably accelerating stochastic learning in such a multilayered net are described. These include the use of parallel optical computing of the global energy of the net, the use of fast nonvolatile programmable spatial light modulators to realize fast plasticity, optical generation of random number arrays, and an adaptive noisy thresholding scheme that also makes stochastic learning more biologically plausible. The findings reported predict optoelectronic chips that can be used in the realization of optical learning machines.
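The stochastic learning described above is simulated annealing over a network energy function; a minimal generic annealer (toy one-dimensional objective, not the optoelectronic implementation) can be sketched as:

```python
import math, random

def anneal(energy, state, neighbors, t0=2.0, cooling=0.95, steps=500, seed=0):
    """Minimal simulated annealing: accept uphill moves with Boltzmann
    probability exp(-dE/T); the temperature decays geometrically.
    Returns the lowest-energy state visited."""
    random.seed(seed)
    t = t0
    best = state
    for _ in range(steps):
        cand = random.choice(neighbors(state))
        dE = energy(cand) - energy(state)
        if dE <= 0 or random.random() < math.exp(-dE / t):
            state = cand                  # Metropolis acceptance rule
        if energy(state) < energy(best):
            best = state
        t *= cooling                      # annealing schedule
    return best
```

In a Boltzmann machine the "state" would be the vector of neuron outputs, "neighbors" would flip one neuron, and the energy would be the quadratic network energy computed, in the proposed hardware, optically and in parallel.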
NASA Astrophysics Data System (ADS)
Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro
2017-08-01
This third part extends the theory of Generalized Poisson-Kac (GPK) processes to nonlinear stochastic models and to a continuum of states. Nonlinearity is treated in two ways: (i) as a dependence of the parameters (intensity of the stochastic velocity, transition rates) of the stochastic perturbation on the state variable, similarly to the case of nonlinear Langevin equations, and (ii) as the dependence of the stochastic microdynamic equations of motion on the statistical description of the process itself (nonlinear Fokker-Planck-Kac models). Several numerical and physical examples illustrate the theory. Gathering nonlinearity and a continuum of states, GPK theory provides a stochastic derivation of the nonlinear Boltzmann equation, furnishing a positive answer to Kac's program in kinetic theory. The transition from stochastic microdynamics to transport theory within the framework of the GPK paradigm is also addressed.
Boosting Stochastic Problem Solvers Through Online Self-Analysis of Performance
2003-07-21
Vincent A. Cicirello. CMU-RI-TR-03-27. Submitted in partial fulfillment... lead to the development of a search control framework, called QD-BEACON, that uses online-generated statistical models of search performance to
New Results on a Stochastic Duel Game with Each Force Consisting of Heterogeneous Units
2013-02-01
Naval Postgraduate School, Monterey, California. Distribution is unlimited. Two forces engage in a duel, with each force initially consisting of several
FINITE-STATE APPROXIMATIONS TO DENUMERABLE-STATE DYNAMIC PROGRAMS,
Descriptors: air force operations (logistics); inventory control; dynamic programming; approximation (mathematics); decision making; stochastic processes; game theory; algorithms; convergence.
A new version of the CADNA library for estimating round-off error propagation in Fortran programs
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc
2010-11-01
The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore, CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors. New version program summary: Program title: CADNA. Catalogue identifier: AEAT_v1_1. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 28 488. No. of bytes in distributed program, including test data, etc.: 463 778. Distribution format: tar.gz. Programming language: Fortran. NOTE: A C++ version of this program is available in the Library as AEGQ_v1_0. Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM. Operating system: LINUX, UNIX. Classification: 6.5. Catalogue identifier of previous version: AEAT_v1_0. Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933. Does the new version supersede the previous version?: Yes. Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program.
The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: On 64-bit processors, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, which the random rounding mode is based on. Therefore a particular definition of mathematical functions for stochastic arguments has been included in the CADNA library to enable its use with the GNU Fortran compiler on 64-bit processors. Summary of revisions: If CADNA is used on a 64-bit processor with the GNU Fortran compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the stochastic argument of a mathematical function is never lost. Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf which shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. 
The source code, which is located in the src directory, consists of one assembly language file (cadna_rounding.s) and eighteen Fortran language files. cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the Fortran compiler used. This assembly file contains routines which are frequently called in the CADNA Fortran files to change the rounding mode. The Fortran language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
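The Discrete Stochastic Arithmetic idea (run the computation several times with randomized rounding and count the digits the samples share) can be mimicked in pure Python; the multiplicative perturbation below is a crude stand-in for CADNA's actual switching of hardware rounding modes:

```python
import math, random

def random_round(x, ulps=1e-15):
    """Crude stand-in for random rounding: perturb each result by plus or
    minus one relative ulp at random (illustrative, not CADNA's mechanism)."""
    return x * (1.0 + random.choice((-1.0, 1.0)) * ulps)

def stochastic_sum(values, runs=3, seed=0):
    """Run the summation several times with random rounding and estimate
    the number of common significant digits, as in Discrete Stochastic
    Arithmetic. Returns (mean result, estimated exact digits)."""
    random.seed(seed)
    results = []
    for _ in range(runs):
        acc = 0.0
        for v in values:
            acc = random_round(acc + v)   # rounding error at each operation
        results.append(acc)
    mean = sum(results) / runs
    spread = max(results) - min(results)
    if spread == 0.0:
        return mean, 15                   # samples agree to double precision
    return mean, max(int(math.log10(abs(mean) / spread)), 0)
```

A well-conditioned sum keeps many common digits across runs; a catastrophically cancelling computation would show a spread comparable to the mean, i.e. zero exact digits.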
Wu, C B; Huang, G H; Liu, Z P; Zhen, J L; Yin, J G
2017-03-01
In this study, an inexact multistage stochastic mixed-integer programming (IMSMP) method was developed for supporting regional-scale energy system planning (ESP) associated with multiple uncertainties presented as discrete intervals, probability distributions and their combinations. An IMSMP-based energy system planning (IMSMP-ESP) model was formulated for Qingdao to demonstrate its applicability. Solutions that provide optimal patterns of energy resource generation, conversion, transmission and allocation, as well as facility capacity expansion schemes, have been obtained. The results can help local decision makers generate cost-effective energy system management schemes and reach a comprehensive tradeoff between economic objectives and environmental requirements. Moreover, taking the CO2 emissions scenarios mentioned in Part I into consideration, the anti-driving effect of carbon emissions on energy structure adjustment was studied based on the developed model and scenario analysis. Several suggestions can be drawn from the results: (a) to ensure the smooth realization of low-carbon and sustainable development, appropriate price controls and fiscal subsidies for high-cost energy resources should be considered by decision makers; (b) compared with coal, natural gas utilization should be strongly encouraged in order to ensure that Qingdao can reach its carbon emissions peak in 2020; (c) to guarantee Qingdao's power supply security in the future, the construction of new power plants should be emphasised rather than enhancing the transmission capacity of grid infrastructure.
NASA Astrophysics Data System (ADS)
Adams, L. E.; Lund, J. R.; Moyle, P. B.; Quiñones, R. M.; Herman, J. D.; O'Rear, T. A.
2017-09-01
Building reservoir release schedules to manage engineered river systems can involve costly trade-offs between storing and releasing water. As a result, the design of release schedules requires metrics that quantify the benefits and damages created by releases to the downstream ecosystem. Such metrics should support operational decisions under uncertain hydrologic conditions, including drought and flood seasons. This study addresses this need and develops a reservoir operation rule structure and method to maximize downstream environmental benefit while meeting human water demands. The result is a general approach for hedging downstream environmental objectives. A multistage stochastic mixed-integer nonlinear program with Markov chains identifies optimal "environmental hedging" releases that maximize environmental benefits subject to probabilistic seasonal hydrologic conditions; current, past, and future environmental demand; human water supply needs; infrastructure limitations; population dynamics; drought storage protection; and the river's carrying capacity. Environmental hedging "hedges bets" for drought by reducing releases for fish, sometimes intentionally killing some fish early to reduce the likelihood of large fish kills and storage crises later. This approach is applied to Folsom Reservoir in California to support survival of fall-run Chinook salmon in the lower American River for a range of carryover and initial storage cases. Benefit is measured in terms of fish survival; maintaining self-sustaining native fish populations is a significant indicator of ecosystem function. Environmental hedging meets human demand and outperforms other operating rules, including the current Folsom operating strategy, based on metrics of fish extirpation and water supply reliability.
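The backward-induction structure behind such a reservoir model can be illustrated with a deliberately tiny sketch (all numbers hypothetical, not the Folsom model): a reservoir with a few storage levels, random inflows, and a concave environmental benefit for releases, solved by stochastic dynamic programming.

```python
import math

CAP = 4                                    # storage capacity (units of water)
INFLOWS = [(0, 0.3), (1, 0.5), (2, 0.2)]   # (inflow, probability), hypothetical
T = 5                                      # planning horizon (periods)

def benefit(release):
    # concave environmental benefit: diminishing returns per unit released
    return math.sqrt(release)

# backward induction: V[t][s] = best expected benefit-to-go from storage s
V = [[0.0] * (CAP + 1) for _ in range(T + 1)]
policy = [[0] * (CAP + 1) for _ in range(T)]
for t in range(T - 1, -1, -1):
    for s in range(CAP + 1):
        best, best_r = -1.0, 0
        for r in range(s + 1):                 # release cannot exceed storage
            exp_val = benefit(r)
            for i, p in INFLOWS:
                s_next = min(s - r + i, CAP)   # inflow above capacity spills
                exp_val += p * V[t + 1][s_next]
            if exp_val > best:
                best, best_r = exp_val, r
        V[t][s] = best
        policy[t][s] = best_r
```

Because the benefit is concave and future inflow is uncertain, early-period releases are hedged below the myopic optimum, which is the qualitative behavior the paper exploits at much larger scale.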
A microprocessor-based automation test system for the experiment of the multi-stage compressor
NASA Astrophysics Data System (ADS)
Zhang, Huisheng; Lin, Chongping
1991-08-01
An automation test system that is controlled by a microprocessor and used in multistage compressor experiments is described. Based on an analysis of the compressor experiment's performance requirements, a complete hardware system structure is set up. It is composed of an IBM PC/XT computer, a large-scale sampled-data system, a traversing machine with three directions of motion, the scanners, digital instrumentation, and some output devices. The structure of the real-time software system is described. The testing results show that this system can measure many parameters at the blade-row locations and in the boundary layer under different operating states. The degree of automation and the accuracy of the experiment are increased, and the experimental cost is reduced.
Design and experimental evaluation of compact radial-inflow turbines
NASA Technical Reports Server (NTRS)
Fredmonski, A. J.; Huber, F. W.; Roelke, R. J.; Simonyi, S.
1991-01-01
The application of a multistage 3D Euler solver to the aerodynamic design of two compact radial-inflow turbines is presented, along with experimental results evaluating and validating the designs. The objectives of the program were to design, fabricate, and rig test compact radial-inflow turbines with equal or better efficiency relative to conventional designs, while having 40 percent less rotor length than current traditionally-sized radial turbines. The approach to achieving these objectives was to apply a calibrated 3D multistage Euler code to accurately predict and control the high rotor flow passage velocities and high aerodynamic loadings resulting from the reduction in rotor length. A comparison of the advanced compact designs to current state-of-the-art configurations is presented.
The report summarizes activities conducted and results achieved in an EPA-sponsored program to demonstrate Limestone Injection Multistage Burner (LIMB) technology on a tangentially fired coal-burning utility boiler, Virginia Power's 180-MWe Yorktown Unit No. 2. This successfully d...
A guide to differences between stochastic point-source and stochastic finite-fault simulations
Atkinson, G.M.; Assatourians, K.; Boore, D.M.; Campbell, K.; Motazedian, D.
2009-01-01
Why do stochastic point-source and finite-fault simulation models not agree on the predicted ground motions for moderate earthquakes at large distances? This question was posed by Ken Campbell, who attempted to reproduce the Atkinson and Boore (2006) ground-motion prediction equations for eastern North America using the stochastic point-source program SMSIM (Boore, 2005) in place of the finite-source stochastic program EXSIM (Motazedian and Atkinson, 2005) that was used by Atkinson and Boore (2006) in their model. His comparisons suggested that a higher stress drop is needed in the context of SMSIM to produce an average match, at larger distances, with the model predictions of Atkinson and Boore (2006) based on EXSIM; this is so even for moderate magnitudes, which should be well-represented by a point-source model. Why? The answer to this question is rooted in significant differences between point-source and finite-source stochastic simulation methodologies, specifically as implemented in SMSIM (Boore, 2005) and EXSIM (Motazedian and Atkinson, 2005) to date. Point-source and finite-fault methodologies differ in general in several important ways: (1) the geometry of the source; (2) the definition and application of duration; and (3) the normalization of finite-source subsource summations. Furthermore, the specific implementation of the methods may differ in their details. The purpose of this article is to provide a brief overview of these differences, their origins, and implications. This sets the stage for a more detailed companion article, "Comparing Stochastic Point-Source and Finite-Source Ground-Motion Simulations: SMSIM and EXSIM," in which Boore (2009) provides modifications and improvements in the implementations of both programs that narrow the gap and result in closer agreement. 
These issues are important because both SMSIM and EXSIM have been widely used in the development of ground-motion prediction equations and in modeling the parameters that control observed ground motions.
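The core recipe shared by stochastic ground-motion simulators can be sketched schematically (this is neither SMSIM nor EXSIM; the window and target spectrum below are simplified placeholders): windowed Gaussian noise is transformed to the frequency domain, normalized, shaped by a target source spectrum, and transformed back.

```python
import numpy as np

def stochastic_point_source(freq0=2.0, duration=10.0, dt=0.01, seed=0):
    # essence of the point-source stochastic method: shape windowed Gaussian
    # noise in the frequency domain by a target omega-squared spectrum
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    t = np.arange(n) * dt
    noise = rng.standard_normal(n)
    window = t * np.exp(-t)                     # simple shaping window (assumed)
    spec = np.fft.rfft(noise * window)
    spec /= np.abs(spec).mean()                 # normalize to unit mean amplitude
    f = np.fft.rfftfreq(n, dt)
    target = f**2 / (1.0 + (f / freq0) ** 2)    # omega-squared source spectrum
    return np.fft.irfft(spec * target, n)

acc = stochastic_point_source()
```

The differences the article discusses (source geometry, duration definitions, subsource normalization in finite-fault summation) all enter through choices made at exactly these steps.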
NASA Technical Reports Server (NTRS)
Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.
1999-01-01
In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact (F-P-K) solutions. The formulation of a relatively new version of the stochastic linearization method (energy-based version) is generalized to the MDOF system case. Also, a new method for determination of nonlinear stiffness coefficients for MDOF structures is demonstrated. This method in combination with the equivalent linearization technique is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained by using the new program and an existing in-house code are compared for two examples of beam-like structures.
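For a single-degree-of-freedom Duffing oscillator under white noise, classical equivalent (stochastic) linearization reduces to a scalar fixed-point iteration; the sketch below (hypothetical parameter values) illustrates that idea, not the paper's MDOF energy-based formulation.

```python
import math

# SDOF Duffing oscillator under white-noise excitation:
#   m x'' + c x' + k x + eps x^3 = w(t),  with spectral density S0
# Equivalent linearization replaces (k x + eps x^3) by k_eq x with
#   k_eq = k + 3 eps sigma^2,  sigma^2 = pi S0 / (c k_eq),
# and iterates these two relations to a fixed point.
m, c, k, eps, S0 = 1.0, 0.2, 1.0, 0.5, 0.05

sigma2 = math.pi * S0 / (c * k)        # start from the linear (eps = 0) response
for _ in range(100):
    k_eq = k + 3.0 * eps * sigma2      # stiffness equivalent to the cubic term
    sigma2_new = math.pi * S0 / (c * k_eq)
    if abs(sigma2_new - sigma2) < 1e-12:
        sigma2 = sigma2_new
        break
    sigma2 = sigma2_new

rms = math.sqrt(sigma2)
```

The hardening cubic term raises the equivalent stiffness, so the converged RMS displacement is below the purely linear value, as expected.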
NASA Astrophysics Data System (ADS)
Zhu, Z. W.; Zhang, W. D.; Xu, J.
2014-03-01
The non-linear dynamic characteristics and optimal control of a giant magnetostrictive film (GMF) subjected to in-plane stochastic excitation were studied. Non-linear differential items were introduced to interpret the hysteretic phenomena of the GMF, and the non-linear dynamic model of the GMF subjected to in-plane stochastic excitation was developed. The stochastic stability was analysed, and the probability density function was obtained. The condition of stochastic Hopf bifurcation and noise-induced chaotic response were determined, and the fractal boundary of the system's safe basin was provided. The reliability function was solved from the backward Kolmogorov equation, and an optimal control strategy was proposed in the stochastic dynamic programming method. Numerical simulation shows that the system stability varies with the parameters, and stochastic Hopf bifurcation and chaos appear in the process; the area of the safe basin decreases as the noise intensifies, and the boundary of the safe basin becomes fractal; the system reliability is improved through stochastic optimal control. Finally, the theoretical and numerical results were verified by experiments. The results are helpful for engineering applications of GMF.
ASSESSING RESIDENTIAL EXPOSURE USING THE STOCHASTIC HUMAN EXPOSURE AND DOSE SIMULATION (SHEDS) MODEL
As part of a workshop sponsored by the Environmental Protection Agency's Office of Research and Development and Office of Pesticide Programs, the Aggregate Stochastic Human Exposure and Dose Simulation (SHEDS) Model was used to assess potential aggregate residential pesticide e...
Stochastic models of the Social Security trust funds.
Burdick, Clark; Manchester, Joyce
Each year in March, the Board of Trustees of the Social Security trust funds reports on the current and projected financial condition of the Social Security programs. Those programs, which pay monthly benefits to retired workers and their families, to the survivors of deceased workers, and to disabled workers and their families, are financed through the Old-Age, Survivors, and Disability Insurance (OASDI) Trust Funds. In their 2003 report, the Trustees present, for the first time, results from a stochastic model of the combined OASDI trust funds. Stochastic modeling is an important new tool for Social Security policy analysis and offers the promise of valuable new insights into the financial status of the OASDI trust funds and the effects of policy changes. The results presented in this article demonstrate that several stochastic models deliver broadly consistent results even though they use very different approaches and assumptions. However, they also show that the variation in trust fund outcomes differs as the approach and assumptions are varied. Which approach and assumptions are best suited for Social Security policy analysis remains an open question. Further research is needed before the promise of stochastic modeling is fully realized. For example, neither parameter uncertainty nor variability in ultimate assumption values is recognized explicitly in the analyses. Despite this caveat, stochastic modeling results are already shedding new light on the range and distribution of trust fund outcomes that might occur in the future.
A Theoretical Model for the Four-Stage Music-Industry Internship Program.
ERIC Educational Resources Information Center
Schenbeck, Lyn
1996-01-01
Describes student development through experiential learning in a four-stage internship within a college music-industry curriculum, and uses the Steinaker-Bell experiential taxonomy to show how embedding a multistage internship throughout the curriculum, rather than at the end, greatly enhances learning. Suggests ways in which the multistage…
1964-03-03
Two technicians apply insulation to the outer surface of the S-II second stage booster for the Saturn V moon rocket. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.
1960-01-01
A NASA technician is dwarfed by the gigantic Third Stage (S-IVB) as it rests on supports in a facility at KSC.
On stochastic control and optimal measurement strategies. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Kramer, L. C.
1971-01-01
The control of stochastic dynamic systems is studied, with particular emphasis on those which influence the quality or nature of the measurements made to effect control. Four main areas are discussed: (1) the meaning of stochastic optimality and the means by which dynamic programming may be applied to solve a combined control/measurement problem; (2) a technique by which deterministic methods, specifically the minimum principle, can be applied to the study of stochastic problems; (3) application of these methods to linear systems with Gaussian disturbances to study the structure of the resulting control system; and (4) several applications.
Stochastic Robust Mathematical Programming Model for Power System Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Cong; Changhyeok, Lee; Haoyong, Chen
2016-01-01
This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
Integration of progressive hedging and dual decomposition in stochastic integer programs
Watson, Jean -Paul; Guo, Ge; Hackebeil, Gabriel; ...
2015-04-07
We present a method for integrating the Progressive Hedging (PH) algorithm and the Dual Decomposition (DD) algorithm of Carøe and Schultz for stochastic mixed-integer programs. Based on the correspondence between lower bounds obtained with PH and DD, a method to transform weights from PH into Lagrange multipliers in DD is found. Fast progress in early iterations of PH speeds up convergence of DD to an exact solution. We report computational results on server location and unit commitment instances.
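The correspondence the authors exploit, with PH weights playing the role of Lagrange multipliers on the nonanticipativity constraints, can be seen on a toy problem (hypothetical data): minimize E[(x - xi_s)^2] over two scenarios with a single first-stage variable.

```python
# Toy Progressive Hedging: min over x of E[(x - xi_s)^2] with two equally
# likely scenarios; the nonanticipative optimum is x = E[xi] = 3.
scenarios = [(0.5, 1.0), (0.5, 5.0)]    # (probability, xi_s)
rho = 1.0                                # PH penalty parameter
w = [0.0, 0.0]                           # PH weights = multiplier estimates
xbar = 0.0                               # consensus (nonanticipative) value

for _ in range(200):
    xs = []
    for (p, xi), w_s in zip(scenarios, w):
        # argmin_x of (x - xi)^2 + w_s*x + (rho/2)*(x - xbar)^2, in closed form
        xs.append((2.0 * xi - w_s + rho * xbar) / (2.0 + rho))
    xbar = sum(p * x for (p, _), x in zip(scenarios, xs))
    w = [w_s + rho * (x - xbar) for w_s, x in zip(w, xs)]
```

At convergence the scenario solutions agree with the consensus value and the weights are exactly the nonanticipativity multipliers, which is the quantity the PH-to-DD transform carries over.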
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bokanowski, Olivier, E-mail: boka@math.jussieu.fr; Picarelli, Athena, E-mail: athena.picarelli@inria.fr; Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr
2015-02-15
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied, leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.
Search Planning Under Incomplete Information Using Stochastic Optimization and Regression
2011-09-01
solve since they involve uncertainty and unknown parameters (see for example Shapiro et al., 2009; Wallace & Ziemba, 2005). One application area is...M16130.2E. 43 Wallace, S. W., & Ziemba, W. T. (2005). Applications of stochastic programming. Philadelphia, PA: Society for Industrial and Applied
Chen, Jianjun; Frey, H Christopher
2004-12-15
Methods for optimization of process technologies considering the distinction between variability and uncertainty are developed and applied to case studies of NOx control for Integrated Gasification Combined Cycle systems. Existing methods of stochastic optimization (SO) and stochastic programming (SP) are demonstrated. A comparison of SO and SP results provides the value of collecting additional information to reduce uncertainty. For example, an expected annual benefit of 240,000 dollars is estimated if uncertainty can be reduced before a final design is chosen. SO and SP are typically applied to uncertainty. However, when applied to variability, the benefit of dynamic process control is obtained. For example, an annual savings of 1 million dollars could be achieved if the system is adjusted to changes in process conditions. When variability and uncertainty are treated distinctively, a coupled stochastic optimization and programming method and a two-dimensional stochastic programming method are demonstrated via a case study. For the case study, the mean annual benefit of dynamic process control is estimated to be 700,000 dollars, with a 95% confidence range of 500,000 dollars to 940,000 dollars. These methods are expected to be of greatest utility for problems involving a large commitment of resources, for which small differences in designs can produce large cost savings.
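The "value of collecting additional information" that a comparison of SO and SP yields is the expected value of perfect information (EVPI); a minimal sketch with made-up costs for two candidate designs under two scenarios:

```python
# Two scenarios for a technology choice (hypothetical numbers): annual cost
# of design A / design B under low and high emission conditions.
scenarios = [(0.5, {"A": 100.0, "B": 140.0}),   # (probability, costs)
             (0.5, {"A": 180.0, "B": 150.0})]

# Here-and-now: commit to one design before uncertainty resolves
expected_cost = {d: sum(p * c[d] for p, c in scenarios) for d in ("A", "B")}
here_and_now = min(expected_cost.values())

# Wait-and-see: pick the best design in each scenario (perfect information)
wait_and_see = sum(p * min(c.values()) for p, c in scenarios)

# EVPI: the most it is worth paying to resolve uncertainty before designing
evpi = here_and_now - wait_and_see
```

Here the here-and-now design costs 140 in expectation, perfect information would reduce that to 125, so resolving the uncertainty is worth 15 per year, the same logic as the paper's 240,000-dollar estimate.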
Novel methodology for wide-ranged multistage morphing waverider based on conical theory
NASA Astrophysics Data System (ADS)
Liu, Zhen; Liu, Jun; Ding, Feng; Xia, Zhixun
2017-11-01
This study proposes the wide-ranged multistage morphing waverider design method. The flow field structure and aerodynamic characteristics of multistage waveriders are also analyzed. In this method, the multistage waverider is generated in the same conical flowfield, which contains a free-stream surface and different compression-stream surfaces. The obtained results show that the introduction of the multistage waverider design method can solve the problem of aerodynamic performance deterioration in the off-design state and allow the vehicle to always maintain the optimal flight state. The multistage waverider design method, combined with transfiguration flight strategy, can lead to greater design flexibility and the optimization of hypersonic wide-ranged waverider vehicles.
XMDS2: Fast, scalable simulation of coupled stochastic partial differential equations
NASA Astrophysics Data System (ADS)
Dennis, Graham R.; Hope, Joseph J.; Johnsson, Mattias T.
2013-01-01
XMDS2 is a cross-platform, GPL-licensed, open source package for numerically integrating initial value problems that range from a single ordinary differential equation up to systems of coupled stochastic partial differential equations. The equations are described in a high-level XML-based script, and the package generates low-level optionally parallelised C++ code for the efficient solution of those equations. It combines the advantages of high-level simulations, namely fast and low-error development, with the speed, portability and scalability of hand-written code. XMDS2 is a complete redesign of the XMDS package, and features support for a much wider problem space while also producing faster code. Program summary: Program title: XMDS2 Catalogue identifier: AENK_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENK_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 2 No. of lines in distributed program, including test data, etc.: 872490 No. of bytes in distributed program, including test data, etc.: 45522370 Distribution format: tar.gz Programming language: Python and C++. Computer: Any computer with a Unix-like system, a C++ compiler and Python. Operating system: Any Unix-like system; developed under Mac OS X and GNU/Linux. RAM: Problem dependent (roughly 50 bytes per grid point) Classification: 4.3, 6.5. External routines: The external libraries required are problem-dependent. Uses FFTW3 Fourier transforms (used only for FFT-based spectral methods), dSFMT random number generation (used only for stochastic problems), MPI message-passing interface (used only for distributed problems), HDF5, GNU Scientific Library (used only for Bessel-based spectral methods) and a BLAS implementation (used only for non-FFT-based spectral methods). Nature of problem: General coupled initial-value stochastic partial differential equations.
Solution method: Spectral method with method-of-lines integration Running time: Determined by the size of the problem
Turbomachinery Design Using CFD (La Conception des Turbomachines par l’Aerodynamique Numerique).
1994-05-01
Method for Flow Calculations in Turbomachines", Vrije Univ. Brussel, Dienst Stromingsmechanica, VUB-STR ... Thompkins, W.T., 1981, "A Fortran Program for Calcu-... Model Equation for Simulating Flows in Multistage Turbomachinery, ASME paper 85-GT-226, Houston, March ... mung um Profile, MBB-Bericht Nr. UFE 1352, 1977
77 FR 60956 - State Graduated Driver Licensing Incentive Grant
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-05
... multi-stage licensing systems that require novice drivers younger than 21 years of age to comply with... crashes involving 16-year-old drivers. A recent study by the Insurance Institute for Highway Safety ranked... associated with 30 percent lower fatal crash rates among 15-17 year-olds compared to weak licensing programs...
ERIC Educational Resources Information Center
Inciardi, James A.; Martin, Steven S.; Butzin, Clifford A.
2004-01-01
With growing numbers of drug-involved offenders, substance abuse treatment has become a critical part of corrections. A multistage therapeutic community implemented in the Delaware correctional system has as its centerpiece a residential treatment program during work release--the transition between prison and community. An evaluation of this…
1969-01-01
A close-up view of the Apollo 11 command service module ready to be mated with the spacecraft LEM adapter of the third stage.
1968-12-20
Searchlights penetrate the darkness surrounding Apollo 8 on Pad 39-A at Kennedy Space Center. This mission was the first manned flight using the Saturn V.
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
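The batch-processing pattern can be sketched with Python's standard library (a seeded random walk stands in for a single groundwater-model run; the paper's JPPF-based cluster is replaced by a local worker pool purely for illustration):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def realization(seed, steps=1000):
    # one stochastic realization: each model run gets its own seeded RNG so
    # results are reproducible regardless of which worker executes it
    rng = random.Random(seed)
    pos = 0.0
    for _ in range(steps):
        pos += rng.gauss(0.0, 1.0)
    return pos

# farm the realizations out to a worker pool, as a distributed framework
# would farm Monte Carlo model runs out to cluster nodes
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(realization, range(100)))

mean_final = sum(results) / len(results)
```

Because each realization is independent and carries its own seed, the ensemble scales out with near-linear speedup, which is what makes the 97% wall-clock reduction reported in the paper possible.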
Parameter-based stochastic simulation of selection and breeding for multiple traits
Jennifer Myszewski; Thomas Byram; Floyd Bridgwater
2006-01-01
To increase the adaptability and economic value of plantations, tree improvement professionals often manage multiple traits in their breeding programs. When these traits are unfavorably correlated, breeders must weigh the economic importance of each trait and select for a desirable aggregate phenotype. Stochastic simulation allows breeders to test the effects of...
Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Xiao; Dong, Jin; Djouadi, Seddik M
2015-01-01
The key goal in energy efficient buildings is to reduce energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ constrained Stochastic Linear Quadratic Control (cSLQC) by minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency of the proposed control approach.
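Setting aside the constraints that motivate the SDP reformulation, the unconstrained stochastic LQ core can be illustrated with a scalar certainty-equivalent Riccati recursion (all numbers hypothetical, not the paper's building model):

```python
import random

# Scalar thermal-zone model (hypothetical numbers):
#   x_{t+1} = a*x_t + b*u_t + w_t,  w_t ~ N(0, 0.1^2)
# with stage cost q*x^2 + r*u^2. For additive Gaussian noise the optimal
# feedback equals the deterministic LQR gain (certainty equivalence),
# obtained from the backward Riccati recursion.
a, b, q, r = 0.9, 0.5, 1.0, 0.1
T = 50

P = q                        # terminal value-function curvature
gains = []
for _ in range(T):
    K = (b * P * a) / (r + b * P * b)   # feedback gain for this stage
    P = q + a * P * a - a * P * b * K   # Riccati backward step
    gains.append(K)
gains.reverse()              # gains[t] gives u_t = -gains[t] * x_t

# closed-loop simulation under the Gaussian disturbance
rng = random.Random(1)
x = 5.0
for t in range(T):
    u = -gains[t] * x
    x = a * x + b * u + rng.gauss(0.0, 0.1)
```

Probabilistic (chance) constraints on comfort are what turn this closed-form recursion into the SDP the paper solves.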
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Solikhin
2016-06-01
In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem in which the demand and purchasing cost parameters are random. For each time period, the proposed model selects the optimal supplier and calculates the optimal product volume to purchase from that supplier so that the inventory level remains as close as possible to the reference point at minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. The results show that, for each time period, the proposed model generated the optimal supplier and the inventory level tracked the reference point well.
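The per-period decision can be sketched as follows (hypothetical prices, capacities, and demand distribution): enumerate supplier/volume pairs and pick the one minimizing expected purchase cost plus a quadratic penalty for deviating from the reference inventory level. This one-period version omits the full dynamic-programming recursion for brevity.

```python
# Per-period supplier choice under random demand (all numbers hypothetical)
suppliers = {"S1": 4.0, "S2": 5.0}      # unit purchase prices
capacity = {"S1": 3, "S2": 10}          # per-period supply limits
demands = [(0.5, 4), (0.5, 8)]          # (probability, demand)
ref = 10                                # reference inventory level
penalty = 2.0                           # tracking-error weight

def stage_cost(inv, supplier, volume, demand):
    price = suppliers[supplier]
    next_inv = inv + volume - demand
    return price * volume + penalty * (next_inv - ref) ** 2, next_inv

def best_decision(inv):
    # enumerate suppliers and volumes; pick the min expected one-period cost
    best = None
    for s, cap in capacity.items():
        for v in range(cap + 1):
            cost = sum(p * stage_cost(inv, s, v, d)[0] for p, d in demands)
            if best is None or cost < best[0]:
                best = (cost, s, v)
    return best

cost, supplier, volume = best_decision(10)
```

With these numbers the cheaper supplier's capacity limit forces a switch to the pricier one: buying 5 units from S2 best balances purchase cost against expected tracking error.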
GillesPy: A Python Package for Stochastic Model Building and Simulation.
Abel, John H; Drawert, Brian; Hellander, Andreas; Petzold, Linda R
2016-09-01
GillesPy is an open-source Python package for model construction and simulation of stochastic biochemical systems. GillesPy consists of a Python framework for model building and an interface to the StochKit2 suite of efficient simulation algorithms based on the Gillespie stochastic simulation algorithms (SSA). To enable intuitive model construction and seamless integration into the scientific Python stack, we present an easy to understand, action-oriented programming interface. Here, we describe the components of this package and provide a detailed example relevant to the computational biology community.
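GillesPy delegates simulation to StochKit2, but the direct-method SSA it builds on fits in a few lines; here is a minimal pure-Python version for a single decay reaction X → ∅ (an illustration of the algorithm, not GillesPy's API):

```python
import random

def gillespie_decay(n0, k, t_max, rng):
    # direct-method SSA for the single reaction X -> 0 with rate constant k
    t, n = 0.0, n0
    times, counts = [0.0], [n0]
    while n > 0:
        a = k * n                       # total propensity
        t += rng.expovariate(a)         # exponential waiting time to next event
        if t > t_max:
            break
        n -= 1                          # fire the reaction: one X is consumed
        times.append(t)
        counts.append(n)
    return times, counts

rng = random.Random(42)
times, counts = gillespie_decay(100, 0.1, 50.0, rng)
```

Each trajectory is an exact sample from the chemical master equation for this system; averaging many such trajectories approaches the deterministic exponential decay.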
NASA Astrophysics Data System (ADS)
Eichhorn, Ralf; Aurell, Erik
2014-04-01
'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. 
This debate culminated in the development of linear response theory for small deviations from equilibrium, in which a general framework is constructed from the analysis of non-equilibrium states close to equilibrium. In a next step, Prigogine and others developed linear irreversible thermodynamics, which establishes relations between transport coefficients and entropy production on a phenomenological level in terms of thermodynamic forces and fluxes. However, beyond the realm of linear response no general theoretical results were available for quite a long time. This situation has changed drastically over the last 20 years with the development of stochastic thermodynamics, revealing that the range of validity of thermodynamic statements can indeed be extended deep into the non-equilibrium regime. Early developments in that direction trace back to the observations of symmetry relations between the probabilities for entropy production and entropy annihilation in non-equilibrium steady states [5-8] (nowadays categorized in the class of so-called detailed fluctuation theorems), and the derivations of the Bochkov-Kuzovlev [9, 10] and Jarzynski relations [11] (which are now classified as so-called integral fluctuation theorems). Apart from its fundamental theoretical interest, the developments in stochastic thermodynamics have experienced an additional boost from the recent experimental progress in fabricating, manipulating, controlling and observing systems on the micro- and nano-scale. These advances are not only of formidable use for probing and monitoring biological processes on the cellular, sub-cellular and molecular level, but even include the realization of a microscopic thermodynamic heat engine [12] or the experimental verification of Landauer's principle in a colloidal system [13]. 
The scientific program Stochastic Thermodynamics held between 4 and 15 March 2013, and hosted by The Nordic Institute for Theoretical Physics (Nordita), was attended by more than 50 scientists from the Nordic countries and elsewhere, amongst them many leading experts in the field. During the program, the most recent developments, open questions and new ideas in stochastic thermodynamics were presented and discussed. From the talks and debates, the notion of information in stochastic thermodynamics, the fundamental properties of entropy production (rate) in non-equilibrium, the efficiency of small thermodynamic machines and the characteristics of optimal protocols for the applied (cyclic) forces crystallized as the main themes. Surprisingly, the long-studied adiabatic piston, its peculiarities and its relation to stochastic thermodynamics were also the subject of intense discussions. The comment on the Nordita program Stochastic Thermodynamics published in this issue of Physica Scripta exploits the Jarzynski relation for determining free energy differences in the adiabatic piston. This scientific program and the contribution presented here were made possible by the financial and administrative support of The Nordic Institute for Theoretical Physics.
Two-stage fuzzy-stochastic robust programming: a hybrid model for regional air quality management.
Li, Yongping; Huang, Guo H; Veawab, Amornvadee; Nie, Xianghui; Liu, Lei
2006-08-01
In this study, a hybrid two-stage fuzzy-stochastic robust programming (TFSRP) model is developed and applied to the planning of an air-quality management system. As an extension of existing fuzzy-robust programming and two-stage stochastic programming methods, the TFSRP can explicitly address complexities and uncertainties of the study system without unrealistic simplifications. Uncertain parameters can be expressed as probability density and/or fuzzy membership functions, such that robustness of the optimization efforts can be enhanced. Moreover, economic penalties as corrective measures against any infeasibilities arising from the uncertainties are taken into account. This method can, thus, provide a linkage to predefined policies determined by authorities that have to be respected when a modeling effort is undertaken. In its solution algorithm, the fuzzy decision space can be delimited through specification of the uncertainties using dimensional enlargement of the original fuzzy constraints. The developed model is applied to a case study of regional air quality management. The results indicate that reasonable solutions have been obtained. The solutions can be used for further generating pollution-mitigation alternatives with minimized system costs and for providing a more solid support for sound environmental decisions.
Stochastic Geometric Models with Non-stationary Spatial Correlations in Lagrangian Fluid Flows
NASA Astrophysics Data System (ADS)
Gay-Balmaz, François; Holm, Darryl D.
2018-01-01
Inspired by spatiotemporal observations from satellites of the trajectories of objects drifting near the surface of the ocean in the National Oceanic and Atmospheric Administration's "Global Drifter Program", this paper develops data-driven stochastic models of geophysical fluid dynamics (GFD) with non-stationary spatial correlations representing the dynamical behaviour of oceanic currents. Three models are considered. Model 1 from Holm (Proc R Soc A 471:20140963, 2015) is reviewed, in which the spatial correlations are time independent. Two new models, called Model 2 and Model 3, introduce two different symmetry breaking mechanisms by which the spatial correlations may be advected by the flow. These models are derived using reduction by symmetry of stochastic variational principles, leading to stochastic Hamiltonian systems, whose momentum maps, conservation laws and Lie-Poisson bracket structures are used in developing the new stochastic Hamiltonian models of GFD.
Stochastic Watershed Models for Risk Based Decision Making
NASA Astrophysics Data System (ADS)
Vogel, R. M.
2017-12-01
Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology providing stochastic streamflow models (SSMs), which could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed `stochastic watershed models' (SWMs) useful as input to nearly all modern risk based water resource decision making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters and model errors, to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.
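The SWM idea of driving a deterministic watershed model with stochastic inputs can be sketched in a few lines. The example below is a deliberately minimal illustration (a single linear reservoir with exponentially distributed rainfall; all parameters are assumptions, not from the commentary):

```python
import random

def linear_reservoir(rainfall, k=0.3, s0=10.0):
    """Deterministic watershed model: a single linear reservoir.
    Each step's rainfall adds to storage, and outflow is k times storage."""
    s, flows = s0, []
    for p in rainfall:
        s += p
        q = k * s          # outflow proportional to storage
        s -= q
        flows.append(q)
    return flows

def swm_ensemble(n_traces=100, n_steps=50, mean_rain=2.0, seed=7):
    """Stochastic watershed model: drive the deterministic model with
    stochastic (here, exponentially distributed) rainfall series to
    generate an ensemble of streamflow traces."""
    rng = random.Random(seed)
    ensemble = []
    for _ in range(n_traces):
        rain = [rng.expovariate(1.0 / mean_rain) for _ in range(n_steps)]
        ensemble.append(linear_reservoir(rain))
    return ensemble

traces = swm_ensemble()
```

Each trace is one plausible streamflow future; risk measures (flood quantiles, reliability) are then computed across the ensemble rather than from a single deterministic run.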
Stochastic search, optimization and regression with energy applications
NASA Astrophysics Data System (ADS)
Hannah, Lauren A.
Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem to avoid the sequential decision process associated with the multi-stage version. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values---which depend on the selected portfolio---to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet Process mixtures of Generalized Linear Models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples of when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response.
We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate DP-GLM on several data sets, comparing it to modern methods of nonparametric regression like CART, Bayesian trees and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable which affects the shape of the objective function. Currently, there is no general purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel-based weights and more generally that nonparametric estimation methods provide good solutions to otherwise intractable problems.
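The knapsack reformulation mentioned in this abstract can be illustrated with a toy instance. Once marginal project values are fixed, selecting a portfolio under a budget is a 0-1 knapsack problem, solvable exactly by dynamic programming (the sketch below is illustrative; the project values and costs are invented, not from the thesis):

```python
def knapsack_portfolio(values, costs, budget):
    """0-1 knapsack by dynamic programming: choose projects maximizing total
    (fixed) marginal value subject to a budget on total cost.
    Costs and budget are integers; returns (best value, chosen indices)."""
    n = len(values)
    # best[b] = (value, chosen projects) achievable with budget b
    best = [(0.0, [])] * (budget + 1)
    for i in range(n):
        new = best[:]
        for b in range(costs[i], budget + 1):
            v, chosen = best[b - costs[i]]   # previous generation: 0-1 choice
            if v + values[i] > new[b][0]:
                new[b] = (v + values[i], chosen + [i])
        best = new
    return max(best, key=lambda t: t[0])

# Three hypothetical projects with (marginal value, cost) and budget 5
val, picks = knapsack_portfolio([6.0, 10.0, 12.0], [1, 2, 3], budget=5)
```

Here projects 1 and 2 (cost 2 + 3 = 5, value 22) beat any other feasible combination, which is what the DP returns.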
Multistage remote sensing: toward an annual national inventory
Raymond L. Czaplewski
1999-01-01
Remote sensing can improve efficiency of statistical information. Landsat data can identify and map a few broad categories of forest cover and land use. However, more-detailed information requires a sample of higher-resolution imagery, which costs less than field data but considerably more than Landsat data. A national remote sensing program would be a major...
1965-03-01
The hydrogen-powered second stage is being lowered into place during the final phase of fabrication of the Saturn V moon rocket at North American's Seal Beach, California facility. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.
1968-02-06
Apollo 6, the second and last of the unmanned Saturn V test flights, is slowly transported past the Vehicle Assembly Building on the way to launch pad 39-A. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.
Stochastic Education in Childhood: Examining the Learning of Teachers and Students
ERIC Educational Resources Information Center
de Souza, Antonio Carlos; Lopes, Celi Espasandin; de Oliveira, Débora
2014-01-01
This paper presents discussions on stochastic education in early childhood, based on two doctoral research projects carried out with groups of preschool teachers from public schools in the Brazilian cities of Suzano and São Paulo who were participating in a continuing education program. The objective is to reflect on the analysis of two didactic…
NASA airframe structural integrity program
NASA Technical Reports Server (NTRS)
Harris, Charles E.
1990-01-01
NASA initiated a research program with the long-term objective of supporting the aerospace industry in addressing issues related to the aging of the commercial transport fleet. The program combines advanced fatigue crack growth prediction methodology with innovative nondestructive examination technology with the focus on multi-site damage (MSD) at riveted connections. A fracture mechanics evaluation of the concept of pressure proof testing the fuselage to screen for MSD was completed. A successful laboratory demonstration of the ability of the thermal flux method to detect disbonds at riveted lap splice joints was conducted. All long-term program elements were initiated, and the plans for the methodology verification program are being coordinated with the airframe manufacturers.
Investment portfolio of a pension fund: Stochastic model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bosch-Princep, M.; Fontanals-Albiol, H.
1994-12-31
This paper presents a stochastic programming model that aims at obtaining the optimal investment portfolio of a pension fund. The model has been designed bearing in mind the liabilities of the fund to its members. The essential characteristic of the objective function and the constraints is the randomness of the coefficients and of the right-hand side of the constraints, so it is necessary to use techniques of stochastic mathematical programming to determine the amount of money that should be assigned to each sort of investment. It is also important to know the attitude towards risk of the person who has to take the decisions. The model incorporates the relation between the different coefficients of the objective function and constraints of each period of the temporal horizon, through linear and discrete random processes. Likewise, it includes hypotheses related to Spanish law concerning pension funds.
Interactive two-stage stochastic fuzzy programming for water resources management.
Wang, S; Huang, G H
2011-08-01
In this study, an interactive two-stage stochastic fuzzy programming (ITSFP) approach has been developed through incorporating an interactive fuzzy resolution (IFR) method within an inexact two-stage stochastic programming (ITSP) framework. ITSFP can not only tackle dual uncertainties presented as fuzzy boundary intervals that exist in the objective function and the left- and right-hand sides of constraints, but also permit in-depth analyses of various policy scenarios that are associated with different levels of economic penalties when the promised policy targets are violated. A management problem in terms of water resources allocation has been studied to illustrate the applicability of the proposed approach. The results indicate that a set of solutions under different feasibility degrees has been generated for planning the water resources allocation. They can help the decision makers (DMs) to conduct in-depth analyses of tradeoffs between economic efficiency and constraint-violation risk, as well as enable them to identify, in an interactive way, a desired compromise between satisfaction degree of the goal and feasibility of the constraints (i.e., risk of constraint violation).
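The two-stage recourse structure underlying ITSP-type models can be shown with a tiny numeric sketch (illustrative only; the benefit, penalty and scenario numbers are invented): a first-stage allocation target is promised before the water supply is known, and each scenario where supply falls short triggers a costlier corrective (recourse) penalty.

```python
def expected_net_benefit(target, scenarios, benefit=10.0, penalty=15.0):
    """Second-stage evaluation: promising `target` earns `benefit` per unit,
    but any scenario shortfall incurs the higher `penalty` per unit
    (the economic penalty for violating the promised policy target)."""
    total = 0.0
    for prob, supply in scenarios:
        shortfall = max(0.0, target - supply)
        total += prob * (benefit * target - penalty * shortfall)
    return total

def best_target(candidates, scenarios):
    """First-stage decision: pick the promised target with the highest
    expected net benefit across all scenarios."""
    return max(candidates, key=lambda t: expected_net_benefit(t, scenarios))

scenarios = [(0.3, 4.0), (0.5, 6.0), (0.2, 9.0)]  # (probability, water supply)
target = best_target([4.0, 6.0, 9.0], scenarios)
```

The optimal promise here is the middle target: promising 9 units maximizes the upside but is punished by the recourse penalties in the two dry scenarios.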
The role of predictive uncertainty in the operational management of reservoirs
NASA Astrophysics Data System (ADS)
Todini, E.
2014-09-01
The present work deals with the operational management of multi-purpose reservoirs, whose optimisation-based rules are derived, in the planning phase, via deterministic (linear and nonlinear programming, dynamic programming, etc.) or via stochastic (generally stochastic dynamic programming) approaches. In operation, the resulting deterministic or stochastic optimised operating rules are then triggered based on inflow predictions. In order to fully benefit from predictions, one must avoid using them as direct inputs to the reservoirs, but rather assess the "predictive knowledge" in terms of a predictive probability density to be operationally used in the decision making process for the estimation of expected benefits and/or expected losses. Using a theoretical and extremely simplified case, it will be shown why directly using model forecasts instead of the full predictive density leads to less robust reservoir management decisions. Moreover, the effectiveness and the tangible benefits for using the entire predictive probability density instead of the model predicted values will be demonstrated on the basis of the Lake Como management system, operational since 1997, as well as on the basis of a case study on the lake of Aswan.
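The paper's central point, that a decision based on the full predictive density beats plugging in the point forecast, can be reproduced in a toy setting (illustrative sketch only; the lognormal predictive density and the loss weights are assumptions, not the paper's case studies). With asymmetric losses, the expected-loss minimizer is a quantile of the predictive density, not its mean:

```python
import random
import statistics

def expected_loss(release, inflow_samples, flood_cost=1.0, shortage_cost=4.0):
    """Expected loss of a release decision under the predictive density,
    approximated by samples: releasing more than the realized inflow
    (shortage) is penalized more heavily than releasing less (flood risk)."""
    total = 0.0
    for q in inflow_samples:
        if release > q:
            total += shortage_cost * (release - q)
        else:
            total += flood_cost * (q - release)
    return total / len(inflow_samples)

rng = random.Random(3)
samples = [rng.lognormvariate(2.0, 0.6) for _ in range(5000)]  # predictive density

# Decision using the full predictive density: minimize expected loss on a grid
candidates = [i * 0.1 for i in range(1, 301)]
full_density = min(candidates, key=lambda r: expected_loss(r, samples))

# Decision plugging the point forecast (the predictive mean) in directly
point = statistics.fmean(samples)
```

Because the loss is asymmetric and the density is skewed, the point-forecast decision releases far too much; the full-density decision sits near the 20% quantile and incurs a visibly lower expected loss.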
Kemp, Mark A
2015-11-03
A high power RF device has an electron beam cavity, a modulator, and a circuit for feed-forward energy recovery from a multi-stage depressed collector to the modulator. The electron beam cavity includes a cathode, an anode, and the multi-stage depressed collector, and the modulator is configured to provide pulses to the cathode. Voltages of the electrode stages of the multi-stage depressed collector are allowed to float as determined by fixed impedances seen by the electrode stages. The energy recovery circuit includes a storage capacitor that dynamically biases potentials of the electrode stages of the multi-stage depressed collector and provides recovered energy from the electrode stages of the multi-stage depressed collector to the modulator. The circuit may also include a step-down transformer, where the electrode stages of the multi-stage depressed collector are electrically connected to separate taps on the step-down transformer.
Programming Probabilistic Structural Analysis for Parallel Processing Computer
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Chamis, Christos C.; Murthy, Pappu L. N.
1991-01-01
The ultimate goal of this research program is to make Probabilistic Structural Analysis (PSA) computationally efficient and hence practical for the design environment by achieving large scale parallelism. The paper identifies the multiple levels of parallelism in PSA, identifies methodologies for exploiting this parallelism, describes the development of a parallel stochastic finite element code, and presents results of two example applications. It is demonstrated that speeds within five percent of those theoretically possible can be achieved. A special-purpose numerical technique, the stochastic preconditioned conjugate gradient method, is also presented and demonstrated to be extremely efficient for certain classes of PSA problems.
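The "stochastic preconditioned conjugate gradient method" mentioned above builds on the plain conjugate gradient iteration. As a minimal sketch of that deterministic core (not the paper's special-purpose variant; the 2x2 system is an invented example), CG only needs the matrix through a matrix-vector product, which is what makes it attractive for large stochastic finite element systems:

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive-definite system
    A x = b, with A supplied only as a matrix-vector product."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x, starting from x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Small SPD example: [[4, 1], [1, 3]] x = [1, 2]
A = [[4.0, 1.0], [1.0, 3.0]]
x = conjugate_gradient(lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A],
                       [1.0, 2.0])
```

For an n-dimensional SPD system, CG converges in at most n iterations in exact arithmetic, which is why preconditioning (the paper's contribution, for perturbed stochastic systems) is about reducing the effective number of iterations.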
Simulation-based planning for theater air warfare
NASA Astrophysics Data System (ADS)
Popken, Douglas A.; Cox, Louis A., Jr.
2004-08-01
Planning for Theatre Air Warfare can be represented as a hierarchy of decisions. At the top level, surviving airframes must be assigned to roles (e.g., Air Defense, Counter Air, Close Air Support, and AAF Suppression) in each time period in response to changing enemy air defense capabilities, remaining targets, and roles of opposing aircraft. At the middle level, aircraft are allocated to specific targets to support their assigned roles. At the lowest level, routing and engagement decisions are made for individual missions. The decisions at each level form a set of time-sequenced Courses of Action taken by opposing forces. This paper introduces a set of simulation-based optimization heuristics operating within this planning hierarchy to optimize allocations of aircraft. The algorithms estimate distributions for stochastic outcomes of the pairs of Red/Blue decisions. Rather than using traditional stochastic dynamic programming to determine optimal strategies, we use an innovative combination of heuristics, simulation-optimization, and mathematical programming. Blue decisions are guided by a stochastic hill-climbing search algorithm while Red decisions are found by optimizing over a continuous representation of the decision space. Stochastic outcomes are then provided by fast, Lanchester-type attrition simulations. This paper summarizes preliminary results from top and middle level models.
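The paper's combination of a stochastic hill-climbing search with a noisy attrition simulation can be sketched generically (an illustration under invented assumptions: a one-dimensional allocation variable, a quadratic true objective, and Gaussian simulation noise; none of this is the paper's actual model):

```python
import random

def noisy_outcome(allocation, rng):
    """Stand-in for an attrition simulation: the true value of an allocation
    x in [0, 10] is -(x - 7)^2, observed with simulation noise."""
    return -(allocation - 7.0) ** 2 + rng.gauss(0.0, 1.0)

def stochastic_hill_climb(rng, steps=300, reps=20):
    """Hill climbing on a noisy objective: propose a small random move and
    accept it only if the simulated outcome, averaged over `reps`
    replications, improves."""
    def avg(a):
        return sum(noisy_outcome(a, rng) for _ in range(reps)) / reps
    x = 0.0
    best = avg(x)
    for _ in range(steps):
        cand = min(10.0, max(0.0, x + rng.gauss(0.0, 0.5)))
        val = avg(cand)
        if val > best:
            x, best = cand, val
    return x

x_star = stochastic_hill_climb(random.Random(11))
```

Averaging replications before accepting a move is the key simulation-optimization ingredient: with a single noisy sample per candidate, the search would frequently accept moves that only look better by chance.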
State-space self-tuner for on-line adaptive control
NASA Technical Reports Server (NTRS)
Shieh, L. S.
1994-01-01
Dynamic systems, such as flight vehicles, satellites and space stations, operating in real environments, constantly face parameter and/or structural variations owing to nonlinear behavior of actuators, failure of sensors, changes in operating conditions, disturbances acting on the system, etc. In the past three decades, adaptive control has been shown to be effective in dealing with dynamic systems in the presence of parameter uncertainties, structural perturbations, random disturbances and environmental variations. Among the existing adaptive control methodologies, the state-space self-tuning control methods, initially proposed by us, are shown to be effective in designing advanced adaptive controllers for multivariable systems. In our approaches, we have embedded the standard Kalman state-estimation algorithm into an online parameter estimation algorithm. Thus, the advanced state-feedback controllers can be easily established for digital adaptive control of continuous-time stochastic multivariable systems. A state-space self-tuner for a general multivariable stochastic system has been developed and successfully applied to the space station for on-line adaptive control. Also, a technique for multistage design of an optimal momentum management controller for the space station has been developed and reported. Moreover, we have successfully developed various digital redesign techniques which can convert a continuous-time controller to an equivalent digital controller. As a result, the expensive and unreliable continuous-time controller can be implemented using low-cost and high performance microprocessors. Recently, we have developed a new hybrid state-space self tuner using a new dual-rate sampling scheme for on-line adaptive control of continuous-time uncertain systems.
System, methods and apparatus for program optimization for multi-threaded processor architectures
Bastoul, Cedric; Lethin, Richard A; Leung, Allen K; Meister, Benoit J; Szilagyi, Peter; Vasilache, Nicolas T; Wohlford, David E
2015-01-06
Methods, apparatus and computer software product for source code optimization are provided. In an exemplary embodiment, a first custom computing apparatus is used to optimize the execution of source code on a second computing apparatus. In this embodiment, the first custom computing apparatus contains a memory, a storage medium and at least one processor with at least one multi-stage execution unit. The second computing apparatus contains at least two multi-stage execution units that allow for parallel execution of tasks. The first custom computing apparatus optimizes the code for parallelism, locality of operations and contiguity of memory accesses on the second computing apparatus.
NASA Technical Reports Server (NTRS)
Anderson, J. E. (Principal Investigator)
1979-01-01
The net board foot volume (Scribner log rule) of the standing Ponderosa pine timber on the Defiance Unit of the Navajo Nation's forested land was estimated using a multistage forest volume inventory scheme with variable sample selection probabilities. The inventory designed to accomplish this task required that both LANDSAT MSS digital data and aircraft acquired data be used to locate one-acre ground splits, which were subsequently visited by ground teams conducting detailed tree measurements using an optical dendrometer. The dendrometer measurements were then punched on computer input cards and were entered in a computer program developed by the U.S. Forest Service. The resulting individual tree volume estimates were then expanded through the use of a statistically defined equation to produce the volume estimate for the entire area, which includes 192,026 acres and is approximately 44% of the total forested area of the Navajo Nation.
NASA Technical Reports Server (NTRS)
Curren, Arthur N.; Palmer, Raymond W.; Force, Dale A.; Dombro, Louis; Long, James A.
1987-01-01
A NASA-sponsored research and development contract has been established with the Watkins-Johnson Company to fabricate high-efficiency 20-watt helical traveling wave tubes (TWTs) operating at 8.4 to 8.43 GHz. The TWTs employ dynamic velocity tapers (DVTs) and advanced multistage depressed collectors (MDCs) having electrodes with low secondary electron emission characteristics. The TWT designs include two different DVTs; one for maximum efficiency and the other for minimum distortion and phase shift. The MDC designs include electrodes of untreated and ion-textured graphite as well as copper which has been treated for secondary electron emission suppression. Objectives of the program include achieving at least 55 percent overall efficiency. Tests with the first TWTs (with undepressed collectors) indicate good agreement between predicted and measured RF efficiencies with as high as 30 percent improvement in RF efficiency over conventional helix designs.
Computer program for aerodynamic and blading design of multistage axial-flow compressors
NASA Technical Reports Server (NTRS)
Crouse, J. E.; Gorrell, W. T.
1981-01-01
A code for computing the aerodynamic design of a multistage axial-flow compressor and, if desired, the associated blading geometry input for internal flow analysis codes is presented. Compressible flow, which is assumed to be steady and axisymmetric, is the basis for a two-dimensional solution in the meridional plane with viscous effects modeled by pressure loss coefficients and boundary layer blockage. The radial equation of motion and the continuity equation are solved with the streamline curvature method on calculation stations outside the blade rows. The annulus profile, mass flow, pressure ratio, and rotative speed are input. A number of other input parameters specify and control the blade row aerodynamics and geometry. In particular, blade element centerlines and thicknesses can be specified with fourth degree polynomials for two segments. The output includes a detailed aerodynamic solution and, if desired, blading coordinates that can be used for internal flow analysis codes.
1968-02-06
A bird's-eye view of Apollo 6 and its gantry leaving the Vehicle Assembly Building on the transporter heading to the launch site on Pad 39-A at Kennedy Space Center. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.
1967-01-01
This photo shows the Saturn V first stage being lowered to the ground following a successful test to determine the effects of continual vibrations simulating the effects of an actual launch. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.
1965-04-26
Two technicians watch carefully as cables prepare to lift a J-2 engine into a test stand. The J-2 powered the second stage and the third stage of the Saturn V moon rocket. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.
1969-07-01
A technician can be seen working atop the white room across from the escape tower of the Apollo 11 spacecraft a few days prior to the launch of the Saturn V moon rocket. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams
1960-01-01
The powerful J-2 engine is prominent in this photograph of a Saturn V Third Stage (S-IVB) resting on a transporter in the Manufacturing Facility at Marshall Space Flight Center in Huntsville, Alabama. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.
[Stochastic model of infectious diseases transmission].
Ruiz-Ramírez, Juan; Hernández-Rodríguez, Gabriela Eréndira
2009-01-01
The objective was to propose a mathematical model showing how population structure affects the size of infectious disease epidemics. This study was conducted during 2004 at the University of Colima. It used a generalized small-world network topology to represent contacts occurring within and between families. To that end, two programs were written in MATLAB to calculate the efficiency of the network. A program in the C programming language was also developed to represent the stochastic susceptible-infectious-removed model, and simultaneous results were obtained for the number of infected people. An increased number of families connected by meeting sites enlarged the size of the epidemics by roughly 400%. Population structure influences the rapid spread of infectious diseases, reaching epidemic proportions.
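A stochastic susceptible-infectious-removed (SIR) model like the one described can be sketched in a few lines. The version below is a simplified well-mixed illustration rather than the authors' small-world network model, and all parameter values are assumptions:

```python
import random

def stochastic_sir(n=200, i0=5, beta=0.3, gamma=0.1, steps=200, seed=5):
    """Discrete-time stochastic SIR in a well-mixed population: each step,
    every susceptible is infected with probability 1 - (1 - beta/n)^I
    (one Bernoulli trial per current infectious contact) and every
    infectious individual recovers with probability gamma."""
    rng = random.Random(seed)
    s, i, r = n - i0, i0, 0
    for _ in range(steps):
        if i == 0:
            break
        p_inf = 1.0 - (1.0 - beta / n) ** i   # per-susceptible infection prob.
        new_inf = sum(1 for _ in range(s) if rng.random() < p_inf)
        new_rec = sum(1 for _ in range(i) if rng.random() < gamma)
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

s, i, r = stochastic_sir()
```

Replacing the well-mixed infection probability with contacts drawn from a small-world family network is exactly the structural change the study examines: the epidemic size then depends on how many families share meeting sites.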
Hybrid Differential Dynamic Programming with Stochastic Search
NASA Technical Reports Server (NTRS)
Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob
2016-01-01
Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, namely with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static Dynamic Optimal Control algorithm used in the Mystic software. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP) is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient based method and will converge to a solution near an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation, by augmenting the HDDP algorithm for a wider search of the solution space.
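The monotonic basin hopping idea is simple enough to show on a toy multimodal function. The sketch below is illustrative only: a crude derivative-free local search stands in for the HDDP inner solver, and the test function and parameters are invented:

```python
import math
import random

def local_descent(f, x, step=0.05, iters=200):
    """Crude derivative-free local search standing in for the inner
    trajectory optimizer: repeatedly step toward lower f values."""
    fx = f(x)
    for _ in range(iters):
        for d in (step, -step):
            if f(x + d) < fx:
                x, fx = x + d, f(x + d)
                break
    return x, fx

def mbh(f, x0, rng, hops=100, perturb=1.0):
    """Monotonic basin hopping: after each local solution, randomly perturb
    it and re-optimize; keep the hop only if the new local optimum is
    strictly better (the 'monotonic' acceptance rule)."""
    x, fx = local_descent(f, x0)
    for _ in range(hops):
        cand, fcand = local_descent(f, x + rng.uniform(-perturb, perturb))
        if fcand < fx:             # accept improvements only
            x, fx = cand, fcand
    return x, fx

# Rugged 1-D test function with a global minimum at x = 0
f = lambda x: x * x + 2.0 * (1.0 - math.cos(3.0 * x))
x_star, f_star = mbh(f, x0=4.0, rng=random.Random(2))
```

Starting far from the global basin, the local search alone stalls in a secondary minimum; the random hops let the algorithm escape it while the monotonic rule guarantees the incumbent never gets worse.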
Water resources planning and management : A stochastic dual dynamic programming approach
NASA Astrophysics Data System (ADS)
Goor, Q.; Pinte, D.; Tilmant, A.
2008-12-01
Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be made taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale nonlinear optimization problems (NLP), seeking to maximize net benefits from the system operation while meeting operational and/or institutional constraints, and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent in the availability of water, essentially because of the computational difficulties associated with stochastic formulations. The purpose of this presentation is to present a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while accounting for hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem has to be a linear program. Recent developments improve the representation of the nonlinear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function.
This model is illustrated on a cascade of 14 reservoirs on the Nile river basin.
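As a concrete illustration of the recursion that SDDP generalizes, here is a minimal discretized stochastic dynamic program for a single reservoir. The storage grid, inflow scenarios, and benefit curve are invented for illustration and are not from the Nile case study.

```python
import numpy as np

# Toy discretized SDP for one reservoir (all numbers are illustrative
# assumptions): state = storage on a grid, uncertainty = inflow scenarios,
# decision = release, with a concave benefit of water released.
storages = range(11)                  # storage grid: 0..10 units, capacity 10
inflows = [0, 2, 4]                   # equally likely inflow scenarios
T = 12                                # number of stages
benefit = lambda r: 5.0 * np.sqrt(r)  # diminishing marginal value of release

V = np.zeros(len(storages))           # terminal value function
for _ in range(T):                    # backward recursion over stages
    V_new = np.empty_like(V)
    for s in storages:
        candidates = []
        for r in range(s + 1):        # release at most what is stored
            # expectation of the future value over inflow scenarios;
            # storage above capacity is spilled
            future = np.mean([V[min(s - r + q, 10)] for q in inflows])
            candidates.append(benefit(r) + future)
        V_new[s] = max(candidates)
    V = V_new

print([round(float(v), 2) for v in V])
```

SDDP replaces this exhaustive grid enumeration with sampled forward passes and Benders-type cuts on the value function, which is what breaks the curse of dimensionality for multireservoir systems.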
NASA Astrophysics Data System (ADS)
Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi
2017-01-01
This paper discusses the design of an optimal preview controller for a linear continuous-time stochastic control system over a finite-time horizon, using the method of the augmented error system. First, an assistant system is introduced for state shifting. Then, to overcome the difficulty that the state equation of the stochastic control system cannot be differentiated because of Brownian motion, an integrator is introduced. The augmented error system, which contains the integrator vector, control input, reference signal, error vector, and state of the system, is then constructed. This transforms the tracking problem of optimal preview control for the linear stochastic control system into the optimal output tracking problem for the augmented error system. Using the method of dynamic programming from stochastic control theory, the optimal controller with previewable signals for the augmented error system, which equals the controller of the original system, is obtained. Finally, numerical simulations show the effectiveness of the controller.
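The dynamic programming machinery such designs rest on can be illustrated with a standard finite-horizon LQR solved by backward Riccati recursion. The system matrices below are illustrative, and this sketch deliberately omits the preview and stochastic (Brownian) terms of the paper's formulation.

```python
import numpy as np

# Finite-horizon discrete-time LQR via backward dynamic programming
# (Riccati recursion). Matrices are invented: a double integrator with
# step 0.1, state cost Q, control cost R.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
N = 100                                 # horizon length

P = Q.copy()                            # terminal cost-to-go matrix
gains = []
for _ in range(N):                      # backward recursion
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                         # gains[t] is the gain at stage t

# Closed-loop simulation from an initial state with u_t = -K_t x_t
x = np.array([[1.0], [0.0]])
for K in gains:
    x = A @ x - B @ (K @ x)
print(round(float(np.linalg.norm(x)), 4))
```

The augmented-error-system approach in the paper applies the same backward recursion to an enlarged state that stacks the tracking error, the integrator, and the previewable reference samples.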
Huang, Wei; Shi, Jun; Yen, R T
2012-12-01
The objective of our study was to develop a computer program for computing the transit time frequency distributions of red blood cells in the human pulmonary circulation, based on our anatomic and elasticity data of blood vessels in the human lung. A stochastic simulation model was introduced to simulate blood flow in the human pulmonary circulation: the connectivity data of pulmonary blood vessels was converted into a probability matrix. Based on this model, the transit time of red blood cells in the human pulmonary circulation and the output blood pressure were studied. The stochastic simulation model can also be used to predict changes in blood flow in the human pulmonary circulation, with the advantages of lower computing cost and higher flexibility. In conclusion, a stochastic simulation approach was introduced to simulate blood flow in the hierarchical structure of the pulmonary circulation system and to calculate transit time distributions and blood pressure outputs.
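A minimal sketch of the simulation idea, with an invented five-segment network standing in for the real connectivity data: each red cell follows a random path drawn from a transition-probability matrix, accumulating dwell time per segment, and the collected times form the transit-time distribution.

```python
import random

# Hypothetical vessel network (not the paper's data):
# states: 0 artery, 1 arteriole, 2 capillary, 3 venule, 4 vein (absorbing)
random.seed(42)
P = [
    [0.0, 1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.2, 0.8, 0.0],   # cells may linger in the capillary bed
    [0.0, 0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],
]
dwell = [0.1, 0.2, 0.5, 0.2, 0.0]  # seconds spent per segment (illustrative)

def transit_time():
    state, t = 0, 0.0
    while state != 4:
        t += dwell[state]
        r, acc = random.random(), 0.0
        for nxt, p in enumerate(P[state]):  # sample the next segment
            acc += p
            if r < acc:
                state = nxt
                break
    return t

times = [transit_time() for _ in range(10000)]
m = sum(times) / len(times)
print(round(m, 3))
```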
NASA Astrophysics Data System (ADS)
Jiang, Fulin; Tang, Jie; Fu, Dinfa; Huang, Jianping; Zhang, Hui
2018-04-01
Multistage stress-strain curve correction based on an instantaneous friction factor was studied for the axisymmetric uniaxial hot compression of 7150 aluminum alloy. Experimental friction factors were calculated from continuous isothermal axisymmetric uniaxial compression tests at various deformation parameters. An instantaneous friction factor equation was then fitted by mathematical analysis. After verification by comparing single-pass flow stress correction against traditional average-friction-factor correction, the instantaneous friction factor equation was applied to correct multistage stress-strain curves. The corrected results were reasonable and were validated by multistage relative softening calculations. This research offers broad potential for implementing axisymmetric uniaxial compression in multistage physical simulations and for friction optimization in finite element analysis.
2017-01-05
Subject terms: logistics, attrition, discrete event simulation, Simkit, LBC. …stochastics, and discrete event model programmed in Java, building largely on the Simkit library. The primary purpose of the LBC model is to support… equations makes them incompatible with the discrete event construct of LBC. Bullard further advances this methodology by developing a stochastic…
Toward Control of Universal Scaling in Critical Dynamics
2016-01-27
…program that aims to synergistically combine two powerful and very successful theories for non-linear stochastic dynamics of cooperative multi-component systems… (Uwe C. Täuber, Michel Pleimling, Daniel J. Stilwell)
Mathematical Sciences Division 1992 Programs
1992-10-01
statistical theory that underlies modern signal analysis. There is a strong emphasis on stochastic processes and time series, particularly those which… include optimal resource planning and real-time scheduling of stochastic shop-floor processes. Scheduling systems will be developed that can adapt to… make forecasts for the length-of-service time series. Protocol analysis of these sessions will be used to identify relevant contextual features and to…
40 CFR 600.316-78 - Multistage manufacture.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 false Multistage manufacture. 600.316-78 Section 600.316-78 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY... and Later Model Year Automobiles-Labeling § 600.316-78 Multistage manufacture. Where more than one...
Exposure Control Using Adaptive Multi-Stage Item Bundles.
ERIC Educational Resources Information Center
Luecht, Richard M.
This paper presents a multistage adaptive testing test development paradigm that promises to handle content balancing and other test development needs, psychometric reliability concerns, and item exposure. The bundled multistage adaptive testing (BMAT) framework is a modification of the computer-adaptive sequential testing framework introduced by…
Decentralized stochastic control
NASA Technical Reports Server (NTRS)
Speyer, J. L.
1980-01-01
Decentralized stochastic control is characterized by the information available to one controller not being the same as the information available to another controller. The system, including the information, has a stochastic or uncertain component, which complicates the development of decision rules that one determines under the assumption that the system is deterministic. The system is also dynamic, meaning that present decisions affect future system responses and the information in the system. This circumstance presents a complex problem in which tools like dynamic programming are no longer applicable. These difficulties are discussed from an intuitive viewpoint, and particular assumptions are introduced which allow a limited theory that produces mechanizable affine decision rules.
TADS: A CFD-Based Turbomachinery Analysis and Design System with GUI: Methods and Results. 2.0
NASA Technical Reports Server (NTRS)
Koiro, M. J.; Myers, R. A.; Delaney, R. A.
1999-01-01
The primary objective of this study was the development of a Computational Fluid Dynamics (CFD) based turbomachinery airfoil analysis and design system, controlled by a Graphical User Interface (GUI). The computer codes resulting from this effort are referred to as TADS (Turbomachinery Analysis and Design System). This document is the Final Report describing the theoretical basis and analytical results from the TADS system developed under Task 10 of NASA Contract NAS3-27394, ADPAC System Coupling to Blade Analysis & Design System GUI, Phase II - Loss, Design and Multi-stage Analysis. TADS couples a throughflow solver (ADPAC) with a quasi-3D blade-to-blade solver (RVCQ3D) or a 3-D solver with a slip condition on the end walls (B2BADPAC) in an interactive package. Throughflow analysis and design capability was developed in ADPAC through the addition of blade force and blockage terms to the governing equations. A GUI was developed to simplify user input and automate the many tasks required to perform turbomachinery analysis and design. The coupling of the various programs was done in such a way that alternative solvers or grid generators could be easily incorporated into the TADS framework. Results of aerodynamic calculations using the TADS system are presented for a multistage compressor, a multistage turbine, two highly loaded fans, and several single-stage compressor and turbine example cases.
A Testlet Assembly Design for Adaptive Multistage Tests
ERIC Educational Resources Information Center
Luecht, Richard; Brumfield, Terry; Breithaupt, Krista
2006-01-01
This article describes multistage tests and some practical test development considerations related to the design and implementation of a multistage test, using the Uniform CPA (certified public accountant) Examination as a case study. The article further discusses the use of automated test assembly procedures in an operational context to produce…
Longitudinal Multistage Testing
ERIC Educational Resources Information Center
Pohl, Steffi
2013-01-01
This article introduces longitudinal multistage testing (lMST), a special form of multistage testing (MST), as a method for adaptive testing in longitudinal large-scale studies. In lMST designs, test forms of different difficulty levels are used, whereas the values on a pretest determine the routing to these test forms. Since lMST allows for…
ERIC Educational Resources Information Center
Bamani, Sanoussi; Toubali, Emily; Diarra, Sadio; Goita, Seydou; Berte, Zana; Coulibaly, Famolo; Sangare, Hama; Tuinsma, Marjon; Zhang, Yaobi; Dembele, Benoit; Melvin, Palesa; MacArthur, Chad
2013-01-01
The National Blindness Prevention Program in Mali has broadcast messages on the radio about trachoma as part of the country's trachoma elimination strategy since 2008. In 2011, a radio impact survey using multi-stage cluster sampling was conducted in the regions of Kayes and Segou to assess radio listening habits, coverage of the broadcasts,…
NASA Technical Reports Server (NTRS)
Hadden, G. B.; Kleckner, R. J.; Ragen, M. A.; Dyba, G. J.; Sheynin, L.
1981-01-01
The material presented is structured to guide the user in the practical and correct implementation of PLANETSYS, which is capable of simulating the thermomechanical performance of a multistage planetary power transmission. In this version of PLANETSYS, the user can select either SKF or NASA models for calculating lubricant film thickness and traction forces.
1960-01-01
This small group of unidentified officials is dwarfed by the gigantic size of the Saturn V first stage (S-1C) at the shipping area of the Manufacturing Engineering Laboratory at Marshall Space Flight Center in Huntsville, Alabama. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.
NASA Technical Reports Server (NTRS)
Mehra, R. K.; Rouhani, R.; Jones, S.; Schick, I.
1980-01-01
A model to assess the value of improved information regarding the inventories, production, exports, and imports of crops on a worldwide basis is discussed. A previously proposed model is interpreted in a stochastic control setting and the underlying assumptions of the model are revealed. In solving the stochastic optimization problem, the Markov programming approach is much more powerful and exact than the dynamic programming-simulation approach of the original model. The convergence of a dual-variable Markov programming algorithm is shown to be fast and efficient. A computer program for the general multicountry, multiperiod model is developed. As an example, the case of one country and two periods is treated and the results are presented in detail. A comparison with the original model results reveals certain interesting aspects of the algorithms and the dependence of the value of information on the incremental cost function.
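The "Markov programming" computation can be illustrated with textbook value iteration on a tiny Markov decision process. The states, actions, rewards, and transitions below are invented and are not the crop model.

```python
import numpy as np

# Value iteration for a 3-state, 2-action MDP (illustrative numbers only).
# P[a][s][s'] = transition probability, R[s][a] = expected one-step reward.
gamma = 0.9
P = np.array([
    [[0.7, 0.3, 0.0], [0.1, 0.8, 0.1], [0.0, 0.3, 0.7]],   # action 0: hold
    [[0.2, 0.5, 0.3], [0.0, 0.2, 0.8], [0.0, 0.0, 1.0]],   # action 1: sell
])
R = np.array([[0.0, 1.0], [0.5, 2.0], [1.0, 3.0]])

V = np.zeros(3)
for _ in range(500):
    # Q[s][a] = R[s][a] + gamma * sum_t P[a][s][t] * V[t]
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:   # Bellman operator has converged
        V = V_new
        break
    V = V_new
policy = Q.argmax(axis=1)
print(V.round(3), policy)
```

Here state 2 under action 1 is absorbing with reward 3, so its value converges to 3/(1 - 0.9) = 30, a useful hand-check on the fixed point.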
Hybrid Differential Dynamic Programming with Stochastic Search
NASA Technical Reports Server (NTRS)
Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob A.
2016-01-01
Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, most notably with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static/Dynamic Optimal Control algorithm used in the Mystic software. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP), is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient-based method and will converge to a solution near an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation, augmenting the HDDP algorithm for a wider search of the solution space.
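A minimal sketch of monotonic basin hopping: perturb the incumbent, run a local optimizer from the perturbed point, and accept only improvements. Here a SciPy local solver stands in for HDDP, and the multimodal Rastrigin function stands in for the trajectory cost; neither is from the study.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def f(x):  # Rastrigin: many local minima, global minimum 0 at the origin
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

x_best = rng.uniform(-4, 4, size=2)
f_best = f(x_best)
for _ in range(200):
    x0 = x_best + rng.normal(scale=0.7, size=2)   # hop: perturb the incumbent
    res = minimize(f, x0, method='Nelder-Mead')   # local search from the hop
    if res.fun < f_best:                          # monotonic: keep only improvements
        x_best, f_best = res.x, res.fun
print(round(float(f_best), 6))
```

The "monotonic" acceptance rule distinguishes MBH from simulated-annealing-style basin hopping, which sometimes accepts uphill moves.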
SLFP: a stochastic linear fractional programming approach for sustainable waste management.
Zhu, H; Huang, G H
2011-12-01
A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk. Copyright © 2011 Elsevier Ltd. All rights reserved.
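The deterministic core of SLFP, a linear fractional program, can be solved with the classical Charnes-Cooper transformation, which turns the ratio objective into an ordinary LP. The numbers below are invented, not the waste-flow case study.

```python
import numpy as np
from scipy.optimize import linprog

# maximize (num^T x + num0) / (den^T x + den0)
# s.t. A x <= b, x >= 0, with den^T x + den0 > 0 on the feasible set.
num, num0 = np.array([2.0, 1.0]), 0.0
den, den0 = np.array([1.0, 1.0]), 1.0
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])

# Charnes-Cooper: substitute y = t*x with t = 1/(den^T x + den0), giving
#   maximize num^T y + num0*t
#   s.t. A y - b t <= 0,  den^T y + den0 t = 1,  y, t >= 0
res = linprog(
    c=-np.append(num, num0),               # linprog minimizes, so negate
    A_ub=np.hstack([A, -b[:, None]]),
    b_ub=np.zeros(len(b)),
    A_eq=[np.append(den, den0)],
    b_eq=[1.0],
    bounds=[(0, None)] * 3,
)
y, t = res.x[:2], res.x[2]
x = y / t                                  # recover the original variables
ratio = (num @ x + num0) / (den @ x + den0)
print(x.round(6), round(float(ratio), 6))
```

SLFP then layers chance constraints on top of this LP core by replacing random coefficients with their quantile-based deterministic equivalents.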
NASA Astrophysics Data System (ADS)
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2018-05-01
In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
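The key reformulation step can be sketched directly: a chance constraint P(aᵀx ≤ B) ≥ α with a log-normal right-hand side B tightens to the deterministic constraint aᵀx ≤ F_B⁻¹(1 − α), using the log-normal quantile function instead of the normal one. The parameters below are illustrative, not the Erhai Lake data.

```python
import numpy as np
from scipy.stats import lognorm

mu, sigma = 2.0, 0.5                 # B = exp(N(mu, sigma^2)), e.g. a runoff capacity
B = lognorm(s=sigma, scale=np.exp(mu))

for alpha in (0.5, 0.9, 0.99):       # higher reliability => tighter RHS
    rhs = B.ppf(1 - alpha)
    print(alpha, round(float(rhs), 3))

# Monte Carlo check of the 90% reliability level with a seeded sample
rng = np.random.default_rng(1)
sample = rng.lognormal(mu, sigma, 200000)
rhs90 = B.ppf(1 - 0.9)
frac = float(np.mean(sample >= rhs90))
print(round(frac, 3))                 # fraction of scenarios satisfying the constraint
```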
Condition-dependent mate choice: A stochastic dynamic programming approach.
Frame, Alicia M; Mills, Alex F
2014-09-01
We study how changing female condition during the mating season and condition-dependent search costs impact female mate choice, and what strategies a female could employ in choosing mates to maximize her own fitness. We address this problem via a stochastic dynamic programming model of mate choice. In the model, a female encounters males sequentially and must choose whether to mate or continue searching. As the female searches, her own condition changes stochastically, and she incurs condition-dependent search costs. The female attempts to maximize the quality of the offspring, which is a function of the female's condition at mating and the quality of the male with whom she mates. The mating strategy that maximizes the female's net expected reward is a quality threshold. We compare the optimal policy with other well-known mate choice strategies, and we use simulations to examine how well the optimal policy fares under imperfect information. Copyright © 2014 Elsevier Inc. All rights reserved.
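A toy backward-induction version of such a model (all functional forms hypothetical: uniform male quality, a fixed search cost, and no condition dynamics) shows how the optimal policy emerges as an acceptance threshold.

```python
import numpy as np

# A female has T search periods; each period she meets a male of quality
# q ~ Uniform(0, 1), pays cost c to keep searching, and mates when the
# immediate reward q beats the expected value of continuing.
T, c = 10, 0.02
qs = np.linspace(0, 1, 1001)   # grid approximating the quality distribution

V = 0.0                        # value of reaching the deadline unmated
thresholds = []
for t in range(T):             # backward in time
    cont = V - c               # expected value of rejecting and searching on
    V = float(np.mean(np.maximum(qs, cont)))
    thresholds.append(cont)    # optimal rule: mate iff q >= cont
thresholds.reverse()           # thresholds[t] is the cutoff at period t
print([round(x, 3) for x in thresholds])
```

The thresholds decline toward the deadline: the female becomes less choosy as remaining opportunities dwindle, which is the qualitative pattern such models are built to capture.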
Stochastic Optimization for Unit Commitment-A Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Qipeng P.; Wang, Jianhui; Liu, Andrew L.
2015-07-01
Optimization models have been widely used in the power industry to aid the decision-making process of scheduling and dispatching electric power generation resources, a process known as unit commitment (UC). Since UC's birth, there have been two major waves of revolution in UC research and real-life practice. The first wave made mixed integer programming stand out from the early solution and modeling approaches for deterministic UC, such as priority lists, dynamic programming, and Lagrangian relaxation. With the high penetration of renewable energy, increasing deregulation of the electricity industry, and growing demands on system reliability, the next wave is focused on transitioning from traditional deterministic approaches to stochastic optimization for unit commitment. Since the literature has grown rapidly in the past several years, this paper reviews the works that have contributed to the modeling and computational aspects of stochastic optimization (SO) based UC. Relevant lines of future research are also discussed to help transform research advances into real-world applications.
A supplier selection and order allocation problem with stochastic demands
NASA Astrophysics Data System (ADS)
Zhou, Yun; Zhao, Lei; Zhao, Xiaobo; Jiang, Jianhua
2011-08-01
We consider a system comprising a retailer and a set of candidate suppliers that operates within a finite planning horizon of multiple periods. The retailer replenishes its inventory from the suppliers and satisfies stochastic customer demands. At the beginning of each period, the retailer makes decisions on the replenishment quantity, supplier selection and order allocation among the selected suppliers. An optimisation problem is formulated to minimise the total expected system cost, which includes an outer level stochastic dynamic program for the optimal replenishment quantity and an inner level integer program for supplier selection and order allocation with a given replenishment quantity. For the inner level subproblem, we develop a polynomial algorithm to obtain optimal decisions. For the outer level subproblem, we propose an efficient heuristic for the system with integer-valued inventory, based on the structural properties of the system with real-valued inventory. We investigate the efficiency of the proposed solution approach, as well as the impact of parameters on the optimal replenishment decision with numerical experiments.
NASA Astrophysics Data System (ADS)
Suo, M. Q.; Li, Y. P.; Huang, G. H.
2011-09-01
In this study, an inventory-theory-based interval-parameter two-stage stochastic programming (IB-ITSP) model is proposed by integrating inventory theory into an interval-parameter two-stage stochastic optimization framework. This method can not only address system uncertainties with complex presentation but also reflect the transfer batch (the quantity transferred at once) and the transfer period (the corresponding cycle time) in decision-making problems. A case study of water allocation in water resources management planning demonstrates the applicability of this method. Under different flow levels, different transfer measures are generated by this method when the promised water cannot be met. Moreover, interval solutions associated with different transfer costs are also provided. They can be used for generating decision alternatives and thus help water resources managers identify desired policies. Compared with the ITSP method, the IB-ITSP model provides a positive measure for solving water shortage problems and affords useful information for decision makers under uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novikov, V.
1991-05-01
The U.S. Army's detailed equipment decontamination process is a stochastic flow shop with N independent, non-identical jobs (vehicles) that have overlapping processing times. This flow shop consists of up to six non-identical machines (stations). With the exception of one station, the processing times of the jobs are random variables. Based on an analysis of the processing times, the jobs for the 56 Army heavy division companies were scheduled according to the best shortest expected processing time - longest expected processing time (SEPT-LEPT) sequence. To assist in this scheduling, the Gap Comparison Heuristic was developed to select the best SEPT-LEPT schedule. This schedule was then used in balancing the detailed equipment decon line in order to find the best possible site configuration subject to several constraints. The detailed troop decon line, in which all jobs are independent and identically distributed, was then balanced. Lastly, an NBC decon optimization computer program was developed using the scheduling and line balancing results. This program serves as a prototype module for the ANBACIS automated NBC decision support system. Subject terms: decontamination, stochastic flow shop, scheduling, stochastic scheduling, minimization of the makespan, SEPT-LEPT sequences, flow shop line balancing, ANBACIS.
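A small simulation, with invented job data, illustrates why the choice between SEPT and LEPT orderings matters in a stochastic flow shop: the makespan depends on the sequence even when the set of jobs is fixed.

```python
import random

# Two-machine stochastic flow shop with exponential processing times.
# Job data are invented, not the Army decontamination times.
random.seed(7)
jobs = [(1.0, 3.0), (2.0, 2.0), (4.0, 1.0), (3.0, 4.0)]  # (mean on M1, mean on M2)

def avg_makespan(order, reps=4000):
    total = 0.0
    for _ in range(reps):
        done1 = done2 = 0.0
        for j in order:
            m1, m2 = jobs[j]
            done1 += random.expovariate(1 / m1)            # job leaves machine 1
            done2 = max(done2, done1) + random.expovariate(1 / m2)
        total += done2
    return total / reps

sept = sorted(range(len(jobs)), key=lambda j: jobs[j][0])  # SEPT on machine 1
lept = list(reversed(sept))                                # LEPT on machine 1
ms_sept, ms_lept = avg_makespan(sept), avg_makespan(lept)
print(round(ms_sept, 2), round(ms_lept, 2))
```

A heuristic like the Gap Comparison Heuristic in the abstract would search over mixed SEPT-LEPT sequences (SEPT at the front, LEPT at the back) using estimates of this kind.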
NASA Astrophysics Data System (ADS)
Mainardi Fan, Fernando; Schwanenberg, Dirk; Alvarado, Rodolfo; Assis dos Reis, Alberto; Naumann, Steffi; Collischonn, Walter
2016-04-01
Hydropower is the most important electricity source in Brazil. During recent years, it accounted for 60% to 70% of the total electric power supply. Marginal costs of hydropower are lower than those of thermal power plants; therefore, there is a strong economic motivation to maximize its share. On the other hand, hydropower depends on the availability of water, which has a natural variability. Its extremes lead to risks of power production deficits during droughts and to safety issues in the reservoir and downstream river reaches during flood events. One building block of the proper management of hydropower assets is the short-term forecast of reservoir inflows as input for an online, event-based optimization of the release strategy. While deterministic forecasts and optimization schemes are the established techniques for short-term reservoir management, the use of probabilistic ensemble forecasts and stochastic optimization techniques is receiving growing attention, and a number of studies have shown their benefits. The present work shows one of the first hindcasting and closed-loop control experiments for a multi-purpose hydropower reservoir in a tropical region of Brazil. The case study is the hydropower project (HPP) Três Marias, located in southeast Brazil. The HPP reservoir is operated with two main objectives: (i) hydroelectricity generation and (ii) flood control at Pirapora City, located 120 km downstream of the dam. In the experiments, precipitation forecasts based on observed data, together with deterministic and probabilistic forecasts with 50 ensemble members of the ECMWF, are used as forcing of the MGB-IPH hydrological model to generate streamflow forecasts over a period of 2 years. The online optimization depends on a deterministic and a multi-stage stochastic version of a model predictive control scheme. Results for the perfect forecasts show the potential benefit of the online optimization and indicate a desired forecast lead time of 30 days.
In comparison, the use of actual forecasts with shorter lead times of up to 15 days shows the practical benefit of actual operational data. It appears that the use of stochastic optimization combined with ensemble forecasts leads to a significantly higher level of flood protection without compromising the HPP's energy production.
An adaptive technique for a redundant-sensor navigation system.
NASA Technical Reports Server (NTRS)
Chien, T.-T.
1972-01-01
An on-line adaptive technique is developed to provide a self-contained redundant-sensor navigation system with a capability to utilize its full potentiality in reliability and performance. This adaptive system is structured as a multistage stochastic process of detection, identification, and compensation. It is shown that the detection system can be effectively constructed on the basis of a design value, specified by mission requirements, of the unknown parameter in the actual system, and of a degradation mode in the form of a constant bias jump. A suboptimal detection system on the basis of Wald's sequential analysis is developed using the concept of information value and information feedback. The developed system is easily implemented, and demonstrates a performance remarkably close to that of the optimal nonlinear detection system. An invariant transformation is derived to eliminate the effect of nuisance parameters such that the ambiguous identification system can be reduced to a set of disjoint simple hypotheses tests. By application of a technique of decoupled bias estimation in the compensation system the adaptive system can be operated without any complicated reorganization.
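Wald's sequential analysis, the basis of the detection system described above, can be sketched as a standard sequential probability ratio test (SPRT) for a constant bias jump in noisy residuals. The means, noise level, and error probabilities below are illustrative.

```python
import math
import random

# SPRT deciding between H0 (no bias, mean mu0) and H1 (bias jump, mean mu1)
# from a stream of Gaussian observations. Numbers are invented.
random.seed(3)
mu0, mu1, sigma = 0.0, 1.0, 1.0
alpha, beta = 0.01, 0.01                 # false-alarm and miss probabilities
A = math.log((1 - beta) / alpha)         # accept-H1 boundary
B = math.log(beta / (1 - alpha))         # accept-H0 boundary

def sprt(true_mu):
    llr, n = 0.0, 0
    while B < llr < A:
        x = random.gauss(true_mu, sigma)
        # log-likelihood-ratio increment for Gaussian observations
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
        n += 1
    return ('H1' if llr >= A else 'H0'), n

decisions = [sprt(1.0) for _ in range(200)]   # simulate under H1 (bias present)
accept_h1 = sum(d == 'H1' for d, _ in decisions)
avg_n = sum(n for _, n in decisions) / len(decisions)
print(accept_h1, round(avg_n, 1))
```

The attraction for onboard detection is the small average sample number: the test usually stops after a handful of residuals rather than a fixed-length batch.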
Nemo: an evolutionary and population genetics programming framework.
Guillaume, Frédéric; Rougemont, Jacques
2006-10-15
Nemo is an individual-based, genetically explicit and stochastic population computer program for the simulation of population genetics and life-history trait evolution in a metapopulation context. It comes as both a C++ programming framework and an executable program file. Its object-oriented programming design gives it the flexibility and extensibility needed to implement a large variety of forward-time evolutionary models. It provides developers with abstract models allowing them to implement their own life-history traits and life-cycle events. Nemo offers a large panel of population models, from the Island model to lattice models with demographic or environmental stochasticity and a variety of already implemented traits (deleterious mutations, neutral markers and more), life-cycle events (mating, dispersal, aging, selection, etc.) and output operators for saving data and statistics. It runs on all major computer platforms including parallel computing environments. The source code, binaries and documentation are available under the GNU General Public License at http://nemo2.sourceforge.net.
Stochastic hyperfine interactions modeling library
NASA Astrophysics Data System (ADS)
Zacate, Matthew O.; Evenson, William E.
2011-04-01
The stochastic hyperfine interactions modeling library (SHIML) provides a set of routines to assist in the development and application of stochastic models of hyperfine interactions. The library provides routines written in the C programming language that (1) read a text description of a model for fluctuating hyperfine fields, (2) set up the Blume matrix, upon which the evolution operator of the system depends, and (3) find the eigenvalues and eigenvectors of the Blume matrix so that theoretical spectra of experimental techniques that measure hyperfine interactions can be calculated. The optimized vector and matrix operations of the BLAS and LAPACK libraries are utilized; however, there was a need to develop supplementary code to find an orthonormal set of (left and right) eigenvectors of complex, non-Hermitian matrices. In addition, example code is provided to illustrate the use of SHIML to generate perturbed angular correlation spectra for the special case of polycrystalline samples when anisotropy terms of higher order than A can be neglected. Program summary: Program title: SHIML. Catalogue identifier: AEIF_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIF_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU GPL 3. No. of lines in distributed program, including test data, etc.: 8224. No. of bytes in distributed program, including test data, etc.: 312 348. Distribution format: tar.gz. Programming language: C. Computer: Any. Operating system: LINUX, OS X. RAM: Varies. Classification: 7.4. External routines: TAPP [1], BLAS [2], a C-interface to BLAS [3], and LAPACK [4]. Nature of problem: In condensed matter systems, hyperfine methods such as nuclear magnetic resonance (NMR), Mössbauer effect (ME), muon spin rotation (μSR), and perturbed angular correlation spectroscopy (PAC) measure electronic and magnetic structure within Angstroms of nuclear probes through the hyperfine interaction.
When interactions fluctuate at rates comparable to the time scale of a hyperfine method, there is a loss in signal coherence, and spectra are damped. The degree of damping can be used to determine fluctuation rates, provided that theoretical expressions for spectra can be derived for relevant physical models of the fluctuations. SHIML provides routines to help researchers quickly develop code to incorporate stochastic models of fluctuating hyperfine interactions in calculations of hyperfine spectra. Solution method: Calculations are based on the method for modeling stochastic hyperfine interactions for PAC by Winkler and Gerdau [5]. The method is extended to include other hyperfine methods following the work of Dattagupta [6]. The code provides routines for reading model information from text files, allowing researchers to develop new models quickly without the need to modify computer code for each new model to be considered. Restrictions: In the present version of the code, only methods that measure the hyperfine interaction on one probe spin state, such as PAC, μSR, and NMR, are supported. Running time: Varies
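The supplementary eigenvector computation the library needed can be sketched in a few lines (here in Python with SciPy rather than the library's C): compute left and right eigenvectors of a complex non-Hermitian matrix and rescale them so the two sets are biorthonormal. The matrix is a random stand-in, not a physical Blume matrix.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

w, vl, vr = eig(M, left=True, right=True)
# SciPy's left vectors satisfy vl.conj().T @ M = diag(w) @ vl.conj().T.
# For distinct eigenvalues the overlap L^H R is diagonal; rescaling the
# right vectors by those diagonal entries enforces L^H R = I.
overlap = vl.conj().T @ vr
vr = vr / np.diag(overlap)          # divide column j by overlap[j, j]

ok = bool(np.allclose(vl.conj().T @ vr, np.eye(4), atol=1e-8))
print(ok)
```

Biorthonormal left/right pairs are what make the spectral decomposition of a non-Hermitian evolution operator usable for computing damped spectra.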
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
Probabilistic DHP adaptive critic for nonlinear stochastic control systems.
Herzallah, Randa
2013-06-01
Following the recently developed algorithms for fully probabilistic control design for general dynamic stochastic systems (Herzallah & Kárný, 2011; Kárný, 1996), this paper presents the solution to the probabilistic dual heuristic programming (DHP) adaptive critic method (Herzallah & Kárný, 2011) and a randomized control algorithm for stochastic nonlinear dynamical systems. The purpose of the randomized control input design is to make the joint probability density function of the closed-loop system as close as possible to a predetermined ideal joint probability density function. This paper completes the previous work (Herzallah & Kárný, 2011; Kárný, 1996) by formulating and solving the fully probabilistic control design problem for the more general case of nonlinear stochastic discrete-time systems. A simulated example is used to demonstrate the use of the algorithm, and encouraging results have been obtained. Copyright © 2013 Elsevier Ltd. All rights reserved.
Multi-stage internal gear/turbine fuel pump
Maier, Eugen; Raney, Michael Raymond
2004-07-06
A multi-stage internal gear/turbine fuel pump for a vehicle includes a housing having an inlet and an outlet and a motor disposed in the housing. The multi-stage internal gear/turbine fuel pump also includes a shaft extending axially and disposed in the housing. The multi-stage internal gear/turbine fuel pump further includes a plurality of pumping modules disposed axially along the shaft. One of the pumping modules is a turbine pumping module and another of the pumping modules is a gerotor pumping module for rotation by the motor to pump fuel from the inlet to the outlet.
Coupled stochastic soil moisture simulation-optimization model of deficit irrigation
NASA Astrophysics Data System (ADS)
Alizadeh, Hosein; Mousavi, S. Jamshid
2013-07-01
This study presents an explicit stochastic optimization-simulation model of short-term deficit irrigation management for large-scale irrigation districts. The model, a nonlinear nonconvex program with an economic objective function, is built on an agrohydrological simulation component. The simulation component integrates (1) an explicit stochastic model of soil moisture dynamics in the crop-root zone, accounting for the interaction of stochastic rainfall and irrigation with shallow water table effects, (2) a conceptual root-zone salt balance model, and (3) the FAO crop yield model. A particle swarm optimization algorithm, linked to the simulation component, solves the resulting nonconvex program with significantly better computational performance than a Monte Carlo-based implicit stochastic optimization model. The model was first tested on single-crop irrigation problems, through which the effects of the severity of water deficit on the objective function (net benefit), root-zone water balance, and irrigation water needs were assessed. The model was then applied to the Dasht-e-Abbas and Ein-khosh Fakkeh Irrigation Districts (DAID and EFID) of the Karkheh Basin in southwestern Iran. While the maximum net benefit was obtained for a stress-avoidance (SA) irrigation policy, the highest water profitability resulted when only about 60% of the water used in the SA policy was applied. The DAID, with 33% of the total cultivated area and 37% of the total applied water, produced only 14% of the total net benefit due to low-valued crops and adverse soil and shallow water table conditions.
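The particle swarm optimizer described above can be sketched in a few lines. This is a generic textbook PSO, not the authors' implementation, and the sphere objective is only a stand-in for their agrohydrological simulation:

```python
import random

def pso(f, dim, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a nonconvex objective f."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]              # each particle's best position
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]      # swarm's best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    return gbest, gval

best, val = pso(lambda x: sum((xi - 1.0) ** 2 for xi in x),
                dim=3, bounds=(-5.0, 5.0))
print(val)  # near 0: the sphere minimum at (1, 1, 1)
```

In the paper's setting, `f` would wrap the soil-moisture/salt-balance/yield simulation and return the negative net benefit.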
Removing Barriers for Effective Deployment of Intermittent Renewable Generation
NASA Astrophysics Data System (ADS)
Arabali, Amirsaman
The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with a hybrid renewable generation (including wind and PV) and energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system, including the cost of renewable generation and storage, subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection, for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint, indicating the degree to which the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and an energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load-shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated through different scenarios. Using a compromise-solution method, the best compromise between reliability and cost is realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine the storage application and its optimal placement for the social cost and transmission congestion relief of wind integration.
Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. In order to mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage, multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment, and reliability of the system as the objective functions. A non-dominated sorting genetic algorithm (NSGA-II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker's preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Test System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.
NASA Astrophysics Data System (ADS)
Adams, Mike; Smalian, Silva
2017-09-01
For nuclear waste packages, the expected dose rates and nuclide inventory are calculated in advance. Depending on the packaging of the nuclear waste, deterministic programs such as MicroShield® provide a range of results for each type of packaging. Stochastic programs such as the Monte Carlo N-Particle Transport Code System (MCNP®), on the other hand, provide reliable results for complex geometries; however, this type of program requires a fully trained operator, and calculations are time consuming. The problem is therefore to choose an appropriate program for a specific geometry. We compared the results of deterministic programs such as MicroShield® with those of stochastic programs such as MCNP®. These comparisons enable us to make a statement about the applicability of the various programs to chosen types of containers. We found that for thin-walled geometries, deterministic programs such as MicroShield® are well suited to calculating the dose rate. For cylindrical containers with inner shielding, however, deterministic programs reach their limits. Furthermore, we investigate the effect of an inhomogeneous material and activity distribution on the results. The calculations are still ongoing; results will be presented in the final abstract.
NASA Astrophysics Data System (ADS)
Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui
2016-07-01
Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises the collection of drill core data, karst cave stochastic model generation, SLIDE simulation, and bisection-method optimization. Borehole investigations are performed, and the statistical results show that the length of the karst caves fits a negative exponential distribution, while the length of the carbonatite does not exactly follow any standard distribution. The inverse transform method and the acceptance-rejection method are used to reproduce the lengths of the karst caves and the carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
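The two sampling schemes named in the abstract, inverse transform for the exponentially distributed cave lengths and acceptance-rejection for the nonstandard carbonatite lengths, can be sketched generically. The triangular density below is a hypothetical stand-in for the real carbonatite length distribution:

```python
import math, random

def sample_exponential(mean, rng):
    """Inverse transform: F^{-1}(u) = -mean * ln(1 - u)."""
    return -mean * math.log(1.0 - rng.random())

def sample_rejection(pdf, lo, hi, pdf_max, rng):
    """Acceptance-rejection under a flat envelope on [lo, hi]:
    propose uniformly, accept with probability pdf(x) / pdf_max."""
    while True:
        x = rng.uniform(lo, hi)
        if rng.uniform(0.0, pdf_max) <= pdf(x):
            return x

rng = random.Random(42)
caves = [sample_exponential(2.0, rng) for _ in range(10000)]

# hypothetical triangular density on [0, 4] standing in for carbonatite
tri = lambda x: (4.0 - x) / 8.0
rocks = [sample_rejection(tri, 0.0, 4.0, 0.5, rng) for _ in range(10000)]
print(sum(caves) / len(caves), sum(rocks) / len(rocks))  # near 2.0 and 4/3
```

Inverse transform needs an invertible CDF (the exponential has one in closed form); acceptance-rejection only needs a bounded density, which suits an empirical distribution fitted to borehole data.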
DOT National Transportation Integrated Search
2003-01-01
This study evaluated existing traffic signal optimization programs including Synchro,TRANSYT-7F, and genetic algorithm optimization using real-world data collected in Virginia. As a first step, a microscopic simulation model, VISSIM, was extensively ...
NASA Technical Reports Server (NTRS)
Cuthbert, Peter
2010-01-01
DTV-SIM is a computer program that implements a mathematical model of the flight dynamics of a missile-shaped drop test vehicle (DTV) equipped with a multistage parachute system that includes two simultaneously deployed drogue parachutes and three main parachutes deployed subsequently and simultaneously by use of pilot parachutes. DTV-SIM was written to support air-drop tests of the DTV/parachute system, which serves as a simplified prototype of a proposed crew capsule/parachute landing system.
2004-04-15
The business end of a Second Stage (S-II) slowly emerges from the shipping container as workers prepare to transport the Saturn V component to the testing facility at MSFC. The Second Stage (S-II) underwent vibration and engine firing tests. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.
1967-11-07
A technician checks the systems of the Saturn V instrument unit in a test facility in Huntsville. This instrument unit was flown aboard Apollo 4 on November 7, 1967, which was the first test flight of the Saturn V. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems involving nonlinear, multibody dynamical systems perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large-scale and background random aerospace fluctuations.
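The loop restructuring described above can be illustrated at small scale. This is not the paper's supercomputer kernel; it is a NumPy sketch of how a nested-loop Bellman backup for a random, hypothetical Markov decision problem collapses into one vectorized contraction:

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 200, 10, 0.95
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a, s'] transition probs
R = rng.standard_normal((nS, nA))              # immediate rewards
V = rng.standard_normal(nS)                    # current value estimate

# plain nested-loop Bellman backup
V_loop = np.empty(nS)
for s in range(nS):
    best = -np.inf
    for a in range(nA):
        q = R[s, a] + gamma * sum(P[s, a, sp] * V[sp] for sp in range(nS))
        best = max(best, q)
    V_loop[s] = best

# the same backup as one vectorized contraction: all three loops collapse
V_vec = (R + gamma * (P @ V)).max(axis=1)

print(np.allclose(V_loop, V_vec))  # True
```

On vector hardware the contracted form exposes the long inner products that the compiler directives and loop transformations in the abstract are designed to exploit.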
An Approach for Dynamic Optimization of Prevention Program Implementation in Stochastic Environments
NASA Astrophysics Data System (ADS)
Kang, Yuncheol; Prabhu, Vittal
The science of preventing youth problems has advanced significantly through the development of evidence-based prevention programs (EBPs) using randomized clinical trials. Effective EBPs can reduce delinquency, aggression, violence, bullying, and substance abuse among youth. Unfortunately, the outcomes of EBPs implemented in natural settings usually tend to be lower than in clinical trials, which has motivated the need to study EBP implementations. In this paper we propose to model EBP implementations in natural settings as stochastic dynamic processes. Specifically, we propose a Markov decision process (MDP) for modeling and dynamic optimization of such EBP implementations. We illustrate these concepts using simple numerical examples and discuss potential challenges in using such approaches in practice.
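The MDP framing above can be made concrete with a toy model. Everything here is hypothetical (states, transition probabilities, and rewards are invented for illustration, not taken from the paper); the point is the value-iteration mechanics:

```python
# hypothetical two-state model of an EBP implementation:
# states: 0 = low implementation fidelity, 1 = high fidelity
# actions: 0 = routine delivery, 1 = invest in coaching/monitoring
P = {0: [(0.9, 0.1), (0.4, 0.6)],   # P[a][s] = (Pr[next=0], Pr[next=1])
     1: [(0.5, 0.5), (0.1, 0.9)]}
R = {0: [0.0, 2.0],                  # R[a][s]: outcome benefit minus cost
     1: [-1.0, 1.5]}
gamma = 0.9

V = [0.0, 0.0]
for _ in range(500):                 # value iteration to near-convergence
    V = [max(R[a][s] + gamma * sum(p * v for p, v in zip(P[a][s], V))
             for a in (0, 1)) for s in (0, 1)]

policy = [max((0, 1), key=lambda a: R[a][s] + gamma *
              sum(p * v for p, v in zip(P[a][s], V))) for s in (0, 1)]
print(policy, [round(v, 2) for v in V])
```

With these numbers the optimal policy invests in coaching in both states, because the discounted value of sustained high fidelity outweighs the monitoring cost.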
Digital program for solving the linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, B.
1975-01-01
A computer program is described which solves the linear stochastic optimal control and estimation (LSOCE) problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a performance index which is quadratic in state and control variables. The LSOCE problem and solution are outlined; brief descriptions are given of the solution algorithms, and complete descriptions of each subroutine, including usage information and digital listings, are provided. A test case is included, as well as information on the IBM 7090-7094 DCS time and storage requirements.
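The control half of the LSOCE problem is the classic LQR design, which the separation principle pairs with a Kalman filter. As a small sketch (not the described FORTRAN program), the steady-state feedback gain can be found by iterating the discrete-time Riccati recursion; the double-integrator plant here is a hypothetical example:

```python
import numpy as np

# hypothetical plant: discrete double integrator, dt = 0.1
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)                 # state weight in the quadratic index
R = np.array([[0.1]])         # control weight

P = Q.copy()
for _ in range(1000):         # backward Riccati iteration to steady state
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

rho = max(abs(np.linalg.eigvals(A - B @ K)))  # closed-loop spectral radius
print(K, rho)                 # rho < 1: the regulated system is stable
```

For the full LSOCE solution, the dual Riccati recursion with the noise covariances yields the Kalman gain, and the two gains together define the time-domain controller the abstract describes.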
Multiscale Hy3S: hybrid stochastic simulation for supercomputers.
Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N
2006-02-24
Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems and analyze data. 
We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.
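The exact algorithm that hybrid methods like Hy3S accelerate is Gillespie's stochastic simulation algorithm. A minimal direct-method sketch for a single birth-death species (a generic illustration, not Hy3S code) looks like this:

```python
import random

def ssa_birth_death(k_make, k_deg, n0, t_end, seed=0):
    """Gillespie direct method for one species: production at constant
    rate k_make, first-order degradation at rate k_deg * n."""
    rng = random.Random(seed)
    t, n, path = 0.0, n0, [(0.0, n0)]
    while True:
        a_make, a_deg = k_make, k_deg * n
        a_total = a_make + a_deg
        t += rng.expovariate(a_total)       # exponential time to next event
        if t >= t_end:
            return path
        n = n + 1 if rng.uniform(0.0, a_total) < a_make else n - 1
        path.append((t, n))

path = ssa_birth_death(k_make=10.0, k_deg=1.0, n0=0, t_end=50.0)
print(path[-1])  # copy number fluctuates around the steady-state mean of 10
```

Because every reaction event is simulated individually, the cost grows with the total propensity; hybrid schemes replace fast channels with Poisson, Langevin, or deterministic approximations while keeping slow channels exact.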
Direct Numerical Simulation of Turbulent Multi-Stage Autoignition Relevant to Engine Conditions
NASA Astrophysics Data System (ADS)
Chen, Jacqueline
2017-11-01
Due to the unrivaled energy density of liquid hydrocarbon fuels combustion will continue to provide over 80% of the world's energy for at least the next fifty years. Hence, combustion needs to be understood and controlled to optimize combustion systems for efficiency to prevent further climate change, to reduce emissions and to ensure U.S. energy security. In this talk I will discuss recent progress in direct numerical simulations of turbulent combustion focused on providing fundamental insights into key `turbulence-chemistry' interactions that underpin the development of next generation fuel efficient, fuel flexible engines for transportation and power generation. Petascale direct numerical simulation (DNS) of multi-stage mixed-mode turbulent combustion in canonical configurations have elucidated key physics that govern autoignition and flame stabilization in engines and provide benchmark data for combustion model development under the conditions of advanced engines which operate near combustion limits to maximize efficiency and minimize emissions. Mixed-mode combustion refers to premixed or partially-premixed flames propagating into stratified autoignitive mixtures. Multi-stage ignition refers to hydrocarbon fuels with negative temperature coefficient behavior that undergo sequential low- and high-temperature autoignition. Key issues that will be discussed include: 1) the role of mixing in shear driven turbulence on the dynamics of multi-stage autoignition and cool flame propagation in diesel environments, 2) the role of thermal and composition stratification on the evolution of the balance of mixed combustion modes - flame propagation versus spontaneous ignition - which determines the overall combustion rate in autoignition processes, and 3) the role of cool flames on lifted flame stabilization. Finally prospects for DNS of turbulent combustion at the exascale will be discussed in the context of anticipated heterogeneous machine architectures. 
This work was sponsored by the DOE Office of Basic Energy Sciences, with computing resources provided by the Oak Ridge Leadership Computing Facility through the DOE INCITE Program.
NASA Technical Reports Server (NTRS)
Ramins, Peter; Force, Dale A.; Kosmahl, Henry G.
1987-01-01
A computational procedure for the design of traveling-wave-tube (TWT)/refocuser/multistage depressed collector (MDC) systems was used to design a short, permanent-magnet refocusing system and a highly efficient MDC for a medium-power, dual-mode, 4.8- to 9.6-GHz TWT. The computations were carried out with advanced, multidimensional computer programs which model the electron beam and follow the trajectories of representative charges from the radiofrequency (RF) input of the TWT, through the slow-wave structure and refocusing section, to their points of impact in the depressed collector. Secondary emission losses in the MDC were treated semiquantitatively by injecting representative secondary-electron-emission current into the MDC analysis at the point of impact of each primary beam. A comparison of computed and measured TWT and MDC performance showed very good agreement. The electrodes of the MDC were fabricated from a particular form of isotropic graphite that was selected for its low secondary electron yield, ease of machinability, and vacuum properties.
Phase-I monitoring of standard deviations in multistage linear profiles
NASA Astrophysics Data System (ADS)
Kalaei, Mahdiyeh; Soleimani, Paria; Niaki, Seyed Taghi Akhavan; Atashgar, Karim
2018-03-01
In most modern manufacturing systems, products are often the output of some multistage processes. In these processes, the stages are dependent on each other, where the output quality of each stage depends also on the output quality of the previous stages. This property is called the cascade property. Although there are many studies in multistage process monitoring, there are fewer works on profile monitoring in multistage processes, especially on the variability monitoring of a multistage profile in Phase I, for which no research is found in the literature. In this paper, a new methodology is proposed to monitor the standard deviation involved in a simple linear profile designed in Phase I to monitor multistage processes with the cascade property. To this aim, an autoregressive correlation model between the stages is considered first. Then, the effect of the cascade property on the performances of three types of T² control charts in Phase I with shifts in standard deviation is investigated. As we show that this effect is significant, a U statistic is next used to remove the cascade effect, based on which the investigated control charts are modified. Simulation studies reveal good performances of the modified control charts.
Evaluation of Electric Power Procurement Strategies by Stochastic Dynamic Programming
NASA Astrophysics Data System (ADS)
Saisho, Yuichi; Hayashi, Taketo; Fujii, Yasumasa; Yamaji, Kenji
In deregulated electricity markets, the role of a distribution company is to purchase electricity from the wholesale electricity market at randomly fluctuating prices and to provide it to its customers at a given fixed price. The company therefore has to bear the risk stemming from the uncertainties of electricity prices and/or demand fluctuation instead of the customers. The way to avoid this risk is to make a bilateral contract with generating companies or to install its own power generation facility. This entails the need for a method of devising an optimal strategy for electric power procurement. In this circumstance, this research proposes a mathematical method, based on stochastic dynamic programming and additionally considering the characteristics of the start-up cost of an electric power generation facility, to evaluate strategies combining a bilateral contract and auto-generation with the company's own facility for procuring electric power in a deregulated electricity market. We propose two approaches to solving the stochastic dynamic program: a Monte Carlo simulation method and a finite difference method for deriving the solution of a partial differential equation for the total procurement cost of electric power. Finally, we discuss the influence of price uncertainty on optimal power procurement strategies.
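The Monte Carlo side of the comparison above can be sketched in miniature. The geometric Brownian motion price and all parameter values below are hypothetical stand-ins for the paper's price process; the sketch only contrasts expected spot-purchase cost with a flat bilateral contract:

```python
import math, random

def expected_spot_cost(p0, mu, sigma, demand, T, steps, n_paths, seed=0):
    """Monte Carlo estimate of the expected cost of buying `demand` units
    each period at a GBM spot price (a stand-in for the real process)."""
    rng = random.Random(seed)
    dt = T / steps
    total = 0.0
    for _ in range(n_paths):
        p, cost = p0, 0.0
        for _ in range(steps):
            z = rng.gauss(0.0, 1.0)
            p *= math.exp((mu - 0.5 * sigma ** 2) * dt
                          + sigma * math.sqrt(dt) * z)
            cost += p * demand
        total += cost
    return total / n_paths

spot = expected_spot_cost(p0=50.0, mu=0.0, sigma=0.3, demand=1.0,
                          T=1.0, steps=12, n_paths=4000)
contract = 50.0 * 12          # flat bilateral contract at the initial price
print(spot, contract)
```

With zero drift the two costs agree in expectation, so the preference between them is driven by risk, start-up costs, and the dynamic decisions that the paper's stochastic dynamic program captures.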
Stochastic search in structural optimization - Genetic algorithms and simulated annealing
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1993-01-01
An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic-search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.
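A minimal simulated annealing loop of the kind surveyed above fits in a few lines. This is a generic sketch, not a structural-optimization code, and the Rastrigin benchmark stands in for a multimodal design space:

```python
import math, random

def anneal(f, x0, step, t0=1.0, cooling=0.995, iters=2000, seed=0):
    """Simulated annealing: always accept improvements, accept uphill
    moves with probability exp(-delta / temperature)."""
    rng = random.Random(seed)
    x, fx, temp = x0[:], f(x0), t0
    best, fbest = x[:], fx
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
        temp *= cooling                     # geometric cooling schedule
    return best, fbest

def rastrigin(x):
    """Multimodal benchmark; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

best, val = anneal(rastrigin, [3.0, -2.0], step=0.5)
print(val)  # never worse than the starting value of 13
```

The occasional acceptance of uphill moves while the temperature is high is what gives the method its improved chance of escaping the local minima that trap gradient-based mathematical programming.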
Stochastic-Strength-Based Damage Simulation of Ceramic Matrix Composite Laminates
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Mital, Subodh K.; Murthy, Pappu L. N.; Bednarcyk, Brett A.; Pineda, Evan J.; Bhatt, Ramakrishna T.; Arnold, Steven M.
2016-01-01
The Finite Element Analysis-Micromechanics Analysis Code/Ceramics Analysis and Reliability Evaluation of Structures (FEAMAC/CARES) program was used to characterize and predict the progressive damage response of silicon-carbide-fiber-reinforced reaction-bonded silicon nitride matrix (SiC/RBSN) composite laminate tensile specimens. Studied were unidirectional laminates [0]₈, [10]₈, [45]₈, and [90]₈; cross-ply laminates [0₂/90₂]s; angle-ply laminates [+45₂/-45₂]s; double-edge-notched [0]₈ laminates; and central-hole laminates. Results correlated well with the experimental data. This work was performed as a validation and benchmarking exercise of the FEAMAC/CARES program. FEAMAC/CARES simulates stochastic-based, discrete-event progressive damage of ceramic matrix composite and polymer matrix composite material structures. It couples three software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC, and Abaqus is used to model the overall composite structure. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events that incrementally progress until ultimate structural failure.
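The stochastic-strength idea underlying CARES-style reliability analysis is Weibull weakest-link statistics. As a hedged illustration (the parameters are invented, and this is not the CARES/Life formulation), sampling element strengths from a two-parameter Weibull law shows the familiar size effect:

```python
import math, random

def weibull_strength(sigma0, m, rng):
    """Inverse-transform draw from a two-parameter Weibull strength law,
    P_fail(sigma) = 1 - exp(-(sigma / sigma0)**m)."""
    return sigma0 * (-math.log(1.0 - rng.random())) ** (1.0 / m)

def first_failure_stress(n_elems, sigma0, m, seed):
    """Weakest-link principle: a chain of brittle elements fails at the
    minimum of its element strengths."""
    rng = random.Random(seed)
    return min(weibull_strength(sigma0, m, rng) for _ in range(n_elems))

# hypothetical parameters: characteristic strength 400 MPa, modulus m = 10
fails = [first_failure_stress(100, 400.0, 10.0, seed=s) for s in range(500)]
print(sum(fails) / len(fails))  # well below 400 MPa: the size effect
```

In FEAMAC/CARES the same statistics are applied per constituent of the repeating unit cell, so each simulation trial produces a different random sequence of discrete damage events.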
Economic consequences of paratuberculosis control in dairy cattle: A stochastic modeling study.
Smith, R L; Al-Mamun, M A; Gröhn, Y T
2017-03-01
The cost of paratuberculosis to dairy herds, through decreased milk production, early culling, and poor reproductive performance, has been well-studied. The benefit of control programs, however, has been debated. A recent stochastic compartmental model for paratuberculosis transmission in US dairy herds was modified to predict herd net present value (NPV) over 25 years in herds of 100 and 1000 dairy cattle with endemic paratuberculosis at initial prevalence of 10% and 20%. Control programs were designed by combining 5 tests (none, fecal culture, ELISA, PCR, or calf testing), 3 test-related culling strategies (all test-positive, high-positive, or repeated positive), 2 test frequencies (annual and biannual), 3 hygiene levels (standard, moderate, or improved), and 2 cessation decisions (testing ceased after 5 negative whole-herd tests or testing continued). Stochastic dominance was determined for each herd scenario; no control program was fully dominant for maximizing herd NPV in any scenario. Use of the ELISA test was generally preferred in all scenarios, but no paratuberculosis control was highly preferred for the small herd with 10% initial prevalence and was frequently preferred in other herd scenarios. Based on their effect on paratuberculosis alone, hygiene improvements were not found to be as cost-effective as test-and-cull strategies in most circumstances. Global sensitivity analysis found that economic parameters, such as the price of milk, had more influence on NPV than control program-related parameters. We conclude that paratuberculosis control can be cost effective, and multiple control programs can be applied for equivalent economic results. Copyright © 2017 Elsevier B.V. All rights reserved.
Using genetic algorithm to solve a new multi-period stochastic optimization model
NASA Astrophysics Data System (ADS)
Zhang, Xin-Li; Zhang, Ke-Cun
2009-09-01
This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths, called a hybrid model, was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi (Ed.), The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294]. However, transaction costs were not considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth loss risk while maximizing the expected utility; (2) typical market imperfections, such as short-sale constraints and proportional transaction costs, are considered simultaneously; and (3) the application of a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
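The CVaR measure used above has a simple empirical form: it is the mean loss in the worst (1 - alpha) tail of the scenario distribution. A minimal sketch (not the paper's optimization model) on stand-in loss scenarios:

```python
def cvar(losses, alpha=0.95):
    """Empirical conditional value-at-risk: the mean of the worst
    (1 - alpha) fraction of the loss scenarios."""
    tail = sorted(losses)[int(alpha * len(losses)):]
    return sum(tail) / len(tail)

losses = list(range(1, 101))      # stand-in loss scenarios 1..100
print(cvar(losses, 0.95))         # mean of {96, ..., 100} = 98.0
```

Unlike VaR, which is just the tail quantile, CVaR averages over the tail, which makes it coherent and, via the Rockafellar-Uryasev reformulation, expressible with linear constraints inside a stochastic program.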
Chen, Cong; Zhu, Ying; Zeng, Xueting; Huang, Guohe; Li, Yongping
2018-07-15
Contradictions between increasing carbon mitigation pressure and growing electricity demand have been aggravated significantly. A heavy emphasis is placed on analyzing the carbon mitigation potential of electric energy systems via tradable green certificates (TGC). This study proposes a tradable green certificate (TGC)-fractional fuzzy stochastic robust optimization (FFSRO) model through integrating fuzzy possibilistic, two-stage stochastic, and stochastic robust programming techniques into a linear fractional programming framework. The framework can address uncertainties expressed as stochastic and fuzzy sets and can effectively deal with issues of multi-objective tradeoffs between the economy and the environment. The proposed model is applied to the major economic center of China, the Beijing-Tianjin-Hebei region. The results of the proposed model indicate that a TGC mechanism is a cost-effective pathway to cope with carbon reduction and support the sustainable development of electric energy systems. In detail, it can: (i) effectively promote renewable power development and reduce fossil fuel use; (ii) lead to higher CO2 mitigation potential than a non-TGC mechanism; and (iii) greatly alleviate financial pressure on the government to provide renewable energy subsidies. The TGC-FFSRO model can provide a scientific basis for making related management decisions for electric energy systems. Copyright © 2017 Elsevier B.V. All rights reserved.
Control of Networked Traffic Flow Distribution - A Stochastic Distribution System Perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hong; Aziz, H M Abdul; Young, Stan
Networked traffic flow is a common scenario in urban transportation, where the distribution of vehicle queues at controlled intersections or on highway segments reflects the smoothness of traffic flow in the network. At signalized intersections, traffic queues are governed by the traffic signal control settings, and effective traffic light control would both smooth traffic flow and minimize fuel consumption. Funded by the Energy Efficient Mobility Systems (EEMS) program of the Vehicle Technologies Office of the US Department of Energy, we performed a preliminary investigation of a modeling and control framework in the context of an urban network of signalized intersections. Specifically, we developed recursive input-output traffic queueing models. Queue formation can be modeled as a stochastic process in which the number of vehicles entering each intersection is a random number. Further, we propose a preliminary B-spline stochastic model for a one-way, single-lane corridor traffic system based on the theory of stochastic distribution control. It has been shown that the developed stochastic model provides the optimal probability density function (PDF) of the traffic queue length as a dynamic function of the traffic signal setting parameters. Based upon such a stochastic distribution model, we propose a preliminary closed-loop framework for stochastic distribution control of the traffic queueing system, which makes the traffic queue length PDF follow a target PDF and thereby potentially realizes a smooth traffic flow distribution in the corridor of interest.
Do rational numbers play a role in selection for stochasticity?
Sinclair, Robert
2014-01-01
When a given tissue must, to be able to perform its various functions, consist of different cell types, each fairly evenly distributed and with specific probabilities, then there are at least two quite different developmental mechanisms which might achieve the desired result. Let us begin with the case of two cell types, and first imagine that the proportion of numbers of cells of these types should be 1:3. Clearly, a regular structure composed of repeating units of four cells, three of which are of the dominant type, will easily satisfy the requirements, and a deterministic mechanism may lend itself to the task. What if, however, the proportion should be 10:33? The same simple, deterministic approach would now require a structure of repeating units of 43 cells, and this certainly seems to require a far more complex and potentially prohibitive deterministic developmental program. Stochastic development, replacing regular units with random distributions of given densities, might not be evolutionarily competitive in comparison with the deterministic program when the proportions should be 1:3, but it has the property that, whatever developmental mechanism underlies it, its complexity does not need to depend very much upon target cell densities at all. We are immediately led to speculate that proportions which correspond to fractions with large denominators (such as the 33 of 10/33) may be more easily achieved by stochastic developmental programs than by deterministic ones, and this is the core of our thesis: that stochastic development may tend to occur more often in cases involving rational numbers with large denominators. To be imprecise: that simple rationality and determinism belong together, as do irrationality and randomness.
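The arithmetic behind the argument is easy to check: the smallest exact repeating unit for a proportion a:b has as many cells as the denominator of a/(a+b) in lowest terms, while a stochastic sampler's complexity is independent of that denominator. A small sketch (the cell counts are those from the text):

```python
from fractions import Fraction
import random

def unit_size(a, b):
    """Size of the smallest repeating unit realizing proportion a:b exactly:
    the denominator of a/(a+b) in lowest terms."""
    return Fraction(a, a + b).denominator

def stochastic_tissue(a, b, n_cells=100000, seed=0):
    """Stochastic development sketch: each cell independently becomes type A
    with probability a/(a+b); the mechanism's complexity does not grow with
    the denominator."""
    rng = random.Random(seed)
    p = a / (a + b)
    return sum(rng.random() < p for _ in range(n_cells)) / n_cells

assert unit_size(1, 3) == 4     # simple ratio: a 4-cell deterministic unit
assert unit_size(10, 33) == 43  # large denominator: a 43-cell unit
```

The stochastic version attains the 10:33 target only approximately, but with the same trivial program that handles 1:3.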
Stochastic dynamic programming illuminates the link between environment, physiology, and evolution.
Mangel, Marc
2015-05-01
I describe how stochastic dynamic programming (SDP), a method for stochastic optimization that evolved from the work of Hamilton and Jacobi on variational problems, allows us to connect the physiological state of organisms, the environment in which they live, and how evolution by natural selection acts on trade-offs that all organisms face. I first derive the two canonical equations of SDP. These are valuable because although they apply to no system in particular, they share commonalities with many systems (as do frictionless springs). After that, I show how we used SDP in insect behavioral ecology. I describe the puzzles that needed to be solved, the SDP equations we used to solve the puzzles, and the experiments that we used to test the predictions of the models. I then briefly describe two other applications of SDP in biology: first, understanding the developmental pathways followed by steelhead trout in California and, second, skipped spawning by Norwegian cod. In both cases, modeling and empirical work were closely connected. I close with lessons learned and advice for young mathematical biologists.
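The canonical SDP recursion can be illustrated with the textbook patch-selection model from behavioral ecology (a generic sketch in the spirit of this literature, not Mangel's exact formulation; the predation risks, food probabilities, and horizon are invented):

```python
def sdp_patch_choice(T=20, x_max=10, cost=1,
                     patches=((0.000, 0.2, 2),   # (predation risk, P(find food), energy gain)
                              (0.004, 0.5, 2),
                              (0.020, 0.8, 2))):
    """Backward induction on
    F(x, t) = max_i (1 - m_i) * [p_i F(x - cost + g_i, t+1) + (1 - p_i) F(x - cost, t+1)],
    with death at zero reserves (F(0, t) = 0) and terminal fitness
    F(x, T) = 1 for x > 0.  F is the probability of surviving to T."""
    F = [1.0 if x > 0 else 0.0 for x in range(x_max + 1)]
    policy = None
    for t in range(T - 1, -1, -1):
        newF = [0.0] * (x_max + 1)
        dec = [None] * (x_max + 1)
        for x in range(1, x_max + 1):
            best, arg = -1.0, None
            for i, (m, p, g) in enumerate(patches):
                v = (1 - m) * (p * F[min(x - cost + g, x_max)]
                               + (1 - p) * F[max(x - cost, 0)])
                if v > best:
                    best, arg = v, i
            newF[x], dec[x] = best, arg
        F, policy = newF, dec
    return F, policy  # survival probabilities and the optimal patch at t = 0

F, policy = sdp_patch_choice()
```

The solution exhibits the characteristic state dependence: a forager one meal from starvation should accept the risky, food-rich patch, while survival probability rises monotonically with reserves.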
Alvarado, Michelle; Ntaimo, Lewis
2018-03-01
Oncology clinics are often burdened with scheduling large volumes of cancer patients for chemotherapy treatments under limited resources such as the number of nurses and chairs. These cancer patients require a series of appointments over several weeks or months, and the timing of these appointments is critical to the treatment's effectiveness. Additionally, the appointment duration, the acuity levels of each appointment, and the availability of clinic nurses are uncertain. The timing constraints, stochastic parameters, rising treatment costs, and increased demand for outpatient oncology clinic services motivate the need for efficient appointment schedules and clinic operations. In this paper, we develop three mean-risk stochastic integer programming (SIP) models, referred to as SIP-CHEMO, for the problem of scheduling individual chemotherapy patient appointments and resources. These mean-risk models are presented and an algorithm is devised to improve computational speed. Computational results were conducted using a simulation model and indicate that the risk-averse SIP-CHEMO model with the expected excess mean-risk measure can decrease patient waiting times and nurse overtime by 42% and 27%, respectively, when compared to deterministic scheduling algorithms.
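The expected-excess measure used by the risk-averse model is simple to state: expected cost plus a penalty on the expected amount by which cost overshoots a target. A minimal sketch with invented scenario numbers shows why it matters — a schedule that is cheaper on average can lose once its heavy tail is penalized:

```python
def expected_excess(costs, probs, target, lam):
    """Mean-risk objective: expected cost plus lam times the expected
    excess of cost over a fixed target level."""
    mean = sum(p * c for p, c in zip(probs, costs))
    excess = sum(p * max(c - target, 0.0) for p, c in zip(probs, costs))
    return mean + lam * excess

# Two hypothetical schedules evaluated over three scenarios; schedule A is
# cheaper in expectation but carries a heavy tail in the worst scenario.
probs = [0.5, 0.3, 0.2]
cost_A, cost_B = [10, 20, 60], [22, 24, 26]
risk_neutral = (expected_excess(cost_A, probs, target=30, lam=0.0),
                expected_excess(cost_B, probs, target=30, lam=0.0))
risk_averse = (expected_excess(cost_A, probs, target=30, lam=2.0),
               expected_excess(cost_B, probs, target=30, lam=2.0))
```

Risk-neutral scoring prefers A (23 vs. 23.4); with lam = 2 the ranking flips (35 vs. 23.4), which is the qualitative effect the SIP-CHEMO results report for waiting times and overtime.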
Classification and disease prediction via mathematical programming
NASA Astrophysics Data System (ADS)
Lee, Eva K.; Wu, Tsung-Lin
2007-11-01
In this chapter, we present classification models based on mathematical programming approaches. We first provide an overview on various mathematical programming approaches, including linear programming, mixed integer programming, nonlinear programming and support vector machines. Next, we present our effort of novel optimization-based classification models that are general purpose and suitable for developing predictive rules for large heterogeneous biological and medical data sets. Our predictive model simultaneously incorporates (1) the ability to classify any number of distinct groups; (2) the ability to incorporate heterogeneous types of attributes as input; (3) a high-dimensional data transformation that eliminates noise and errors in biological data; (4) the ability to incorporate constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule) and (5) successive multi-stage classification capability to handle data points placed in the reserved judgment region. To illustrate the power and flexibility of the classification model and solution engine, and its multigroup prediction capability, application of the predictive model to a broad class of biological and medical problems is described. 
Applications include: the differential diagnosis of the type of erythemato-squamous diseases; predicting presence/absence of heart disease; genomic analysis and prediction of aberrant CpG island methylation in human cancer; discriminant analysis of motility and morphology data in human lung carcinoma; prediction of ultrasonic cell disruption for drug delivery; identification of tumor shape and volume in treatment of sarcoma; multistage discriminant analysis of biomarkers for prediction of early atherosclerosis; fingerprinting of native and angiogenic microvascular networks for early diagnosis of diabetes, aging, macular degeneration and tumor metastasis; prediction of protein localization sites; and pattern recognition of satellite images in classification of soil types. In all these applications, the predictive model yields correct classification rates ranging from 80% to 100%. This provides motivation for pursuing its use as a medical diagnostic, monitoring and decision-making tool.
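The reserved-judgment idea is the distinctive ingredient and is easy to sketch independently of the authors' mixed-integer formulation: assign a point to the highest-scoring group only when that score clears a confidence threshold; otherwise withhold judgment and pass the point to a later stage. A toy illustration (threshold value is an assumption):

```python
def classify_with_reserve(posteriors, threshold=0.7):
    """Multigroup rule with a reserved-judgment region: return the index of
    the group with the highest posterior only when that posterior clears
    the threshold; return None (reserve judgment) otherwise."""
    g = max(range(len(posteriors)), key=lambda i: posteriors[i])
    return g if posteriors[g] >= threshold else None

assert classify_with_reserve([0.9, 0.05, 0.05]) == 0    # confident: assign
assert classify_with_reserve([0.4, 0.35, 0.25]) is None  # ambiguous: reserve
```

Reserved points are exactly those a successive-stage classifier would revisit with additional features or a refined rule, which guards against over-training on ambiguous cases.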
Probabilistic dual heuristic programming-based adaptive critic
NASA Astrophysics Data System (ADS)
Herzallah, Randa
2010-02-01
Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct from current approaches, the proposed probabilistic DHP AC method takes uncertainties of the forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.
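The validation strategy in the abstract, checking a candidate value function against the Bellman equation in a linear quadratic problem, can be reproduced in the simplest scalar case (a standard LQR sketch, not the paper's non-standard stochastic variant; gains and weights are invented):

```python
def riccati_fixed_point(a, b, q, r, iters=200):
    """Iterate the scalar discrete-time Riccati recursion
    P <- q + a^2 P - (a b P)^2 / (r + b^2 P) to its fixed point."""
    P = q
    for _ in range(iters):
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
    return P

a, b, q, r = 0.9, 1.0, 1.0, 1.0
P = riccati_fixed_point(a, b, q, r)

# The quadratic value function V(x) = P x^2 must satisfy the Bellman
# equation V(x) = min_u [q x^2 + r u^2 + V(a x + b u)].  Check at x = 1
# using the optimal feedback u* = -k x:
x = 1.0
k = a * b * P / (r + b * b * P)
u = -k * x
bellman_rhs = q * x * x + r * u * u + P * (a * x + b * u) ** 2
```

At the Riccati fixed point the right-hand side equals P x^2 exactly, which is the same consistency check the paper performs for its probabilistic critic's target values.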
Economic efficiency and risk character of fire management programs, Northern Rocky Mountains
Thomas J. Mills; Frederick W. Bratten
1988-01-01
Economic efficiency and risk have long been considered during the selection of fire management programs and the design of fire management policies. The risk considerations were largely subjective, however, and efficiency has only recently been calculated for selected portions of the fire management program. The highly stochastic behavior of the fire system and the high...
Learning abstract visual concepts via probabilistic program induction in a Language of Thought.
Overlan, Matthew C; Jacobs, Robert A; Piantadosi, Steven T
2017-11-01
The ability to learn abstract concepts is a powerful component of human cognition. It has been argued that variable binding is the key element enabling this ability, but the computational aspects of variable binding remain poorly understood. Here, we address this shortcoming by formalizing the Hierarchical Language of Thought (HLOT) model of rule learning. Given a set of data items, the model uses Bayesian inference to infer a probability distribution over stochastic programs that implement variable binding. Because the model makes use of symbolic variables as well as Bayesian inference and programs with stochastic primitives, it combines many of the advantages of both symbolic and statistical approaches to cognitive modeling. To evaluate the model, we conducted an experiment in which human subjects viewed training items and then judged which test items belong to the same concept as the training items. We found that the HLOT model provides a close match to human generalization patterns, significantly outperforming two variants of the Generalized Context Model, one variant based on string similarity and the other based on visual similarity using features from a deep convolutional neural network. Additional results suggest that variable binding happens automatically, implying that binding operations do not add complexity to people's hypothesized rules. Overall, this work demonstrates that a cognitive model combining symbolic variables with Bayesian inference and stochastic program primitives provides a new perspective for understanding people's patterns of generalization.
1966-09-15
This vintage photograph shows the 138-foot long first stage of the Saturn V being lowered to the ground following a successful static test firing at Marshall Space Flight Center's S-IC test stand. The firing provided NASA engineers information on the booster's systems. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.
1960-01-01
This photograph shows the Saturn V assembled LOX (Liquid Oxygen) and fuel tanks ready for transport from the Manufacturing Engineering Laboratory at Marshall Space Flight Center in Huntsville, Alabama. The tanks were then shipped to the launch site at Kennedy Space Center for a flight. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.
Environmental monitoring for the DOE coolside and LIMB demonstration extension projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, T.; Contos, L.; Adams, L.
1992-02-01
The purpose of this document is to present environmental monitoring data collected during the US DOE Limestone Injection Multistage Burner (LIMB) Demonstration Project Extension. The objective of the LIMB program is to demonstrate the sulfur dioxide (SO{sub 2}) and nitrogen oxide (NO{sub x}) emission reduction capabilities of the LIMB system. The LIMB system is a retrofit technology to be used for existing coal-fired boilers equipped with electrostatic precipitators. (VC)
MCdevelop - a universal framework for Stochastic Simulations
NASA Astrophysics Data System (ADS)
Slawinska, M.; Jadach, S.
2011-03-01
We present MCdevelop, a universal computer framework for developing and exploiting the wide class of Stochastic Simulations (SS) software. This powerful universal SS software development tool has been derived from a series of scientific projects for precision calculations in high energy physics (HEP), which feature a wide range of functionality in the SS software needed for advanced precision Quantum Field Theory calculations for the past LEP experiments and for the ongoing LHC experiments at CERN, Geneva. MCdevelop is a "spin-off" product of HEP to be exploited in other areas, while it will still serve to develop new SS software for HEP experiments. Typically SS involve independent generation of large sets of random "events", often requiring considerable CPU power. Since SS jobs usually do not share memory, they are easy to parallelize. Efficient development, testing, and parallel running of SS software require a convenient framework to develop software source code, deploy and monitor batch jobs, and merge and analyse results from multiple parallel jobs, even before the production runs are terminated. Throughout the years of development of stochastic simulations for HEP, a sophisticated framework featuring all of the above-mentioned functionality has been implemented. MCdevelop represents its latest version, written mostly in C++ (GNU compiler gcc). It uses Autotools to build binaries (optionally managed within the KDevelop 3.5.3 Integrated Development Environment (IDE)). It uses the open-source ROOT package for histogramming, graphics and the mechanism of persistency for the C++ objects. MCdevelop helps to run multiple parallel jobs on any computer cluster with an NQS-type batch system.
Program summary
Program title: MCdevelop
Catalogue identifier: AEHW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 48 136
No. of bytes in distributed program, including test data, etc.: 355 698
Distribution format: tar.gz
Programming language: ANSI C++
Computer: Any computer system or cluster with a C++ compiler and a UNIX-like operating system.
Operating system: Most UNIX systems, Linux. The application programs were thoroughly tested under Ubuntu 7.04, 8.04 and CERN Scientific Linux 5.
Has the code been vectorised or parallelised?: Tools (scripts) for optional parallelisation on a PC farm are included.
RAM: 500 bytes
Classification: 11.3
External routines: ROOT package version 5.0 or higher (http://root.cern.ch/drupal/).
Nature of problem: Developing any type of stochastic simulation program for high energy physics and other areas.
Solution method: Object Oriented programming in C++ with added persistency mechanism, batch scripts for running on PC farms, and Autotools.
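The merge-and-analyse step that such a framework automates, since independent SS jobs share no memory, can be sketched in a few lines (a generic Python illustration, not the package's C++/ROOT machinery; the observable and job layout are invented): each job returns sufficient statistics, and merging them yields the combined estimate and its error even before all jobs finish.

```python
import random

def mc_job(n, seed):
    """One independent stochastic-simulation job: accumulate the sum, sum of
    squares, and event count for a toy observable (hit rate of the quarter
    circle, whose true value is pi/4)."""
    rng = random.Random(seed)
    s = ss = 0.0
    for _ in range(n):
        w = 1.0 if rng.random() ** 2 + rng.random() ** 2 < 1.0 else 0.0
        s += w
        ss += w * w
    return s, ss, n

def merge(results):
    """Merge partial results from parallel jobs into a combined mean and
    standard error; jobs can be merged in any order, at any time."""
    S = sum(r[0] for r in results)
    SS = sum(r[1] for r in results)
    N = sum(r[2] for r in results)
    mean = S / N
    err = ((SS / N - mean * mean) / N) ** 0.5
    return mean, err

mean, err = merge([mc_job(20000, seed) for seed in range(8)])
```

Because only the three accumulators travel between jobs, the same merge works for a batch farm, a cluster, or a laptop.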
The sequence relay selection strategy based on stochastic dynamic programming
NASA Astrophysics Data System (ADS)
Zhu, Rui; Chen, Xihao; Huang, Yangchao
2017-07-01
Relay-assisted (RA) networks with relay node selection are an effective way to improve channel capacity and convergence performance. However, most existing research on relay selection did not consider the statistical channel state information or the selection cost. This shortcoming limited the performance and application of RA networks in practical scenarios. To overcome this drawback, a sequence relay selection strategy (SRSS) was proposed, and the performance upper bound of SRSS was also analyzed in this paper. Furthermore, to make SRSS more practical, a novel threshold determination algorithm based on stochastic dynamic programming (SDP) was given to work with SRSS. Numerical results are also presented to exhibit the performance of SRSS with SDP.
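Threshold determination by SDP for sequential selection can be sketched as a textbook optimal-stopping recursion (a generic illustration, not the paper's exact SRSS model; the gain distribution and probing cost are invented): after probing relay k, accept its gain if it beats the expected value of continuing minus the probing cost.

```python
def relay_thresholds(n, gains, probs, cost):
    """Backward induction for sequential relay selection over n candidates.
    W_k = E[max(g, W_{k+1} - cost)] with W_n = E[g] (the last relay must be
    accepted); the stage-k acceptance threshold is W_{k+1} - cost."""
    Eg = sum(g * p for g, p in zip(gains, probs))
    W = Eg                      # value when only the last relay remains
    thresholds = [None] * n     # no choice at the final stage
    for k in range(n - 2, -1, -1):
        thresholds[k] = W - cost
        W = sum(p * max(g, thresholds[k]) for g, p in zip(gains, probs))
    return thresholds, W        # per-stage thresholds and sequence value

thresholds, value = relay_thresholds(
    n=5, gains=[0.2, 1.0, 2.0], probs=[0.3, 0.4, 0.3], cost=0.1)
```

The thresholds decrease as fewer relays remain: with many candidates ahead the controller can afford to be picky, while the probing cost caps how long searching stays worthwhile.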
Factors leading to different viability predictions for a grizzly bear data set
Mills, L.S.; Hayes, S.G.; Wisdom, M.J.; Citta, J.; Mattson, D.J.; Murphy, K.
1996-01-01
Population viability analysis programs are being used increasingly in research and management applications, but there has not been a systematic study of the congruence of different program predictions based on a single data set. We performed such an analysis using four population viability analysis computer programs: GAPPS, INMAT, RAMAS/AGE, and VORTEX. The standardized demographic rates used in all programs were generalized from hypothetical increasing and decreasing grizzly bear (Ursus arctos horribilis) populations. Idiosyncrasies of input format for each program led to minor differences in intrinsic growth rates that translated into striking differences in estimates of extinction rates and expected population size. In contrast, the addition of demographic stochasticity, environmental stochasticity, and inbreeding costs caused only a small divergence in viability predictions. However, the addition of density dependence caused large deviations between the programs despite our best attempts to use the same density-dependent functions. Population viability programs differ in how density dependence is incorporated, and the necessary functions are difficult to parameterize accurately. Thus, we recommend that unless data clearly suggest a particular density-dependent model, predictions based on population viability analysis should include at least one scenario without density dependence. Further, we describe output metrics that may differ between programs; development of future software could benefit from standardized input and output formats across different programs.
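The sensitivity to density dependence that the study reports is visible even in a minimal viability simulation (a deliberately simple sketch, not any of the four packages compared; all demographic parameters are invented): adding a ceiling carrying capacity to an otherwise identical model changes the extinction estimate substantially.

```python
import math, random

def poisson(rng, lam):
    # Knuth's method: count uniforms until their product drops below e^-lam.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def extinction_prob(n0=20, years=50, runs=1000, mean_growth=1.02,
                    env_sd=0.15, capacity=None, seed=0):
    """Minimal viability sketch: environmental stochasticity perturbs the
    yearly growth rate, demographic stochasticity draws each individual's
    offspring from a Poisson, and `capacity` (if set) imposes ceiling
    density dependence."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(runs):
        n = n0
        for _ in range(years):
            lam = mean_growth * max(0.0, 1.0 + rng.gauss(0.0, env_sd))
            n = sum(poisson(rng, lam) for _ in range(n))
            if capacity is not None:
                n = min(n, capacity)
            if n == 0:
                extinct += 1
                break
    return extinct / runs

p_no_dd = extinction_prob(capacity=None)
p_dd = extinction_prob(capacity=25)   # a ceiling keeps populations small and at risk
```

The ceiling prevents populations from growing out of the danger zone, so estimated extinction risk rises, one concrete reason the paper recommends always reporting at least one density-independent scenario.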
Ennis, Erin J; Foley, Joe P
2016-07-15
A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach
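The stochastic approach in this record (truncated above) amounts to a simple Monte Carlo experiment, which can be sketched for the constant-peak-width ("gradient") case: scatter m retention times uniformly and count how often every adjacent pair is at least one peak width apart. The window and width convention here is an assumption for illustration.

```python
import random

def p_separation(m, peak_capacity, trials=20000, seed=0):
    """Monte Carlo estimate of the probability that m randomly positioned
    components are all resolved: retention times are uniform on (0, 1) and
    a pair is resolved when separated by at least 1/peak_capacity
    (constant peak width, i.e. the 'gradient' case)."""
    rng = random.Random(seed)
    w = 1.0 / peak_capacity
    ok = 0
    for _ in range(trials):
        t = sorted(rng.random() for _ in range(m))
        if all(t[i + 1] - t[i] >= w for i in range(m - 1)):
            ok += 1
    return ok / trials

# Success probability collapses quickly as component count grows at fixed capacity:
p5, p15 = p_separation(5, 100), p_separation(15, 100)
```

The closed-form benchmark for this setup is (1 - (m-1)/n_c)^m, so p5 should track 0.96^5 ≈ 0.815, illustrating why the statistical theory predicts such low success probabilities for complex samples.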
Simulation of multistage turbine flows
NASA Technical Reports Server (NTRS)
Adamczyk, John J.; Mulac, Richard A.
1987-01-01
A flow model has been developed for analyzing multistage turbomachinery flows. This model, referred to as the average passage flow model, describes the time-averaged flow field within a typical passage of a blade row embedded within a multistage configuration. Computer resource requirements, supporting empirical modeling, formulation, code development, and multitasking and storage are discussed. Illustrations from simulations of the space shuttle main engine (SSME) fuel turbine performed to date are given.
NASA Astrophysics Data System (ADS)
Srinivasan, Gopalakrishnan; Sengupta, Abhronil; Roy, Kaushik
2016-07-01
Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, the general purpose computing platforms and the custom hardware architectures implemented using standard CMOS technology, have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance driven long-term short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device to system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses.
Solving a Class of Stochastic Mixed-Integer Programs With Branch and Price
2006-01-01
...a two-dimensional knapsack problem, but for a given m, the objective value gi does not depend on the variance index v. This will be used in a final ... optimization. ... for solution by a branch-and-price algorithm (B&P). We then survey a number of examples, and use a stochastic facility-location problem (SFLP) for a...
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2017-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948
NASA Astrophysics Data System (ADS)
Wang, Ting; Plecháč, Petr
2017-12-01
Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
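The metastability that motivates the parallel replica method is easy to reproduce for the first of the three benchmarks: a plain Gillespie SSA for the Schlögl model gets stuck in whichever basin it starts in. The sketch below uses a commonly cited benchmark parameter set (treat the exact values as assumptions), with the abundant species A and B absorbed into the propensities.

```python
import math, random

def ssa_schlogl(x0, t_end, seed, A=1e5, B=2e5,
                k1=3e-7, k2=1e-4, k3=1e-3, k4=3.5):
    """Gillespie SSA for the Schlogl model, a canonical bistable network:
    2X -> 3X, 3X -> 2X, 0 -> X, X -> 0."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end:
        a = [k1 * A * x * (x - 1) / 2,          # 2X -> 3X
             k2 * x * (x - 1) * (x - 2) / 6,    # 3X -> 2X
             k3 * B,                            # 0 -> X
             k4 * x]                            # X -> 0
        a0 = sum(a)
        t += -math.log(1.0 - rng.random()) / a0  # exponential waiting time
        r, acc, j = rng.random() * a0, 0.0, 0
        for j, aj in enumerate(a):
            acc += aj
            if r < acc:
                break
        x += (1, -1, 1, -1)[j]
    return x

# Short runs started in each basin stay near their metastable state (~85 and
# ~565 for these rates); the unstable point near 250 separates the basins.
low = [ssa_schlogl(80, 5.0, s) for s in range(5)]
high = [ssa_schlogl(600, 5.0, s) for s in range(5)]
```

Because spontaneous basin switches are rare on simulation timescales, brute-force SSA undersamples the stationary distribution's bimodality, which is precisely the gap the parallel replica acceleration targets.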
Casein Kinase II Regulation of the Hot1 Transcription Factor Promotes Stochastic Gene Expression*
Burns, Laura T.; Wente, Susan R.
2014-01-01
In Saccharomyces cerevisiae, Hog1 MAPK is activated and induces a transcriptional program in response to hyperosmotic stress. Several Hog1-responsive genes exhibit stochastic transcription, resulting in cell-to-cell variability in mRNA and protein levels. However, the mechanisms governing stochastic gene activity are not fully defined. Here we uncover a novel role for casein kinase II (CK2) in the cellular response to hyperosmotic stress. CK2 interacts with and phosphorylates the Hot1 transcription factor; however, Hot1 phosphorylation is not sufficient for controlling the stochastic response. The CK2 protein itself is required to negatively regulate mRNA expression of Hot1-responsive genes and Hot1 enrichment at target promoters. Single-cell gene expression analysis reveals altered activation of Hot1-targeted STL1 in ck2 mutants, resulting in a bimodal to unimodal shift in expression. Together, this work reveals a novel CK2 function during the hyperosmotic stress response that promotes cell-to-cell variability in gene expression. PMID:24817120
Solution of Stochastic Capital Budgeting Problems in a Multidivisional Firm.
1980-06-01
...linear programming with simple recourse (see, for example, Dantzig (9) or Ziemba (35)) ... and has been applied to capital budgeting problems with...
Deep Vein Thrombosis After Complex Posterior Spine Surgery: Does Staged Surgery Make a Difference?
Edwards, Charles C; Lessing, Noah L; Ford, Lisa; Edwards, Charles C
Retrospective review of a prospectively collected database. To assess the incidence of deep vein thrombosis (DVT) associated with single- versus multistage posterior-only complex spinal surgeries. Dividing the physiologic burden of spinal deformity surgery into multiple stages has been suggested as a potential means of reducing perioperative complications. DVT is a worrisome complication owing to its potential to lead to pulmonary embolism. Whether or not staging affects DVT incidence in this population is unknown. Consecutive patients undergoing either single- or multistage posterior complex spinal surgeries over a 12-year period at a single institution were eligible. All patients received lower extremity venous duplex ultrasonographic (US) examinations 2 to 4 days postoperatively in the single-stage group and 2 to 4 days postoperatively after each stage in the multistage group. Multivariate logistic regression was used to assess the independent contribution of staging to developing a DVT. A total of 107 consecutive patients were enrolled: 26 underwent multistage surgery and 81 underwent single-stage surgery. The single-stage group was older (63 years vs. 45 years; p < .01) and had a higher Charlson comorbidity index (2.25 ± 1.27 vs. 1.23 ± 1.58; p < .01). More multistage patients had positive US tests than single-stage patients (5 of 26 vs. 6 of 81; 19% vs. 7%; p = .13). Adjusting for all the above-mentioned covariates, multistage surgery was 8.17 (95% CI 0.35-250.6) times more likely to yield a DVT than single-stage surgery. Patients who undergo multistage posterior complex spine surgery are at a high risk for developing a DVT compared to those who undergo single-stage procedures. The difference in DVT incidence may be understated, as the multistage group had a lower pre- and intraoperative risk profile with a younger age, lower medical comorbidities, and less per-stage blood loss.
Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.
Dhar, Amrit; Minin, Vladimir N
2017-05-01
Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.
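The simulation-based baseline that the article's simulation-free algorithm replaces is easy to sketch for the simplest summary, the prior number of substitutions on a single branch (a toy illustration with uniform jump rates, not the authors' dynamic programming algorithm): when every state is left at the same total rate, the jump count is Poisson, so its prior mean and variance are both rate × branch length.

```python
import math, random

def substitution_count(rng, rate, t):
    """One draw of the number of substitutions on a branch of length t for a
    CTMC that leaves every state at the same total rate; jump times then
    form a Poisson process with intensity `rate`."""
    elapsed, count = 0.0, 0
    while True:
        elapsed += -math.log(1.0 - rng.random()) / rate  # exponential holding time
        if elapsed > t:
            return count
        count += 1

rng = random.Random(42)
samples = [substitution_count(rng, rate=2.0, t=1.5) for _ in range(40000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# Prior mean and variance both equal rate * t = 3.0 for this chain; a
# simulation-free algorithm returns such moments exactly, with no MC noise.
```

Conditioning on observed tip data (the posterior moments) breaks this simple Poisson structure, which is why the linear-time recursions of the article are valuable.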
Investigation of air transportation technology at Princeton University, 1990-1991
NASA Technical Reports Server (NTRS)
Stengel, Robert F.
1991-01-01
The Air Transportation Technology Program at Princeton University is a program that emphasizes graduate and undergraduate student research. The program proceeded along six avenues during the past year: microburst hazards to aircraft, intelligent failure tolerant control, computer-aided heuristics for piloted flight, stochastic robustness of flight control systems, neural networks for flight control, and computer-aided control system design.
Development Optimization and Uncertainty Analysis Methods for Oil and Gas Reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ettehadtavakkol, Amin, E-mail: amin.ettehadtavakkol@ttu.edu; Jablonowski, Christopher; Lake, Larry
Uncertainty complicates the development optimization of oil and gas exploration and production projects, but methods have been devised to analyze uncertainty and its impact on optimal decision-making. This paper compares two methods for development optimization and uncertainty analysis: Monte Carlo (MC) simulation and stochastic programming. Two example problems, for a gas field development and an oilfield development, are solved and discussed to elaborate on the advantages and disadvantages of each method. Development optimization involves decisions regarding the configuration of initial capital investment and subsequent operational decisions. Uncertainty analysis involves the quantification of the impact of uncertain parameters on the optimum design concept. The gas field development problem is designed to highlight the differences in the implementation of the two methods and to show that both methods yield the exact same optimum design. The results show that both MC optimization and stochastic programming provide unique benefits, and that the choice of method depends on the goal of the analysis. While the MC method generates more useful information, along with the optimum design configuration, the stochastic programming method is more computationally efficient in determining the optimal solution. Reservoirs comprise multiple compartments and layers with multiphase flow of oil, water, and gas. We present a workflow for development optimization under uncertainty for these reservoirs, and solve an example on the design optimization of a multicompartment, multilayer oilfield development.
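The contrast the abstract draws between the two methods can be caricatured on a stylized capacity-sizing problem; all prices, costs, and the demand distribution below are invented, and this is a sketch of the general idea rather than the authors' reservoir models.

```python
import random, statistics

# Toy contrast between stochastic programming and Monte Carlo optimization:
# choose a capacity x to maximize profit price*min(x, demand) - cost*x under
# uncertain demand. Stochastic programming picks one design that is best on
# average; Monte Carlo optimizes each sampled scenario separately, which
# yields a distribution of scenario-optimal designs as extra information.
PRICE, COST = 3.0, 1.0
CAPS = range(0, 201, 5)                 # candidate designs (grid of capacities)
random.seed(11)
demands = [random.lognormvariate(4.0, 0.5) for _ in range(2000)]

def profit(x, d):
    return PRICE * min(x, d) - COST * x

# Stochastic programming (sample-average form): one design, best in expectation.
sp_design = max(CAPS, key=lambda x: sum(profit(x, d) for d in demands))

# Monte Carlo: per-scenario optima, summarized by their distribution.
mc_optima = [max(CAPS, key=lambda x: profit(x, d)) for d in demands[:200]]

print("SP design:", sp_design)
print("MC per-scenario optima, median:", statistics.median(mc_optima))
```

The single stochastic-programming design hedges across scenarios (a newsvendor-style solution), while the Monte Carlo pass additionally reveals how widely the scenario-by-scenario optima spread, mirroring the trade-off the abstract describes.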
Stochastic kinetic mean field model
NASA Astrophysics Data System (ADS)
Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.
2016-07-01
This paper introduces a new model for calculating the change in time of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach; however, we have transformed that model into a stochastic approach by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to lattice kinetic Monte Carlo (KMC). SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open-source program code is provided on the http://skmf.eu website). We will show that the result of one SKMF run may correspond to the average of several KMC runs. The number of KMC runs is inversely proportional to the square of the noise amplitude in SKMF. This makes SKMF an ideal tool also for statistical purposes.
Boore, David M.
2000-01-01
A simple and powerful method for simulating ground motions is based on the assumption that the amplitude of ground motion at a site can be specified in a deterministic way, with a random phase spectrum modified such that the motion is distributed over a duration related to the earthquake magnitude and to distance from the source. This method of simulating ground motions often goes by the name "the stochastic method." It is particularly useful for simulating the higher-frequency ground motions of most interest to engineers, and it is widely used to predict ground motions for regions of the world in which recordings of motion from damaging earthquakes are not available. This simple method has been successful in matching a variety of ground-motion measures for earthquakes with seismic moments spanning more than 12 orders of magnitude. One of the essential characteristics of the method is that it distills what is known about the various factors affecting ground motions (source, path, and site) into simple functional forms that can be used to predict ground motions. SMSIM is a set of programs for simulating ground motions based on the stochastic method. This Open-File Report is a revision of an earlier report (Boore, 1996) describing a set of programs for simulating ground motions from earthquakes. The programs are based on modifications I have made to the stochastic method first introduced by Hanks and McGuire (1981). The report contains source codes, written in Fortran, and executables that can be used on a PC. Programs are included both for time-domain and for random vibration simulations. In addition, programs are included to produce Fourier amplitude spectra for the models used in the simulations and to convert shear velocity vs. depth into frequency-dependent amplification. The revision to the previous report is needed because the input and output files have changed significantly, and a number of new programs have been included in the set.
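The core recipe the report describes (deterministic amplitude spectrum, random phase, duration shaping) can be sketched as a sum of cosines with random phases; the spectral shape and all parameter values below are simplified placeholders, not SMSIM's actual source, path, and site models.

```python
import math, random

# One stochastic realization of ground-motion-like noise: each frequency
# component has a deterministically specified amplitude and a random phase,
# and the summed signal is shaped by a window tied to a target duration.
random.seed(0)
dt, n = 0.01, 512                          # sample interval (s) and length
freqs = [k * 0.2 for k in range(1, 101)]   # 0.2 Hz ... 20 Hz

f_corner = 2.0                             # assumed corner frequency (Hz)
def amp(f):
    # Assumed omega-squared shape: grows as f**2 below the corner, flat above.
    return f**2 / (1.0 + (f / f_corner)**2)

phases = [random.uniform(0.0, 2.0 * math.pi) for _ in freqs]

duration = 2.0                             # target duration for the window (s)
motion = []
for i in range(n):
    t = i * dt
    a = sum(amp(f) * math.cos(2.0 * math.pi * f * t + p)
            for f, p in zip(freqs, phases))
    window = (t / duration) * math.exp(1.0 - t / duration)
    motion.append(a * window)

print("peak |motion|:", max(abs(x) for x in motion))
```

Real SMSIM runs replace the placeholder spectrum with source, path, and site filters and tie the duration to magnitude and distance; only the random-phase idea is carried over here.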
Effluent trading in river systems through stochastic decision-making process: a case study.
Zolfagharipoor, Mohammad Amin; Ahmadi, Azadeh
2017-09-01
The objective of this paper is to provide an efficient framework for effluent trading in river systems. The proposed framework consists of two decision-making models, one pessimistic and one optimistic, to increase the executability of river water quality trading programs. The models used for this purpose are (1) stochastic fallback bargaining (SFB) to reach an agreement among wastewater dischargers and (2) stochastic multi-criteria decision-making (SMCDM) to determine the optimal treatment strategy. The Monte-Carlo simulation method is used to incorporate uncertainty into the analysis. This uncertainty arises from the stochastic nature of, and the errors in, the calculation of wastewater treatment costs. The results of a river water quality simulation model are used as the inputs of the models. The proposed models are used in a case study on the Zarjoub River in northern Iran to determine the best solution for the pollution load allocation. The best treatment alternatives selected by each model are imported, as the initial pollution discharge permits, into an optimization model developed for trading of pollution discharge permits among pollutant sources. The results show that the SFB-based water pollution trading approach reduces the costs by US$ 14,834 while providing a relative consensus among pollutant sources. Meanwhile, the SMCDM-based water pollution trading approach reduces the costs by US$ 218,852, but it is less acceptable to pollutant sources. Therefore, it appears that giving due attention to stability, or in other words the acceptability of pollution trading programs to all pollutant sources, is an essential element of their success.
A multi-stage drop-the-losers design for multi-arm clinical trials.
Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher
2017-02-01
Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
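A small simulation conveys why the sample size is fixed in a drop-the-losers design: every stage uses a prescheduled amount of data, the control always continues, and only the interim ranking decides which arms are dropped. The effect sizes, per-stage sample size, and normal outcomes below are illustrative assumptions, not the designs analyzed in the paper.

```python
import random, statistics

# Toy multi-stage drop-the-losers trial: at each interim analysis the
# worst-performing experimental arm (by cumulative mean) is dropped, the
# control always continues, and the trial ends with control vs. one arm.
def trial(effects, n_per_stage=30, seed=None):
    rng = random.Random(seed)
    arms = list(range(len(effects)))          # arm 0 is the control
    totals = {a: [] for a in arms}
    while len(arms) > 2:                      # drop until control + one arm
        for a in arms:
            totals[a] += [rng.gauss(effects[a], 1) for _ in range(n_per_stage)]
        worst = min((a for a in arms if a != 0),
                    key=lambda a: statistics.mean(totals[a]))
        arms.remove(worst)
    return arms                               # [control, selected arm]

# Three experimental arms; arm 3 has the largest true effect.
wins = sum(trial([0.0, 0.1, 0.2, 0.5], seed=s)[1] == 3 for s in range(500))
print("best arm selected in", wins, "of 500 simulated trials")
```

Here two interim drops reduce four arms to a final control-versus-selected comparison with a sample size known in advance; the paper's point is that adding stages in this way shrinks the fixed sample size needed for a given selection accuracy.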
Optimizing integrated airport surface and terminal airspace operations under uncertainty
NASA Astrophysics Data System (ADS)
Bosson, Christabelle S.
In airports and surrounding terminal airspaces, the integration of surface, arrival and departure scheduling and routing have the potential to improve the operations efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to improve the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling problems, departure scheduling problems and surface management scheduling problems and most of the developed models are deterministic. This dissertation presents an alternate method to model the integrated operations by using a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty and candidate solutions are obtained by solving sample average approximation problems with finite sample size. The developed mixed-integer-linear-programming algorithm-based scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof-of-concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable both significant surface and air time savings. The solution computed by the optimization provides a more efficient routing and scheduling than the First-Come-First-Served solution. 
Additionally, a data-driven analysis is performed for the Los Angeles environment and probabilistic distributions of pertinent uncertainty sources are obtained. A sensitivity analysis is then carried out to assess the methodology performance and find optimal sampling parameters. Finally, simulations of increasing traffic density in the presence of uncertainty are conducted first for integrated arrivals and departures, then for integrated surface and air operations. To compare the optimization results and show the benefits of integrated operations, two aircraft separation methods are implemented that offer different routing options. The simulations of integrated air operations and the simulations of integrated air and surface operations demonstrate that significant traveling time savings, both total and individual surface and air times, can be obtained when more direct routes are allowed to be traveled, even in the presence of uncertainty. The resulting routings, however, induce extra takeoff delay for departing flights. As a consequence, some flights cannot meet their initially assigned runway slot, which engenders runway position shifting when comparing the resulting runway sequences computed under deterministic and stochastic conditions. The optimization is able to compute an optimal runway schedule that represents an optimal balance between total schedule delays and total travel times.
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao
1991-01-01
Various types of multistage decoding for multilevel block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum-likelihood or bounded-distance, are discussed. Error performance of the codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. It was found that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It was also found that the difference in performance between suboptimum multi-stage soft-decision maximum-likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
Xi, Beidou; He, Xiaosong; Dang, Qiuling; Yang, Tianxue; Li, Mingxiao; Wang, Xiaowei; Li, Dan; Tang, Jun
2015-11-01
In this study, the PCR-DGGE method was applied to investigate the impact of multi-stage inoculation treatment on the community composition of bacteria and fungi during the municipal solid waste (MSW) composting process. The results showed that the high-temperature period was extended by the multi-stage inoculation treatment: 1 day longer than with initial-stage inoculation treatment, and 5 days longer than with non-inoculation treatment. The temperature of the secondary fermentation increased to 51°C with multi-stage inoculation treatment. The multi-stage inoculation method improved the community diversity of bacteria and fungi, whose diversity indexes reached their maxima on days 17 and 20, respectively; it avoided competition between the inoculants and indigenous microbes, and enhanced the growth of dominant microorganisms. The DNA sequences indicated that various kinds of uncultured microorganisms with determined ratios were detected, which were the dominant microbes during the whole fermentation process. These findings call for further research on compost microbial cultivation technology. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bio-inspired approach to multistage image processing
NASA Astrophysics Data System (ADS)
Timchenko, Leonid I.; Pavlov, Sergii V.; Kokryatskaya, Natalia I.; Poplavska, Anna A.; Kobylyanska, Iryna M.; Burdenyuk, Iryna I.; Wójcik, Waldemar; Uvaysova, Svetlana; Orazbekov, Zhassulan; Kashaganova, Gulzhan
2017-08-01
Multistage integration of visual information in the brain allows people to respond quickly to most significant stimuli while preserving the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing, described in this paper, comprises main types of cortical multistage convergence. One of these types occurs within each visual pathway and the other between the pathways. This approach maps input images into a flexible hierarchy which reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical terms. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image which encapsulates, in a compact manner, structure on different hierarchical levels in the image. At each processing stage a single output result is computed to allow a very quick response from the system. The result is represented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match.
Annual Review of Research Under the Joint Services Electronics Program.
1978-10-01
Electronic Science at Texas Tech University. Specific topics covered include fault analysis, stochastic control and estimation, nonlinear control, multidimensional system theory, optical noise, and pattern recognition.
Nan, Feng; Moghadasi, Mohammad; Vakili, Pirooz; Vajda, Sandor; Kozakov, Dima; Ch. Paschalidis, Ioannis
2015-01-01
We propose a new stochastic global optimization method targeting protein docking problems. The method is based on finding a general convex polynomial underestimator to the binding energy function in a permissive subspace that possesses a funnel-like structure. We use Principal Component Analysis (PCA) to determine such permissive subspaces. The problem of finding the general convex polynomial underestimator is reduced into the problem of ensuring that a certain polynomial is a Sum-of-Squares (SOS), which can be done via semi-definite programming. The underestimator is then used to bias sampling of the energy function in order to recover a deep minimum. We show that the proposed method significantly improves the quality of docked conformations compared to existing methods. PMID:25914440
Rajavel, Rajkumar; Thangarathinam, Mala
2015-01-01
Optimization of negotiation conflict in the cloud service negotiation framework is identified as one of the major challenging issues. This negotiation conflict occurs during the bilateral negotiation process between the participants due to misperception, aggressive behavior, and uncertain preferences and goals about their opponents. Existing research work focuses on the prerequest context of negotiation conflict optimization by grouping similar negotiation pairs using distance, binary, context-dependent, and fuzzy similarity approaches. To some extent, these approaches can maximize the success rate and minimize the communication overhead among the participants. To further optimize the success rate and communication overhead, the proposed research work introduces a novel probabilistic decision-making model for optimizing negotiation conflict in the long-term negotiation context. This decision model formulates the problem of managing the different types of negotiation conflict that occur during the negotiation process as a multistage Markov decision problem. At each stage of the negotiation process, the proposed decision model generates a heuristic decision based on past negotiation state information without causing any break-off among the participants. In addition, this heuristic decision using the stochastic decision tree scenario can maximize the revenue among the participants available in the cloud service negotiation framework. PMID:26543899
A Multi-Area Stochastic Model for a Covert Visual Search Task.
Schwemmer, Michael A; Feng, Samuel F; Holmes, Philip J; Gottlieb, Jacqueline; Cohen, Jonathan D
2015-01-01
Decisions typically comprise several elements. For example, attention must be directed towards specific objects, their identities recognized, and a choice made among alternatives. Pairs of competing accumulators and drift-diffusion processes provide good models of evidence integration in two-alternative perceptual choices, but more complex tasks requiring the coordination of attention and decision making involve multistage processing and multiple brain areas. Here we consider a task in which a target is located among distractors and its identity reported by lever release. The data comprise reaction times, accuracies, and single unit recordings from two monkeys' lateral interparietal area (LIP) neurons. LIP firing rates distinguish between targets and distractors, exhibit stimulus set size effects, and show response-hemifield congruence effects. These data motivate our model, which uses coupled sets of leaky competing accumulators to represent processes hypothesized to occur in feature-selective areas and limb motor and pre-motor areas, together with the visual selection process occurring in LIP. Model simulations capture the electrophysiological and behavioral data, and fitted parameters suggest that different connection weights between LIP and the other cortical areas may account for the observed behavioral differences between the animals.
Chen, Xiujuan; Huang, Guohe; Zhao, Shan; Cheng, Guanhui; Wu, Yinghui; Zhu, Hua
2017-11-01
In this study, a stochastic fractional inventory-theory-based waste management planning (SFIWP) model was developed and applied for supporting long-term planning of municipal solid waste (MSW) management in Xiamen City, the special economic zone of Fujian Province, China. In the SFIWP model, the techniques of inventory modelling, stochastic linear fractional programming, and mixed-integer linear programming were integrated in one framework. Issues of waste inventory in the MSW management system were solved, and the system efficiency was maximized through considering maximum net-diverted wastes under various constraint-violation risks. Decision alternatives for waste allocation and capacity expansion were also provided for MSW management planning in Xiamen. The obtained results showed that about 4.24 × 10^6 t of waste would be diverted from landfills when p_i is 0.01, which accounted for 93% of the waste in Xiamen City, and the waste diversion per unit of cost would be 26.327 × 10^3 t per $10^6. The capacities of MSW management facilities, including incinerators, a composting facility, and landfills, would be expanded due to the increasing waste generation rate.
Configuration of management accounting information system for multi-stage manufacturing
NASA Astrophysics Data System (ADS)
Mkrtychev, S. V.; Ochepovsky, A. V.; Enik, O. A.
2018-05-01
The article presents an approach to configuration of a management accounting information system (MAIS) that provides automated calculations and the registration of normative production losses in multi-stage manufacturing. The use of MAIS with the proposed configuration at the enterprises of textile and woodworking industries made it possible to increase the accuracy of calculations for normative production losses and to organize accounting thereof with the reference to individual stages of the technological process. Thus, high efficiency of multi-stage manufacturing control is achieved.
Multi-megavolt low jitter multistage switch
Humphreys, D.R.; Penn, K.J. Jr.
1985-06-19
It is one object of the present invention to provide a multistage switch capable of holding off numerous megavolts, until triggered, from a particle beam accelerator of the type used for inertial confinement fusion. The invention provides a multistage switch having low timing jitter and capable of producing multiple spark channels for spreading current over a wider area to reduce electrode damage and increase switch lifetime. The switch has fairly uniform electric fields and a short spark gap for laser triggering and is engineered to prevent insulator breakdowns.
NASA Technical Reports Server (NTRS)
Gialdini, M.; Titus, S. J.; Nichols, J. D.; Thomas, R.
1975-01-01
An approach to information acquisition is discussed in the context of meeting user-specified needs in a cost-effective, timely manner through the use of remote sensing data, ground data, and multistage sampling techniques. The roles of both LANDSAT imagery and Skylab photography are discussed as first stages of three separate multistage timber inventory systems and results are given for each system. Emphasis is placed on accuracy and meeting user needs.
Unsteady Aero Computation of a 1 1/2 Stage Large Scale Rotating Turbine
NASA Technical Reports Server (NTRS)
To, Wai-Ming
2012-01-01
This report is the documentation of the work performed for the Subsonic Rotary Wing Project under NASA's Fundamental Aeronautics Program. It was funded through Task Number NNC10E420T under GESS-2 Contract NNC06BA07B in the period of 10/1/2010 to 8/31/2011. The objective of the task is to provide support for the development of variable speed power turbine technology through application of computational fluid dynamics analyses. This includes work elements in mesh generation, multistage URANS simulations, and post-processing of the simulation results for comparison with the experimental data. The unsteady CFD calculations were performed with the TURBO code running in multistage single passage (phase lag) mode. Meshes for the blade rows were generated with the NASA-developed TCGRID code. The CFD performance is assessed and improvements are recommended for future research in this area. For that purpose, the United Technologies Research Center's 1 1/2 stage Large Scale Rotating Turbine was selected to be the candidate engine configuration for this computational effort because of the completeness and availability of the data.
Ifoulis, A A; Savopoulou-Soultani, M
2006-10-01
The purpose of this research was to quantify the spatial pattern and develop a sampling program for larvae of Lobesia botrana Denis and Schiffermüller (Lepidoptera: Tortricidae), an important vineyard pest in northern Greece. Taylor's power law and Iwao's patchiness regression were used to model the relationship between the mean and the variance of larval counts. Analysis of covariance was carried out, separately for infestation and injury, with combined second and third generation data, for vine and half-vine sample units. Common regression coefficients were estimated to permit use of the sampling plan over a wide range of conditions. Optimum sample sizes for infestation and injury, at three levels of precision, were developed. An investigation of a multistage sampling plan with a nested analysis of variance showed that if the goal of sampling is focusing on larval infestation, three grape clusters should be sampled in a half-vine; if the goal of sampling is focusing on injury, then two grape clusters per half-vine are recommended.
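The mean-variance modelling step can be sketched as follows on synthetic counts: fit Taylor's power law, s² = a·mᵇ, by regressing log variance on log mean, then size a sample with the standard enumerative formula n = a·m^(b-2)/D² for a fixed relative precision D. The overdispersed data and all numbers below are illustrative assumptions, not the authors' field data or final plan.

```python
import math, random

# Synthetic overdispersed counts at several mean infestation levels, used to
# fit Taylor's power law and derive an optimum sample size for precision D.
random.seed(7)
datasets = []
for mean_level in [0.5, 1.0, 2.0, 4.0, 8.0]:
    counts = []
    for _ in range(200):
        # Mixed-Poisson draw (negative-binomial-like overdispersion).
        lam = mean_level * random.gammavariate(2.0, 0.5)
        L, k, p = math.exp(-lam), 0, 1.0
        while True:                     # Knuth's Poisson sampler
            p *= random.random()
            if p <= L:
                break
            k += 1
        counts.append(k)
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
    datasets.append((m, v))

# Least-squares fit of log(v) = log(a) + b * log(m).
xs = [math.log(m) for m, _ in datasets]
ys = [math.log(v) for _, v in datasets]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar - b * xbar)

D = 0.25                                # target relative precision
m = 2.0                                 # anticipated mean infestation
n_opt = math.ceil(a * m ** (b - 2) / D ** 2)
print(f"a = {a:.2f}, b = {b:.2f}, optimum sample size at m = {m}: {n_opt}")
```

A slope b > 1 indicates aggregation, which is what makes the required sample size depend on the anticipated mean, as in the paper's infestation and injury plans.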
NASA Astrophysics Data System (ADS)
Luo, K.; Sun, D. M.; Zhang, J.; Shen, Q.; Zhang, N.
2017-12-01
This study proposes a multi-stage travelling-wave thermoacoustically driven refrigeration system (TAD-RS) operating at liquefied natural gas temperature, which consists of two thermoacoustic engines (TAE) and one thermoacoustic refrigerator (TAR) in a closed-loop configuration. The three thermoacoustic units are connected to each other through a resonance tube of small cross-sectional area, achieving “self-matching” for efficient thermoacoustic conversion. Based on linear thermoacoustic theory, a model of the proposed system has been built using the DeltaEC program to show the acoustic field characteristics and performance. It is shown that with pressurized 5 MPa helium as the working gas, the TAEs are able to build a stable and strong acoustic field with a frequency of about 85 Hz. When the hot-end temperature reaches 923 K, the system can provide about 1410 W of cooling power at 110 K with an overall exergy efficiency of 15.5%. This study indicates a great application prospect for the TAD-RS in the field of natural gas liquefaction, with a large cooling capacity and simple structure.
Software Tools for Stochastic Simulations of Turbulence
2015-08-28
client interface to FTI. Specific client programs using this interface include the weather forecasting code WRF; the high energy physics code, FLASH; and two locally constructed fluid codes.
NASA Astrophysics Data System (ADS)
Mishra, Bhavya; Schütz, Gunter M.; Chowdhury, Debashish
2016-06-01
We develop a stochastic model for the programmed frameshift of ribosomes synthesizing a protein while moving along a mRNA template. Normally the reading frame of a ribosome decodes successive triplets of nucleotides on the mRNA in a step-by-step manner. We focus on the programmed shift of the ribosomal reading frame, forward or backward, by only one nucleotide, which results in a fusion protein; it occurs when a ribosome temporarily loses its grip on its mRNA track. Special “slippery” sequences of nucleotides and also downstream secondary structures of the mRNA strand are believed to play key roles in programmed frameshift. Here we explore the role of a hitherto neglected parameter in regulating -1 programmed frameshift. Specifically, we demonstrate that the frameshift frequency can be strongly regulated also by the density of the ribosomes, all of which are engaged in simultaneous translation of the same mRNA, at and around the slippery sequence. Monte Carlo simulations support the analytical predictions obtained from a mean-field analysis of the stochastic dynamics.
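The density effect described above can be caricatured with a TASEP-style lattice simulation in which a ribosome blocked at the slippery site frameshifts with a fixed probability per blocked step; the lattice size, rates, and frameshift rule are invented for illustration and are not the paper's model.

```python
import random

# Toy exclusion-process picture of density-regulated -1 frameshift:
# ribosomes hop right on a lattice of codons, and a ribosome that is blocked
# at the slippery site (codon ahead occupied) frameshifts with probability
# p_fs per blocked update. Frameshift events are normalized per arrival.
def run(alpha, steps=500000, L=60, slip=30, p_fs=0.05, seed=3):
    random.seed(seed)
    lattice = [0] * L                    # 1 marks an occupied codon
    arrivals = frameshifts = 0
    for _ in range(steps):
        i = random.randrange(-1, L)      # -1 models the initiation site
        if i == -1:
            if lattice[0] == 0 and random.random() < alpha:
                lattice[0] = 1           # initiation
        elif lattice[i]:
            if i == L - 1:
                lattice[i] = 0           # termination
            elif lattice[i + 1] == 0:
                lattice[i], lattice[i + 1] = 0, 1
                if i + 1 == slip:
                    arrivals += 1        # a ribosome reaches the slippery site
            elif i == slip and random.random() < p_fs:
                frameshifts += 1         # blocked at the slippery site
    return frameshifts / max(arrivals, 1)

low = run(alpha=0.1)    # sparse traffic: little queuing at the slippery site
high = run(alpha=0.9)   # dense traffic: frequent blocking, more frameshifts
print(f"frameshifts per arrival: low density {low:.3f}, high density {high:.3f}")
```

Higher initiation rates increase queuing at the slippery site and hence the per-ribosome frameshift frequency, which is the qualitative dependence on ribosome density the abstract reports.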
Study of controlled diffusion stator blading. 1. Aerodynamic and mechanical design report
NASA Technical Reports Server (NTRS)
Canal, E.; Chisholm, B. C.; Lee, D.; Spear, D. A.
1981-01-01
Pratt & Whitney Aircraft is conducting a test program for NASA in order to demonstrate that a controlled-diffusion stator provides low losses at high loadings and Mach numbers. The technology has shown great promise in wind tunnel tests. Details of the design of the controlled diffusion stator vanes and the multiple-circular-arc rotor blades are presented. The stage, including stator and rotor, was designed to be suitable for the first-stage of an advanced multistage, high-pressure compressor.
Pyrolytic graphite collector development program
NASA Technical Reports Server (NTRS)
Wilkins, W. J.
1982-01-01
Pyrolytic graphite promises to have significant advantages as a material for multistage depressed collector electrodes. Among these advantages are lighter weight, improved mechanical stiffness under shock and vibration, reduced secondary electron back-streaming for higher efficiency, and reduced outgassing at higher operating temperatures. The essential properties of pyrolytic graphite and the necessary design criteria are discussed. This includes the study of suitable electrode geometries and methods of attachment to other metal and ceramic collector components consistent with typical electrical, thermal, and mechanical requirements.
1965-08-01
Two workers are dwarfed by the five J-2 engines of the Saturn V second stage (S-II) as they make final inspections prior to a static test firing by North American Space Division. These five hydrogen-fueled engines produced one million pounds of thrust, and placed the Apollo spacecraft into Earth orbit before it departed for the Moon. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, T.; Contos, L.; Adams, L.
1992-02-01
The purpose of this document is to present environmental monitoring data collected during the US DOE Limestone Injection Multistage Burner (LIMB) Demonstration Project Extension. The objective of the LIMB program is to demonstrate the sulfur dioxide (SO{sub 2}) and nitrogen oxide (NO{sub x}) emission reduction capabilities of the LIMB system. The LIMB system is a retrofit technology to be used for existing coal-fired boilers equipped with electrostatic precipitators. (VC)
Performance Evaluation of Parallel Algorithms and Architectures in Concurrent Multiprocessor Systems
1988-09-01
HEP and Other Parallel Processors, Report no. ANL-83-97, Argonne National Laboratory, Argonne, Ill., 1983. [19] Davidson, G. S. A Practical Paradigm for...IEEE Comp. Soc., 1986. [24] Peir, Jih-kwon, and D. Gajski, "CAMP: A Programming Aide For Multiprocessors," Proc. 1986 ICPP, IEEE Comp. Soc., pp. 475-482. [25] Pfister, G. F., and V. A. Norton, "Hot Spot Contention and Combining in Multistage Interconnection Networks," IEEE Trans. Comp., C-34, Oct
Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik
2009-06-01
The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling even though they violate the assumptions of many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood, which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions.
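The Kalman-filter one-step prediction mentioned in the abstract can be sketched for the simplest case, a scalar linear SDE with additive observation noise, where the "extended" filter coincides with the exact Kalman filter. This is a generic illustration under assumed dynamics (dX = -a·X dt + sigma·dW), not the package's actual implementation.

```python
import numpy as np

def ekf_one_step(y, dt, a=0.5, sigma=0.3, s_obs=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter for dX = -a*X dt + sigma*dW, y_k = X_k + e_k.
    Returns one-step predictive (mean, variance) pairs for y_k and the
    filtered state means."""
    x, P = x0, p0
    F = np.exp(-a * dt)                              # exact state transition
    Q = sigma**2 * (1 - np.exp(-2 * a * dt)) / (2 * a)  # process-noise variance
    preds, filts = [], []
    for yk in y:
        x_pred = F * x                    # one-step state prediction
        P_pred = F * P * F + Q            # one-step state variance
        preds.append((x_pred, P_pred + s_obs**2))  # predictive dist. of y_k
        K = P_pred / (P_pred + s_obs**2)  # Kalman gain
        x = x_pred + K * (yk - x_pred)    # measurement update
        P = (1 - K) * P_pred
        filts.append(x)
    return preds, filts
```

In likelihood-based estimation, the Gaussian one-step predictive distributions collected in `preds` are what the individual likelihoods are built from.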
Stochastic Multi-Commodity Facility Location Based on a New Scenario Generation Technique
NASA Astrophysics Data System (ADS)
Mahootchi, M.; Fattahi, M.; Khakbazan, E.
2011-11-01
This paper extends two models for the stochastic multi-commodity facility location problem. The problem is formulated as a two-stage stochastic program. As a main point of this study, a new algorithm is applied to efficiently generate scenarios for uncertain correlated customers' demands. This algorithm uses Latin Hypercube Sampling (LHS) and a scenario reduction approach. The relation between customer satisfaction level and cost is considered in model I. A risk measure using Conditional Value-at-Risk (CVaR) is embedded into optimization model II. Here, the structure of the network contains three facility layers: plants, distribution centers, and retailers. The first-stage decisions are the number, locations, and capacities of the distribution centers. In the second stage, the decisions are the production amounts and the transportation volumes between plants and customers.
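A minimal sketch of the two ingredients named in the abstract, under assumed normal demand margins: Latin Hypercube stratification per margin (midpoint variant), correlation imposed via a Cholesky factor, and a greedy distance-based scenario reduction. The paper's actual algorithm may differ in all details.

```python
import numpy as np
from statistics import NormalDist

def lhs_correlated_demands(n, mean, cov, rng=None):
    """n LHS scenarios of d correlated normal demands (illustrative sketch)."""
    rng = rng or np.random.default_rng(0)
    d = len(mean)
    inv = NormalDist().inv_cdf
    z = np.empty((n, d))
    for j in range(d):
        u = (rng.permutation(n) + 0.5) / n   # one stratum midpoint per scenario
        z[:, j] = [inv(p) for p in u]
    L = np.linalg.cholesky(np.asarray(cov, float))
    return np.asarray(mean, float) + z @ L.T  # n scenarios x d demands

def reduce_scenarios(S, k):
    """Greedy reduction: keep k well-spread scenarios, assign each original
    scenario's probability mass to its nearest kept representative."""
    kept = [int(np.argmin(((S - S.mean(0)) ** 2).sum(1)))]   # start centrally
    while len(kept) < k:
        dist = np.linalg.norm(S[:, None] - S[kept], axis=2).min(axis=1)
        kept.append(int(np.argmax(dist)))                    # farthest point
    nearest = np.linalg.norm(S[:, None] - S[kept], axis=2).argmin(axis=1)
    probs = np.bincount(nearest, minlength=k) / len(S)
    return S[kept], probs
```

The midpoint LHS variant makes each margin an exact set of quantiles, so the sample means match the targets; the Cholesky step only approximately preserves the stratification.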
Area of Stochastic Scrape-Off Layer for a Single-Null Divertor Tokamak Using Simple Map
NASA Astrophysics Data System (ADS)
Fisher, Tiffany; Verma, Arun; Punjabi, Alkesh
1996-11-01
The magnetic topology of a single-null divertor tokamak is represented by the Simple Map (Punjabi A, Verma A and Boozer A, Phys Rev Lett 69, 3322 (1992); J Plasma Phys 52, 91 (1994)). The Simple Map is characterized by a single parameter k representing the toroidal asymmetry. The width of the stochastic scrape-off layer and its area vary with the map parameter k. We calculate the area of the stochastic scrape-off layer for different k's and obtain a parametric expression for the area in terms of k and y_LastGoodSurface(k). This work is supported by US DOE OFES. Tiffany Fisher is a HU CFRT Summer Fusion High School Workshop Scholar from New Bern High School in North Carolina. She is supported by the NASA SHARP Plus Program.
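Iterating such a map numerically is straightforward to sketch. The map form below is reproduced from memory of the cited papers (with these signs the X-point sits at the origin and the O-point at (0, 1)); the sign conventions and the plate location are assumptions that should be verified against the originals.

```python
def simple_map(x, y, k):
    """One iteration of a Simple-Map-style area-preserving map (assumed form)."""
    x1 = x + k * y * (1.0 - y)
    y1 = y + k * x1
    return x1, y1

def simple_map_inverse(x1, y1, k):
    """Exact inverse, available because the map is symplectic/area-preserving."""
    y = y1 - k * x1
    x = x1 - k * y * (1.0 - y)
    return x, y

def escape_time(x, y, k, y_plate=-0.05, nmax=2000):
    """Iterations until the field line crosses a notional divertor plate;
    -1 means confined within nmax iterations."""
    for n in range(nmax):
        x, y = simple_map(x, y, k)
        if y <= y_plate:
            return n
    return -1
```

Scanning `escape_time` over initial conditions near the separatrix is one way to delineate a stochastic scrape-off layer numerically.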
Stochastic Evolutionary Algorithms for Planning Robot Paths
NASA Technical Reports Server (NTRS)
Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard
2006-01-01
A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities (see figure). The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
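The annealing idea can be sketched generically (this is not NASA's implementation): perturb waypoints of a candidate path, accept worse paths with Boltzmann probability, and cool geometrically. The waypoint count, penalty weight, and cooling rate below are arbitrary illustrative choices, and the collision term penalizes only waypoints, not whole segments.

```python
import math
import random

def anneal_path(start, goal, obstacles, n_pts=8, iters=5000, seed=0):
    """Simulated-annealing path sketch: minimize path length plus a penalty
    for waypoints inside circular obstacles given as (cx, cy, r)."""
    rng = random.Random(seed)
    pts = [(start[0] + (goal[0] - start[0]) * i / (n_pts + 1),
            start[1] + (goal[1] - start[1]) * i / (n_pts + 1))
           for i in range(1, n_pts + 1)]           # straight-line seed path

    def energy(pts):
        path = [start] + pts + [goal]
        length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
        pen = sum(max(0.0, r - math.dist(p, (cx, cy)))
                  for p in pts for (cx, cy, r) in obstacles)
        return length + 50.0 * pen

    e, T = energy(pts), 1.0
    for _ in range(iters):
        i = rng.randrange(n_pts)
        cand = list(pts)
        cand[i] = (pts[i][0] + rng.gauss(0, 0.3),
                   pts[i][1] + rng.gauss(0, 0.3))  # perturb one waypoint
        de = energy(cand) - e
        if de < 0 or rng.random() < math.exp(-de / T):
            pts, e = cand, e + de                  # Metropolis acceptance
        T *= 0.999                                 # geometric cooling
    return pts, e
```

The occasional acceptance of uphill moves is what lets the search leave local minima of the energy, which is the property the abstract highlights.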
Exploring information transmission in gene networks using stochastic simulation and machine learning
NASA Astrophysics Data System (ADS)
Park, Kyemyung; Prüstel, Thorsten; Lu, Yong; Narayanan, Manikandan; Martins, Andrew; Tsang, John
How gene regulatory networks operate robustly despite environmental fluctuations and biochemical noise is a fundamental question in biology. Mathematically, the stochastic dynamics of a gene regulatory network can be modeled using the chemical master equation (CME), but nonlinearity and other challenges render analytical solutions of CMEs difficult to attain. While approximation and stochastic-simulation approaches have been devised for simple models, obtaining a more global picture of a system's behaviors in high-dimensional parameter space without simplifying the system substantially remains a major challenge. Here we present a new framework for understanding and predicting the behaviors of gene regulatory networks in the context of information transmission among genes. Our approach uses stochastic simulation of the network followed by machine learning of the mapping between model parameters and network phenotypes such as information transmission behavior. We also devised ways to visualize high-dimensional phase spaces in intuitive and informative ways. We applied our approach to several gene regulatory circuit motifs, including both feedback and feedforward loops, to reveal underexplored aspects of their operational behaviors. This work is supported by the Intramural Program of NIAID/NIH.
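The stochastic-simulation step can be illustrated with a Gillespie SSA run for the simplest gene-expression motif (constant production, linear degradation); a parameter-to-phenotype learning step would then be trained on summaries of many such runs. The rates below are illustrative assumptions.

```python
import random

def gillespie_birth_death(k_prod=10.0, k_deg=1.0, t_end=50.0, seed=3):
    """Exact Gillespie SSA for production at rate k_prod and degradation
    at rate k_deg * n. Returns the copy number after each reaction event."""
    rng = random.Random(seed)
    t, n, samples = 0.0, 0, []
    while t < t_end:
        a1, a2 = k_prod, k_deg * n        # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)          # exponential waiting time
        if rng.random() * a0 < a1:        # choose reaction by propensity
            n += 1
        else:
            n -= 1
        samples.append(n)
    return samples
```

For this motif the stationary mean copy number is k_prod/k_deg, which a long trajectory should fluctuate around.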
Adaptation of Decoy Fusion Strategy for Existing Multi-Stage Search Workflows
NASA Astrophysics Data System (ADS)
Ivanov, Mark V.; Levitsky, Lev I.; Gorshkov, Mikhail V.
2016-09-01
A number of proteomic database search engines implement multi-stage strategies aiming at increasing the sensitivity of proteome analysis. These approaches often employ a subset of the original database for the secondary stage of analysis. However, if the target-decoy approach (TDA) is used for false discovery rate (FDR) estimation, the multi-stage strategies may violate the underlying assumption of TDA that false matches are distributed uniformly across the target and decoy databases. This violation occurs if the numbers of target and decoy proteins selected for the second search are not equal. Here, we propose a method of decoy database generation based on the previously reported decoy fusion strategy. This method allows unbiased TDA-based FDR estimation in multi-stage searches and can be easily integrated into existing workflows utilizing popular search engines and post-search algorithms.
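The TDA assumption at stake can be made concrete with the standard decoy-counting FDR estimate (a generic sketch, not the paper's decoy-fusion code):

```python
def tda_fdr(scores, is_decoy, threshold):
    """Target-decoy FDR estimate at a score threshold:
    (#decoy hits) / (#target hits). Valid only if false matches split
    evenly between target and decoy -- the assumption that decoy fusion
    is designed to preserve in multi-stage searches."""
    targets = sum(1 for s, d in zip(scores, is_decoy)
                  if s >= threshold and not d)
    decoys = sum(1 for s, d in zip(scores, is_decoy)
                 if s >= threshold and d)
    return decoys / max(1, targets)
```

If a second-stage search keeps more target than decoy proteins, the decoy count underestimates the false matches and this ratio becomes anti-conservative.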
Construction of dynamic stochastic simulation models using knowledge-based techniques
NASA Technical Reports Server (NTRS)
Williams, M. Douglas; Shiva, Sajjan G.
1990-01-01
Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1991-01-01
In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10(exp -6). Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
Speech coding at low to medium bit rates
NASA Astrophysics Data System (ADS)
Leblanc, Wilfred Paul
1992-09-01
Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short term filter are developed by employing a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be both robust against input characteristics and in the presence of channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost using significant structure in the excitation codebooks while greatly reducing the search complexity. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short term filter, the adaptive codebook, and the excitation. Improvements in signal to noise ratio of 1-2 dB are realized in practice.
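The core multistage-VQ structure described above, with each stage quantizing the previous stage's residual, can be sketched as follows; plain Lloyd/k-means stands in for the joint codebook design developed in the thesis, and all sizes are illustrative.

```python
import numpy as np

def train_codebook(X, size, iters=20, seed=0):
    """Plain Lloyd/k-means codebook -- a stand-in for joint codebook design."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size, replace=False)]
    for _ in range(iters):
        idx = np.argmin(((X[:, None] - C) ** 2).sum(-1), axis=1)
        for j in range(size):
            if np.any(idx == j):                 # keep old centroid if empty
                C[j] = X[idx == j].mean(0)
    return C

def multistage_vq(X, stage_sizes):
    """Multistage VQ: each stage quantizes the residual of the previous one,
    so storage and search cost grow with the SUM of stage sizes rather than
    their product."""
    residual, stages = X.copy(), []
    for size in stage_sizes:
        C = train_codebook(residual, size)
        idx = np.argmin(((residual[:, None] - C) ** 2).sum(-1), axis=1)
        residual = residual - C[idx]             # pass residual to next stage
        stages.append(C)
    return stages, residual
```

Two stages of 8 entries, for example, cover 64 effective reproduction points while storing and searching only 16 codevectors, which is the complexity advantage the thesis exploits.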
Caregiver Expectations of Family-based Pediatric Obesity Treatment.
Giannini, Courtney; Irby, Megan B; Skelton, Joseph A
2015-07-01
To explore caregivers' expectations of pediatric weight management prior to starting treatment. Interviews conducted with 25 purposefully selected caregivers of children, ages 8-12 years, waiting to begin 4 different weight management programs. Interviews were conducted and recorded via telephone and coded using a multistage inductive approach. Caregivers listed specific motivators for seeking treatment that did not often align with clinical measures of success: caregivers perceived child's socio-emotional health improvement to be an important success measure. Caregivers understood the program's approach, but were unsure of the commitment required. Caregivers were confident they would complete treatment but not in being successful. Caregivers' expectations of treatment success and their role in treatment may be a hindrance to adherence.
Chance Constrained Programming Methods in Probabilistic Programming.
1982-03-01
Financial and Quantitative Analysis 2, 1967. Also reproduced in R. F. Byrne et al., eds., Studies in Budgeting (Amsterdam: North Holland, 1971). [3...Rules for the E-Model of Chance-Constrained Programming," Management Science, 17, 1971. [23] Garstka, S. J. "The Economic Equivalence of Several...Iowa City: The University of Iowa College of Business Administration, 1981). [29] Kall, P. and A. Prekopa, eds., Recent Results in Stochastic
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan J.; Walton, Owen J.; Arnold, Steven M.
2016-01-01
Stochastic-based, discrete-event progressive damage simulations of ceramic-matrix composite and polymer matrix composite material structures have been enabled through the development of a unique multiscale modeling tool. This effort involves coupling three independently developed software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis (FEA) program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC. Abaqus is used at the global scale to model the overall composite structure. An Abaqus user-defined material (UMAT) interface, referred to here as "FEAMAC/CARES," was developed that enables MAC/GMC and CARES/Life to operate seamlessly with the Abaqus FEA code. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events, which incrementally progress and lead to ultimate structural failure. This report describes the FEAMAC/CARES methodology and discusses examples that illustrate the performance of the tool. A comprehensive example problem, simulating the progressive damage of laminated ceramic matrix composites under various off-axis loading conditions and including a double notched tensile specimen geometry, is described in a separate report.
Multistage Estimation Of Frequency And Phase
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1991-01-01
Conceptual two-stage software scheme serves as prototype of multistage scheme for digital estimation of phase, frequency, and rate of change of frequency ("Doppler rate") of a possibly phase-modulated received sinusoidal signal in a communication system in which the transmitter and/or receiver is traveling rapidly, accelerating, and/or jerking severely. Each additional stage of the multistage scheme provides an increasingly refined estimate of the frequency and phase of the signal. Conceived for use in estimating parameters of signals from spacecraft and high-dynamic GPS signal parameters; also applicable to terrestrial stationary/mobile (e.g., cellular radio) and land-mobile/satellite communication systems.
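A generic two-stage estimator in the same spirit (not the specific algorithm of this work) can be sketched as: a coarse FFT-peak frequency estimate, then a refinement from the phase slope after counter-rotating by the coarse estimate. A noiseless complex exponential is assumed below.

```python
import numpy as np

def two_stage_freq(x, fs):
    """Two-stage frequency estimate for a complex exponential sampled at fs.
    Stage 1: coarse estimate from the FFT peak (resolution fs/N).
    Stage 2: counter-rotate by the coarse estimate and refine from the
    residual's phase slope."""
    n = len(x)
    t = np.arange(n) / fs
    X = np.fft.fft(x)
    k = np.argmax(np.abs(X[: n // 2]))          # coarse peak bin
    f1 = k * fs / n
    resid = x * np.exp(-2j * np.pi * f1 * t)    # remove coarse estimate
    phase = np.unwrap(np.angle(resid))
    df = np.polyfit(t, phase, 1)[0] / (2 * np.pi)  # residual frequency
    return f1 + df
```

The second stage only has to resolve a residual smaller than one FFT bin, which is why each added stage can refine the estimate so cheaply.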
Xu, Y H; Dragan, Y P; Campbell, H A; Pitot, H C
1998-04-01
The most common organ site of neoplasms induced by carcinogenic chemicals in the rodent bioassay is the liver. The development of cancer in rodent liver is a multistage process involving sequentially the stages of initiation, promotion, and progression. During the stages of promotion and progression, numerous lesions termed altered hepatic foci (AHF) develop. STEREO was developed for the purpose of efficient and accurate quantitation of AHF and related lesions in experimental and test rodents. The system utilized is equipped with a microcomputer (IBM-compatible PC running Windows 95) and a Summagraphics MICROGRID or SummaSketch tablet digitizer. The program records information from digitization of single or serial sections obtained randomly from rat liver tissue. With this information and the methods of quantitative stereology, both the number and volume percentage fraction of AHF in liver are calculated in three dimensions. The recorded data files can be printed graphically or in the format of tabular numerical data. The results of stereologic calculations are stored on floppy disks and can be sorted into different categories and analyzed or displayed with the use of statistics and graphic functions built into the overall program. Results may also be exported into Microsoft Excel for use at a later time. Any IBM-compatible PC capable of utilizing Windows 95 and MS Office can be used with STEREO, which offers inexpensive, easily operated software to obtain three-dimensional information from sections of two dimensions for the identification and relative potency of initiators, promoters, and progressors, and for the establishment of information potentially useful in developing estimations of risk for human cancer.
Doktorov, Alexander B
2016-08-28
Manifestations of the "cage" effect at encounters of reactants have been treated theoretically using the example of multistage reactions (including bimolecular exchange reactions as elementary stages) proceeding from different active sites in liquid solutions. It is shown that for reactions occurring near the contact of reactants, a consistent treatment of the quasi-stationary kinetics of such multistage reactions (possible only in the framework of encounter theory) can be made on the basis of the chemical concept of the "cage complex," just as in the case of the one-site model described in the literature. Exactly as in the one-site model, the presence of the "cage" effect gives rise to new channels of reactant transformation that cannot result from an elementary event of chemical conversion for the given reaction mechanism. Besides, the multisite model demonstrates new features of the multistage reaction course as compared to the one-site model.
Trubitsyn, A G
2009-01-01
The age-dependent degradation of all vital processes of an organism can be the result of destructive influences (the stochastic mechanism of aging) or the effect of realization of a genetic program (phenoptosis). The stochastic free-radical theory of aging that dominates now contradicts a large set of empirical data, and half a century of attempts to create means to slow down aging has given no practical results. This makes it evident that the stochastic mechanism of aging is incorrect. At the same time, the alternative mechanism of programmed aging is not yet developed, but the preconditions for its development have already been created. It is shown that genes controlling the process of aging exist (contrary to the customary opinion) and that the increase in the level of damaged macromolecules (the basic postulate of the free-radical theory) can be explained by programmed attenuation of bio-energetics. As bio-energetics is a driving force of all vital processes, a decrease in its level is capable of causing degradation of all functions of an organism. However, to transform this postulate into a basis of the theory of phenoptosis it is necessary to show that attenuation of bio-energetics predetermines such fundamental processes accompanying aging as the decrease of the overall rate of protein biosynthesis, the restriction of cellular proliferation (Hayflick limit), the loss of telomeres, etc. This article is a first step in this direction: the natural mechanism coupling the overall rate of protein synthesis to the level of cellular bio-energetics is shown. This mechanism is built into the translation machinery and is based on the dependence of the recirculation rate of eukaryotic initiation factor 2 (eIF2) on the ATP/ADP ratio created by the mitochondrial bio-energetic machinery.
Multiobjective optimization in structural design with uncertain parameters and stochastic processes
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
The application of multiobjective optimization techniques to structural design problems involving uncertain parameters and random processes is studied. The design of a cantilever beam with a tip mass subjected to a stochastic base excitation is considered for illustration. Several of the problem parameters are assumed to be random variables and the structural mass, fatigue damage, and negative of natural frequency of vibration are considered for minimization. The solution of this three-criteria design problem is found by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It is observed that the game theory approach is superior in finding a better optimum solution, assuming the proper balance of the various objective functions. The procedures used in the present investigation are expected to be useful in the design of general dynamic systems involving uncertain parameters, stochastic process, and multiple objectives.
A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem.
Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming
2015-01-01
Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereby reduce airspace congestion and flight delays. The conventional ATFM models assume the capacities of airports or airspace sectors are all predetermined. However, capacity uncertainties due to the dynamics of convective weather may make deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computational experiments using real-world air traffic network data associated with simulated weather data show that the presented model has far fewer constraints than a stochastic model with nonanticipativity constraints, which reduces the computational complexity.
Three essays on multi-level optimization models and applications
NASA Astrophysics Data System (ADS)
Rahdar, Mohammad
The general form of a multi-level mathematical programming problem is a set of nested optimization problems, in which each level controls a series of decision variables independently. However, the value of decision variables may also impact the objective function of other levels. A two-level model is called a bilevel model and can be considered as a Stackelberg game with a leader and a follower. The leader anticipates the response of the follower and optimizes its objective function, and then the follower reacts to the leader's action. The multi-level decision-making model has many real-world applications such as government decisions, energy policies, market economy, network design, etc. However, capable algorithms for solving medium- and large-scale problems of these types are lacking. The dissertation is devoted to both theoretical research and applications of multi-level mathematical programming models; it consists of three parts, each in paper format. The first part studies the renewable energy portfolio under two major renewable energy policies. The potential competition for biomass for the growth of the renewable energy portfolio in the United States and other interactions between the two policies over the next twenty years are investigated. This problem mainly has two levels of decision makers: the government/policy makers and the biofuel producers/electricity generators/farmers. We focus on the lower-level problem to predict the amount of capacity expansions, fuel production, and power generation. In the second part, we address uncertainty over demand and lead time in a multi-stage mathematical programming problem. We propose a two-stage tri-level optimization model based on a rolling-horizon approach to reduce the dimensionality of the multi-stage problem. In the third part of the dissertation, we introduce a new branch and bound algorithm to solve bilevel linear programming problems.
The total time is reduced by solving a smaller relaxation problem in each node and decreasing the number of iterations. Computational experiments show that the proposed algorithm is faster than the existing ones.
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.
1971-01-01
An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained with a 600 MeV cyclotron are given.
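The creeping random search itself is simple to sketch: perturb the current parameter vector, reject infeasible candidates (mirroring how the parameter constraints above rule out infeasible solutions), and keep only improving moves. The objective, bounds, and step size below are illustrative assumptions, not the beam-transport model.

```python
import random

def creeping_random_search(f, x0, bounds, sigma=0.1, iters=2000, seed=0):
    """Sequential random perturbation ('creeping random search'):
    Gaussian-perturb the current point, discard moves that leave the
    feasible box, and accept a feasible move only if it improves f."""
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        if all(lo <= c <= hi for c, (lo, hi) in zip(cand, bounds)):
            fc = f(cand)
            if fc < fx:
                x, fx = cand, fc          # greedy acceptance
    return x, fx
```

Because infeasible candidates are rejected before evaluation, the search can never return an infeasible parameter set, at the cost of wasted proposals near the boundary.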
Xie, Zhi-Peng; Liu, Xue-Song; Chen, Yong; Cai, Ming; Qu, Hai-Bin; Cheng, Yi-Yu
2007-05-01
Multi-stage countercurrent extraction technology, integrating solvent extraction and repercolation with dynamic and countercurrent extraction, is a novel extraction technology for traditional Chinese medicine. This solvent-saving, energy-saving, high-extraction-efficiency technology drives active compounds to diffuse from the herbal materials into the solvent stage by stage, by maintaining concentration differences between the herbal materials and the solvents. This paper reviews the basic principle, the influencing factors, and the research progress and trends in the equipment and applications of multi-stage countercurrent extraction.
Full 3D Analysis of the GE90 Turbofan Primary Flowpath
NASA Technical Reports Server (NTRS)
Turner, Mark G.
2000-01-01
The multistage simulations of the GE90 turbofan primary flowpath components have been performed. The multistage CFD code, APNASA, has been used to analyze the fan, fan OGV and booster, the 10-stage high-pressure compressor, and the entire turbine system of the GE90 turbofan engine. The code has two levels of parallelism and, for the 18-blade-row full turbine simulation, achieves 87.3 percent parallel efficiency with 121 processors on an SGI ORIGIN. Grid generation is accomplished with the multistage Average Passage Grid Generator, APG. Results for each component are shown which compare favorably with test data.
A Multistage Approach for Image Registration.
Bowen, Francis; Hu, Jianghai; Du, Eliza Yingzi
2016-09-01
Successful image registration is an important step for object recognition, target detection, remote sensing, multimodal content fusion, scene blending, and disaster assessment and management. The geometric and photometric variations between images adversely affect the ability of an algorithm to estimate the transformation parameters that relate the two images. Local deformations, lighting conditions, object obstructions, and perspective differences all contribute to the challenges faced by traditional registration techniques. In this paper, a novel multistage registration approach is proposed that is resilient to viewpoint differences, image content variations, and lighting conditions. Robust registration is realized through the utilization of a novel region descriptor, which is coupled with the spatial and texture characteristics of invariant feature points. The proposed region descriptor is exploited in a multistage approach, which allows the utilization of the graph-based descriptor in many scenarios and thus allows the algorithm to be applied to a broader set of images. Each successive stage of the registration technique is evaluated through an effective similarity metric which determines subsequent action. The registration of aerial and street-view images from pre- and post-disaster scenes provides strong evidence that the proposed method estimates more accurate global transformation parameters than traditional feature-based methods. Experimental results show the robustness and accuracy of the proposed multistage image registration methodology.
Yu Wei; Michael Bevers; Erin Belval; Benjamin Bird
2015-01-01
This research developed a chance-constrained two-stage stochastic programming model to support wildfire initial attack resource acquisition and location on a planning unit for a fire season. Fire growth constraints account for the interaction between fire perimeter growth and construction to prevent overestimation of resource requirements. We used this model to examine...
Risk management for sulfur dioxide abatement under multiple uncertainties
NASA Astrophysics Data System (ADS)
Dai, C.; Sun, W.; Tan, Q.; Liu, Y.; Lu, W. T.; Guo, H. C.
2016-03-01
In this study, interval-parameter programming, two-stage stochastic programming (TSP), and conditional value-at-risk (CVaR) were incorporated into a general optimization framework, leading to an interval-parameter CVaR-based two-stage programming (ICTP) method. The ICTP method had several advantages: (i) its objective function simultaneously took expected cost and risk cost into consideration, and also used discrete random variables and discrete intervals to reflect uncertain properties; (ii) it quantitatively evaluated the right tail of distributions of random variables which could better calculate the risk of violated environmental standards; (iii) it was useful for helping decision makers to analyze the trade-offs between cost and risk; and (iv) it was effective to penalize the second-stage costs, as well as to capture the notion of risk in stochastic programming. The developed model was applied to sulfur dioxide abatement in an air quality management system. The results indicated that the ICTP method could be used for generating a series of air quality management schemes under different risk-aversion levels, for identifying desired air quality management strategies for decision makers, and for considering a proper balance between system economy and environmental quality.
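The CVaR ingredient of the ICTP objective can be illustrated numerically. The sketch below (Python, with made-up lognormal scenario costs rather than the study's data) estimates CVaR as the mean of the tail above the value-at-risk quantile, a common sample-based estimator, and blends it with the expected cost through a risk-aversion weight:

```python
import numpy as np

def cvar(costs, alpha=0.95):
    """Conditional value-at-risk: mean cost in the worst (1 - alpha) tail."""
    costs = np.asarray(costs, dtype=float)
    threshold = np.quantile(costs, alpha)      # value-at-risk at level alpha
    return costs[costs >= threshold].mean()

# Hypothetical second-stage abatement costs over 1000 random scenarios
rng = np.random.default_rng(0)
scenario_costs = rng.lognormal(mean=3.0, sigma=0.5, size=1000)

expected_cost = scenario_costs.mean()
risk_cost = cvar(scenario_costs, alpha=0.95)

# A risk-averse objective blends expectation and tail risk via a weight lam
lam = 0.4
objective = (1 - lam) * expected_cost + lam * risk_cost
```

Sweeping `lam` from 0 to 1 traces out the kind of cost-versus-risk trade-off curves the abstract describes.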
Multi-hazard evacuation route and shelter planning for buildings.
DOT National Transportation Integrated Search
2014-06-01
A bi-level, two-stage, binary stochastic program with equilibrium constraints, and three variants, are presented that support the planning and design of shelters and exits, along with hallway fortification strategies and associated evacuation pat...
Wang, Jun-Sheng; Yang, Guang-Hong
2017-07-25
This paper studies the optimal output-feedback control problem for unknown linear discrete-time systems with stochastic measurement and process noise. A dithered Bellman equation with the innovation covariance matrix is constructed via the expectation operator given in the form of a finite summation. On this basis, an output-feedback-based approximate dynamic programming method is developed, where the terms depending on the innovation covariance matrix are available with the aid of the innovation covariance matrix identified beforehand. Therefore, by iterating the Bellman equation, the resulting value function can converge to the optimal one in the presence of the aforementioned noise, and the nearly optimal control laws are delivered. To show the effectiveness and the advantages of the proposed approach, a simulation example and a velocity control experiment on a dc machine are employed.
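The core of any such approximate dynamic programming scheme is iterating a Bellman equation to a fixed point. The toy below is a generic discrete-MDP value iteration in Python with invented numbers; it is an illustration of Bellman iteration in general, not the paper's output-feedback, covariance-identified scheme:

```python
import numpy as np

# Toy 2-state, 2-action MDP (hypothetical numbers, purely illustrative)
P = np.array([                      # P[a, s, s'] transition probabilities
    [[0.9, 0.1], [0.2, 0.8]],       # action 0
    [[0.5, 0.5], [0.1, 0.9]],       # action 1
])
R = np.array([[1.0, 0.0],           # R[a, s] immediate reward
              [0.5, 2.0]])
gamma = 0.9                         # discount factor

V = np.zeros(2)
for _ in range(1000):               # iterate the Bellman optimality operator
    Q = R + gamma * (P @ V)         # Q[a, s] action values
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmax(axis=0)           # greedy policy from the converged values
```

Because the Bellman operator is a contraction for gamma < 1, the iteration converges to the unique optimal value function regardless of the starting guess.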
Probabilistic Structural Analysis Theory Development
NASA Technical Reports Server (NTRS)
Burnside, O. H.
1985-01-01
The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity is concentrating on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and Space Shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. It is the goal of the approximate methods effort to increase problem-solving efficiency relative to finite element methods by using energy methods to generate trial solutions which satisfy the structural boundary conditions. These approximate methods will be less computer intensive relative to the finite element approach.
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
NASA Technical Reports Server (NTRS)
Chamis, Christos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system, taking into account uncertainties of design variables; common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method.
The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods used performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when both were used to estimate the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. Also, LHS required fewer calculations than MC to obtain low-error answers with a high degree of confidence. It can therefore be stated that NESSUS is an important reliability tool that offers a variety of sound probabilistic methods, and the new LHS module is a valuable enhancement of the program.
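The key difference between the two samplers is stratification: LHS places exactly one draw in each of n equal-probability bins, while plain MC draws freely. A one-dimensional Python sketch with an illustrative integrand (nothing here reflects NESSUS internals):

```python
import numpy as np

def lhs_uniform(n, rng):
    """Latin hypercube sample on [0, 1): one draw in each of n equal strata."""
    strata = (np.arange(n) + rng.random(n)) / n   # one point per bin
    return rng.permutation(strata)                # random order, as in a column of an LHS design

rng = np.random.default_rng(42)
n = 200

# Estimate E[g(U)] for g(u) = u**2, whose true value is 1/3
mc_est = np.mean(rng.random(n) ** 2)            # plain Monte Carlo
lhs_est = np.mean(lhs_uniform(n, rng) ** 2)     # Latin hypercube sampling
```

For a smooth integrand the stratification bounds the LHS error by O(1/n), whereas plain MC converges only at O(1/sqrt(n)), which is one intuition for the lower estimation error reported above.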
Multi-stage separations based on dielectrophoresis
Mariella, Jr., Raymond P.
2004-07-13
A system utilizing multi-stage traps based on dielectrophoresis. Traps with electrodes arranged transverse to the flow and traps with electrodes arranged parallel to the flow with combinations of direct current and alternating voltage are used to trap, concentrate, separate, and/or purify target particles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Z.; Liu, C.; Botterud, A.
Renewable energy resources have been rapidly integrated into power systems in many parts of the world, contributing to a cleaner and more sustainable supply of electricity. Wind and solar resources also introduce new challenges for system operations and planning in terms of economics and reliability because of their variability and uncertainty. Operational strategies based on stochastic optimization have been developed recently to address these challenges. In general terms, these stochastic strategies either embed uncertainties into the scheduling formulations (e.g., the unit commitment [UC] problem) in probabilistic forms or develop more appropriate operating reserve strategies to take advantage of advanced forecasting techniques. Other approaches to address uncertainty are also proposed, where operational feasibility is ensured within an uncertainty set of forecasting intervals. In this report, a comprehensive review is conducted to present the state of the art through Spring 2015 in the area of stochastic methods applied to power system operations with high penetration of renewable energy. Chapters 1 and 2 give a brief introduction and overview of power system and electricity market operations, as well as the impact of renewable energy and how this impact is typically considered in modeling tools. Chapter 3 reviews relevant literature on operating reserves and specifically probabilistic methods to estimate the need for system reserve requirements. Chapter 4 looks at stochastic programming formulations of the UC and economic dispatch (ED) problems, highlighting benefits reported in the literature as well as recent industry developments. Chapter 5 briefly introduces alternative formulations of UC under uncertainty, such as robust, chance-constrained, and interval programming. Finally, in Chapter 6, we conclude with the main observations from our review and important directions for future work.
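A minimal illustration of the two-stage stochastic UC idea: commit units before the wind outcome is known, then dispatch as recourse in each scenario. All numbers are invented, and the tiny instance is solved by enumeration rather than the MILP formulations a real system would use:

```python
from itertools import product

# Toy single-period stochastic unit commitment (all numbers invented)
units = [               # (fixed commitment cost, marginal cost, capacity in MW)
    (100.0, 20.0, 50.0),
    (60.0, 35.0, 40.0),
]
demand = 70.0           # MW
voll = 500.0            # value of lost load, cost per MWh unserved
scenarios = [(0.3, 0.0), (0.4, 15.0), (0.3, 30.0)]  # (probability, wind MW)

def dispatch_cost(commit, wind):
    """Cheapest recourse dispatch of the committed units in one wind scenario."""
    need = max(demand - wind, 0.0)
    cost = 0.0
    for on, (_, mc, cap) in sorted(zip(commit, units), key=lambda u: u[1][1]):
        take = min(need, cap) if on else 0.0   # load units in merit order
        cost += mc * take
        need -= take
    return cost + voll * need                  # penalize any unserved energy

def expected_total(commit):
    fixed = sum(u[0] for on, u in zip(commit, units) if on)
    return fixed + sum(p * dispatch_cost(commit, w) for p, w in scenarios)

best = min(product([0, 1], repeat=len(units)), key=expected_total)
```

Here the expensive second unit is committed purely as a hedge against the low-wind scenario, which is exactly the effect stochastic UC formulations are designed to capture.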
NASA Astrophysics Data System (ADS)
Teoh, Joanne Ee Mei; Zhao, Yue; An, Jia; Chua, Chee Kai; Liu, Yong
2017-12-01
Shape memory polymers (SMPs) have gained a presence in additive manufacturing due to their role in 4D printing. They can be printed either in multi-materials for multi-stage shape recovery or in a single material for single-stage shape recovery. When printed in multi-materials, material or material-based design is used as a controlling factor for multi-stage shape recovery. However, when printed in a single material, it is difficult to design multi-stage shape recovery due to the lack of a controlling factor. In this research, we explore the use of geometric thickness as a controlling factor to design smart structures possessing multi-stage shape recovery using a single SMP. L-shaped hinges with a thickness ranging from 0.3 to 2 mm were designed and printed in four different SMPs. The effect of thickness on the SMP's response time was examined via both experiment and finite element analysis using Ansys transient thermal simulation. A method was developed to accurately measure the response time with millisecond resolution. Temperature distribution and heat transfer in specimens during thermal activation were also simulated and discussed. Finally, a spiral square and an artificial flower consisting of a single SMP were designed and printed with appropriate thickness variation for the demonstration of a controlled multi-stage shape recovery. Experimental results indicated that smart structures printed using a single material with controlled thickness parameters are able to achieve controlled shape recovery characteristics similar to those printed with multiple materials and uniform geometric thickness. Hence, the geometric parameter can be used to increase the degree of freedom in designing future smart structures possessing complex shape recovery characteristics.
Approximate Dynamic Programming for Military Resource Allocation
2014-12-26
…will provide analysts with a means for effectively determining which weapons concepts to explore further, how to appropriately fit a set of aircraft… in which optimization of the multi-stage DWTA is used to determine optimal weaponeering of aircraft. Because of its flexibility and applicability to…
1988-05-01
…which, in turn, is controlled by the units above it. Dynamic programming is a mathematical technique well suited for optimization of multistage models. …interval to a desired accuracy. Several region elimination methods have been discussed in the literature, including the Golden Section, Fibonacci…
1964-03-01
The flame and exhaust from the test firing of an F-1 engine blast out from the Saturn S-IB Static Test Stand in the east test area of the Marshall Space Flight Center. A cluster of five F-1 engines, located in the S-IC (first) stage of the Saturn V vehicle, provided over 7,500,000 pounds of thrust to launch the giant rocket. The towering 363-foot Saturn V was a multistage, multiengine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.
New generation of universal modeling for centrifugal compressors calculation
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.
2015-08-01
The Universal Modeling method has been in constant use since the mid-1990s. The newest, sixth version of the method is presented below. The flow path configuration of 3D impellers is presented in detail. It is possible to optimize the meridian configuration, including hub/shroud curvatures, axial length, leading edge position, etc. The new model of the vaned diffuser includes a flow non-uniformity coefficient based on CFD calculations. The loss model was built from the results of 37 experiments with compressor stages of different flow rates and loading factors. One common set of empirical coefficients in the loss model guarantees the efficiency definition within an accuracy of 0.86% at the design point and 1.22% along the performance curve. For model verification, the performances of four multistage compressors with vane and vaneless diffusers were calculated. Two of these compressors have quite unusual flow paths. The modeling results were quite satisfactory in spite of these peculiarities. One sample of the verification calculations is presented in the text. This sixth version of the developed computer program is already being applied successfully in design practice.
NASA Technical Reports Server (NTRS)
Andrews, William H.; Holleman, Euclid C.
1960-01-01
An investigation was conducted to determine a human pilot's ability to control a multistage vehicle through the launch trajectory. The simulation was performed statically and dynamically by utilizing a human centrifuge. An interesting byproduct of the program was the three-axis side-located controller incorporated for pilot control inputs. This method of control proved to be acceptable for the successful completion of the tracking task during the simulation. There was no apparent effect of acceleration on the mechanical operation of the controller, but the pilot's control feel deteriorated as his dexterity decreased at high levels of acceleration. The application of control in a specific control mode was not difficult. However, coordination of more than one mode was difficult, and, in many instances, resulted in inadvertent control inputs. The acceptable control harmony at an acceleration level of 1 g became unacceptable at higher acceleration levels. Proper control-force harmony for a particular control task appears to be more critical for a three-axis controller than for conventional controllers. During simulations in which the pilot wore a pressure suit, the nature of the suit gloves further aggravated this condition.
NASA Astrophysics Data System (ADS)
Kijanka, Piotr; Jablonski, Adam; Dziedziech, Kajetan; Dworakowski, Ziemowit; Uhl, Tadeusz
2016-04-01
A large number of commercial systems for condition monitoring of the most common planetary gearboxes used in wind turbines and mining machinery have been developed over the years. Nowadays, however, multistage constructions, not necessarily planetary but generally epicyclic, are encountered in industry. The current state of the art, to the authors' knowledge, does not give general equations for multistage systems in which some of the gears consist entirely of moving parts. Hence, currently available CMS systems are not suitable for condition monitoring of these kinds of systems. This paper presents a new general equation, derived through theoretical investigation, that allows calculating the characteristic frequencies of any kind of multistage gear set. The illustrated solution does not assume a fixed speed for any element. Moreover, the presented equation takes corrected teeth into account, making the developed equations the most general available in tribology science. The presented development is currently being implemented in a modern European CMS.
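The paper's general equation is not reproduced here, but a simplified special case shows the flavor: characteristic mesh frequencies follow from tooth counts and the Willis kinematic relation. A Python sketch for a fixed-axis stage and a single planetary stage, with illustrative numbers:

```python
def mesh_frequency(f_shaft, n_teeth):
    """Gear mesh frequency of a fixed-axis stage: shaft speed times tooth count."""
    return f_shaft * n_teeth

def planetary_mesh_frequency(f_sun, z_sun, z_ring, f_ring=0.0):
    """Mesh frequency of a planetary stage with a possibly rotating ring gear.

    The carrier speed follows from the Willis relation
    z_sun * f_sun + z_ring * f_ring = (z_sun + z_ring) * f_carrier.
    """
    f_carrier = (z_sun * f_sun + z_ring * f_ring) / (z_sun + z_ring)
    return z_ring * abs(f_ring - f_carrier)

# Example: 25 Hz sun shaft, 21-tooth sun, 81-tooth ring, stationary ring
gmf = planetary_mesh_frequency(25.0, 21, 81)
```

Note that allowing `f_ring` to be nonzero already drops the fixed-element assumption for the ring; the paper's contribution generalizes this to stages where every element may move and to corrected teeth.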
On use of the multistage dose-response model for assessing laboratory animal carcinogenicity
Nitcheva, Daniella; Piegorsch, Walter W.; West, R. Webster
2007-01-01
We explore how well a statistical multistage model describes dose-response patterns in laboratory animal carcinogenicity experiments from a large database of quantal response data. The data are collected from the U.S. EPA’s publicly available IRIS data warehouse and examined statistically to determine how often higher-order values in the multistage predictor yield significant improvements in explanatory power over lower-order values. Our results suggest that the addition of a second-order parameter to the model only improves the fit about 20% of the time, while adding even higher-order terms apparently does not contribute to the fit at all, at least with the study designs we captured in the IRIS database. Also included is an examination of statistical tests for assessing significance of higher-order terms in a multistage dose-response model. It is noted that bootstrap testing methodology appears to offer greater stability for performing the hypothesis tests than a more-common, but possibly unstable, “Wald” test. PMID:17490794
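The multistage model under discussion is compact enough to state directly: P(d) = 1 - exp(-(q0 + q1 d + ... + qk d^k)) with nonnegative q_i. A Python sketch with hypothetical coefficients (not values fitted to IRIS data):

```python
import math

def multistage_response(dose, q):
    """Multistage model: P(d) = 1 - exp(-(q0 + q1*d + q2*d**2 + ...)), q_i >= 0."""
    hazard = sum(qi * dose**i for i, qi in enumerate(q))
    return 1.0 - math.exp(-hazard)

def extra_risk(dose, q):
    """Extra risk over background, the usual quantity in benchmark-dose work."""
    p0 = multistage_response(0.0, q)
    return (multistage_response(dose, q) - p0) / (1.0 - p0)

# Hypothetical coefficients: q0 = background, q1 = linear, q2 = second-order term
q = [0.05, 0.02, 0.001]
risk_at_5 = extra_risk(5.0, q)
```

In this notation, the paper's finding is that fits with q2 (and higher) set to zero are adequate for roughly 80% of the IRIS data sets examined.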
Influence of dispatching rules on average production lead time for multi-stage production systems.
Hübl, Alexander; Jodlbauer, Herbert; Altendorfer, Klaus
2013-08-01
In this paper the influence of different dispatching rules on the average production lead time is investigated. Two theorems based on covariance between processing time and production lead time are formulated and proved theoretically. Theorem 1 links the average production lead time to the "processing time weighted production lead time" for the multi-stage production systems analytically. The influence of different dispatching rules on average lead time, which is well known from simulation and empirical studies, can be proved theoretically in Theorem 2 for a single stage production system. A simulation study is conducted to gain more insight into the influence of dispatching rules on average production lead time in a multi-stage production system. We find that the "processing time weighted average production lead time" for a multi-stage production system is not invariant of the applied dispatching rule and can be used as a dispatching rule independent indicator for single-stage production systems.
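The rule-dependence of average lead time in the single-stage setting of Theorem 2 is easy to reproduce: with all jobs queued at time zero, shortest-processing-time (SPT) sequencing lowers the average lead time relative to FIFO. A minimal sketch with invented processing times:

```python
def average_lead_time(processing_times, rule):
    """Average production lead time when all jobs are queued at time zero."""
    order = sorted(processing_times) if rule == "SPT" else list(processing_times)
    clock, total = 0.0, 0.0
    for p in order:
        clock += p          # a job completes after everything sequenced before it
        total += clock      # queue entry at time zero, so lead time = completion
    return total / len(order)

jobs = [8.0, 2.0, 5.0, 1.0, 9.0, 3.0]   # invented processing times, FIFO order
fifo = average_lead_time(jobs, "FIFO")
spt = average_lead_time(jobs, "SPT")    # shortest processing time first
```

SPT wins here because short jobs are moved ahead of long ones, reducing the waiting time of the many jobs behind them; this is the covariance effect between processing time and lead time that the theorems formalize.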
Multistage Polymeric Lens Structures Integrated into Silica Waveguides
NASA Astrophysics Data System (ADS)
Tate, Atsushi; Suzuki, Takanori; Tsuda, Hiroyuki
2006-08-01
A waveguide lens, composed of multistage polymer-filled thin grooves in a silica planar lightwave circuit (PLC), is proposed and a low-loss structure has been designed. A waveguide lens in a silica slab waveguide has been fabricated using reactive ion etching (RIE) and formed by filling with polymer. Both an imaging optical system and a Fourier-transform optical system can be configured in a PLC using a waveguide lens. It renders the PLC functional and its design flexible. To obtain a shorter focal length with a low insertion loss, it is more effective to use a multistage lens structure. An imaging optical system and a Fourier-transform optical system with a focal length of less than 1000 μm were fabricated in silica waveguides using a multistage lens structure. The lens imaging waveguides incorporate a 16-24-stage lens, with insertion losses of 4-7 dB. A 4 × 4 optical coupler, using a Fourier-transform optical system, utilizes a 6-stage lens with losses of 2-4 dB.
Multistage degradation modeling for BLDC motor based on Wiener process
NASA Astrophysics Data System (ADS)
Yuan, Qingyang; Li, Xiaogang; Gao, Yuankai
2018-05-01
Brushless DC motors are widely used, and their working temperatures, regarded as degradation processes, are nonlinear and multistage. It is therefore necessary to establish a nonlinear degradation model. This study was based on accelerated degradation data of motors, namely their working temperatures. A multistage Wiener model was established by using a transition function to modify the linear model. A normal weighted average filter (Gaussian filter) was used to improve the estimation of the model parameters. Then, to maximize the likelihood function for parameter estimation, a numerical optimization method, the simplex method, was applied iteratively. The modeling results show that the degradation mechanism changes during the degradation of a high-speed motor. The effectiveness and rationality of the model are verified by comparing its life distribution with that of the widely used nonlinear Wiener model, as well as by comparing Q-Q plots of the residuals. Finally, predictions of motor life are obtained from the life distributions at different times calculated by the multistage model.
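A minimal simulation of such a path: a Wiener process whose drift switches between two stages through a logistic transition function, mirroring the modified-linear-model idea. All parameters below are invented, not estimates from the motor data:

```python
import numpy as np

def multistage_wiener(t, drifts, change_point, sigma, rng, smooth=5.0):
    """Simulate a two-stage Wiener degradation path X(t) = m(t) + sigma * B(t).

    The drift blends from drifts[0] to drifts[1] around change_point via a
    logistic transition function instead of a hard switch.
    """
    dt = np.diff(t, prepend=t[0])
    w = 1.0 / (1.0 + np.exp(-(t - change_point) / smooth))  # transition weight
    mu = (1.0 - w) * drifts[0] + w * drifts[1]              # stagewise drift
    steps = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(len(t))
    return np.cumsum(steps)

rng = np.random.default_rng(7)
t = np.linspace(0.0, 200.0, 401)
path = multistage_wiener(t, drifts=(0.05, 0.3), change_point=120.0,
                         sigma=0.2, rng=rng)
```

Fitting would then estimate the drifts, the change point, and sigma by maximum likelihood over observed increments; the smooth transition keeps the likelihood differentiable at the stage boundary.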
A Nondeterministic Resource Planning Model in Education
ERIC Educational Resources Information Center
Yoda, Koji
1977-01-01
Discusses a simple technique for stochastic resource planning that, when computerized, can assist educational managers in the process of quantifying the future uncertainty, thereby, helping them make better decisions. The example used is a school lunch program. (Author/IRT)
Modeling Stochastic Energy and Water Consumption to Manage Residential Water Uses
NASA Astrophysics Data System (ADS)
Abdallah, A. M.; Rosenberg, D. E.; Water; Energy Conservation
2011-12-01
Water-energy linkages have received growing attention from water and energy utilities as they recognize that collaborative efforts can implement more effective conservation and efficiency improvement programs at lower cost and with less effort. To date, limited energy-water household data has allowed only deterministic analysis of average, representative households and required coarse assumptions, such as treating the water heater (the primary energy use in a home apart from heating and cooling) as a single end use. Here, we use recently available disaggregated hot and cold water household end-use data to estimate water and energy consumption for toilet, shower, faucet, dishwasher, laundry machine, leaks, and other household uses, and the savings from appliance retrofits. The disaggregated hot water and bulk water end-use data were previously collected by the USEPA for 96 single-family households in Seattle, WA, Oakland, CA, and Tampa, FL, between 2000 and 2003, for two weeks before and four weeks after each household was retrofitted with water-efficient appliances. Using the disaggregated data, we developed a stochastic model that represents the factors that influence water use for each appliance: behavioral (use frequency and duration), demographic (household size), and technological (use volume or flow rate). We also include stochastic factors that govern the energy to heat hot water: the hot water fraction (percentage of hot water volume to total water volume used in a given end-use event), heater intake and dispense temperatures, and the energy source for the heater (gas, electric, etc.). From the empirical household end-use data, we derive probability distributions for each water and energy factor, where each distribution represents the range and likelihood of values the factor may take.
The uncertainty of the stochastic water and energy factors is propagated using Monte Carlo simulations to calculate the composite probability distribution for water and energy use, potential savings, and payback periods to install efficient water end-use appliances and fixtures. Stochastic model results show the distributions among households for (i) water end-use, (ii) energy consumed to use water, and (iii) financial payback periods. Compared to deterministic analysis, stochastic modeling results show that hot water fractions for appliances follow normal distributions with high standard deviation and reveal pronounced variations among households that significantly affect energy savings and payback period estimates. These distributions provide an important tool to select and size water conservation programs to simultaneously meet both water and energy conservation goals. They also provide a way to identify and target a small fraction of customers with potential to save large water volumes and energy from appliance retrofits. Future work will embed this household scale stochastic model in city-scale models to identify win-win water management opportunities where households save money by conserving water and energy while cities avoid costs, downsize, or delay infrastructure development.
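The propagation step itself is straightforward Monte Carlo over the factor distributions. A Python sketch for a single end use, with placeholder distributions rather than the study's fitted ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Placeholder per-event distributions for one end use (NOT the study's fits)
volume_l = rng.lognormal(mean=np.log(60.0), sigma=0.3, size=n)  # litres/event
hot_frac = np.clip(rng.normal(0.65, 0.15, size=n), 0.0, 1.0)    # hot-water share
t_in = rng.normal(15.0, 3.0, size=n)                            # intake temp, C
t_out = rng.normal(55.0, 2.0, size=n)                           # setpoint temp, C

# Energy to heat the hot-water share: E = m * c * dT, with c = 4186 J/(kg K)
# and 1 L of water taken as 1 kg; divide by 3.6e6 J/kWh
energy_kwh = volume_l * hot_frac * 4186.0 * (t_out - t_in) / 3.6e6

p5, p50, p95 = np.percentile(energy_kwh, [5, 50, 95])
```

Percentiles of the resulting distribution, rather than a single mean, are what allow the payback-period statements across households that the abstract emphasizes.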
Laboratory research of fracture geometry in multistage HFF in triaxial state
NASA Astrophysics Data System (ADS)
Bondarenko, T. M.; Hou, B.; Chen, M.; Yan, L.
2017-05-01
Multistage hydraulic fracturing of formation (HFF) in wells with horizontal completion is an efficient method for intensifying oil extraction which, as a rule, is used to develop nontraditional collectors. It is assumed that the complicated character of HFF fractures significantly influences the fracture geometry in the rock matrix. Numerous theoretical models proposed to predict the fracture geometry and the character of interaction of mechanical stresses in the multistage HFF have not been proved experimentally. In this paper, we present the results of laboratory modeling of the multistage HFF performed on a contemporary laboratory-scale plant in the triaxial stress state by using a gel solution as the HFF agent. As a result of the experiment, a fracturing pattern was formed in the cubic specimen of the model material. The laboratory results showed that a nearly plane fracture is formed at the first HFF stage, while a concave fracture is formed at the second HFF stage. The interaction of the stress fields created by the two principal HFF fractures results in the growth of secondary fractures whose directions turned out to be parallel to the modeled well bore. But this stress interference leads to a decrease in the width of the second principal fracture. It was discovered that the penny-shaped fracture model is more appropriate for predicting the geometry of HFF fractures in horizontal wells than the two-dimensional models of fracture propagation (PKN model, KGD model). A computational experiment based on the boundary element method was carried out to obtain the qualitative description of the multistage HFF processes. As a result, a mechanical model of fracture propagation was constructed, which was used to obtain the mechanical stress field (the stress contrast) and the fracture opening angle distribution over fracture length and fracture orientation direction.
The conclusions made in the laboratory modeling of the multistage HFF technology agree well with the conclusions made in the computational experiment. Special attention must be paid to the design of the HFF stage spacing density in the implementation of the multistage HFF in wells with horizontal completion.
Asymmetric and Stochastic Behavior in Magnetic Vortices Studied by Soft X-ray Microscopy
NASA Astrophysics Data System (ADS)
Im, Mi-Young
Asymmetry and stochasticity in spin processes are not only long-standing fundamental issues but also highly relevant to technological applications of nanomagnetic structures in memory and storage nanodevices. These nontrivial phenomena have been studied by direct imaging of spin structures in magnetic vortices utilizing magnetic transmission soft x-ray microscopy (BL6.1.2 at the ALS). Magnetic vortices have attracted enormous scientific interest due to their fascinating spin structures, consisting of circularity rotating clockwise (c = +1) or counter-clockwise (c = -1) and polarity pointing either up (p = +1) or down (p = -1). We observed a symmetry breaking in the formation process of vortex structures in circular permalloy (Ni80Fe20) disks. The generation rates of the two vortex groups with the signatures cp = +1 and cp = -1 are completely asymmetric. The asymmetric nature was interpreted to be triggered by the "intrinsic" Dzyaloshinskii-Moriya interaction (DMI) arising from spin-orbit coupling due to the lack of inversion symmetry near the disk surface, and by "extrinsic" factors such as roughness and defects. We also investigated the stochastic behavior of vortex creation in arrays of asymmetric disks. The stochasticity was found to be very sensitive to the geometry of the disk arrays, particularly the interdisk distance. The experimentally observed phenomenon could not be explained by the thermal fluctuation effect, which has been considered the main reason for stochastic behavior in spin processes. We demonstrated for the first time that the ultrafast dynamics at the early stage of vortex creation, which has the character of classical chaos, significantly affects the stochastic nature observed at the steady state in asymmetric disks.
This work provided the new perspective of dynamics as a critical factor contributing to the stochasticity in spin processes and also the possibility for the control of the intrinsic stochastic nature by optimizing the design of asymmetric disk arrays. This work was supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, by Leading Foreign Research Institute Recruitment Program through the NRF.
MONALISA for stochastic simulations of Petri net models of biochemical systems.
Balazki, Pavel; Lindauer, Klaus; Einloft, Jens; Ackermann, Jörg; Koch, Ina
2015-07-10
The concept of Petri nets (PN) is widely used in systems biology and allows modeling of complex biochemical systems such as metabolic systems, signal transduction pathways, and gene expression networks. In particular, PN allows topological analysis based on structural properties, which is important and useful when quantitative (kinetic) data are incomplete or unknown. When the kinetic parameters are known, simulation of the time evolution of such models can help to study the dynamic behavior of the underlying system. If the number of involved entities (molecules) is low, a stochastic simulation should be preferred over the classical deterministic approach of solving ordinary differential equations. The Stochastic Simulation Algorithm (SSA) is a common method for such simulations. The combination of qualitative and semi-quantitative PN modeling with stochastic analysis techniques provides a valuable approach in the field of systems biology. Here, we describe the implementation of stochastic analysis in a PN environment. We extended MONALISA, an open-source software tool for the creation, visualization, and analysis of PN, with several stochastic simulation methods. The simulation module offers four simulation modes, among them the stochastic mode with constant firing rates and Gillespie's algorithm in exact and approximate versions. The simulator is operated through a user-friendly graphical interface and accepts input data such as concentrations and reaction rate constants that are common parameters in the biological context. The key features of the simulation module are visualization of simulations, interactive plotting, export of results into a text file, mathematical expressions for describing simulation parameters, and up to 500 parallel simulations of the same parameter set. To illustrate the method we discuss a model of insulin receptor recycling as a case study.
We present a software that combines the modeling power of Petri nets with stochastic simulation of dynamic processes in a user-friendly environment supported by an intuitive graphical interface. The program offers a valuable alternative to modeling, using ordinary differential equations, especially when simulating single-cell experiments with low molecule counts. The ability to use mathematical expressions provides an additional flexibility in describing the simulation parameters. The open-source distribution allows further extensions by third-party developers. The software is cross-platform and is licensed under the Artistic License 2.0.
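The exact SSA mentioned in this abstract (Gillespie's direct method) can be illustrated with a short sketch. The reaction, species names, and rate constant below are invented for illustration and are not taken from MONALISA:

```python
import math
import random

def gillespie_direct(stoich, propensities, x0, t_max, seed=0):
    """Gillespie's direct-method SSA: sample the exponential waiting time to
    the next reaction, pick the reaction with probability proportional to its
    propensity, then apply its stoichiometric change."""
    rng = random.Random(seed)
    x = list(x0)
    t = 0.0
    history = [(t, tuple(x))]
    while t < t_max:
        a = [f(x) for f in propensities]
        a0 = sum(a)
        if a0 == 0.0:                       # no reaction can fire any more
            break
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        r = rng.random() * a0                     # choose a reaction channel
        acc = 0.0
        for j, aj in enumerate(a):
            acc += aj
            if r < acc:
                for i, nu in enumerate(stoich[j]):
                    x[i] += nu
                break
        history.append((t, tuple(x)))
    return history

# Hypothetical one-reaction system A -> B with mass-action propensity k*A.
k = 0.5
history = gillespie_direct(
    stoich=[(-1, +1)],                   # reaction: A decreases, B increases
    propensities=[lambda x: k * x[0]],
    x0=(100, 0),
    t_max=50.0,
)
```

Because each event converts exactly one A into one B, the total molecule count is conserved along the whole trajectory, which is a quick sanity check on any SSA implementation.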
Numerical solutions of 2-D multi-stage rotor/stator unsteady flow interactions
NASA Astrophysics Data System (ADS)
Yang, R.-J.; Lin, S.-J.
1991-01-01
The Rai method for single-stage rotor/stator flow interaction is extended to handle multistage configurations. In this study, a two-dimensional Navier-Stokes multi-zone approach was used to investigate unsteady flow interactions within two multistage axial turbines. The governing equations are solved by an iterative, factored, implicit, finite-difference, upwind algorithm. Numerical accuracy is checked by investigating the effect of time step size, the effect of subiteration in the Newton-Raphson technique, and the effect of full viscous versus thin-layer approximation. Computed results compared well with experimental data. Unsteady flow interactions, wake cutting, and the associated evolution of vortical entities are discussed.
Maisotsenko cycle applications in multi-stage ejector recycling module for chemical production
NASA Astrophysics Data System (ADS)
Levchenko, D. O.; Artyukhov, A. E.; Yurko, I. V.
2017-08-01
The article is devoted to the theoretical bases of multistage (multi-level) utilization modules as part of chemical plants (using the example of the technological line for obtaining nitrogen fertilizers). The possibility of recycling production waste (ammonia vapors, dust and substandard nitrogen fertilizers) using ejection devices, and of recycling waste heat using Maisotsenko cycle technology (Maisotsenko heat and mass exchanger (HMX), Maisotsenko power cycles and recuperators, etc.), is substantiated. The principle of operation of the studied recycling module and the prospects for its implementation are presented. An improved technological scheme for obtaining granular fertilizers and granules with a porous structure using the multistage (multi-level) recycling module is proposed.
Multi-stage decoding of multi-level modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.
1991-01-01
Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum soft-decision decoding of the code is very small: only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10^-6.
Shock and vibration response of multistage structure
NASA Technical Reports Server (NTRS)
Lee, S. Y.; Liyeos, J. G.; Tang, S. S.
1968-01-01
The study of the shock and vibration response of a multistage structure employed analytical lumped-mass, continuous-beam, multimode, and matrix-iteration methods. The study examined the load paths, transmissibility, and attenuation properties along the longitudinal axis of a long, slender structure of increasing complexity.
Microfiltration of thin stillage: Process simulation and economic analyses
USDA-ARS?s Scientific Manuscript database
In plant scale operations, multistage membrane systems have been adopted for cost minimization. We considered design optimization and operation of a continuous microfiltration (MF) system for the corn dry grind process. The objectives were to develop a model to simulate a multistage MF system, optim...
40 CFR 600.316-08 - Multistage manufacture.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 31 2013-07-01 2013-07-01 false Multistage manufacture. 600.316-08 Section 600.316-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Labeling § 600.316-08...
40 CFR 600.316-08 - Multistage manufacture.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 31 2012-07-01 2012-07-01 false Multistage manufacture. 600.316-08 Section 600.316-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Labeling § 600.316-08...
40 CFR 600.316-08 - Multistage manufacture.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 30 2014-07-01 2014-07-01 false Multistage manufacture. 600.316-08 Section 600.316-08 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Labeling § 600.316-08...
Oil-free centrifugal hydrogen compression technology demonstration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heshmat, Hooshang
2014-05-31
One of the key elements in realizing a mature market for hydrogen vehicles is the deployment of a safe and efficient hydrogen production and delivery infrastructure on a scale that can compete economically with current fuels. The challenge, however, is that hydrogen, being the lightest and smallest of gases with a lower viscosity and density than natural gas, readily migrates through small spaces and is difficult to compress efficiently. While efficient and cost-effective compression technology is crucial to effective pipeline delivery of hydrogen, the compression methods used currently rely on oil-lubricated positive displacement (PD) machines. PD compression technology is very costly, has poor reliability and durability, especially for components subjected to wear (e.g., valves, rider bands and piston rings), and contaminates hydrogen with lubricating fluid. Even so-called "oil-free" machines use oil lubricants that migrate into and contaminate the gas path. Due to the poor reliability of PD compressors, current hydrogen producers often install duplicate units in order to maintain on-line times of 98-99%. Such machine redundancy adds substantially to system capital costs. As such, DOE deemed that low capital cost, reliable, efficient and oil-free advanced compressor technologies are needed. MiTi's solution is a completely oil-free, multi-stage, high-speed, centrifugal compressor designed for a flow capacity of 500,000 kg/day with a discharge pressure of 1200 psig. The design employs oil-free compliant foil bearings and seals to allow for very high operating speeds, totally contamination-free operation, long life and reliability. This design meets the DOE's performance targets, achieves an extremely aggressive specific power metric of 0.48 kW-hr/kg, and provides significant improvements in reliability/durability, energy efficiency, sealing and freedom from contamination.
The multi-stage compressor system concept has been validated through full-scale performance testing of a single stage with helium similitude gas at full speed in accordance with ASME PTC-10. The experimental results indicated that aerodynamic performance, with respect to compressor discharge pressure, flow, power and efficiency, exceeded theoretical prediction. Dynamic testing of a simulated multistage centrifugal compressor was also completed under a parallel program to validate the integrity and viability of the system concept. The results give strong confidence in the feasibility of the multi-stage design for use in hydrogen gas transportation and delivery from production locations to point of use.
Genetic progress in multistage dairy cattle breeding schemes using genetic markers.
Schrooten, C; Bovenhuis, H; van Arendonk, J A M; Bijma, P
2005-04-01
The aim of this paper was to explore general characteristics of multistage breeding schemes and to evaluate multistage dairy cattle breeding schemes that use information on quantitative trait loci (QTL). Evaluation was either for additional genetic response or for reduction in number of progeny-tested bulls while maintaining the same response. The reduction in response in multistage breeding schemes relative to comparable single-stage breeding schemes (i.e., with the same overall selection intensity and the same amount of information in the final stage of selection) depended on the overall selection intensity, the selection intensity in the various stages of the breeding scheme, and the ratio of the accuracies of selection in the various stages of the breeding scheme. When overall selection intensity was constant, reduction in response increased with increasing selection intensity in the first stage. The decrease in response was highest in schemes with lower overall selection intensity. Reduction in response was limited in schemes with low to average emphasis on first-stage selection, especially if the accuracy of selection in the first stage was relatively high compared with the accuracy in the final stage. Closed nucleus breeding schemes in dairy cattle that use information on QTL were evaluated by deterministic simulation. In the base scheme, the selection index consisted of pedigree information and own performance (dams), or pedigree information and performance of 100 daughters (sires). In alternative breeding schemes, information on a QTL was accounted for by simulating an additional index trait. The fraction of the variance explained by the QTL determined the correlation between the additional index trait and the breeding goal trait. Response in progeny test schemes relative to a base breeding scheme without QTL information ranged from +4.5% (QTL explaining 5% of the additive genetic variance) to +21.2% (QTL explaining 50% of the additive genetic variance). 
A QTL explaining 5% of the additive genetic variance allowed a 35% reduction in the number of progeny-tested bulls, while maintaining genetic response at the level of the base scheme. Genetic progress was up to 31.3% higher for schemes with increased embryo production and selection of embryos based on QTL information. The challenge for breeding organizations is to find the optimum breeding program with regard to additional genetic progress and additional (or reduced) cost.
Enhancements and Algorithms for Avionic Information Processing System Design Methodology.
1982-06-16
programming algorithm is enhanced by incorporating task precedence constraints and hardware failures. Stochastic network methods are used to analyze...allocations in the presence of random fluctuations. Graph theoretic methods are used to analyze hardware designs, and new designs are constructed with...There, spatial dynamic programming (SDP) was used to solve a static, deterministic software allocation problem. Under the current contract the SDP
Discrete Time McKean–Vlasov Control Problem: A Dynamic Programming Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, Huyên, E-mail: pham@math.univ-paris-diderot.fr; Wei, Xiaoli, E-mail: tyswxl@gmail.com
We consider the stochastic optimal control problem of nonlinear mean-field systems in discrete time. We reformulate the problem into a deterministic control problem with the marginal distribution as the controlled state variable, and prove that the dynamic programming principle holds in its general form. We apply our method to solve explicitly the mean-variance portfolio selection problem and the multivariate linear-quadratic McKean–Vlasov control problem.
The evolution of tumor metastases during clonal expansion.
Haeno, Hiroshi; Michor, Franziska
2010-03-07
Cancer is a leading cause of morbidity and mortality in many countries. Solid tumors generally initiate at one particular site called the primary tumor, but eventually disseminate and form new colonies in other organs. The development of such metastases greatly diminishes the potential for a cure of patients and is thought to represent the final stage of the multi-stage progression of human cancer. The concept of early metastatic dissemination, however, postulates that cancer cell spread might arise early during the development of a tumor. It is important to know whether metastases are present at diagnosis since this determines treatment strategies and outcome. In this paper, we design a stochastic mathematical model of the evolution of tumor metastases in an expanding cancer cell population. We calculate the probability of metastasis at a given time during tumor evolution, the expected number of metastatic sites, and the total number of cancer cells as well as metastasized cells. Furthermore, we investigate the effect of drug administration and tumor resection on these quantities and predict the survival time of cancer patients. The model presented in this paper allows us to determine the probability and number of metastases at diagnosis and to identify the optimum treatment strategy to maximally prolong survival of cancer patients. 2009 Elsevier Ltd. All rights reserved.
Climate change adaptation: a panacea for food security in Ondo State, Nigeria
NASA Astrophysics Data System (ADS)
Fatuase, A. I.
2017-08-01
This paper examines the likely perceived causes of climate change, the adaptation strategies employed, and the technical inefficiency of arable crop farmers in Ondo State, Nigeria. Data were obtained from primary sources using a set of structured questionnaires supported by interview schedules. A multistage sampling technique was used. Data were analyzed using descriptive statistics and the stochastic frontier production function. The findings showed that the majority of the respondents (59.1%) still believed that climate change is a natural phenomenon beyond man's power to abate, while industrial release, improper sewage disposal, fossil fuel use, deforestation and bush burning were perceived as the main human factors influencing climate change by the group (40.9%) that attributed climate change to human activities. The main adaptation strategies employed by the farmers were mixed cropping, planting early-maturing crops, planting resistant crops and use of agrochemicals. The arable crop farmers were relatively technically efficient, with about 53% of them having technical efficiency above the study-area average of 0.784. The study observed that education, adaptation, perception, climate information and farming experience were statistically significant in decreasing the inefficiency of arable crop production. Therefore, advocacy on climate change and its adaptation strategies should be intensified in the study area.
Multi-stage volcanic island flank collapses with coeval explosive caldera-forming eruptions.
Hunt, James E; Cassidy, Michael; Talling, Peter J
2018-01-18
Volcanic flank collapses and explosive eruptions are among the largest and most destructive processes on Earth. Events at Mount St. Helens in May 1980 demonstrated how a relatively small (<5 km 3 ) flank collapse on a terrestrial volcano could immediately precede a devastating eruption. The lateral collapse of volcanic island flanks, such as in the Canary Islands, can be far larger (>300 km 3 ), but can also occur in complex multiple stages. Here, we show that multistage retrogressive landslides on Tenerife triggered explosive caldera-forming eruptions, including the Diego Hernandez, Guajara and Ucanca caldera eruptions. Geochemical analyses were performed on volcanic glasses recovered from marine sedimentary deposits, called turbidites, associated with each individual stage of each multistage landslide. These analyses indicate that only the lattermost stages of subaerial flank failure contain materials originating from the respective coeval explosive eruptions, suggesting that the initial, more voluminous submarine stages of multi-stage flank collapse induced the aforementioned explosive eruptions. Furthermore, extended time lags are identified between the individual stages of multi-stage collapse, and thus an extended time lag exists between the initial submarine stages of failure and the onset of the subsequent explosive eruption. This time lag following landslide-generated static decompression has implications for the response of magmatic systems to un-roofing and poses a significant implication for ocean island volcanism and civil emergency planning.
NASA Technical Reports Server (NTRS)
Liou, Luen-Woei; Ray, Asok
1991-01-01
A state feedback control law for integrated communication and control systems (ICCS) is formulated by using the dynamic programming and optimality principle on a finite-time horizon. The control law is derived on the basis of a stochastic model of the plant which is augmented in state space to allow for the effects of randomly varying delays in the feedback loop. A numerical procedure for synthesizing the control parameters is then presented, and the performance of the control law is evaluated by simulating the flight dynamics model of an advanced aircraft. Finally, recommendations for future work are made.
NASA Astrophysics Data System (ADS)
Levine, Zachary H.; Pintar, Adam L.
2015-11-01
A simple algorithm for averaging a stochastic sequence of 1D arrays in a moving, expanding window is provided. The samples are grouped in bins which increase exponentially in size so that a constant fraction of the samples is retained at any point in the sequence. The algorithm is shown to have particular relevance for a class of Monte Carlo sampling problems which includes one characteristic of iterative reconstruction in computed tomography. The code is available in the CPC program library in both Fortran 95 and C and is also available in R through CRAN.
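The binning scheme described above can be sketched from scratch as follows. This is an illustration of the idea (exponentially growing bins, a roughly constant retained fraction), not the published Fortran 95/C/R code, and the class name and parameters are invented:

```python
class ExpandingWindowAverager:
    """Average a stream of scalars while retaining a roughly constant fraction
    of all samples seen so far, grouped in bins of exponentially growing size."""

    def __init__(self, frac=0.5, max_bins_per_size=2):
        self.bins = []       # (count, mean), oldest first; sizes nonincreasing
        self.total = 0       # samples seen so far, including discarded ones
        self.frac = frac
        self.k = max_bins_per_size

    def push(self, value):
        self.total += 1
        self.bins.append((1, float(value)))
        self._merge()
        # Drop whole bins from the old end once the window would still hold
        # at least `frac` of all samples without them.
        while self.bins and (self._retained() - self.bins[0][0]
                             >= self.frac * self.total):
            self.bins.pop(0)

    def _merge(self):
        # While more than k bins share a size, merge the two oldest of that
        # size (equal-sized bins are adjacent because sizes are nonincreasing).
        merged = True
        while merged:
            merged = False
            for size in {c for c, _ in self.bins}:
                idxs = [i for i, (c, _) in enumerate(self.bins) if c == size]
                if len(idxs) > self.k:
                    i, j = idxs[0], idxs[1]
                    (c1, m1), (c2, m2) = self.bins[i], self.bins[j]
                    self.bins[i:j + 1] = [(c1 + c2,
                                           (c1 * m1 + c2 * m2) / (c1 + c2))]
                    merged = True
                    break

    def _retained(self):
        return sum(c for c, _ in self.bins)

    def average(self):
        n = self._retained()
        return sum(c * m for c, m in self.bins) / n
```

Because only whole bins are ever dropped from the old end, the retained window is a contiguous suffix of the sequence, and the weighted mean of the bins equals the exact mean of that suffix up to floating-point rounding.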
An agent-based stochastic Occupancy Simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yixing; Hong, Tianzhen; Luo, Xuan
2017-06-01
Occupancy has significant impacts on building performance. However, in current building performance simulation programs, occupancy inputs are static and lack diversity, contributing to discrepancies between the simulated and actual building performance. This work presents an Occupancy Simulator that simulates the stochastic behavior of occupant presence and movement in buildings, capturing the spatial and temporal occupancy diversity. Each occupant and each space in the building are explicitly simulated as an agent with their profiles of stochastic behaviors. The occupancy behaviors are represented with three types of models: (1) the status transition events (e.g., first arrival in office) simulated with a probability distribution model, (2) the random moving events (e.g., from one office to another) simulated with a homogeneous Markov chain model, and (3) the meeting events simulated with a new stochastic model. A hierarchical data model was developed for the Occupancy Simulator, which reduces the amount of data input by using the concepts of occupant types and space types. Finally, a case study of a small office building is presented to demonstrate the use of the Simulator to generate detailed annual sub-hourly occupant schedules for individual spaces and the whole building. The Simulator is a web application freely available to the public and capable of performing a detailed stochastic simulation of occupant presence and movement in buildings. Future work includes enhancements in the meeting event model, consideration of personal absent days, verification and validation of the simulated occupancy results, and expansion for use with residential buildings.
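The homogeneous-Markov-chain movement model in item (2) above can be sketched in a few lines. The spaces, transition probabilities, and time step below are hypothetical, not values from the Occupancy Simulator:

```python
import random

def simulate_movement(P, spaces, start, n_steps, seed=1):
    """Simulate occupant movement as a homogeneous Markov chain over spaces:
    P[i][j] is the per-time-step probability of moving from space i to j."""
    rng = random.Random(seed)
    state = spaces.index(start)
    path = [start]
    for _ in range(n_steps):
        r = rng.random()
        acc = 0.0
        for j, p in enumerate(P[state]):
            acc += p
            if r < acc:
                state = j
                break
        path.append(spaces[state])
    return path

# Hypothetical three-space office; each row of P must sum to 1.
spaces = ["own office", "meeting room", "corridor"]
P = [[0.90, 0.05, 0.05],
     [0.30, 0.60, 0.10],
     [0.45, 0.10, 0.45]]
# 96 steps of 15 minutes each, i.e. one simulated day of movement.
path = simulate_movement(P, spaces, "own office", n_steps=96)
```

In the actual tool this chain would be combined with the status-transition and meeting-event models, and a separate chain would run per occupant agent.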
KINETIC STUDIES RELATED TO THE LIMB (LIMESTONE INJECTION MULTISTAGE BURNER) BURNER
The report gives results of theoretical and experimental studies of subjects related to the limestone injection multistage burner (LIMB). The main findings include data on the rate of evolution of H2S from different coals and on the dependence of the rate of evolution on the dist...
Optimal Testlet Pool Assembly for Multistage Testing Designs
ERIC Educational Resources Information Center
Ariel, Adelaide; Veldkamp, Bernard P.; Breithaupt, Krista
2006-01-01
Computerized multistage testing (MST) designs require sets of test questions (testlets) to be assembled to meet strict, often competing criteria. Rules that govern testlet assembly may dictate the number of questions on a particular subject or may describe desirable statistical properties for the test, such as measurement precision. In an MST…
Multi-stage continuous (chemostat) culture fermentation (MCCF) with variable fermentor volumes was carried out to study utilizing glucose and xylose for ethanol production by means of mixed sugar fermentation (MSF). Variable fermentor volumes were used to enable enhanced sugar u...
Hong, Kyungeui; Gittelsohn, Joel; Joung, Hyojee
2010-06-01
The purpose of this study was to investigate the effects of personal characteristics and theory of planned behavior (TPB) constructs on the intention to participate in a restaurant health promotion program. In total, 830 adults residing in Seoul were sampled by a multi-stage cluster and random sampling design. Data were collected from a structured self-administered questionnaire, which covered variables concerning demographics, health status and TPB constructs including attitude, subjective norm and perceived behavioral control. A path analysis combining personal characteristics and TPB constructs was used to investigate determinants of the customers' intention. Positive and negative attitudes, subjective norms and perceived behavioral control directly affected the intention to participate. Demographics and health status both directly and indirectly affected the intention to participate. This study identifies personal characteristics and TPB constructs that are important to planning and implementing a restaurant health promotion program.
Kawafha, Mariam M; Tawalbeh, Loai Issa
2015-04-01
The purpose of this study was to examine the effect of an asthma education program on schoolteachers' knowledge. Pre-test-post-test experimental randomized controlled design was used. A multistage-cluster sampling technique was used to randomly select governorate, primary schools, and schoolteachers. Schoolteachers were randomly assigned either to the experimental group (n = 36) and attended three educational sessions or to the control group (n = 38) who did not receive any intervention. Knowledge about asthma was measured using the Asthma General Knowledge Questionnaire for Adults (AGKQA). The results indicated that teachers in the experimental group showed significantly (p < .001) higher knowledge of asthma in the first post-test and the second post-test compared with those in the control group. Implementing asthma education enhanced schoolteachers' knowledge of asthma. The asthma education program should target schoolteachers to improve knowledge about asthma. © The Author(s) 2014.
Environmental monitoring for the DOE coolside and LIMB demonstration extension projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, T.; Contos, L.; Adams, L.
1992-03-01
The purpose of this document is to present environmental monitoring data collected during the US Department of Energy Limestone Injection Multistage Burner (DOE LIMB) Demonstration Project Extension at the Ohio Edison Edgewater Generating Station in Lorain, Ohio. The DOE project is an extension of the US Environmental Protection Agency's (EPA's) original LIMB Demonstration. The program is operated under DOE's Clean Coal Technology Program of "emerging clean coal technologies," under the categories of "in-boiler control of oxides of sulfur and nitrogen" as well as "post-combustion clean-up." The objective of the LIMB program is to demonstrate the sulfur dioxide (SO2) and nitrogen oxide (NOx) emission reduction capabilities of the LIMB system. The LIMB system is a retrofit technology to be used for existing coal-fired boilers equipped with electrostatic precipitators (ESPs).
NASA Astrophysics Data System (ADS)
Chiavico, Mattia; Raso, Luciano; Dorchies, David; Malaterre, Pierre-Olivier
2015-04-01
The Seine river region is an extremely important logistic and economic junction for France and Europe. The hydraulic protection of most of the region relies on four controlled reservoirs, managed by EPTB Seine-Grands Lacs. Presently, reservoir operation is not centrally coordinated, and release rules are based on empirical filling curves. In this study, we analyze how a centralized release policy can address flood and drought risks, optimizing water system efficiency. The optimal, centralized decision problem is solved by the Stochastic Dual Dynamic Programming (SDDP) method, minimizing an operational indicator for each planning objective. SDDP allows us to include in the system: 1) the hydrological discharge, specifically a stochastic semi-distributed auto-regressive model, 2) the hydraulic transfer model, represented by a linear lag and route model, and 3) reservoirs and diversions. The novelty of this study lies in the combination of reservoir and hydraulic models in SDDP for flood and drought protection problems. The case study covers the Seine basin down to the confluence with the Aube River: this system includes two reservoirs, the city of Troyes, and the nuclear power plant of Nogent-sur-Seine. The conflict between the interests of flood protection, drought protection, water use and ecology leads us to analyze the environmental system from a multi-objective perspective.
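The linear lag-and-route transfer model mentioned in this abstract can be sketched as a pure delay followed by linear-reservoir routing. The lag, residence time, and inflow series below are illustrative only, not values from the Seine study:

```python
def lag_and_route(inflow, lag, k, dt=1.0):
    """Linear lag-and-route: shift the inflow hydrograph by `lag` time steps,
    then route it through a linear reservoir (storage S = k*Q), integrated
    here with an explicit Euler step of dQ/dt = (I - Q)/k."""
    q = 0.0
    outflow = []
    for t in range(len(inflow)):
        i_lagged = inflow[t - lag] if t >= lag else 0.0
        q += dt * (i_lagged - q) / k   # relax outflow toward lagged inflow
        outflow.append(q)
    return outflow

# Hypothetical step inflow of 2.0 m3/s: the outflow stays at zero for `lag`
# steps, then rises toward the inflow with residence time k = 5 steps.
out = lag_and_route([2.0] * 200, lag=2, k=5.0)
```

The linearity of this transfer model is what makes it compatible with SDDP, which requires the system dynamics entering the Benders-type cuts to be linear in the state and decision variables.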
Modelling biochemical reaction systems by stochastic differential equations with reflection.
Niu, Yuanling; Burrage, Kevin; Chen, Luonan
2016-05-07
In this paper, we give a new framework for modelling and simulating biochemical reaction systems by stochastic differential equations with reflection, derived in a mathematical rather than heuristic way. The model is computationally efficient compared with the discrete-state Markov chain approach, and it ensures that both analytic and numerical solutions remain in a biologically plausible region. Specifically, our model mathematically ensures that species numbers lie in the domain D, which is a physical constraint for biochemical reactions, in contrast to previous models. The domain D is obtained from the structure of the corresponding chemical Langevin equations, i.e., the boundary is inherent in the biochemical reaction system. A variant of the projection method was employed to solve the reflected stochastic differential equation model; it consists of three simple steps: first, the Euler-Maruyama method is applied to the equations; then the solution is checked to determine whether it lies within the domain D; and if not, an orthogonal projection is performed. It is found that the projection onto the closure D¯ is the solution to a convex quadratic programming problem. Thus, existing methods for the convex quadratic programming problem can be employed for the orthogonal projection map. Numerical tests on several important problems in biological systems confirmed the efficiency and accuracy of this approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
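The three-step scheme described in this abstract can be sketched for the simple case where the domain D is the non-negative orthant, so the orthogonal projection reduces to componentwise clipping (a general polyhedral D would need a quadratic-programming solver). The reaction and rate constant below are illustrative, not from the paper:

```python
import math
import random

def reflected_euler_maruyama(drift, diffusion, x0, dt, n_steps, seed=0):
    """Euler-Maruyama for a chemical Langevin-type SDE, reflected into the
    non-negative orthant: after each step, project x onto D = {x >= 0}.
    For this D the orthogonal projection is componentwise clipping at zero."""
    rng = random.Random(seed)
    x = list(x0)
    traj = [tuple(x)]
    for _ in range(n_steps):
        dw = [rng.gauss(0.0, math.sqrt(dt)) for _ in x]   # Brownian increments
        mu = drift(x)
        sig = diffusion(x)
        # Euler-Maruyama step followed by the orthogonal projection onto D.
        x = [max(0.0, x[i] + mu[i] * dt + sig[i] * dw[i])
             for i in range(len(x))]
        traj.append(tuple(x))
    return traj

# Illustrative degradation reaction A -> 0 with rate k*A: drift -k*A and
# noise amplitude sqrt(k*A), following the chemical Langevin form.
k = 1.0
traj = reflected_euler_maruyama(
    drift=lambda x: [-k * x[0]],
    diffusion=lambda x: [math.sqrt(k * max(x[0], 0.0))],
    x0=[50.0],
    dt=0.01,
    n_steps=2000,
)
```

Without the projection step, the multiplicative noise can drive the state slightly negative near the boundary, which is exactly the non-physical behavior the reflected formulation rules out.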
NASA Astrophysics Data System (ADS)
Yang, Wen; Fung, Richard Y. K.
2014-06-01
This article considers an order acceptance problem in a make-to-stock manufacturing system with multiple demand classes in a finite time horizon. Demands in different periods are random variables and are independent of one another, and replenishments of inventory deviate from the scheduled quantities. The objective of this work is to maximize the expected net profit over the planning horizon by deciding the fraction of the demand that is going to be fulfilled. This article presents a stochastic order acceptance optimization model and analyses the existence of the optimal promising policies. An example of a discrete problem is used to illustrate the policies by applying the dynamic programming method. In order to solve the continuous problems, a heuristic algorithm based on stochastic approximation (HASA) is developed. Finally, the computational results of a case example illustrate the effectiveness and efficiency of the HASA approach, and make the application of the proposed model readily acceptable.
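The discrete example the authors mention can be sketched with standard backward-induction dynamic programming. The horizon, prices, demand distribution, and state space below are invented for illustration and are not taken from the article:

```python
from functools import lru_cache

# Hypothetical single-class instance: each period, demand is 0..2 units with
# the probabilities below; we decide how many units of demand to accept from
# on-hand inventory, earning `price` per unit sold and paying `holding` per
# unit carried to the next period.
T = 4                     # planning horizon (periods 0..T-1)
MAX_INV = 5               # initial on-hand inventory
price, holding = 10.0, 1.0
demand_pmf = {0: 0.2, 1: 0.5, 2: 0.3}

@lru_cache(maxsize=None)
def V(t, inv):
    """Maximum expected profit from period t onward with `inv` units on hand."""
    if t == T:
        return 0.0
    best = float("-inf")
    for accept in range(inv + 1):   # decision: accept up to `accept` units
        exp_profit = 0.0
        for d, p in demand_pmf.items():
            sold = min(accept, d)
            left = inv - sold
            exp_profit += p * (price * sold - holding * left + V(t + 1, left))
        best = max(best, exp_profit)
    return best

optimal = V(0, MAX_INV)
```

The heuristic stochastic-approximation algorithm (HASA) in the article addresses the continuous version of this decision, where enumerating the accept fraction as above is no longer possible.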
Pan, Wei; Guo, Ying; Jin, Lei; Liao, ShuJie
2017-01-01
With the high accident rate of civil aviation, medical resource inventory becomes more important for emergency management at the airport. Meanwhile, medical products are usually time-sensitive and have short lifetimes. Moreover, we find that the optimal medical resource inventory depends on multiple factors, such as different risk preferences and the material shelf life, making the problem very complex in a real-life environment. Given this situation, we construct a medical resource inventory decision model for emergency preparation at the airport. Our model is formulated so as to simultaneously consider uncertain demand, stochastic occurrence time and different risk preferences. To solve this problem, a new programming model is developed. Finally, a numerical example is presented to illustrate the proposed method. The results show that it is effective for determining the optimal medical resource inventory for emergency preparation under uncertain demand and stochastic occurrence time while considering different risk preferences at the airport. PMID:28931007
Mauricio-Iglesias, Miguel; Montero-Castro, Ignacio; Mollerup, Ane L; Sin, Gürkan
2015-05-15
The design of sewer system control is a complex task given the large size of sewer networks, the transient dynamics of the water flow and the stochastic nature of rainfall. This contribution presents a generic methodology for the design of a self-optimising controller in sewer systems. Such a controller aims to keep the system close to optimal performance through an optimal selection of controlled variables. Optimal performance was defined by a two-stage optimisation (stochastic and deterministic) to take into account both the overflow during the current rain event and the expected overflow given the probability of a future rain event. The methodology is successfully applied to design an optimising control strategy for a subcatchment area in Copenhagen. The results are promising and are expected to contribute to advancing the operation and control of sewer systems. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Ohara, D.; Vo, T.; Vedder, J. F.
1985-01-01
A multistage open-tube trap for cryogenic collection of trace components in low-pressure air samples is described. The open-tube design allows higher volumetric flow rates than densely packed glass-bead traps commonly reported and is suitable for air samples at pressures below 27 kPa with liquid nitrogen as the cryogen. Gas blends containing 200 to 2500 parts per trillion by volume each of ethane and ethene were sampled and hydrocarbons were enriched with 100 ± 4 percent trap efficiency. The multistage design is more efficient than equal-length open-tube traps under the conditions of the measurements.
Loading, Release, Biodegradation, and Biocompatibility of a Nanovector Delivery System
NASA Technical Reports Server (NTRS)
Ferrari, Mauro; Tasciotti, Ennio
2012-01-01
A nanovector multistage system has been created to overcome or bypass sequential barriers within the human body, in order to deliver a therapeutic or imaging agent to a specific location. This innovation consists of a composition that includes two or more stages of particles, such that smaller, later-stage particles are contained in the larger, early-stage particles. An active agent, such as a therapeutic agent or imaging agent, is preferentially delivered and/or localized to a particular target site in the body of a subject. The multistage composition overcomes multiple biological barriers in the body. The multistage composition also allows for simultaneous delivery and localization at the same or different target sites of multiple active agents.
NASA Astrophysics Data System (ADS)
Abellán-Nebot, J. V.; Liu, J.; Romero, F.
2009-11-01
The State Space modelling approach has been recently proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, cutting-tool deflections, etc. This paper shows the limitations of the current State Space model through an experimental case study where the effect of the spindle thermal expansion, cutting-tool flank wear and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.
CLIGEN: Addressing deficiencies in the generator and its databases
USDA-ARS?s Scientific Manuscript database
CLIGEN is a stochastic generator that estimates daily temperatures, precipitation and other weather related phenomena. It is an intermediate model used by the Water Erosion Prediction Program (WEPP), the Wind Erosion Prediction System (WEPS), and other models that require daily weather observations....
Season of conception in rural Gambia affects DNA methylation at putative human metastable epialleles
USDA-ARS?s Scientific Manuscript database
Throughout most of the mammalian genome, genetically regulated developmental programming establishes diverse yet predictable epigenetic states across differentiated cells and tissues. At metastable epialleles (MEs), conversely, epigenotype is established stochastically in the early embryo then maint...
Ant Lion Optimization algorithm for kidney exchanges.
Hamouda, Eslam; El-Metwally, Sara; Tarek, Mayada
2018-01-01
Kidney exchange programs bring new insights to the field of organ transplantation. They make the previously disallowed surgery of incompatible patient-donor pairs feasible on a large scale. Mathematically, kidney exchange is an optimization problem over the number of possible exchanges among the incompatible pairs in a given pool. The optimization modeling should also consider the expected quality-adjusted life of transplant candidates and the shortage of computational and operational hospital resources. In this article, we introduce a bio-inspired, stochastic Ant Lion Optimization (ALO) algorithm to the kidney exchange space to maximize the number of feasible cycles and chains among the pool pairs. The Ant Lion Optimizer-based program achieves kidney exchange results comparable to deterministic approaches such as integer programming. ALO also outperforms other stochastic methods such as genetic algorithms in terms of efficient usage of computational resources and the quantity of resulting exchanges. The Ant Lion Optimization algorithm can easily be adopted for on-line exchanges and the integration of weights for hard-to-match patients, which will improve future decisions of kidney exchange programs. A reference implementation of the ALO algorithm for kidney exchanges is written in MATLAB and is GPL licensed. It is available as free open-source software from: https://github.com/SaraEl-Metwally/ALO_algorithm_for_Kidney_Exchanges.
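The feasibility structure that ALO searches over can be illustrated without the optimizer itself: candidate solutions are sets of vertex-disjoint short cycles in a pair-compatibility digraph. The toy compatibility graph below is hypothetical, and a greedy selection stands in for the stochastic search.

```python
from itertools import permutations

# directed edge (i, j): donor of pair i is compatible with patient of pair j
edges = {(0, 1), (1, 0), (1, 2), (2, 3), (3, 1), (2, 0), (0, 2)}
n = 4

def cycles_up_to_3(n, edges):
    """Enumerate all 2-cycles and 3-cycles (each listed once, lowest vertex first)."""
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) in edges and (j, i) in edges:
                out.append((i, j))
    for i, j, k in permutations(range(n), 3):
        if i == min(i, j, k) and (i, j) in edges and (j, k) in edges and (k, i) in edges:
            out.append((i, j, k))
    return out

def greedy_disjoint(cycles):
    """Greedy stand-in for the optimizer: pick disjoint cycles, longer ones first."""
    chosen, used = [], set()
    for c in sorted(cycles, key=len, reverse=True):
        if used.isdisjoint(c):
            chosen.append(c)
            used.update(c)
    return chosen

matching = greedy_disjoint(cycles_up_to_3(n, edges))
```

The objective value is simply the number of pairs covered by the chosen cycles; a stochastic optimizer such as ALO explores this same discrete space rather than selecting greedily.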
NASA Astrophysics Data System (ADS)
Wu, Xiaohua; Hu, Xiaosong; Moura, Scott; Yin, Xiaofeng; Pickert, Volker
2016-11-01
Energy management strategies are instrumental in the performance and economy of smart homes integrating renewable energy and energy storage. This article focuses on stochastic energy management of a smart home with PEV (plug-in electric vehicle) energy storage and a photovoltaic (PV) array. It is motivated by the challenges associated with sustainable energy supplies and the local energy storage opportunity provided by vehicle electrification. This paper seeks to minimize a consumer's energy charges under a time-of-use tariff, while satisfying home power demand and PEV charging requirements and accommodating the variability of solar power. First, the random-variable models are developed, including a Markov chain model of PEV mobility as well as predictive models of home power demand and PV power supply. Second, a stochastic optimal control problem is mathematically formulated for managing the power flow among energy sources in the smart home. Finally, based on time-varying electricity prices, we systematically examine the performance of the proposed control strategy. As a result, the reported electricity cost with optimal stochastic dynamic programming (SDP) control is 493.6% lower for a Tesla Model S relative to the no-control case, and 175.89% lower for a Nissan Leaf.
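A deterministic, heavily simplified analogue of the paper's control problem conveys the core mechanics: schedule PEV charging against a time-of-use tariff by backward dynamic programming over battery state of charge. All tariff and demand numbers are illustrative, not from the study, and the stochastic elements (PEV mobility, PV supply) are omitted.

```python
from functools import lru_cache

price = [0.30, 0.30, 0.10, 0.10, 0.30, 0.30]   # $/kWh for each hour of the horizon
home  = [1.0, 1.0, 0.5, 0.5, 2.0, 2.0]         # fixed home demand, kWh per hour
CAP, TARGET, RATE = 6, 6, 2                     # battery kWh, required final SOC, max charge/h

@lru_cache(maxsize=None)
def cost(t, soc):
    """Minimum grid cost from hour t with battery state of charge soc (kWh)."""
    if t == len(price):
        return 0.0 if soc >= TARGET else float("inf")  # PEV must be full at departure
    best = float("inf")
    for charge in range(0, RATE + 1):          # kWh bought for the PEV this hour
        nxt = min(soc + charge, CAP)
        best = min(best, price[t] * (home[t] + charge) + cost(t + 1, nxt))
    return best
```

The recursion correctly shifts as much charging as the rate limit allows into the cheap mid-horizon hours; starting from an empty battery it charges 2 kWh in each 0.10 $/kWh hour and buys the remaining 2 kWh at 0.30 $/kWh.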
Stochastic optimal operation of reservoirs based on copula functions
NASA Astrophysics Data System (ADS)
Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen
2018-02-01
Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, there is a need to calculate the transition probability matrix more accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we proposed a stochastic optimization model for hydropower generation reservoirs, in which 1) the transition probability matrix was calculated based on copula functions; and 2) the value function of the last period was calculated by stepwise iteration. Firstly, the marginal distribution of stochastic inflow in each period was built and the joint distributions of adjacent periods were obtained using the three members of the Archimedean copulas, based on which the conditional probability formula was derived. Then, the value in the last period was calculated by a simple recursive equation with the proposed stepwise iteration method and the value function was fitted with a linear regression model. These improvements were incorporated into the classic SDP and applied to the case study in Ertan reservoir, China. The results show that the transition probability matrix can be more easily and accurately obtained by the proposed copula function based method than conventional methods based on the observed or synthetic streamflow series, and the reservoir operation benefit can also be increased.
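The copula-based transition matrix described above can be sketched with a single Archimedean member. The paper fits three members to data; here a Clayton copula with an assumed parameter is used, since its conditional CDF P(V <= v | U = u) = dC(u, v)/du has a closed form, and class-to-class transition probabilities follow from differences of that conditional CDF at class boundaries (uniform margins assumed for brevity).

```python
import numpy as np

THETA = 2.0   # Clayton dependence parameter (assumed here, fitted to inflows in the paper)

def clayton_cond(v, u):
    """P(V <= v | U = u) = dC(u, v)/du for the Clayton copula."""
    v = max(v, 1e-12)  # avoid 0**negative at the lowest class edge
    return u ** (-THETA - 1) * (u ** -THETA + v ** -THETA - 1) ** (-1 / THETA - 1)

edges = np.linspace(0.0, 1.0, 6)          # five equiprobable inflow classes (uniform margins)
mids = 0.5 * (edges[:-1] + edges[1:])     # representative value of the current class

# transition matrix: row i = current-period class, column j = next-period class
P = np.array([[clayton_cond(hi, u) - clayton_cond(lo, u)
               for lo, hi in zip(edges[:-1], edges[1:])]
              for u in mids])
```

Each row sums to one by construction, and the Clayton copula's lower-tail dependence makes a low-inflow period most likely to be followed by another low-inflow period, which is the behaviour such a matrix is meant to capture.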
A Top-Down Approach to Designing the Computerized Adaptive Multistage Test
ERIC Educational Resources Information Center
Luo, Xiao; Kim, Doyoung
2018-01-01
The top-down approach to designing a multistage test is relatively understudied in the literature and underused in research and practice. This study introduced a route-based top-down design approach that directly sets design parameters at the test level and utilizes the advanced automated test assembly algorithm seeking global optimality. The…
Rapier, P.M.
1980-06-26
A multi-stage flash degaser is incorporated in an energy conversion system having a direct-contact, binary-fluid heat exchanger to remove essentially all of the noncondensable gases from geothermal brine ahead of the direct-contact binary-fluid heat exchanger in order that the heat exchanger and a turbine and condenser of the system can operate at optimal efficiency.
Panel Design Variations in the Multistage Test Using the Mixed-Format Tests
ERIC Educational Resources Information Center
Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.; Park, Ryoungsun
2012-01-01
This study compared various panel designs of the multistage test (MST) using mixed-format tests in the context of classification testing. Simulations varied the design of the first-stage module. The first stage was constructed according to three levels of test information functions (TIFs) with three different TIF centers. Additional computerized…
A Multi-Stage Maturity Model for Long-Term IT Outsourcing Relationship Success
ERIC Educational Resources Information Center
Luong, Ming; Stevens, Jeff
2015-01-01
The Multi-Stage Maturity Model for Long-Term IT Outsourcing Relationship Success, a theoretical stages-of-growth model, explains long-term success in IT outsourcing relationships. Research showed the IT outsourcing relationship life cycle consists of four distinct, sequential stages: contract, transition, support, and partnership. The model was…
NASA Astrophysics Data System (ADS)
Umezu, Yasuyoshi; Watanabe, Yuko; Ma, Ninshu
2005-08-01
Since 1996, the Japan Research Institute Limited (JRI) has provided a sheet metal forming simulation system called JSTAMP-Works, packaging the FEM solvers LS-DYNA and JOH/NIKE; it was arguably the first multistage system of its time and has enjoyed a good reputation among users in Japan. To match the recent needs of process designers and CAE engineers ("faster, more accurate and easier"), a new metal forming simulation system, JSTAMP-Works/NV, has been developed. JSTAMP-Works/NV packages an automatic CAD healing function and adds capabilities such as prediction of 3D trimming lines for flanging or hemming, remote control of solver execution for multi-stage forming processes, and shape evaluation between FEM and CAD. In addition, a multi-stage, multi-purpose inverse FEM solver, HYSTAMP, has been developed and will soon be brought to market; it has proved to be very fast, accurate and robust. Lastly, the authors give application examples of a user-defined ductile damage subroutine in LS-DYNA for estimating material failure and springback in metal forming simulation.
NASA Astrophysics Data System (ADS)
Yamada, K.; Aoki, S.; Cao, S.; Chikuma, N.; Fukuda, T.; Fukuzawa, Y.; Gonin, M.; Hayashino, T.; Hayato, Y.; Hiramoto, A.; Hosomi, F.; Inoh, T.; Iori, S.; Ishiguro, K.; Kawahara, H.; Kim, H.; Kitagawa, N.; Koga, T.; Komatani, R.; Komatsu, M.; Matsushita, A.; Mikado, S.; Minamino, A.; Mizusawa, H.; Matsumoto, T.; Matsuo, T.; Morimoto, Y.; Morishima, K.; Morishita, M.; Naganawa, N.; Nakamura, K.; Nakamura, M.; Nakamura, Y.; Nakano, T.; Nakatsuka, Y.; Nakaya, T.; Nishio, A.; Ogawa, S.; Oshima, H.; Quilain, B.; Rokujo, H.; Sato, O.; Seiya, Y.; Shibuya, H.; Shiraishi, T.; Suzuki, Y.; Tada, S.; Takahashi, S.; Yokoyama, M.; Yoshimoto, M.
2017-06-01
We describe the first ever implementation of a clock-based, multi-stage emulsion shifter in an accelerator neutrino experiment. The system was installed in the neutrino monitoring building at the Japan Proton Accelerator Research Complex as part of a test experiment, T60, and stable operation was maintained for a total of 126.6 days. By applying time information to emulsion films, various results were obtained. Time resolutions of 5.3-14.7 s were evaluated in an operation spanning 46.9 days (yielding division numbers of 1.4-3.8×10^5). By using timing and spatial information, reconstruction of coincident events consisting of high-multiplicity and vertex-contained events, including neutrino events, was performed. Emulsion events were matched to events observed by INGRID, one of the on-axis near detectors of the T2K experiment, with high reliability (98.5%), and hybrid analysis of the emulsion and INGRID events was established by means of the multi-stage shifter. The results demonstrate that the multi-stage shifter can feasibly be used in neutrino experiments.
Aerodynamic Analysis of Multistage Turbomachinery Flows in Support of Aerodynamic Design
NASA Technical Reports Server (NTRS)
Adamczyk, John J.
1999-01-01
This paper summarizes the state of 3D CFD based models of the time average flow field within axial flow multistage turbomachines. Emphasis is placed on models which are compatible with the industrial design environment and those models which offer the potential of providing credible results at both design and off-design operating conditions. The need to develop models which are free of aerodynamic input from semi-empirical design systems is stressed. The accuracy of such models is shown to be dependent upon their ability to account for the unsteady flow environment in multistage turbomachinery. The relevant flow physics associated with some of the unsteady flow processes present in axial flow multistage machinery are presented along with procedures which can be used to account for them in 3D CFD simulations. Sample results are presented for both axial flow compressors and axial flow turbines which help to illustrate the enhanced predictive capabilities afforded by including these procedures in 3D CFD simulations. Finally, suggestions are given for future work on the development of time average flow models.
Doktorov, Alexander B
2015-08-21
Manifestations of the "cage effect" at encounters of reactants are treated theoretically using the example of multistage reactions in liquid solutions that include bimolecular exchange reactions as elementary stages. It is shown that a consistent consideration of the quasi-stationary kinetics of multistage reactions (possible only in the framework of encounter theory) for reactions proceeding near reactant contact can be made on the basis of the concept of a "cage complex." Though mathematically such a consideration is more complicated, it is clearer from the standpoint of chemical notions. It is established that the presence of the "cage effect" leads to some important effects not inherent in reactions in gases, or in solutions proceeding in the kinetic regime, such as the appearance of new transition channels of reactant transformation that cannot be caused by an elementary event of chemical conversion for the given reaction mechanism. As a result, the rate constants of a multistage reaction extracted from experimentally measured kinetics via the standard equations of formal chemical kinetics can differ essentially from the real values of these constants.
Health condition identification of multi-stage planetary gearboxes using a mRVM-based method
NASA Astrophysics Data System (ADS)
Lei, Yaguo; Liu, Zongyao; Wu, Xionghui; Li, Naipeng; Chen, Wu; Lin, Jing
2015-08-01
Multi-stage planetary gearboxes are widely applied in the aerospace, automotive and heavy industries. Their key components, such as gears and bearings, can easily suffer damage due to tough working environments. Health condition identification of planetary gearboxes aims to prevent accidents and save costs. This paper proposes a method based on a multiclass relevance vector machine (mRVM) to identify the health condition of multi-stage planetary gearboxes. In this method, an mRVM algorithm is adopted as a classifier, and two features, i.e. accumulative amplitudes of carrier orders (AACO) and energy ratio based on difference spectra (ERDS), are used as the input of the classifier to classify different health conditions of multi-stage planetary gearboxes. To test the proposed method, seven health conditions of a two-stage planetary gearbox are considered and vibration data are acquired from the gearbox under different motor speeds and loading conditions. The results of three tests based on different data show that the proposed method achieves improved identification performance and robustness compared with the existing method.
NASA Astrophysics Data System (ADS)
Kerstein, A.; Omersel, P.; Goljuf, L.; Zidaric, M.
1981-09-01
After giving a historical account of multistage rocket development in Yugoslavia, a status report is presented for the three-stage Sirius-5 program. The rocket is composed of: (1) a solid-propellant first stage, consisting of a cluster of eight standard motors yielding 220 kN thrust for 1.3 sec; (2) a mixed amines/inhibited red fuming nitric acid, bipropellant second stage generating 50 kN thrust; and (3) a third stage of the same design as the second but with only 62 kg of fuel, by contrast to 168 kg. Among the design principles adhered to are: minimization of the number of components, conservative design margins, and specifications for key subsystems based on demonstration programs. The primary use of this system is in amateur rocketry, being able to carry a 20 kg payload to 150 km.
Ermolieva, T; Filatova, T; Ermoliev, Y; Obersteiner, M; de Bruijn, K M; Jeuken, A
2017-01-01
As flood risks grow worldwide, a well-designed insurance program engaging various stakeholders becomes a vital instrument in flood risk management. The main challenge concerns the applicability of standard approaches for calculating insurance premiums of rare catastrophic losses. This article focuses on the design of a flood-loss-sharing program involving private insurance based on location-specific exposures. The analysis is guided by a developed integrated catastrophe risk management (ICRM) model consisting of a GIS-based flood model and a stochastic optimization procedure with respect to location-specific risk exposures. To achieve stability and robustness of the program towards floods with various recurrences, the ICRM uses a stochastic optimization procedure that relies on quantile-related risk functions of systemic insolvency involving overpayments and underpayments of the stakeholders. Two alternative ways of calculating insurance premiums are compared: the robust premiums derived with the ICRM and the traditional average annual loss approach. The applicability of the proposed model is illustrated in a case study of a Rotterdam area outside the main flood protection system in the Netherlands. Our numerical experiments demonstrate essential advantages of the robust premiums, namely, that they: (1) guarantee the program's solvency under all relevant flood scenarios rather than one average event; (2) establish a tradeoff between the security of the program and the welfare of locations; and (3) decrease the need for other risk transfer and risk reduction measures. © 2016 Society for Risk Analysis.
High-Throughput Dietary Exposure Predictions for Chemical Migrants from Food Packaging Materials
United States Environmental Protection Agency researchers have developed a Stochastic Human Exposure and Dose Simulation High -Throughput (SHEDS-HT) model for use in prioritization of chemicals under the ExpoCast program. In this research, new methods were implemented in SHEDS-HT...
Remote sensing advances in agricultural inventories
NASA Technical Reports Server (NTRS)
Dragg, J. L.; Bizzell, R. M.; Trichel, M. C.; Hatch, R. E.; Phinney, D. E.; Baker, T. C.
1984-01-01
As the complexity of the world's agricultural industry increases, more timely and more accurate world-wide agricultural information is required to support production and marketing decisions, policy formulation, and technology development. The Inventory Technology Development Project of the AgRISTARS Program has developed new automated technology that uses data sets acquired by spaceborne remote sensors. Research has emphasized the development of multistage, multisensor sampling and estimation techniques for use in global environments where reliable ground observations are not available. This paper presents research results obtained from data sets acquired by four different sensors: Landsat MSS, Landsat TM, Shuttle-Imaging Radar and environmental satellite (AVHRR).
1971-01-01
This is a good cutaway diagram of the Saturn V launch vehicle showing the three stages, the instrument unit, and the Apollo spacecraft. The chart on the right presents the basic technical data in clear metric detail. The Saturn V is the largest and most powerful launch vehicle in the United States. The towering, 111 meter, Saturn V was a multistage, multiengine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams. Development of the Saturn V was the responsibility of the Marshall Space Flight Center at Huntsville, Alabama, directed by Dr. Wernher von Braun.
Fish Processed Production Planning Using Integer Stochastic Programming Model
NASA Astrophysics Data System (ADS)
Firmansyah, Mawengkang, Herman
2011-06-01
Fish and its processed products are the most affordable sources of animal protein in the diet of most people in Indonesia. The goal of production planning is to meet customer demand over a fixed time horizon, divided into planning periods, by optimizing the trade-off between economic objectives such as production cost and customer satisfaction level. The major decisions are the production and inventory levels for each product and the workforce size in each planning period. In this paper we consider the management of a small-scale traditional business in North Sumatera Province that processes fish into several local seafood products. The inherent uncertainty of the data (e.g. demand, fish availability), together with the sequential evolution of the data over time, leads to a nonlinear mixed-integer stochastic programming model of the production planning problem. We use a scenario-generation-based approach and a feasible neighborhood search to solve the model. The results, which show the amount of each processed fish product and the workforce needed in each planning horizon, are presented.
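The scenario-based trade-off described above can be sketched in miniature: commit to one integer production quantity before demand is known, then evaluate expected revenue minus production and shortage costs across demand scenarios. All numbers are invented, and the single-product, single-period sketch omits the paper's inventory and workforce decisions.

```python
# (demand, probability) scenarios standing in for generated demand scenarios
scenarios = [(20, 0.3), (35, 0.5), (50, 0.2)]
PRICE, COST, PENALTY = 5.0, 2.0, 1.0   # per unit sold / produced / short

def expected_profit(q):
    """Expected profit of committing to integer production quantity q."""
    total = 0.0
    for d, p in scenarios:
        sold = min(q, d)
        total += p * (PRICE * sold - COST * q - PENALTY * max(d - q, 0))
    return total

# brute-force search over the (small) integer decision space
best_q = max(range(0, 61), key=expected_profit)
```

Marginal analysis confirms the result: each extra unit costs 2.0 but earns an expected 6.0 times the probability that demand exceeds the current quantity, so production stops at the middle scenario's demand level.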
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Jamshid; Mahdizadeh, Kourosh; Afshar, Abbas
2004-08-01
Application of stochastic dynamic programming (SDP) models to reservoir optimization calls for discretization of the state variables. As an important state variable, reservoir storage volume has a discretization with a pronounced effect on the computational effort. The error caused by storage volume discretization is examined by treating storage as a fuzzy state variable. In this approach, the point-to-point transitions between storage volumes at the beginning and end of each period are replaced by transitions between storage intervals, achieved by using fuzzy arithmetic operations on fuzzy numbers: instead of aggregating single-valued crisp numbers, the membership functions of fuzzy numbers are combined. Running a simulation model with optimal release policies derived from fuzzy and non-fuzzy SDP models shows that a fuzzy SDP with a coarse discretization scheme performs as well as a classical SDP with a much finer discretized space. This advantage of the fuzzy SDP model is believed to be due to the smooth transitions between storage intervals, which benefit from soft boundaries.
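The fuzzy-arithmetic step can be sketched with triangular fuzzy numbers (left, peak, right): addition and subtraction of a crisp value act component-wise, so the membership function of end-of-period storage follows directly from fuzzy beginning storage and fuzzy inflow. Numbers are illustrative; the paper's model is far richer.

```python
def tri_add(x, y):
    """Sum of two triangular fuzzy numbers (component-wise by fuzzy arithmetic)."""
    return tuple(a + b for a, b in zip(x, y))

def tri_shift(x, c):
    """Subtract a crisp value c (e.g. a release decision) from a triangular fuzzy number."""
    return tuple(a - c for a in x)

def membership(x, t):
    """Degree to which value t belongs to triangular fuzzy number x = (l, m, r)."""
    l, m, r = x
    if t <= l or t >= r:
        return 0.0 if t != m else 1.0
    return (t - l) / (m - l) if t <= m else (r - t) / (r - m)

storage = (40.0, 50.0, 60.0)   # fuzzy beginning-of-period storage
inflow  = (10.0, 20.0, 30.0)   # fuzzy period inflow
release = 25.0                 # crisp release decision

end = tri_shift(tri_add(storage, inflow), release)   # (25.0, 45.0, 65.0)
```

The soft boundaries the abstract refers to are visible here: a crisp end storage of 35 belongs to the resulting interval with degree 0.5 rather than falling cleanly into one discretized class.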
NASA Astrophysics Data System (ADS)
Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman
2017-06-01
A robust supplier selection problem is proposed in a scenario-based approach, where demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed-integer linear program is developed; then, its robust counterpart is presented using recent extensions of robust optimization theory. We analyze the decision variables, respectively, with a two-stage stochastic planning model, a robust stochastic optimization planning model that integrates the worst-case scenario into the modeling approach, and an equivalent deterministic planning model. An experimental study compares the performance of the three models. The robust model yields remarkable cost savings, illustrating that such uncertainties should be considered in advance during planning. In our case study, different suppliers were selected because of these uncertainties; since supplier selection is a strategic decision, it is crucial to account for them in the planning approach.
Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Linderoth
2011-11-06
The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed-integer linear programs containing symmetry, mixed-integer nonlinear programs, and stochastic optimization problems. The work done in the continuation focused on mixed-integer nonlinear programs (MINLPs) and mixed-integer linear programs (MILPs), especially those containing a great deal of symmetry.
Ligand-protein docking using a quantum stochastic tunneling optimization method.
Mancera, Ricardo L; Källblad, Per; Todorov, Nikolay P
2004-04-30
A novel hybrid optimization method called quantum stochastic tunneling has recently been introduced. Here, we report its implementation within a new docking program called EasyDock and a validation with the CCDC/Astex data set of ligand-protein complexes, using the PLP score to represent the ligand-protein potential energy surface and ScreenScore to score the ligand-protein binding energies. When taking the top energy-ranked ligand binding pose, we were able to predict the correct crystallographic ligand binding mode in up to 75% of the cases. By using this novel optimization method, run times for typical docking simulations are significantly shortened. Copyright 2004 Wiley Periodicals, Inc. J Comput Chem 25: 858-864, 2004
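For background, classical stochastic tunneling (the idea the paper's quantum variant builds on) transforms the objective as f_stun = 1 - exp(-gamma * (f - f_best)), flattening all barriers above the best value found so far. The sketch below applies plain Metropolis sampling on the transformed surface to a toy multimodal function; it is not the paper's quantum method, and the function and parameters are invented.

```python
import math
import random

def f(x):
    """Toy rugged 1-D landscape with several local minima."""
    return (x * x - 1.0) ** 2 + 0.3 * math.sin(8 * x)

def tunnel_minimize(x0, gamma=5.0, temp=0.2, steps=20000, seed=0):
    """Metropolis random walk on the stochastic-tunneling transform of f."""
    rng = random.Random(seed)
    x, best_x, best_f = x0, x0, f(x0)
    for _ in range(steps):
        y = x + rng.gauss(0, 0.2)
        fy = f(y)
        if fy < best_f:                      # track the best point ever proposed
            best_x, best_f = y, fy
        # Metropolis acceptance test on the tunneled (barrier-flattened) surface
        stun_x = 1 - math.exp(-gamma * (f(x) - best_f))
        stun_y = 1 - math.exp(-gamma * (fy - best_f))
        if stun_y <= stun_x or rng.random() < math.exp((stun_x - stun_y) / temp):
            x = y
    return best_x, best_f
```

Because every barrier above `best_f` maps into the bounded band (0, 1], the walker keeps a usable acceptance rate even late in the run, which is what lets tunneling escape local minima that trap plain simulated annealing.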
Computational model for simulation small testing launcher, technical solution
NASA Astrophysics Data System (ADS)
Chelaru, Teodor-Viorel; Cristian, Barbu; Chelaru, Adrian
2014-12-01
The purpose of this paper is to present some aspects of the computational model and technical solutions for a multistage suborbital launcher for testing (SLT), used to test spatial equipment and scientific measurements. The computational model consists of numerical simulation of the SLT evolution for different start conditions. The launcher model presented has six degrees of freedom (6DOF) and variable mass. The results analysed are the flight parameters and ballistic performances. The discussion focuses on the technical possibility of realizing a small multi-stage launcher by recycling military rocket motors. From a technical point of view, the paper is focused on the national project "Suborbital Launcher for Testing" (SLT), which is based on hybrid propulsion and control systems obtained through an original design. Therefore, while classical suborbital sounding rockets are unguided and use solid-fuel motors in an uncontrolled ballistic flight, the SLT project introduces a different approach by proposing a guided suborbital launcher, which is basically a satellite launcher at a smaller scale, containing its main subsystems. The project itself can thus be considered an intermediary step in the development of a wider range of launching systems based on hybrid propulsion technology, which may have a major impact on future European launcher programs. The SLT project has two major objectives: a short-term objective, obtaining a suborbital launching system able to go into service in a predictable period of time, and a long-term objective, the development and testing of some unconventional subsystems to be integrated later in the satellite launcher as part of the European space program.
This is why the technical content of the project must be carried out beyond the range of the existing suborbital vehicle programs towards the current technological necessities in the space field, especially the European one.
Liu, Jianfeng; Laird, Carl Damon
2017-09-22
Optimal design of a gas detection system is challenging because of the numerous sources of uncertainty, including weather and environmental conditions, leak location and characteristics, and process conditions. Rigorous CFD simulations of dispersion scenarios combined with stochastic programming techniques have been successfully applied to the problem of optimal gas detector placement; however, rigorous treatment of sensor failure and nonuniform unavailability has received less attention. To improve reliability of the design, this paper proposes a problem formulation that explicitly considers nonuniform unavailabilities and all backup detection levels. The resulting sensor placement problem is a large-scale mixed-integer nonlinear programming (MINLP) problem that requires a tailored approach for efficient solution. We have developed a multitree method that iteratively solves a sequence of upper-bounding master problems and lower-bounding subproblems. The tailored global solution strategy is tested on a real data problem, and the encouraging numerical results indicate that our solution framework is promising for solving sensor placement problems. This study was selected for the special issue in JLPPI from the 2016 International Symposium of the MKO Process Safety Center.
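The upper/lower bounding iteration described above can be sketched in miniature. The scenario matrix, the additive optimistic bound, and the no-good cuts below are illustrative stand-ins, not the authors' MINLP formulation:

```python
# Hypothetical multitree-style bounding loop for sensor placement (sketch only).
from itertools import combinations

# scenarios[s][c] = 1 if candidate location c detects leak scenario s (toy data)
scenarios = [
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 1, 0, 0],
]
n_candidates, k = 4, 2

def true_coverage(placement):
    """Fraction of scenarios detected by at least one chosen sensor."""
    hit = sum(any(scenarios[s][c] for c in placement) for s in range(len(scenarios)))
    return hit / len(scenarios)

# Optimistic additive bound: sum of individual coverages (over-counts overlaps).
single = [sum(row[c] for row in scenarios) / len(scenarios) for c in range(n_candidates)]

excluded, best_lb, best_placement = set(), -1.0, None
while True:
    # "Master": pick the placement with the best optimistic bound not yet cut off.
    master = max(
        (p for p in combinations(range(n_candidates), k) if p not in excluded),
        key=lambda p: sum(single[c] for c in p), default=None)
    if master is None:
        break
    ub = min(1.0, sum(single[c] for c in master))
    if ub <= best_lb:          # bound cannot beat the incumbent: stop
        break
    # "Subproblem": evaluate the true expected coverage (a valid lower bound).
    lb = true_coverage(master)
    if lb > best_lb:
        best_lb, best_placement = lb, master
    excluded.add(master)       # no-good cut on this integer solution
print(best_placement, best_lb)   # → (0, 2) 1.0
```

The real method replaces the enumeration-based master with a MINLP relaxation and the no-good cuts with proper optimality cuts, but the alternation of bounds is the same.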
A Two-Stage Stochastic Mixed-Integer Programming Approach to the Smart House Scheduling Problem
NASA Astrophysics Data System (ADS)
Ozoe, Shunsuke; Tanaka, Yoichi; Fukushima, Masao
A “Smart House” is a highly energy-optimized house equipped with photovoltaic systems (PV systems), electric battery systems, fuel cell cogeneration systems (FC systems), electric vehicles (EVs) and so on. Smart houses have recently attracted much attention thanks to their enhanced ability to save energy by making full use of renewable energy and to their contribution to power grid stability despite the increased penetration of installed PV systems. Yet running a smart house's power system, with its multiple power sources and storage devices, is no simple task. In this paper, we consider the problem of power scheduling for a smart house with a PV system, an FC system and an EV. We formulate the problem as a mixed integer programming problem, and then extend it to a stochastic programming problem involving recourse costs to cope with uncertain electricity demand, heat demand and PV power generation. Using our method, we seek the optimal power schedule that minimizes the expected operation cost. We present some results of numerical experiments with data on real-life demands and PV power generation to show the effectiveness of our method.
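The two-stage structure (commit equipment first, buy recourse power after uncertainty resolves) can be illustrated with a toy model; the scenario data, fuel-cell block size, and prices are invented, and the tiny integer first stage is solved by enumeration rather than by a MIP solver:

```python
# Toy two-stage stochastic program with recourse (illustrative numbers).
scenarios = [  # (probability, electricity demand in kWh, PV output in kWh)
    (0.3, 10.0, 6.0),
    (0.5, 12.0, 4.0),
    (0.2, 15.0, 2.0),
]
FC_OUTPUT, FC_COST = 5.0, 3.0   # fuel-cell block size (kWh) and commitment cost
GRID_PRICE = 2.0                # per-kWh recourse cost of buying from the grid

def expected_cost(n_fc_units):
    """First-stage cost of committing n fuel-cell units plus expected recourse."""
    first = FC_COST * n_fc_units
    recourse = 0.0
    for prob, demand, pv in scenarios:
        shortfall = max(0.0, demand - pv - FC_OUTPUT * n_fc_units)
        recourse += prob * GRID_PRICE * shortfall  # second-stage grid purchase
    return first + recourse

best = min(range(4), key=expected_cost)   # enumerate the small integer first stage
print(best, expected_cost(best))          # → 2 7.2 (approximately)
```

Over-committing wastes first-stage cost, under-committing pays expensive recourse in the bad scenarios; the optimum balances the two in expectation.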
An Investigation on Computer-Adaptive Multistage Testing Panels for Multidimensional Assessment
ERIC Educational Resources Information Center
Wang, Xinrui
2013-01-01
Computer-adaptive multistage testing (ca-MST) has been developed as an alternative to computerized adaptive testing (CAT) and has been increasingly adopted in large-scale assessments. Current research and practice focus only on ca-MST panels for credentialing purposes. The ca-MST test mode, therefore, is designed to gauge a single scale. The…
The Empirical Selection of Anchor Items Using a Multistage Approach
ERIC Educational Resources Information Center
Craig, Brandon
2017-01-01
The purpose of this study was to determine if using a multistage approach for the empirical selection of anchor items would lead to more accurate DIF detection rates than the anchor selection methods proposed by Kopf, Zeileis, & Strobl (2015b). A simulation study was conducted in which the sample size, percentage of DIF, and balance of DIF…
ERIC Educational Resources Information Center
Wang, Keyin
2017-01-01
The comparison of item-level computerized adaptive testing (CAT) and multistage adaptive testing (MST) has been researched extensively (e.g., Kim & Plake, 1993; Luecht et al., 1996; Patsula, 1999; Jodoin, 2003; Hambleton & Xing, 2006; Keng, 2008; Zheng, 2012). Various CAT and MST designs have been investigated and compared under the same…
"MSTGen": Simulated Data Generator for Multistage Testing
ERIC Educational Resources Information Center
Han, Kyung T.
2013-01-01
Multistage testing, or MST, was developed as an alternative to computerized adaptive testing (CAT) for applications in which it is preferable to administer a test at the level of item sets (i.e., modules). As with CAT, the simulation technique in MST plays a critical role in the development and maintenance of tests. "MSTGen," a new MST…
A Comparison of IRT Proficiency Estimation Methods under Adaptive Multistage Testing
ERIC Educational Resources Information Center
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook
2015-01-01
This inquiry is an investigation of item response theory (IRT) proficiency estimators' accuracy under multistage testing (MST). We chose a two-stage MST design that includes four modules (one at Stage 1, three at Stage 2) and three difficulty paths (low, middle, high). We assembled various two-stage MST panels (i.e., forms) by manipulating two…
Identifying Differential Item Functioning in Multi-Stage Computer Adaptive Testing
ERIC Educational Resources Information Center
Gierl, Mark J.; Lai, Hollis; Li, Johnson
2013-01-01
The purpose of this study is to evaluate the performance of CATSIB (Computer Adaptive Testing-Simultaneous Item Bias Test) for detecting differential item functioning (DIF) when items in the matching and studied subtest are administered adaptively in the context of a realistic multi-stage adaptive test (MST). MST was simulated using a 4-item…
The Effects of Routing and Scoring within a Computer Adaptive Multi-Stage Framework
ERIC Educational Resources Information Center
Dallas, Andrew
2014-01-01
This dissertation examined the overall effects of routing and scoring within a computer adaptive multi-stage framework (ca-MST). Testing in a ca-MST environment has become extremely popular in the testing industry. Testing companies enjoy its efficiency benefits as compared to traditionally linear testing and its quality-control features over…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cortright, Randy; Rozmiarek, Robert; Dally, Brice
2017-08-31
The objective of this project was to develop an improved multistage process for the hydrothermal liquefaction (HTL) of biomass to serve as a new front-end, deconstruction process ideally suited to feed Virent’s well-proven catalytic technology, which is already being scaled up. This process produced water soluble, partially de-oxygenated intermediates that are ideally suited for catalytic finishing to fungible distillate hydrocarbons. Through this project, Virent, with its partners, demonstrated the conversion of pine wood chips to drop-in hydrocarbon distillate fuels using a multi-stage fractional conversion system that is integrated with Virent’s BioForming® process. The majority of work was in the liquefaction task and included temperature scoping, solvent optimization, and separations.
Hypoplastic left heart syndrome - a review of supportive percutaneous treatment.
Moszura, Tomasz; Góreczny, Sebastian; Dryżek, Paweł
2014-01-01
Due to the complex anatomical and haemodynamic consequences of hypoplastic left heart syndrome (HLHS), patients with the condition require multistage surgical and supportive interventional treatment. Percutaneous interventions may be required between each stage of surgical palliation, sometimes simultaneously with surgery as hybrid interventions, or after completion of multistage treatment. Recent advances in the field of interventional cardiology, including new devices and techniques, have significantly contributed to improving results of multistage HLHS palliation. Knowledge of the potential interventional options as well as the limitation of percutaneous interventions will enable the creation of safe and effective treatment protocols in this highly challenging group of patients. In this comprehensive review we discuss the types, goals, and potential complications of transcatheter interventions in patients with HLHS.
Technology programs and related policies - Impacts on communications satellite business ventures
NASA Technical Reports Server (NTRS)
Greenberg, J. S.
1985-01-01
The DOMSAT II stochastic communication satellite business venture financial planning simulation model is described. The specification of business scenarios and the results of several analyses are presented. In particular, the impacts of NASA on-orbit propulsion and power technology programs are described. The effects of insurance rates and self-insurance and of the use of the Space Shuttle and Ariane transportation systems on a typical fixed satellite service business venture are discussed.
Jingjing Liang; Joseph Buongiorno; Robert A. Monserud
2006-01-01
WestProPlus is an add-in program developed to work with Microsoft Excel to simulate the growth and management of all-aged Douglas-fir–western hemlock (Pseudotsuga menziesii (Mirb.) Franco–Tsuga heterophylla (Raf.) Sarg.) stands in Oregon and Washington. Its built-in growth model was calibrated from 2,706 permanent plots in the...
NASA Technical Reports Server (NTRS)
Goad, Clyde C.; Chadwell, C. David
1993-01-01
GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several-meter accuracy. However, the remaining errors are driven by random physical processes whose magnitudes prevent further improvement of the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now completed. It contains a correlated double-difference range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double-differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and passed to GEODYNII as one of its standard data types.
A reference orbit is determined using GEODYNII as a batch least-squares processor and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files along with a control statement file and a satellite identification and mass file are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
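The first-order Gauss-Markov model mentioned for the solar radiation pressure coefficient is easy to state concretely; the time constant and noise level below are arbitrary placeholders, not GEODYNII values:

```python
# Minimal first-order Gauss-Markov (exponentially correlated) sequence,
# the kind of stochastic model used for slowly varying force parameters.
import math, random

def gauss_markov(n, dt, tau, sigma, seed=0):
    """Return n samples of a stationary first-order Gauss-Markov sequence."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)               # correlation decay over one step
    q = sigma * math.sqrt(1.0 - phi * phi)  # driving-noise std keeps Var = sigma^2
    x, out = rng.gauss(0.0, sigma), []
    for _ in range(n):
        out.append(x)
        x = phi * x + rng.gauss(0.0, q)     # x_{k+1} = phi * x_k + w_k
    return out

xs = gauss_markov(n=50_000, dt=60.0, tau=3600.0, sigma=1.0)
var = sum(v * v for v in xs) / len(xs)
print(round(var, 2))   # empirical variance stays near sigma^2 = 1.0
```

As tau grows the model approaches a random walk (the model used here for the tropospheric correction); as tau shrinks it approaches white noise.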
NASA Astrophysics Data System (ADS)
Dang, Haizheng; Tan, Jun; Zha, Rui; Li, Jiaqi; Zhang, Lei; Zhao, Yibo; Gao, Zhiqian; Bao, Dingli; Li, Ning; Zhang, Tao; Zhao, Yongjiang; Zhao, Bangjian
2017-12-01
This paper presents a review of recent advances in single- and multi-stage Stirling-type pulse tube cryocoolers (SPTCs) for space applications developed at the National Laboratory for Infrared Physics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences (NLIP/SITP/CAS). A variety of single-stage SPTCs operating at 25-150 K have been developed, including several mid-sized ones operating at 80-110 K. Significant progress has been achieved in coolers operating at 30-40 K which use common stainless steel meshes as regenerator matrices. Another important advance is the micro SPTCs with an overall mass of 300-800 g operating at high frequencies varying from 100 Hz to 400 Hz. The main purpose of developing two-stage SPTCs is to simultaneously acquire cooling capacities at both stages, obviating the need for auxiliary precooling in various applications. The three-stage SPTCs are developed mainly for applications at around 10 K and are also used for precooling J-T coolers to achieve still lower temperatures. The four-stage SPTCs are developed to directly achieve liquid helium temperature for cooling space low-Tc superconducting devices, as well as for deep space exploration. Several typical development programs are described and an overview of the cooler performances is presented.
Prognosis model for stand development
Albert R. Stage
1973-01-01
Describes a set of computer programs for developing prognoses of the development of existing stands under alternative management regimes. Calibration techniques, modeling procedures, and a procedure for including stochastic variation are described. Implementation of the system for lodgepole pine, including assessment of losses attributed to an infestation of mountain...
Assessing marginal water values in multipurpose multireservoir systems via stochastic programming
NASA Astrophysics Data System (ADS)
Tilmant, A.; Pinte, D.; Goor, Q.
2008-12-01
The International Conference on Water and the Environment held in Dublin in 1992 emphasized the need to consider water as an economic good. Since water markets are usually absent or ineffective, the value of water cannot be directly derived from market activities but must rather be assessed through shadow prices. Economists have developed various valuation techniques to determine the economic value of water, especially to handle allocation issues involving environmental water uses. Most of the nonmarket valuation studies reported in the literature focus on long-run policy problems, such as permanent (re)allocations of water, and assume that the water availability is given. When dealing with short-run allocation problems, water managers are facing complex spatial and temporal trade-offs and must therefore be able to track site and time changes in water values across different hydrologic conditions, especially in arid and semiarid areas where the availability of water is a limiting and stochastic factor. This paper presents a stochastic programming approach for assessing the statistical distribution of marginal water values in multipurpose multireservoir systems where hydropower generation and irrigation crop production are the main economic activities depending on water. In the absence of a water market, the Lagrange multipliers correspond to shadow prices, and the marginal water values are the Lagrange multipliers associated with the mass balance equations of the reservoirs. The methodology is illustrated with a cascade of hydroelectric-irrigation reservoirs in the Euphrates river basin in Turkey and Syria.
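The marginal water value, i.e. the Lagrange multiplier on a reservoir mass-balance constraint, can be illustrated as the sensitivity of the optimal benefit to water availability. The three water uses and their unit benefits below are invented, and a greedy allocation stands in for the full stochastic program:

```python
# Illustrative marginal water value as the derivative of the optimal benefit
# with respect to available water (a stand-in for the shadow price on a
# mass-balance constraint; all numbers are invented, units arbitrary).

uses = [(50.0, 8.0), (30.0, 5.0), (100.0, 2.0)]  # (capacity, benefit per unit)

def optimal_benefit(water):
    """Greedy allocation to the highest-valued uses first (exact for this LP)."""
    total = 0.0
    for cap, price in sorted(uses, key=lambda u: -u[1]):
        q = min(cap, water)
        total += q * price
        water -= q
    return total

# Shadow price via a one-sided finite difference on the value function.
w, eps = 60.0, 1e-6
marginal_value = (optimal_benefit(w + eps) - optimal_benefit(w)) / eps
print(round(marginal_value, 3))   # the marginal unit goes to the $5 use
```

With 60 units available the $8 use is saturated, so one extra unit flows to the $5 use: the marginal water value is 5. Repeating this across hydrologic scenarios yields the statistical distribution of marginal values the paper discusses.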
Cheema, Jitender Jit Singh; Sankpal, Narendra V; Tambe, Sanjeev S; Kulkarni, Bhaskar D
2002-01-01
This article presents two hybrid strategies for the modeling and optimization of the glucose to gluconic acid batch bioprocess. In the hybrid approaches, first a novel artificial intelligence formalism, namely, genetic programming (GP), is used to develop a process model solely from the historic process input-output data. In the next step, the input space of the GP-based model, representing process operating conditions, is optimized using two stochastic optimization (SO) formalisms, viz., genetic algorithms (GAs) and simultaneous perturbation stochastic approximation (SPSA). These SO formalisms possess certain unique advantages over the commonly used gradient-based optimization techniques. The principal advantage of the GP-GA and GP-SPSA hybrid techniques is that process modeling and optimization can be performed exclusively from the process input-output data without invoking the detailed knowledge of the process phenomenology. The GP-GA and GP-SPSA techniques have been employed for modeling and optimization of the glucose to gluconic acid bioprocess, and the optimized process operating conditions obtained thereby have been compared with those obtained using two other hybrid modeling-optimization paradigms integrating artificial neural networks (ANNs) and GA/SPSA formalisms. Finally, the overall optimized operating conditions given by the GP-GA method, when verified experimentally resulted in a significant improvement in the gluconic acid yield. The hybrid strategies presented here are generic in nature and can be employed for modeling and optimization of a wide variety of batch and continuous bioprocesses.
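The SPSA formalism named above estimates a gradient from just two loss evaluations per iteration, regardless of dimension. A minimal sketch on a toy quadratic follows; the gain sequences use the standard decay exponents, and the objective is purely illustrative, not the bioprocess model:

```python
# Bare-bones simultaneous perturbation stochastic approximation (SPSA).
import random

def spsa(loss, theta, iters=2000, a=0.1, c=0.1, seed=1):
    rng = random.Random(seed)
    for k in range(1, iters + 1):
        ak, ck = a / k ** 0.602, c / k ** 0.101        # standard gain decays
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]  # simultaneous perturbation
        plus  = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        g = (loss(plus) - loss(minus)) / (2.0 * ck)    # two evaluations per step
        theta = [t - ak * g / d for t, d in zip(theta, delta)]
    return theta

loss = lambda th: (th[0] - 3.0) ** 2 + (th[1] + 1.0) ** 2
theta = spsa(loss, [0.0, 0.0])
print([round(t, 2) for t in theta])   # converges near the optimum (3, -1)
```

This two-evaluation property is the advantage over gradient-based methods that the abstract alludes to: the loss can be a black box such as a GP or ANN process model.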
2011-09-01
a quality evaluation with limited data, a model-based assessment must be... that affect system performance, a multistage approach to system validation, and a modeling and experimental methodology for efficiently addressing a wide range
ERIC Educational Resources Information Center
Chen, Chih-Hung; Liu, Guan-Zhi; Hwang, Gwo-Jen
2016-01-01
In this study, an integrated gaming and multistage guiding approach was proposed for conducting in-field mobile learning activities. A mobile learning system was developed based on the proposed approach. To investigate the interaction between the gaming and guiding strategies on students' learning performance and motivation, a 2 × 2 experiment was…
Rapier, Pascal M.
1982-01-01
A multi-stage flash degaser (18) is incorporated in an energy conversion system (10) having a direct-contact, binary-fluid heat exchanger to remove essentially all of the noncondensable gases from geothermal brine ahead of the direct-contact binary-fluid heat exchanger (22) in order that the heat exchanger (22) and a turbine (48) and condenser (32) of the system (10) can operate at optimal efficiency.
Architecture of the parallel hierarchical network for fast image recognition
NASA Astrophysics Data System (ADS)
Timchenko, Leonid; Wójcik, Waldemar; Kokriatskaia, Natalia; Kutaev, Yuriy; Ivasyuk, Igor; Kotyra, Andrzej; Smailova, Saule
2016-09-01
Multistage integration of visual information in the brain allows humans to respond quickly to the most significant stimuli while maintaining the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing includes the main types of cortical multistage convergence. The input images are mapped into a flexible hierarchy that reflects the complexity of the image data. Procedures of temporal image decomposition and hierarchy formation are described in mathematical expressions. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates structure on different hierarchical levels. At each processing stage a single output result is computed to allow a quick response of the system. The result is presented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match. The idea of the forecasting method is as follows: in the results synchronization block, network-processed data arrive at the database, where a sample of the most correlated data is drawn using service parameters of the parallel-hierarchical network.
Zhang, Qibin; Petyuk, Vladislav A.; Schepmoes, Athena A.; Orton, Daniel J.; Monroe, Matthew E.; Yang, Feng; Smith, Richard D.; Metz, Thomas O.
2009-01-01
Non-enzymatic glycation of tissue proteins has important implications in the development of complications of diabetes mellitus. While electron transfer dissociation (ETD) has been shown to outperform collision-induced dissociation (CID) in sequencing glycated peptides by tandem mass spectrometry, ETD instrumentation is not yet widely available and often suffers from significantly lower sensitivity than CID. In this study, we evaluated different advanced CID techniques (i.e., neutral-loss-triggered MS3 and multi-stage activation) during liquid chromatography/multi-stage mass spectrometric (LC/MSn) analyses of Amadori-modified peptides enriched from human serum glycated in vitro. During neutral-loss-triggered MS3 experiments, MS3 scans triggered by neutral losses of 3 H2O or 3 H2O + HCHO produced similar results in terms of glycated peptide identifications. However, neutral losses of 3 H2O resulted in significantly more glycated peptide identifications during multi-stage activation experiments. Overall, the multi-stage activation approach produced more glycated peptide identifications, while the neutral-loss-triggered MS3 approach resulted in much higher specificity. Both techniques are viable alternatives to ETD for identifying glycated peptides. PMID:18763275
Process simulation and dynamic control for marine oily wastewater treatment using UV irradiation.
Jing, Liang; Chen, Bing; Zhang, Baiyu; Li, Pu
2015-09-15
UV irradiation and advanced oxidation processes have been recently regarded as promising solutions for removing polycyclic aromatic hydrocarbons (PAHs) from marine oily wastewater. However, such treatment methods are generally not sufficiently understood in terms of reaction mechanisms, process simulation and process control. These deficiencies can drastically hinder their application in the shipping and offshore petroleum industries, which produce bilge/ballast water and produced water as the main streams of marine oily wastewater. In this study, a factorial design of experiment was carried out to investigate the degradation mechanism of a typical PAH, namely naphthalene, under UV irradiation in seawater. Based on the experimental results, a three-layer feed-forward artificial neural network simulation model was developed to simulate the treatment process and to forecast the removal performance. A simulation-based dynamic mixed integer nonlinear programming (SDMINP) approach was then proposed to intelligently control the treatment process by integrating the developed simulation model, genetic algorithm and multi-stage programming. The applicability and effectiveness of the developed approach were further tested through a case study. The experimental results showed that the influences of fluence rate and temperature on the removal of naphthalene were greater than those of salinity and initial concentration. The developed simulation model could well predict the UV-induced removal process under varying conditions. The case study suggested that the SDMINP approach, with the aid of the multi-stage control strategy, was able to significantly reduce treatment cost compared to traditional single-stage process optimization. The developed approach and its concept/framework have high potential applicability in other environmental fields where a treatment process is involved and experimentation and modeling are used for process simulation and control.
Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Prahst, Patricia S.; Kulkarni, Sameer; Sohn, Ki H.
2015-01-01
NASA's Environmentally Responsible Aviation (ERA) Program calls for investigation of the technology barriers associated with improved fuel efficiency of large gas turbine engines. Under ERA, the task for a High Pressure Ratio Core Technology program calls for a higher overall pressure ratio of 60 to 70. This means that the HPC would have to almost double in pressure ratio while keeping its high level of efficiency. The challenge is how to match the corrected mass flow rate of the front two supersonic, high-reaction, high corrected tip speed stages with a total pressure ratio of 3.5. NASA and GE teamed to address this challenge by using the initial geometry of an advanced GE compressor design to meet the requirements of the first 2 stages of the very high pressure ratio core compressor. The rig was configured to run as a 2-stage machine: the strut and IGV, Rotor 1, and Stator 1 were first run as independent tests, after which the second stage was added. The goal is to fully understand the stage performances under isolated and multi-stage conditions, to fully understand any differences, and to provide a detailed aerodynamic data set for CFD validation. Full use was made of steady and unsteady measurement methods to isolate fluid dynamic loss source mechanisms due to interaction and endwalls. The paper presents the description of the compressor test article, its predicted performance and operability, and the experimental results for both the single-stage and two-stage configurations. We focus the detailed measurements on 97 and 100 percent of design speed at 3 vane setting angles.
STGSTK- PREDICTING MULTISTAGE AXIAL-FLOW COMPRESSOR PERFORMANCE BY A MEANLINE STAGE-STACKING METHOD
NASA Technical Reports Server (NTRS)
Steinke, R. J.
1994-01-01
The STGSTK computer program was developed for predicting the off-design performance of multistage axial-flow compressors. The axial-flow compressor is widely used in aircraft engines. In addition to its inherent advantage of high mass flow per frontal area, it can exhibit very good aerodynamic performance. However, good aerodynamic performance over an acceptable range of operating conditions is not easily attained. STGSTK provides an analytical tool for the development of new compressor designs. The simplicity of a one-dimensional compressible flow model enables the stage-stacking method used in STGSTK to have excellent convergence properties and short computer run times. Also, the simplicity of the model makes STGSTK a manageable code that eases the incorporation, or modification, of empirical correlations directly linked to test data. Thus, the user can adapt the code to meet varying design needs. STGSTK uses a meanline stage-stacking method to predict off-design performance. Stage and cumulative compressor performance is calculated from representative meanline velocity diagrams located at rotor inlet and outlet meanline radii. STGSTK includes options for the following: 1) non-dimensional stage characteristics may be input directly or calculated from stage design performance input, 2) stage characteristics may be modified for off-design speed and blade reset, and 3) rotor design deviation angle may be modified for off-design flow, speed, and blade setting angle. Many of the code's options use correlations that are normally obtained from experimental data. The STGSTK user may modify these correlations as needed. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 85K of 8 bit bytes. STGSTK was developed in 1982.
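The stage-stacking idea, cumulative performance built up stage by stage from meanline stage characteristics, can be reduced to a few lines; the stage pressure ratios and efficiencies below are invented, not STGSTK correlations:

```python
# One-dimensional stage stacking: overall pressure ratio is the product of
# stage pressure ratios; temperature rises accumulate stage by stage.
stages = [  # (pressure ratio, isentropic efficiency) at one operating point
    (1.35, 0.88),
    (1.32, 0.87),
    (1.28, 0.86),
]
GAMMA, T_INLET = 1.4, 288.15   # air, sea-level standard inlet temperature in K

overall_pr, t = 1.0, T_INLET
for pr, eta in stages:
    overall_pr *= pr
    # Ideal temperature rise for this stage, then actual rise via efficiency.
    dt_ideal = t * (pr ** ((GAMMA - 1.0) / GAMMA) - 1.0)
    t += dt_ideal / eta
print(round(overall_pr, 3), round(t - T_INLET, 1))
```

Because each stage sees the exit temperature of the previous one, rear stages need more work for the same pressure ratio; this coupling is what the stacking loop captures that a single lumped calculation would miss.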
CADNA_C: A version of CADNA for use with C or C++ programs
NASA Astrophysics Data System (ADS)
Lamotte, Jean-Luc; Chesneaux, Jean-Marie; Jézéquel, Fabienne
2010-11-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. The CADNA_C version enables this estimation in C or C++ programs, while the previous version had been developed for Fortran programs. The CADNA_C version has the same features as the previous one: with CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output.
New version program summary
Program title: CADNA_C
Catalogue identifier: AEGQ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGQ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 60 075
No. of bytes in distributed program, including test data, etc.: 710 781
Distribution format: tar.gz
Programming language: C++
Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 6.5
Catalogue identifier of previous version: AEAT_v1_0
Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 933
Does the new version supersede the previous version?: No
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: The previous version (AEAT_v1_0) enables the estimation of round-off error propagation in Fortran programs [2]. The new version has been developed to enable this estimation in C or C++ programs. Summary of revisions: The CADNA_C source code consists of one assembly language file (cadna_rounding.s) and twenty-three C++ language files (including three header files). cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the C++ compiler used. This assembly file contains routines which are frequently called in the CADNA_C C++ files to change the rounding mode. The C++ language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA_C specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. As a remark, on 64-bit processors, the mathematical library associated with the GNU C++ compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, which the random rounding mode is based on. Therefore, if CADNA_C is used on a 64-bit processor with the GNU C++ compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. 
It must be pointed out that the knowledge of the accuracy of the argument of a mathematical function is never lost.
Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf and a reference guide named ref_cadna.pdf. The user guide shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The reference guide briefly describes each function of the library. The source code (which consists of C++ and assembly files) is located in the src directory. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
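The random rounding idea behind Discrete Stochastic Arithmetic can be illustrated outside CADNA itself. The sketch below is plain Python with invented helper names; CADNA's own stochastic types instrument every floating-point operation far more carefully. It perturbs each partial result by at most one unit in the last place, runs the computation several times, and estimates how many decimal digits the runs share:

```python
import math
import random

random.seed(1)

def rand_round(x):
    """Emulate a random rounding mode: perturb x by at most one unit in
    the last place (a crude stand-in for CADNA's stochastic types)."""
    return x + random.choice((-1.0, 0.0, 1.0)) * math.ulp(x)

def unstable_sum(n):
    """Harmonic sum computed under the emulated random rounding."""
    s = 0.0
    for i in range(1, n + 1):
        s = rand_round(s + rand_round(1.0 / i))
    return s

def exact_digits(samples):
    """CESTAC-style estimate: decimal digits shared by all randomly rounded runs."""
    mean = sum(samples) / len(samples)
    spread = max(samples) - min(samples)
    if spread == 0.0:
        return 15  # runs agree to full double precision
    return min(15, max(0, int(math.log10(abs(mean) / spread))))

samples = [unstable_sum(10000) for _ in range(3)]
print(exact_digits(samples))  # number of decimal digits common to the runs
```

In CADNA proper, this per-operation perturbation and the significant-digit estimate are carried inside the overloaded stochastic types, so user code changes are limited to declarations and I/O.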
ERIC Educational Resources Information Center
Gorman, Kyle
2013-01-01
This dissertation outlines a program for the theory of phonotactics--the theory of speakers' knowledge of possible and impossible (or likely and unlikely) words--and argues that the alternative view of phonotactics as stochastic, and of phonotactic learning as probabilistic inference, is not capable of accounting for the facts of this domain.…
A stochastic method to characterize model uncertainty for a Nutrient TMDL
USDA-ARS?s Scientific Manuscript database
The U.S. EPA’s Total Maximum Daily Load (TMDL) program has encountered resistance in its implementation, partly because of its strong dependence on mathematical models to set limitations on the release of impairing substances. The uncertainty associated with predictions of such models is often not s...
A Stochastic Method to Develop Nutrient TMDLs Using SWAT
USDA-ARS?s Scientific Manuscript database
The U.S. EPA’s Total Maximum Daily Load (TMDL) program has encountered hindrances in its implementation partly because of its strong dependence on mathematical models to set limitations on the release of impairing substances. The uncertainty associated with predictions of such models is often not fo...
ISIM3D: AN ANSI-C THREE-DIMENSIONAL MULTIPLE INDICATOR CONDITIONAL SIMULATION PROGRAM
The indicator conditional simulation technique provides stochastic simulations of a variable that (i) honor the initial data and (ii) can feature a richer family of spatial structures not limited by Gaussianity. The data are encoded into a series of indicators which then are used ...
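The indicator-coding step described above can be shown on a toy example (NumPy, with invented sample values and thresholds; the subsequent conditional simulation of the coded indicators is not shown):

```python
import numpy as np

# Hypothetical sample data and threshold set (illustrative values only).
values = np.array([0.2, 1.5, 3.1, 0.9, 2.4])
thresholds = np.array([0.5, 1.0, 2.0, 3.0])

# Indicator coding: i(x; z) = 1 if x <= z else 0, one column per threshold.
# Each row is the coded form of one sample; simulating these binary
# variables per threshold yields a conditional cdf at each location.
indicators = (values[:, None] <= thresholds[None, :]).astype(int)
print(indicators)
# → [[1 1 1 1]
#    [0 0 1 1]
#    [0 0 0 0]
#    [0 1 1 1]
#    [0 0 0 1]]
```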
due to the dangers of utilizing convoy operations. However, enemy actions, austere conditions, and inclement weather pose a significant risk to a...squares temporal differencing for policy evaluation. We construct a representative problem instance based on an austere combat environment in order to
Stator Indexing in Multistage Compressors
NASA Technical Reports Server (NTRS)
Barankiewicz, Wendy S.
1997-01-01
The relative circumferential location of stator rows (stator indexing) is an aspect of multistage compressor design that has not yet been explored for its potential impact on compressor aerodynamic performance. Although the inlet stages of multistage compressors usually have differing stator blade counts, the aft stages of core compressors can often have stage blocks with equal stator blade counts in successive stages. The potential impact of stator indexing is likely greatest in these stages. To assess the performance impact of stator indexing, researchers at the NASA Lewis Research Center used the 4 ft diameter, four-stage NASA Low Speed Axial Compressor for detailed experiments. This compressor has geometrically identical stages that can circumferentially index stator rows relative to each other in a controlled manner; thus it is an ideal test rig for such investigations.
Calculation of recovery plasticity in multistage hot forging under isothermal conditions.
Zhbankov, Iaroslav G; Perig, Alexander V; Aliieva, Leila I
2016-01-01
A widely used method for hot forming steels and alloys, especially heavy forging, is the process of multistage forging with pauses between stages. The well-known effect which accompanies multistage hot forging is metal plasticity recovery in comparison with monotonic deformation. A method which takes into consideration the recovery of plasticity in pauses between hot deformations of a billet under isothermal conditions is proposed. This method allows the prediction of billet forming limits as a function of deformation during the forging stage and the duration of the pause between the stages. This method takes into account the duration of pauses between deformations and the magnitude of subdivided deformations. A hot isothermal upsetting process with pauses was calculated by the proposed method. Results of the calculations have been confirmed with experimental data.
A Q-Learning Approach to Flocking With UAVs in a Stochastic Environment.
Hung, Shao-Ming; Givigi, Sidney N
2017-01-01
In the past two decades, unmanned aerial vehicles (UAVs) have demonstrated their efficacy in supporting both military and civilian applications, where tasks can be dull, dirty, dangerous, or simply too costly with conventional methods. Many of the applications contain tasks that can be executed in parallel, hence the natural progression is to deploy multiple UAVs working together as a force multiplier. However, to do so requires autonomous coordination among the UAVs, similar to swarming behaviors seen in animals and insects. This paper looks at flocking with small fixed-wing UAVs in the context of a model-free reinforcement learning problem. In particular, Peng's Q(λ) with a variable learning rate is employed by the followers to learn a control policy that facilitates flocking in a leader-follower topology. The problem is structured as a Markov decision process, where the agents are modeled as small fixed-wing UAVs that experience stochasticity due to disturbances such as winds and control noises, as well as weight and balance issues. Learned policies are compared to ones solved using stochastic optimal control (i.e., dynamic programming) by evaluating the average cost incurred during flight according to a cost function. Simulation results demonstrate the feasibility of the proposed learning approach at enabling agents to learn how to flock in a leader-follower topology, while operating in a nonstationary stochastic environment.
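Peng's Q(λ) with eligibility traces is more involved than can be shown briefly, but the core model-free update the followers rely on is ordinary Q-learning. A minimal tabular sketch on an invented noisy chain environment (all names, rewards, and parameters are illustrative, not from the paper):

```python
import random

random.seed(0)
n_states, n_actions = 5, 2
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(s, a):
    """Toy stochastic environment: action 1 tends to move right (goal = state 4),
    but disturbances flip the intended move 20% of the time."""
    drift = 1 if a == 1 else -1
    if random.random() < 0.2:
        drift = -drift
    s2 = min(n_states - 1, max(0, s + drift))
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for episode in range(2000):
    s = 0
    for _ in range(50):
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        # Q-learning temporal-difference update
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if s == n_states - 1:
            break

# Greedy policy per state after learning
print([max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states)])
```

Eligibility traces (the λ part) propagate each TD error to recently visited state-action pairs, which speeds learning in exactly the kind of noisy, delayed-reward setting the flocking problem presents.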
Overview of Reclamation's geothermal program in Imperial Valley, California
NASA Technical Reports Server (NTRS)
Fulcher, M. K.
1974-01-01
The Bureau of Reclamation is presently involved in a unique Geothermal Resource Development Program in Imperial Valley, California. The main purpose of the investigations is to determine the feasibility of providing a source of fresh water through desalting geothermal fluids stored in the aquifers underlying the valley. Significant progress in this research and development stage to date includes extensive geophysical investigations and the drilling of five geothermal wells on the Mesa anomaly. Four of the wells are for production and monitoring the anomaly, and one will be used for reinjection of waste brines from the desalting units. Two desalting units, a multistage flash unit and a vertical tube evaporator unit, have been erected at the East Mesa test site. The units have been operated on shakedown and continuous runs and have produced substantial quantities of high-quality water.
United States Air Force Summer Faculty Research Program (1987). Program Technical Report. Volume 2.
1987-12-01
the area of statistical inference, distribution theory and stochastic processes. I have taught courses in random processes and sample functions... controlled phase separation of isotropic, binary mixtures; the theory of spinodal decomposition has been developed by Cahn and Hilliard. This theory is... peak and its initial rate of growth at a given temperature are predicted by the spinodal theory. The angle of maximum intensity is then determined by
NASA Astrophysics Data System (ADS)
Tubman, Norm; Whaley, Birgitta
The development of exponential scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently, and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground state and excited state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.
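The heat-bath-style selection criterion, adding a determinant when its coupling to the current wave function exceeds a threshold, can be sketched on a toy matrix (NumPy; the "Hamiltonian", space, and threshold below are invented stand-ins, not a real electronic-structure calculation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy symmetric "Hamiltonian" over 12 determinants (invented numbers only).
n = 12
H = 0.1 * rng.normal(size=(n, n))
H = (H + H.T) / 2.0
np.fill_diagonal(H, np.arange(n, dtype=float))   # make determinant 0 lowest

# Diagonalize in the current (small) variational space.
space = [0, 1]
w, v = np.linalg.eigh(H[np.ix_(space, space)])
c = v[:, 0]                                      # ground-state coefficients

# Heat-bath-style selection: keep determinant a if max_i |H[a,i] * c_i| > eps.
eps = 0.01
candidates = [a for a in range(n) if a not in space]
scores = {a: max(abs(H[a, i] * c[k]) for k, i in enumerate(space))
          for a in candidates}
selected = [a for a in candidates if scores[a] > eps]
print(len(space) + len(selected))   # size of the enlarged determinant space
```

Iterating this select-then-diagonalize loop, with the coefficients of the enlarged space feeding the next selection, is the deterministic search over determinant space that the abstract contrasts with stochastic sampling.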
Using Markov Models of Fault Growth Physics and Environmental Stresses to Optimize Control Actions
NASA Technical Reports Server (NTRS)
Bole, Brian; Goebel, Kai; Vachtsevanos, George
2012-01-01
A generalized Markov chain representation of fault dynamics is presented for the case that available modeling of fault growth physics and future environmental stresses can be represented by two independent stochastic process models. A contrived but representatively challenging example will be presented and analyzed, in which uncertainty in the modeling of fault growth physics is represented by a uniformly distributed dice throwing process, and a discrete random walk is used to represent uncertain modeling of future exogenous loading demands to be placed on the system. A finite horizon dynamic programming algorithm is used to solve for an optimal control policy over a finite time window for the case that stochastic models representing physics of failure and future environmental stresses are known, and the states of both stochastic processes are observable by implemented control routines. The fundamental limitations of optimization performed in the presence of uncertain modeling information are examined by comparing the outcomes obtained from simulations of an optimizing control policy with the outcomes that would be achievable if all modeling uncertainties were removed from the system.
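The backward-induction step of such a finite-horizon dynamic program can be sketched on an invented fault-growth toy model (the states, actions, rewards, and dice-like transition probabilities below are illustrative, not those of the paper):

```python
# Invented discretization: fault size 0..4 (4 = failed), 6 decision epochs.
# Action 0 = aggressive use (reward 2, dice-like growth 0-2),
# action 1 = derated use (reward 1, growth 0-1).
N_SIZES, HORIZON, FAILED = 5, 6, 4

def transitions(s, a):
    """Return a list of (probability, next_state, reward) triples."""
    if s == FAILED:
        return [(1.0, FAILED, 0.0)]          # absorbing failure, no reward
    if a == 0:
        return [(1.0 / 3.0, min(FAILED, s + g), 2.0) for g in (0, 1, 2)]
    return [(0.5, min(FAILED, s + g), 1.0) for g in (0, 1)]

# Backward induction over the finite horizon: V[t][s] = max_a E[r + V[t+1][s']]
V = [[0.0] * N_SIZES for _ in range(HORIZON + 1)]
policy = [[0] * N_SIZES for _ in range(HORIZON)]
for t in reversed(range(HORIZON)):
    for s in range(N_SIZES):
        vals = [sum(p * (r + V[t + 1][s2]) for p, s2, r in transitions(s, a))
                for a in (0, 1)]
        policy[t][s] = int(vals[1] > vals[0])
        V[t][s] = max(vals)

print(V[0][0], policy[0])   # value at a new fault, and the first-epoch policy
```

The paper's setting composes two such observable stochastic processes (fault physics and exogenous load); the backward recursion is the same, with the state augmented accordingly.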
Hypothesis testing of scientific Monte Carlo calculations.
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
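A minimal example of the idea: treat a Monte Carlo estimator with a known expectation as the null hypothesis and apply a z-test (plain Python; the π estimator and the 3σ threshold are illustrative choices, not the paper's test suite):

```python
import math
import random

random.seed(42)

def mc_pi(n):
    """Monte Carlo estimate of pi with a binomial standard-error estimate."""
    hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
               for _ in range(n))
    p = hits / n
    est = 4.0 * p
    se = 4.0 * math.sqrt(p * (1.0 - p) / n)
    return est, se

est, se = mc_pi(100000)
z = (est - math.pi) / se
# Two-sided test at roughly the 0.3% level (|z| < 3): the test fails only
# on a real bug or on a ~1-in-370 statistical fluctuation.
print(abs(z) < 3.0)
```

Because the pass criterion is statistical rather than bitwise, such a test stays valid when the random number stream, parallel schedule, or compiler changes, which is exactly where bitwise regression tests of stochastic codes break down.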
Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise
NASA Astrophysics Data System (ADS)
Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej
2010-11-01
Simulation models are used for decision support and learning in enterprises and in schools. Three cases of successful applications demonstrate the usefulness of weak anticipative information. Job-shop scheduling with a makespan criterion presents a real case of customized flexible furniture production optimization; the genetic algorithm for job-shop scheduling optimization is presented. Simulation-based inventory control describes inventory optimization for products with stochastic lead time and demand, where dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of decision-making information based on simulation is also discussed. All three cases are examined from the optimization, modeling, and learning points of view.
GPU Computing in Bayesian Inference of Realized Stochastic Volatility Model
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2015-01-01
The realized stochastic volatility (RSV) model, which utilizes the realized volatility as additional information, has been proposed to infer the volatility of financial time series. We consider the Bayesian inference of the RSV model by the Hybrid Monte Carlo (HMC) algorithm. The HMC algorithm can be parallelized and thus performed on the GPU for speedup. The GPU code is developed with CUDA Fortran. We compare the computational time of the HMC algorithm on a GPU (GTX 760) and a CPU (Intel i7-4770, 3.4 GHz) and find that the GPU can be up to 17 times faster than the CPU. We also code the program with OpenACC and find that appropriate coding can achieve a speedup similar to that of CUDA Fortran.
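HMC itself is compact enough to sketch. A minimal single-chain sampler for a standard normal target (plain Python rather than CUDA Fortran; the step size and trajectory length are invented tuning choices):

```python
import math
import random

random.seed(7)

def grad_U(q):
    """Gradient of the potential U(q) = q^2/2, i.e. a standard normal target."""
    return q

def leapfrog(q, p, eps, steps):
    """Leapfrog integration of Hamiltonian dynamics."""
    p -= 0.5 * eps * grad_U(q)
    for _ in range(steps - 1):
        q += eps * p
        p -= eps * grad_U(q)
    q += eps * p
    p -= 0.5 * eps * grad_U(q)
    return q, p

def hmc(n, eps=0.2, steps=10):
    q, out = 0.0, []
    for _ in range(n):
        p = random.gauss(0.0, 1.0)                       # resample momentum
        q2, p2 = leapfrog(q, p, eps, steps)
        dH = (q2 * q2 + p2 * p2 - q * q - p * p) / 2.0   # H = U + K
        if random.random() < math.exp(min(0.0, -dH)):    # Metropolis accept
            q = q2
        out.append(q)
    return out

samples = hmc(20000)
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples) - mean * mean
print(round(mean, 2), round(var, 2))
```

The momentum refresh and leapfrog trajectories are independent across chains, which is why the algorithm parallelizes well on a GPU as the abstract reports.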
New control concepts for uncertain water resources systems: 1. Theory
NASA Astrophysics Data System (ADS)
Georgakakos, Aris P.; Yao, Huaming
1993-06-01
A major complicating factor in water resources systems management is handling unknown inputs. Stochastic optimization provides a sound mathematical framework but requires that enough data exist to develop statistical input representations. In cases where data records are insufficient (e.g., extreme events) or atypical of future input realizations, stochastic methods are inadequate. This article presents a control approach where input variables are only expected to belong in certain sets. The objective is to determine sets of admissible control actions guaranteeing that the system will remain within desirable bounds. The solution is based on dynamic programming and derived for the case where all sets are convex polyhedra. A companion paper (Yao and Georgakakos, this issue) addresses specific applications and problems in relation to reservoir system management.
Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.
Ebert, M
1997-12-01
This is the final article in a three part examination of optimisation in radiotherapy. Previous articles have established the bases and form of the radiotherapy optimisation problem, and examined certain types of optimisation algorithm, namely, those which perform some form of ordered search of the solution space (mathematical programming), and those which attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms which search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
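Simulated annealing is the canonical example of such a stochastic search whose sampling distribution is gradually constricted. A minimal sketch on an invented one-dimensional surrogate cost (not a real treatment-planning objective):

```python
import math
import random

random.seed(3)

def cost(x):
    """Toy objective standing in for a plan cost; global minimum near x = 2."""
    return (x - 2.0) ** 2 + 0.3 * math.sin(15.0 * x)

x = 10.0
best = x
T, step = 5.0, 2.0
for k in range(5000):
    cand = x + random.gauss(0.0, step)        # sample a random move
    d = cost(cand) - cost(x)
    if d < 0 or random.random() < math.exp(-d / T):   # Metropolis acceptance
        x = cand
    if cost(x) < cost(best):
        best = x
    T *= 0.999      # cool the temperature ...
    step *= 0.999   # ... and constrict the sampling distribution
print(round(best, 2))
```

Early on, large steps and a high temperature let the search escape local minima; as both shrink, the iterate settles into the neighbourhood of the optimum, which is the convergence behaviour described above.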
Hamilton-Jacobi-Bellman equations and approximate dynamic programming on time scales.
Seiffertt, John; Sanyal, Suman; Wunsch, Donald C
2008-08-01
The time scales calculus is a key emerging area of mathematics due to its potential use in a wide variety of multidisciplinary applications. We extend this calculus to approximate dynamic programming (ADP). The core backward induction algorithm of dynamic programming is extended from its traditional discrete case to all isolated time scales. Hamilton-Jacobi-Bellman equations, the solution of which is the fundamental problem in the field of dynamic programming, are motivated and proven on time scales. By drawing together the calculus of time scales and the applied area of stochastic control via ADP, we have connected two major fields of research.
1988-01-15
However, only very limited experimental data exists to assess the range of validity and to direct the... experimental results of Goldstein et al. [11] and also the Navier-Stokes numerical solutions of Morihara [12]. Diffuser: The predicted stream function... Unsteady Aerodynamic Interactions in a Multistage Compressor............................................................ 53 APPENDIX VI. Experimental
ERIC Educational Resources Information Center
Knechtle, Beat; Duff, Brida; Welzel, Ulrich; Kohler, Gotz
2009-01-01
In the present study, we investigated the association of anthropometric parameters with race performance in ultraendurance runners in a multistage ultraendurance run, in which athletes had to run 338 km within 5 consecutive days. In 17 male successful finishers, calculations of body mass, body height, skinfold thicknesses, extremity circumference,…
Multistage aerospace craft. [perspective drawings of conceptual design
NASA Technical Reports Server (NTRS)
Kelly, D. L. (Inventor)
1973-01-01
A conceptual design of a multi-stage aerospace craft is presented. Two perspective views of the vehicle are developed to show the two component configuration with delta wing, four vertical tail surfaces, tricycle landing gear, and two rocket exhaust nozzles at the rear of the fuselage. Engines for propulsion in the atmosphere are mounted on the fuselage in front of the wing root attachment.
1988-05-01
MEASUREMENTS IN A MULTISTAGE, HIGH SPEED COMPRESSOR, by M. A. Cherrett and J. D. Bryce. SUMMARY: The investigation of unsteady aerodynamic phenomena within high... Authors: Cherrett, M.A.; Bryce, J.D. Date: May 1988
NASA Astrophysics Data System (ADS)
Danilishin, A. M.; Kozhukhov, Y. V.; Neverov, V. V.; Malev, K. G.; Mironov, Y. R.
2017-08-01
The aim of this work is a validation study of the numerical modeling of the characteristics of a multistage centrifugal compressor for natural gas. In the course of the research, the grid interfaces and software systems used were analyzed. The results revealed discrepancies between the simulated and experimental characteristics, and a future work plan was outlined.
NASA Astrophysics Data System (ADS)
Xu, M.; Zhong, L.; Yang, Y.
2017-12-01
Against the background of neotectonics, multistage underground flow systems have formed because the main stream and its tributaries respond differently to crustal uplift. The coupling of these multistage underground flow systems strongly influences karst development. First, the research area is divided into a vadose area, a shunted area, and an exorheic area based on the development characteristics of the transverse valleys. The controlling-drain action is combined with the topographic index to analyze the coupling features of the multistage underground flow systems. Then, based on this coupling, the characteristics of deep karst development are verified by the degree of surface-water loss, water bursting, and the karst development characteristics observed in tunnels. The vadose area is dominated by the regional water system and its deep karst is well developed; this results in large water inflows into tunnels and the drying up of surface water. The shunted area, except for the region near the transverse valleys, is likewise characterized by the regional water system; its well-developed deep karst connects surface water with deep groundwater, causing relatively large water inflows into tunnels and serious leakage of surface water. Near the transverse valleys, which are characterized by local water systems, the deep karst is relatively poorly developed. The exorheic area is dominated by local water systems; its deep karst is poorly developed, as is the connection between surface water and deep groundwater, so surface-water loss during tunnel construction is minor. This study broadens the application field of groundwater flow systems theory, providing a new perspective for the study of karst development theory. It also provides theoretical guidance for hazard assessment and the mitigation of negative environmental effects in deep-buried karst tunnel construction.
Marciscano, Ariel E; Huang, Judy; Tamargo, Rafael J; Hu, Chen; Khattab, Mohamed H; Aggarwal, Sameer; Lim, Michael; Redmond, Kristin J; Rigamonti, Daniele; Kleinberg, Lawrence R
2017-07-01
There is no consensus regarding the optimal management of inoperable high-grade arteriovenous malformations (AVMs). This long-term study of 42 patients with high-grade AVMs reports obliteration and adverse event (AE) rates using planned multistage repeat stereotactic radiosurgery (SRS). The objective was to evaluate the efficacy and safety of multistage SRS, with treatment of the entire AVM nidus at each treatment session, in achieving complete obliteration of high-grade AVMs. Patients with high-grade Spetzler-Martin (S-M) III-V AVMs treated with at least 2 multistage SRS treatments from 1989 to 2013 were included. Clinical outcomes (obliteration rate, minor/major AEs) and treatment characteristics were collected. Forty-two patients met inclusion criteria (n = 26, S-M III; n = 13, S-M IV; n = 3, S-M V), with a median follow-up of 9.5 yr after first SRS. The median number of SRS treatment stages was 2, and the median interval between stages was 3.5 yr. Twenty-two patients underwent pre-SRS embolization. The complete AVM obliteration rate was 38%, and the median time to obliteration was 9.7 yr. On multivariate analysis, higher S-M grade was significantly associated (P = .04) with failure to achieve obliteration. Twenty-seven post-SRS AEs were observed, and the post-SRS intracranial hemorrhage rate was 0.027 events per patient-year. Treatment of high-grade AVMs with multistage SRS achieves AVM obliteration in a meaningful proportion of patients with acceptable AE rates. Lower obliteration rates were associated with higher S-M grade and pre-SRS embolization. This approach should be considered with caution, as partial obliteration does not protect from hemorrhage. Copyright © 2017 by the Congress of Neurological Surgeons
NASA advanced cryocooler technology development program
NASA Astrophysics Data System (ADS)
Coulter, Daniel R.; Ross, Ronald G., Jr.; Boyle, Robert F.; Key, R. W.
2003-03-01
Mechanical cryocoolers represent a significant enabling technology for NASA's Earth and Space Science Enterprises. Over the years, NASA has developed new cryocooler technologies for a wide variety of space missions. Recent achievements include the NCS, AIRS, TES and HIRDLS cryocoolers, and miniature pulse tube coolers at TRW and Lockheed Martin. The largest technology push within NASA right now is in the temperature range of 4 to 10 K. Missions such as the Next Generation Space Telescope (NGST) and Terrestrial Planet Finder (TPF) plan to use infrared detectors operating between 6-8 K, typically arsenic-doped silicon arrays, with IR telescopes from 3 to 6 meters in diameter. Similarly, Constellation-X plans to use X-ray microcalorimeters operating at 50 mK and will require ~6 K cooling to precool its multistage 50 mK magnetic refrigerator. To address cryocooler development for these next-generation missions, NASA has initiated a program referred to as the Advanced Cryocooler Technology Development Program (ACTDP). This paper presents an overview of the ACTDP program including programmatic objectives and timelines, and conceptual details of the cooler concepts under development.
Design, construction and evaluation of a 12.2 GHz, 4.0 kW-CW coupled-cavity traveling wave tube
NASA Technical Reports Server (NTRS)
Ayers, W. R.; Harman, W. A.
1973-01-01
An analytical and experimental program to study design techniques and to utilize these techniques to optimize the performance of an X-band 4 kW, CW traveling wave tube ultimately intended for satellite-borne television broadcast transmitters is described. The design is based on the coupled-cavity slow-wave circuit with velocity resynchronization to maximize the conversion efficiency. The design incorporates a collector which is demountable from the tube. This was done to facilitate multistage depressed collector experiments employing a NASA designed axisymmetric, electrostatic collector for linear beam microwave tubes after shipment of the tubes to NASA.
Preliminary compressor design study for an advanced multistage axial flow compressor
NASA Technical Reports Server (NTRS)
Marman, H. V.; Marchant, R. D.
1976-01-01
An optimum, axial flow, high pressure ratio compressor for a turbofan engine was defined for commercial subsonic transport service starting in the late 1980's. Projected 1985 technologies were used and applied to compressors with an 18:1 pressure ratio having 6 to 12 stages. A matrix of 49 compressors was developed by statistical techniques. The compressors were evaluated by means of computer programs in terms of various airline economic figures of merit such as return on investment and direct-operating cost. The optimum configuration was determined to be a high speed, 8-stage compressor with an average blading aspect ratio of 1.15.
Faultfinder: A diagnostic expert system with graceful degradation for onboard aircraft applications
NASA Technical Reports Server (NTRS)
Abbott, Kathy H.; Schutte, Paul C.; Palmer, Michael T.; Ricks, Wendell R.
1988-01-01
A research effort was conducted to explore the application of artificial intelligence technology to automation of fault monitoring and diagnosis as an aid to the flight crew. Human diagnostic reasoning was analyzed and actual accident and incident cases were reconstructed. Based on this analysis and reconstruction, diagnostic concepts were conceived and implemented for an aircraft's engine and hydraulic subsystems. These concepts are embedded within a multistage approach to diagnosis that reasons about time-based, causal, and qualitative information, and enables a certain amount of graceful degradation. The diagnostic concepts are implemented in a computer program called Faultfinder that serves as a research prototype.
Shot peening for Ti-6Al-4V alloy compressor blades
NASA Technical Reports Server (NTRS)
Carek, Gerald A.
1987-01-01
A test program was conducted to determine the effects of certain shot-peening parameters on the fatigue life of the Ti-6Al-4V alloy, as well as the effect of a demarcation line on a test specimen. This demarcation line, caused by an abrupt change from untreated surface to shot-peened surface, was thought to have caused the failure of several blades in a multistage compressor at the NASA Lewis Research Center. The demarcation line had no detrimental effect upon bending fatigue specimens tested at room temperature. Procedures for shot peening Ti-6Al-4V compressor blades are recommended for future applications.
Multi-stage learning aids applied to hands-on software training.
Rother, Kristian; Rother, Magdalena; Pleus, Alexandra; Upmeier zu Belzen, Annette
2010-11-01
Delivering hands-on tutorials on bioinformatics software and web applications is a challenging didactic scenario. The main reason is that trainees have heterogeneous backgrounds and previous knowledge, and vary in learning speed. In this article, we demonstrate how multi-stage learning aids can be used to allow all trainees to progress at a similar speed. In this technique, the trainees can use cards with hints and answers to guide themselves independently through a complex task. We have successfully conducted a tutorial for the molecular viewer PyMOL using two sets of learning aid cards. The trainees responded positively and were able to complete the task, and the trainer had spare time to respond to individual questions. This encourages us to conclude that multi-stage learning aids overcome many disadvantages of established forms of hands-on software training.
Ma, Xu; Cheng, Yongmei; Hao, Shuai
2016-12-10
Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site by using vision. Diverse terrain surfaces may show similar spectral properties due to the illumination and noise that easily cause poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture feature are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery for the dictionary by using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.
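The low-rank recovery step can be sketched with a standard robust-PCA formulation solved by an inexact augmented Lagrange multiplier iteration (NumPy; the synthetic matrix and parameter choices are illustrative, and this is a generic RPCA sketch, not the authors' exact dictionary-recovery algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "dictionary": low-rank structure plus sparse corruption,
# standing in for stacked color-moment/Gabor feature columns.
m, n, r = 30, 30, 2
L0 = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
S0 = np.zeros((m, n))
mask = rng.random((m, n)) < 0.05
S0[mask] = rng.normal(scale=5.0, size=mask.sum())
D = L0 + S0

def shrink(X, tau):
    """Elementwise soft thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

# Inexact ALM for min ||L||_* + lam * ||S||_1  s.t.  D = L + S.
lam = 1.0 / np.sqrt(max(m, n))
norm_D = np.linalg.norm(D)
mu = 1.25 / np.linalg.norm(D, 2)
rho, mu_bar = 1.5, mu * 1e7
L = np.zeros_like(D)
S = np.zeros_like(D)
Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)
for _ in range(100):
    U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
    L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt   # singular value thresholding
    S = shrink(D - L + Y / mu, lam / mu)          # sparse component update
    Y = Y + mu * (D - L - S)                      # dual (multiplier) ascent
    mu = min(mu * rho, mu_bar)
    if np.linalg.norm(D - L - S) / norm_D < 1e-7:
        break

err = np.linalg.norm(L - L0) / np.linalg.norm(L0)
print(round(err, 4))   # relative recovery error of the low-rank part
```

Cleaning the dictionary this way suppresses illumination and noise outliers before the sparse-representation classification stage operates on it.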
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frolov, S A; Trunov, V I; Pestryakov, Efim V
2013-05-31
We have developed a technique for investigating the evolution of spatial inhomogeneities in high-power laser systems based on multi-stage parametric amplification. A linearised model of the inhomogeneity development is first devised for parametric amplification with the small-scale self-focusing taken into account. It is shown that the application of this model gives results consistent (with high accuracy and in a wide range of inhomogeneity parameters) with the calculation without approximations. Using the linearised model, we have analysed the development of spatial inhomogeneities in a petawatt laser system based on multi-stage parametric amplification, developed at the Institute of Laser Physics, Siberian Branch of the Russian Academy of Sciences (ILP SB RAS). (Control of laser radiation parameters)
Tavakoli, Ali; Nikoo, Mohammad Reza; Kerachian, Reza; Soltani, Maryam
2015-04-01
In this paper, a new fuzzy methodology is developed to optimize water and waste load allocation (WWLA) in rivers under uncertainty. An interactive two-stage stochastic fuzzy programming (ITSFP) method is utilized to handle parameter uncertainties, which are expressed as fuzzy boundary intervals. An iterative linear programming (ILP) is also used for solving the nonlinear optimization model. To accurately consider the impacts of the water and waste load allocation strategies on the river water quality, a calibrated QUAL2Kw model is linked with the WWLA optimization model. The soil, water, atmosphere, and plant (SWAP) simulation model is utilized to determine the quantity and quality of each agricultural return flow. To control pollution loads of agricultural networks, it is assumed that a part of each agricultural return flow can be diverted to an evaporation pond and also another part of it can be stored in a detention pond. In detention ponds, contaminated water is exposed to solar radiation for disinfecting pathogens. Results of applying the proposed methodology to the Dez River system in the southwestern region of Iran illustrate its effectiveness and applicability for water and waste load allocation in rivers. In the planning phase, this methodology can be used for estimating the capacities of return flow diversion system and evaporation and detention ponds.
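The two-stage stochastic structure, a here-and-now decision followed by scenario-dependent recourse, can be sketched with a tiny capacity-planning toy (all numbers invented; the actual ITSFP model adds interval and fuzzy parameters, iterative linear programming, and linked water-quality simulation):

```python
# Tiny two-stage stochastic planning sketch (newsvendor-style recourse):
# first stage, choose treatment capacity; second stage, pay a penalty for
# untreated load under each waste-load scenario.
scenarios = [(0.3, 40.0), (0.5, 60.0), (0.2, 90.0)]  # (probability, load)
build_cost = 1.0      # per unit of capacity (first-stage cost)
penalty = 3.0         # per unit of untreated load (second-stage recourse)

def expected_cost(cap):
    """First-stage cost plus probability-weighted recourse cost."""
    recourse = sum(p * penalty * max(0.0, load - cap) for p, load in scenarios)
    return build_cost * cap + recourse

# Brute-force the here-and-now decision over a capacity grid.
best_cap = min((c / 2.0 for c in range(0, 201)), key=expected_cost)
print(best_cap, round(expected_cost(best_cap), 2))  # → 60.0 78.0
```

Capacity 60 is optimal here because adding capacity pays off only while the penalty-weighted probability of exceeding it (3 × P(load > cap)) exceeds the unit build cost; realistic models replace the grid search with a scenario-expanded linear program.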
Sublinear Upper Bounds for Stochastic Programs with Recourse. Revision.
1987-06-01
approximation procedures for (1.1) generally rely on discretizations of E (Huang, Ziemba, and Ben-Tal (1977), Kall and Stoyan (1982), Birge and Wets ...). ... Wright, Practical Optimization (Academic Press, London and New York, 1981). C.C. Huang, W. Ziemba, and A. Ben-Tal, "Bounds on the expectation of a con...
IEA/Annex II Powder Characterization Cooperative Program
1989-06-01
...Sampling of Particles with a Spinning Riffler. Stochastic Model. Powder Technol., v. 19, 1978, p. 227-233. 6. CHARLIER, R., and GOOSSENS, P. J. D. ...
Reliable models for assessing human exposures are important for understanding health risks from chemicals. The Stochastic Human Exposure and Dose Simulation model for multimedia, multi-route/pathway chemicals (SHEDS-Multimedia), developed by EPA’s Office of Research and Developm...
Studying Turbulence Using Numerical Simulation Databases. Proceedings of the 1987 Summer Program
NASA Technical Reports Server (NTRS)
Moin, Parviz (Editor); Reynolds, William C. (Editor); Kim, John (Editor)
1987-01-01
The focus was on the use of databases obtained from direct numerical simulations of turbulent flows, for study of turbulence physics and modeling. Topics addressed included: stochastic decomposition/chaos/bifurcation; two-point closure (or k-space) modeling; scalar transport/reacting flows; Reynolds stress modeling; and structure of turbulent boundary layers.
Multi-stage depressed collector for small orbit gyrotrons
Singh, Amarjit; Ives, R. Lawrence; Schumacher, Richard V.; Mizuhara, Yosuke M.
1998-01-01
A multi-stage depressed collector for receiving energy from a small orbit gyrating electron beam employs a plurality of electrodes at different potentials for sorting the individual electrons on the basis of their total energy level. Magnetic field generating coils, for producing magnetic fields and magnetic iron for magnetic field shaping produce adiabatic and controlled non-adiabatic transitions of the incident electron beam to further facilitate the sorting.
Multi-stage depressed collector for small orbit gyrotrons
Singh, A.; Ives, R.L.; Schumacher, R.V.; Mizuhara, Y.M.
1998-07-14
A multi-stage depressed collector for receiving energy from a small orbit gyrating electron beam employs a plurality of electrodes at different potentials for sorting the individual electrons on the basis of their total energy level. Magnetic field generating coils, for producing magnetic fields and magnetic iron for magnetic field shaping produce adiabatic and controlled non-adiabatic transitions of the incident electron beam to further facilitate the sorting. 9 figs.
ERIC Educational Resources Information Center
Kim, Sooyeon; Livingston, Samuel A.
2017-01-01
The purpose of this simulation study was to assess the accuracy of a classical test theory (CTT)-based procedure for estimating the alternate-forms reliability of scores on a multistage test (MST) having 3 stages. We generated item difficulty and discrimination parameters for 10 parallel, nonoverlapping forms of the complete 3-stage test and…
The automated multi-stage substructuring system for NASTRAN
NASA Technical Reports Server (NTRS)
Field, E. I.; Herting, D. N.; Herendeen, D. L.; Hoesly, R. L.
1975-01-01
The substructuring capability developed for eventual installation in Level 16 is now operational in a test version of NASTRAN. Its features are summarized. These include the user-oriented, Case Control type control language, the automated multi-stage matrix processing, the independent direct access data storage facilities, and the static and normal modes solution capabilities. A complete problem analysis sequence is presented with card-by-card description of the user input.
Multi-stage, isothermal CO preferential oxidation reactor
Skala, Glenn William; Brundage, Mark A.; Borup, Rodney Lynn; Pettit, William Henry; Stukey, Kevin; Hart-Predmore, David James; Fairchok, Joel
2000-01-01
A multi-stage, isothermal, carbon monoxide preferential oxidation (PrOx) reactor comprising a plurality of serially arranged, catalyzed heat exchangers, each separated from the next by a mixing chamber for homogenizing the gases exiting one heat exchanger and entering the next. In a preferred embodiment, at least some of the air used in the PrOx reaction is injected directly into the mixing chamber between the catalyzed heat exchangers.
ERIC Educational Resources Information Center
Zheng, Yi; Nozawa, Yuki; Gao, Xiaohong; Chang, Hua-Hua
2012-01-01
Multistage adaptive tests (MSTs) have gained increasing popularity in recent years. MST is a balanced compromise between linear test forms (i.e., paper-and-pencil testing and computer-based testing) and traditional item-level computer-adaptive testing (CAT). It combines the advantages of both. On one hand, MST is adaptive (and therefore more…
Anantha M. Prasad; Louis R. Iverson; Stephen N. Matthews; Matthew P. Peters
2016-01-01
Context. No single model can capture the complex species range dynamics under changing climates--hence the need for a combination approach that addresses management concerns. Objective. A multistage approach is illustrated to manage forested landscapes under climate change. We combine a tree species habitat model--DISTRIB II, a species colonization model--SHIFT, and...
NASA Astrophysics Data System (ADS)
Cardoso, T.; Oliveira, M. D.; Barbosa-Póvoa, A.; Nickel, S.
2015-05-01
Although the maximization of health is a key objective in health care systems, location-allocation literature has not yet considered this dimension. This study proposes a multi-objective stochastic mathematical programming approach to support the planning of a multi-service network of long-term care (LTC), both in terms of services location and capacity planning. This approach is based on a mixed integer linear programming model with two objectives - the maximization of expected health gains and the minimization of expected costs - with satisficing levels in several dimensions of equity - namely, equity of access, equity of utilization, socioeconomic equity and geographical equity - being imposed as constraints. The augmented ε-constraint method is used to explore the trade-off between these conflicting objectives, with uncertainty in the demand and delivery of care being accounted for. The model is applied to analyze the (re)organization of the LTC network currently operating in the Great Lisbon region in Portugal for the 2014-2016 period. Results show that extending the network of LTC is a cost-effective investment.
Matsuzaki, Yoshio; Tachikawa, Yuya; Somekawa, Takaaki; Hatae, Toru; Matsumoto, Hiroshige; Taniguchi, Shunsuke; Sasaki, Kazunari
2015-01-01
Solid oxide fuel cells (SOFCs) are promising electrochemical devices that enable the highest fuel-to-electricity conversion efficiencies under high operating temperatures. The concept of multi-stage electrochemical oxidation using SOFCs has been proposed and studied over the past several decades for further improving the electrical efficiency. However, the improvement is limited by fuel dilution downstream of the fuel flow. Therefore, evolved technologies are required to achieve considerably higher electrical efficiencies. Here we present an innovative concept for a critically-high fuel-to-electricity conversion efficiency of up to 85% based on the lower heating value (LHV), in which a high-temperature multi-stage electrochemical oxidation is combined with a proton-conducting solid electrolyte. Switching a solid electrolyte material from a conventional oxide-ion conducting material to a proton-conducting material under the high-temperature multi-stage electrochemical oxidation mechanism has proven to be highly advantageous for the electrical efficiency. The DC efficiency of 85% (LHV) corresponds to a net AC efficiency of approximately 76% (LHV), where the net AC efficiency refers to the transmission-end AC efficiency. This evolved concept will yield a considerably higher efficiency with a much smaller generation capacity than the state-of-the-art several tens-of-MW-class most advanced combined cycle (MACC). PMID:26218470
Matsuzaki, Yoshio; Tachikawa, Yuya; Somekawa, Takaaki; Hatae, Toru; Matsumoto, Hiroshige; Taniguchi, Shunsuke; Sasaki, Kazunari
2015-07-28
Solid oxide fuel cells (SOFCs) are promising electrochemical devices that enable the highest fuel-to-electricity conversion efficiencies under high operating temperatures. The concept of multi-stage electrochemical oxidation using SOFCs has been proposed and studied over the past several decades for further improving the electrical efficiency. However, the improvement is limited by fuel dilution downstream of the fuel flow. Therefore, evolved technologies are required to achieve considerably higher electrical efficiencies. Here we present an innovative concept for a critically-high fuel-to-electricity conversion efficiency of up to 85% based on the lower heating value (LHV), in which a high-temperature multi-stage electrochemical oxidation is combined with a proton-conducting solid electrolyte. Switching a solid electrolyte material from a conventional oxide-ion conducting material to a proton-conducting material under the high-temperature multi-stage electrochemical oxidation mechanism has proven to be highly advantageous for the electrical efficiency. The DC efficiency of 85% (LHV) corresponds to a net AC efficiency of approximately 76% (LHV), where the net AC efficiency refers to the transmission-end AC efficiency. This evolved concept will yield a considerably higher efficiency with a much smaller generation capacity than the state-of-the-art several tens-of-MW-class most advanced combined cycle (MACC).
Performance Evaluation of Reduced-Chord Rotor Blading as Applied to J73 Two-Stage Turbine
NASA Technical Reports Server (NTRS)
Schurn, Harold J.
1957-01-01
The multistage turbine from the J73 turbojet engine has previously been investigated with standard and with reduced-chord rotor blading in order to determine the individual performance characteristics of each configuration over a range of over-all pressure ratio and speed. Because both turbine configurations exhibited peak efficiencies of over 90 percent, and because both units had relatively wide efficient operating ranges, it was considered of interest to determine the performance of the first stage of the turbine as a separate component. Accordingly, the standard-bladed multistage turbine was modified by removing the second-stage rotor disk and stator and altering the flow passage so that the first stage of the unit could be operated independently. The modified single-stage turbine was then operated over a range of stage pressure ratio and speed. The single-stage turbine operated at a peak brake internal efficiency of over 90 percent at an over-all stage pressure ratio of 1.4 and at 90 percent of design equivalent speed. Furthermore, the unit operated at high efficiencies over a relatively wide operating range. When the single-stage results were compared with the multistage results at the design operating point, it was found that the first stage produced approximately half the total multistage-turbine work output.
NASA Astrophysics Data System (ADS)
Fletcher, S.; Strzepek, K.
2017-12-01
Many urban water planners face increased pressure on water supply systems from increasing demands from population and economic growth in combination with uncertain water supply, driven by short-term climate variability and long-term climate change. These uncertainties are often exacerbated in groundwater-dependent water systems due to the extra difficulty in measuring groundwater storage, recharge, and sustainable yield. Groundwater models are typically under-parameterized due to the high data requirements for calibration and limited data availability, leading to uncertainty in the models' predictions. We develop an integrated approach to urban water supply planning that combines predictive groundwater uncertainty analysis with adaptive water supply planning using multi-stage decision analysis. This allows us to compare the value of collecting additional groundwater data and reducing predictive uncertainty with the value of using water infrastructure planning that is flexible, modular, and can react quickly in response to unexpected changes in groundwater availability. We apply this approach to a case from Riyadh, Saudi Arabia. Riyadh relies on fossil groundwater aquifers and desalination for urban use. The main fossil aquifers incur minimal recharge and face depletion as a result of intense withdrawals for urban and agricultural use. As the water table declines and pumping becomes uneconomical, Riyadh will have to build new supply infrastructure, decrease demand, or increase the efficiency of its distribution system. However, poor groundwater characterization has led to severe uncertainty in aquifer parameters such as hydraulic conductivity, and therefore severe uncertainty in how the water table will respond to pumping over time and when these transitions will be necessary: the potential depletion time varies from approximately five years to 100 years. 
This case is an excellent candidate for flexible planning both because of its severity and the potential for learning: additional information can be collected over time and flexible options exercised in response. Stochastic dynamic programming is used to find optimal policies for using flexibility under different information scenarios. The performance of each strategy is then assessed using a simulation model of Riyadh's water system.
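The flexible-planning logic described above can be sketched as a small stochastic dynamic program solved by backward induction: in each stage, compare the expected cost of waiting (staying exposed to aquifer decline) against the cost of building a modular supply expansion now. The states, transition probabilities, and cost figures below are hypothetical stand-ins, not the Riyadh data.

```python
# Toy stochastic dynamic program for a flexible build-vs-wait decision.
# All numbers are invented; a real study would use calibrated groundwater
# uncertainty in place of the transition matrix P.

STAGES = 3
STATES = ["high", "low", "depleted"]   # aquifer condition
# P[s][t]: probability the aquifer moves from state s to t in one stage of waiting
P = {
    "high":     {"high": 0.6, "low": 0.4, "depleted": 0.0},
    "low":      {"high": 0.0, "low": 0.5, "depleted": 0.5},
    "depleted": {"high": 0.0, "low": 0.0, "depleted": 1.0},
}
BUILD_COST = 10.0      # modular desalination expansion (ends exposure)
SHORTAGE_COST = 30.0   # penalty per stage spent depleted with no new supply

def solve():
    """Backward induction: value[s] = expected cost-to-go from state s."""
    value = {s: 0.0 for s in STATES}
    policy = []
    for _stage in range(STAGES):
        new_value, decision = {}, {}
        for s in STATES:
            wait = (SHORTAGE_COST if s == "depleted" else 0.0) + \
                   sum(P[s][t] * value[t] for t in STATES)
            build = BUILD_COST
            if build < wait:
                new_value[s], decision[s] = build, "build"
            else:
                new_value[s], decision[s] = wait, "wait"
        value = new_value
        policy.append(decision)
    policy.reverse()          # policy[0] is the first-stage rule
    return value, policy

value, policy = solve()
print(policy[0])  # optimal first-stage action for each aquifer state
```

The optimal policy exhibits exactly the "option value" logic of flexible planning: the expansion is built only once the aquifer state makes shortage likely, and waiting retains the option otherwise.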
NASA Astrophysics Data System (ADS)
Zhu, T.; Cai, X.
2013-12-01
Delayed onset of the Indian summer monsoon has become increasingly frequent. Delayed monsoons and occasional monsoon failures seriously affect agricultural production in the northeast as well as other parts of India. In the Vaishali district of Bihar State, monsoon rainfall is highly skewed and erratic, often concentrated in short durations. Farmers in Vaishali reported that a delayed monsoon affects paddy planting and consequently delays the cropping cycle, putting crops at risk of 'terminal heat.' The canal system in the district does not function due to lack of maintenance; irrigation relies almost entirely on groundwater. Many small farmers choose not to irrigate when monsoon onset is delayed because of high diesel prices, leading to reduced production or even crop failure. Some farmers adapt to delayed monsoon onset by planting short-duration rice, which gives them flexibility for planting the next season's crops. Other sporadic autonomous adaptation activities were observed as well, with varying levels of success. Adaptation recommendations and effective policy interventions are much needed. To explore robust options for adapting to the changing monsoon regime, we build a stochastic programming model to optimize revenues of farmer groups categorized by landholding size, subject to stochastic monsoon onset and rainfall amount. An imperfect probabilistic long-range forecast informs the model's onset and rainfall-amount probabilities; the 'skill' of the forecast is measured using probabilities of correctly predicting past events, derived through hindcasting. Crop production functions are determined using a self-calibrating positive mathematical programming approach. The stochastic programming model aims to emulate the decision-making behavior of representative farmer agents through choices in adaptation, including crop mix, planting dates, irrigation, and use of weather information.
A set of technological and policy intervention scenarios is tested, including irrigation subsidies, drought- and heat-tolerant crop varieties, and enhanced agricultural extension. A portfolio of prioritized adaptation options is recommended for the study area.
Users manual for updated computer code for axial-flow compressor conceptual design
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.
1992-01-01
An existing computer code that determines the flow path for an axial-flow compressor either for a given number of stages or for a given overall pressure ratio was modified for use in air-breathing engine conceptual design studies. This code uses a rapid approximate design methodology that is based on isentropic simple radial equilibrium. Calculations are performed at constant-span-fraction locations from tip to hub. Energy addition per stage is controlled by specifying the maximum allowable values for several aerodynamic design parameters. New modeling was introduced to the code to overcome perceived limitations. Specific changes included variable rather than constant tip radius, flow path inclination added to the continuity equation, input of mass flow rate directly rather than indirectly as inlet axial velocity, solution for the exact value of overall pressure ratio rather than for any value that met or exceeded it, and internal computation of efficiency rather than the use of input values. The modified code was shown to be capable of computing efficiencies that are compatible with those of five multistage compressors and one fan that were tested experimentally. This report serves as a users manual for the revised code, Compressor Spanline Analysis (CSPAN). The modeling modifications, including two internal loss correlations, are presented. Program input and output are described. A sample case for a multistage compressor is included.
A low-power, high-efficiency Ka-band TWTA
NASA Technical Reports Server (NTRS)
Curren, A. N.; Dayton, J. A., Jr.; Palmer, R. W.; Force, D. A.; Tamashiro, R. N.; Wilson, J. F.; Dombro, L.; Harvey, W. L.
1991-01-01
A NASA-sponsored program is described for developing a high-efficiency low-power TWTA operating at 32 GHz and meeting the requirements for the Cassini Mission to study Saturn. The required RF output power of the helix TWT is 10 watts, while the dc power from the spacecraft is limited to about 30 watts. The performance level permits the transmission to earth of all mission data. Several novel technologies are incorporated into the TWT to achieve this efficiency including an advanced dynamic velocity taper characterized by a nonlinear reduction in pitch in the output helix section and a multistage depressed collector employing copper electrodes treated for secondary electron-emission suppression. Preliminary program results are encouraging: RF output power of 10.6 watts is obtained at 14-mA beam current and 5.2-kV helix voltage with overall TWT efficiency exceeding 40 percent.
Departures From Optimality When Pursuing Multiple Approach or Avoidance Goals
2016-01-01
This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081
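The dynamic-programming benchmark described above, which computes the expected value of each prioritization choice in a multistage decision, can be illustrated with a minimal two-goal model. The targets, step-success probability, and horizon below are invented for illustration, not taken from the study's paradigm.

```python
# Minimal dynamic program for two-goal prioritization (invented numbers).
# value(t, a, b): expected number of goals completed with t decisions left,
# given current progress a and b toward targets of TARGET steps each.

from functools import lru_cache

TARGET = 3    # steps needed to complete each goal
P_STEP = 0.7  # chance a prioritized goal advances one step

@lru_cache(maxsize=None)
def value(t, a, b):
    if t == 0:
        return (a >= TARGET) + (b >= TARGET)
    # Expected value of prioritizing goal A vs. goal B this round;
    # the optimal decision maker takes the larger of the two.
    ev_a = P_STEP * value(t - 1, min(a + 1, TARGET), b) + \
           (1 - P_STEP) * value(t - 1, a, b)
    ev_b = P_STEP * value(t - 1, a, min(b + 1, TARGET)) + \
           (1 - P_STEP) * value(t - 1, a, b)
    return max(ev_a, ev_b)

# Expected completions with 5 decisions left, goal A ahead and goal B behind:
print(value(5, 2, 0))
```

Comparing `ev_a` and `ev_b` at each state is what defines the optimal prioritization against which the article measures risk-averse and risk-seeking departures.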
A low-power, high-efficiency Ka-band TWTA
NASA Astrophysics Data System (ADS)
Curren, A. N.; Dayton, J. A., Jr.; Palmer, R. W.; Force, D. A.; Tamashiro, R. N.; Wilson, J. F.; Dombro, L.; Harvey, W. L.
1991-11-01
A NASA-sponsored program is described for developing a high-efficiency low-power TWTA operating at 32 GHz and meeting the requirements for the Cassini Mission to study Saturn. The required RF output power of the helix TWT is 10 watts, while the dc power from the spacecraft is limited to about 30 watts. The performance level permits the transmission to earth of all mission data. Several novel technologies are incorporated into the TWT to achieve this efficiency including an advanced dynamic velocity taper characterized by a nonlinear reduction in pitch in the output helix section and a multistage depressed collector employing copper electrodes treated for secondary electron-emission suppression. Preliminary program results are encouraging: RF output power of 10.6 watts is obtained at 14-mA beam current and 5.2-kV helix voltage with overall TWT efficiency exceeding 40 percent.
Robust stochastic optimization for reservoir operation
NASA Astrophysics Data System (ADS)
Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin
2015-01-01
Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited by computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sampled inflow data accurately represents the true probability distribution, which may not hold, undermining the performance of the algorithms. In this study, we introduce a robust optimization (RO) approach, the Iterative Linear Decision Rule (ILDR), to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also gives users the flexibility to choose the accuracy of the ILDR approximation by assigning a desired number of piecewise-linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies, including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both single- and multireservoir systems efficiently. The single-reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows, and it outperforms the SSDP policy when tested on generated inflows with the same mean and covariance matrix as the historical record. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single-reservoir case in terms of optimal value and distributional robustness.
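A linear decision rule of the kind ILDR builds on makes each period's release an affine function of the observed uncertainty, here inflow. The sketch below only evaluates one such rule on a fixed inflow sequence; the affine coefficients, inflows, and storage bounds are invented, and the actual ILDR method optimizes such coefficients (piecewise) under uncertainty.

```python
# Toy linear decision rule for reservoir release:
#   release = b0 + b1 * inflow, clipped to what is physically feasible.
# All coefficients, bounds, and inflows are hypothetical.

def simulate(b0, b1, inflows, s0=50.0, smin=0.0, smax=100.0):
    """Apply the affine rule each period; return the storage trajectory."""
    s, traj = s0, []
    for q in inflows:
        release = min(max(b0 + b1 * q, 0.0), s + q)  # cannot release more than available
        s = min(max(s + q - release, smin), smax)    # clip at empty / full (spill)
        traj.append(s)
    return traj

inflows = [10.0, 30.0, 5.0, 20.0]
traj = simulate(b0=5.0, b1=0.5, inflows=inflows)
print([round(s, 1) for s in traj])
```

Because the rule is affine in the uncertainty, expectations of cost and constraint violation become tractable, which is what makes LDR-type approximations attractive for multiperiod problems.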
Fully probabilistic control design in an adaptive critic framework.
Herzallah, Randa; Kárný, Miroslav
2011-12-01
An optimal stochastic controller pushes the closed-loop behavior as close as possible to the desired one. The fully probabilistic design (FPD) uses a probabilistic description of the desired closed loop and minimizes the Kullback-Leibler divergence of the closed-loop description from the desired one. Practical exploitation of FPD control theory continues to be hindered by the computational complexity of numerically solving the associated stochastic dynamic programming problem, in particular the very hard multivariate integration and the approximate interpolation of the multivariate functions involved. This paper proposes a new fully probabilistic control algorithm that uses adaptive critic methods to circumvent the need to explicitly evaluate the optimal value function, thereby dramatically reducing computational requirements. This is the main contribution of this paper.
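The FPD criterion, scoring a controller by the KL divergence of its closed-loop distribution from a desired one, can be sketched on a discrete toy system. The desired distribution and the closed-loop distributions each candidate controller would induce are invented here purely for illustration.

```python
# Sketch of the fully probabilistic design criterion on a discrete toy system:
# rank candidate controllers by the KL divergence of the closed-loop state
# distribution from a desired one. All distributions are hypothetical.

import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

desired = [0.8, 0.15, 0.05]            # want the state mostly near the setpoint

controllers = {                        # closed-loop distributions each
    "aggressive": [0.7, 0.2, 0.1],     # controller would induce (invented)
    "sluggish":   [0.5, 0.3, 0.2],
}
best = min(controllers, key=lambda k: kl(controllers[k], desired))
print(best)
```

In the full FPD theory this minimization is carried out over control laws via stochastic dynamic programming, which is precisely the step the adaptive critic is used to approximate.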
Simulation of probabilistic wind loads and building analysis
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Chamis, Christos C.
1991-01-01
Probabilistic wind loads likely to occur on a structure during its design life are predicted. Described here is a suitable multifactor interactive equation (MFIE) model and its use in the Composite Load Spectra (CLS) computer program to simulate the wind pressure cumulative distribution functions on four sides of a building. The simulated probabilistic wind pressure load was applied to a building frame, and cumulative distribution functions of sway displacements and reliability against overturning were obtained using NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), a stochastic finite element computer code. The geometry of the building and the properties of building members were also treated as random in the NESSUS analysis. The uncertainties of wind pressure, building geometry, and member section properties were quantified in terms of their respective sensitivities on the structural response.
ERIC Educational Resources Information Center
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry
2015-01-01
The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…
de Vries, E
2012-01-01
Members of the European Society for Immunodeficiencies (ESID) and other colleagues have updated the multi-stage expert-opinion-based diagnostic protocol for non-immunologists incorporating newly defined primary immunodeficiency diseases (PIDs). The protocol presented here aims to increase the awareness of PIDs among doctors working in different fields. Prompt identification of PID is important for prognosis, but this may not be an easy task. The protocol therefore starts from the clinical presentation of the patient. Because PIDs may present at all ages, this protocol is aimed at both adult and paediatric physicians. The multi-stage design allows cost-effective screening for PID of the large number of potential cases in the early phases, with more expensive tests reserved for definitive classification in collaboration with a specialist in the field of immunodeficiency at a later stage. PMID:22132890
Reentry trajectory optimization based on a multistage pseudospectral method.
Zhao, Jiang; Zhou, Rui; Jin, Xuelian
2014-01-01
Of the many direct numerical methods, the pseudospectral method serves as an effective tool for solving the reentry trajectory optimization problem for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy comprises two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of the trajectory as the flight state evolves. The full glide trajectory consists of several optimal trajectory sequences. Geographic constraints newly encountered in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight demonstrate the feasible application of the multistage pseudospectral method to reentry trajectory optimization.
Kipriyanov, Alexey A; Doktorov, Alexander B
2014-10-14
The analysis of general (matrix) kinetic equations for the mean survival probabilities of any of the species in a sample (or mean concentrations) has been made for a wide class of multistage geminate reactions of isolated pairs. These kinetic equations (obtained in the framework of the kinetic approach based on the concept of "effective" particles in Paper I) take into account the various possible elementary reactions (stages of a multistage reaction), excluding monomolecular stages but including physical and chemical processes of change in internal quantum states carried out by isolated pairs of reactants (or isolated reactants). The general basic principles of total and detailed balance have been established. The behavior of the reacting system has been considered on macroscopic time scales, and the universal long-term kinetics has been determined.
Reentry Trajectory Optimization Based on a Multistage Pseudospectral Method
Zhou, Rui; Jin, Xuelian
2014-01-01
Of the many direct numerical methods, the pseudospectral method serves as an effective tool for solving the reentry trajectory optimization problem for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy comprises two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of the trajectory as the flight state evolves. The full glide trajectory consists of several optimal trajectory sequences. Geographic constraints newly encountered in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight demonstrate the feasible application of the multistage pseudospectral method to reentry trajectory optimization. PMID:24574929
Efficient Multi-Stage Time Marching for Viscous Flows via Local Preconditioning
NASA Technical Reports Server (NTRS)
Kleb, William L.; Wood, William A.; vanLeer, Bram
1999-01-01
A new method has been developed to accelerate the convergence of explicit time-marching, laminar, Navier-Stokes codes through the combination of local preconditioning and multi-stage time marching optimization. Local preconditioning is a technique to modify the time-dependent equations so that all information moves or decays at nearly the same rate, thus relieving the stiffness for a system of equations. Multi-stage time marching can be optimized by modifying its coefficients to account for the presence of viscous terms, allowing larger time steps. We show it is possible to optimize the time marching scheme for a wide range of cell Reynolds numbers for the scalar advection-diffusion equation, and local preconditioning allows this optimization to be applied to the Navier-Stokes equations. Convergence acceleration of the new method is demonstrated through numerical experiments with circular advection and laminar boundary-layer flow over a flat plate.
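The stability reasoning behind tuning multi-stage coefficients can be illustrated on the scalar model problem u' = λu, for which an explicit m-stage scheme has a polynomial amplification factor. The coefficients below are the classical 4-stage Runge-Kutta values, shown only to make the idea concrete; they are not the viscous-optimized coefficients developed in the paper.

```python
# Amplification factor of an explicit multi-stage scheme on u' = lambda * u:
#   g(z) = 1 + c1*z + c2*z^2 + ...,  z = lambda * dt.
# Optimizing the coefficients c_k reshapes the stability region |g(z)| <= 1;
# the values used here are classical RK4, for illustration only.

def amplification(z, coeffs):
    """Evaluate g(z) = 1 + c1*z + c2*z^2 + ... for the given stage coefficients."""
    g, zk = 1.0, 1.0
    for c in coeffs:
        zk *= z
        g += c * zk
    return g

rk4 = [1.0, 1 / 2, 1 / 6, 1 / 24]
# Pure decay (z real and negative) models the stiff viscous terms:
print(abs(amplification(-2.0, rk4)))  # inside the RK4 stability interval
print(abs(amplification(-3.0, rk4)))  # outside it
```

Coefficient optimization enlarges |g| <= 1 along the part of the spectrum the viscous terms occupy, which is what permits the larger time steps claimed above; local preconditioning clusters the system's eigenvalues so one such optimized polynomial serves the whole system.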
NASA Astrophysics Data System (ADS)
Bao, Fei-Hong; Bao, Lei-Lei; Li, Xin-Yi; Ammar Khan, Muhammad; Wu, Hua-Ye; Qin, Feng; Zhang, Ting; Zhang, Yi; Bao, Jing-Fu; Zhang, Xiao-Sheng
2018-06-01
Thin-film piezoelectric-on-silicon acoustic wave resonators are promising for the development of system-on-chip integrated circuits with micro/nano-engineered timing references. However, in order to realize their large potential, a further enhancement of the quality factor (Q) is required. In this study, a novel approach based on a multi-stage phononic crystal (PnC) structure was proposed to achieve an ultra-high Q. A systematic study revealed that the multi-stage PnC structure forms a frequency-selective band gap that effectively prohibits the dissipation of acoustic waves through the tethers, significantly reducing the anchor loss and leading to a reduction in insertion loss and an enhancement of Q. The maximum unloaded Q_u of the fabricated resonators reached ~10,000 at 109.85 MHz, an enhancement by a factor of 19.4.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doktorov, Alexander B., E-mail: doktorov@kinetics.nsc.ru
Manifestations of the “cage” effect at encounters of reactants have been treated theoretically for multistage reactions (including bimolecular exchange reactions as elementary stages) proceeding from different active sites in liquid solutions. It is shown that for reactions occurring near the contact of reactants, a consistent treatment of the quasi-stationary kinetics of such multistage reactions (possible only in the framework of encounter theory) can be made on the basis of the chemical concept of the “cage complex,” just as in the one-site model described in the literature. Exactly as in the one-site model, the presence of the “cage” effect gives rise to new channels of reactant transformation that cannot result from the elementary events of chemical conversion for the given reaction mechanism. In addition, the multisite model exhibits new features of the multistage reaction course compared with the one-site model.
Matsuda, Yoshiyuki; Xie, Min; Fujii, Asuka
2018-05-30
An ionization-induced multistage reaction of an ionized diethyl ether (DEE) dimer involving isomerization, proton transfer, and dissociation is investigated by combining infrared (IR) spectroscopy, tandem mass spectrometry, and a theoretical reaction path search. The vertically ionized DEE dimer isomerizes to a hydrogen-bonded cluster of protonated DEE and the [DEE-H] radical through barrierless intermolecular proton transfer from the CH bond of the ionized moiety. This isomerization process is confirmed by IR spectroscopy and the theoretical reaction path search. The multiple dissociation pathways following the isomerization are analyzed by tandem mass spectrometry. The isomerized cluster dissociates stepwise into a [protonated DEE-acetaldehyde (AA)] cluster, protonated DEE, and protonated AA. The structure of the fragment ion is also analyzed by IR spectroscopy. The reaction map of the multistage processes is revealed through a combination of these experimental and theoretical methods.
Doktorov, Alexander B; Kipriyanov, Alexey A
2014-05-14
A general matrix approach to multistage geminate reactions of isolated pairs of reactants, as a function of reactant mobility, is formulated on the basis of the concept of "effective" particles. Various elementary reactions (stages of the multistage reaction, including physicochemical processes of internal quantum state changes) proceeding with the participation of isolated pairs of reactants (or isolated reactants) are taken into account. The investigation is carried out within a kinetic approach, implying the derivation of general (matrix) kinetic equations for the local and mean probabilities of finding any of the reaction species in the sample under study (or for local and mean concentrations). Recipes for calculating the kinetic coefficients of the equations for mean quantities in terms of the relative coordinates of reactants are formulated for the general case of inhomogeneous reacting systems. The important specific case of homogeneous reacting systems is also considered.
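The matrix kinetic equations referred to above can be sketched minimally in Python: a pair of reactant states A ⇌ B evolving under dp/dt = K p, where K is the generator matrix. The rates k_ab and k_ba are assumed values for illustration only, not taken from the paper.

```python
# Minimal sketch of a matrix kinetic equation dp/dt = K p for a pair of
# reactant states A <-> B. Rates are illustrative, not from the paper.

def evolve(p, K, dt, steps):
    """Integrate dp/dt = K p with forward Euler."""
    for _ in range(steps):
        p = [p[i] + dt * sum(K[i][j] * p[j] for j in range(len(p)))
             for i in range(len(p))]
    return p

k_ab, k_ba = 2.0, 1.0          # A -> B and B -> A rates (assumed)
K = [[-k_ab,  k_ba],           # generator matrix: each column sums to zero,
     [ k_ab, -k_ba]]           # so total probability is conserved

p = evolve([1.0, 0.0], K, dt=1e-3, steps=20000)   # start in pure state A
# the steady state approaches [k_ba, k_ab] / (k_ab + k_ba)
```

Because the columns of K sum to zero, the Euler update conserves total probability exactly; the long-time solution relaxes to the stationary distribution of the two-state exchange.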
Client-server programs analysis in the EPOCA environment
NASA Astrophysics Data System (ADS)
Donatelli, Susanna; Mazzocca, Nicola; Russo, Stefano
1996-09-01
Client-server processing is a popular paradigm for distributed computing. In the development of client-server programs, the designer first has to ensure that the implementation behaves correctly, in particular that it is deadlock-free. Second, he has to guarantee that the program meets predefined performance requirements. This paper addresses the issues in the analysis of client-server programs in EPOCA. EPOCA is a computer-aided software engineering (CASE) support system that allows the automated construction and analysis of generalized stochastic Petri net (GSPN) models of concurrent applications. The paper describes, on the basis of a realistic case study, how client-server systems are modelled in EPOCA, and the kind of qualitative and quantitative analysis supported by its tools.
Site correction of stochastic simulation in southwestern Taiwan
NASA Astrophysics Data System (ADS)
Lun Huang, Cong; Wen, Kuo Liang; Huang, Jyun Yan
2014-05-01
Peak ground acceleration (PGA) of a disastrous earthquake is of concern in both civil engineering and seismology. At present, ground motion prediction equations are widely used by engineers for PGA estimation. However, the local site effect is another important factor in strong-motion prediction. For example, in 1985 Mexico City, 400 km from the epicenter, suffered massive damage due to seismic wave amplification in the local alluvial layers (Anderson et al., 1986). Past studies have shown that the stochastic method performs well in simulating ground motion at rock sites (Beresnev and Atkinson, 1998a; Roumelioti and Beresnev, 2003). In this study, site correction was conducted with empirical transfer functions applied to the rock-site response from stochastic point-source (Boore, 2005) and finite-fault (Boore, 2009) methods. The errors between the simulated and observed Fourier spectra and PGA were calculated. We further compared the estimated PGA with the results calculated from a ground motion prediction equation. The earthquake data used in this study were recorded by the Taiwan Strong Motion Instrumentation Program (TSMIP) from 1991 to 2012; the study area is located in southwestern Taiwan. The empirical transfer function was generated by calculating the spectral ratio between the alluvial site and a rock site (Borcherdt, 1970). Due to the lack of a reference rock-site station in this area, the rock-site ground motion was instead generated with a stochastic point-source model. Several target events were then chosen for stochastic point-source simulation of the halfspace response, and the empirical transfer function for each station was multiplied by the simulated halfspace response. Finally, we focused on two target events: the 1999 Chi-Chi earthquake (Mw=7.6) and the 2010 Jiashian earthquake (Mw=6.4).
Since a large event may involve a complex rupture mechanism, the asperity and the delay time of each sub-fault must be considered. Both the stochastic point-source and the finite-fault model were used to check the results of our correction.
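The site-correction step described in this abstract (take the spectral ratio between the alluvial site and a rock site, then multiply it onto the simulated halfspace response) can be sketched as follows. All amplitude values are invented for illustration, and the FFT step that would produce the spectra is omitted.

```python
# Sketch of an empirical transfer-function site correction.
# Spectra are illustrative amplitude arrays, not real data.

def transfer_function(soil_spec, rock_spec):
    """Empirical transfer function = spectral ratio soil/rock, per frequency bin."""
    return [s / r for s, r in zip(soil_spec, rock_spec)]

def apply_site_correction(sim_rock_spec, tf):
    """Multiply a simulated rock (halfspace) spectrum by the transfer function."""
    return [a * t for a, t in zip(sim_rock_spec, tf)]

soil = [2.0, 4.0, 3.0]   # observed alluvial-site amplitudes (illustrative)
rock = [1.0, 2.0, 3.0]   # reference rock-site amplitudes (illustrative)

tf = transfer_function(soil, rock)
corrected = apply_site_correction([0.5, 1.0, 2.0], tf)
```

In practice the "rock" spectrum here would itself come from the stochastic point-source simulation, as the abstract explains, since no reference rock-site station was available.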
A fire management simulation model using stochastic arrival times
Eric L. Smith
1987-01-01
Fire management simulation models are used to predict the impact of changes in the fire management program on fire outcomes. As with all models, the goal is to abstract reality without seriously distorting relationships between variables of interest. One important variable of fire organization performance is the length of time it takes to get suppression units to the...
Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.
Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O
2006-03-01
The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules of the chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although numerous tools are available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
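The two frameworks contrasted in this abstract can be sketched for a toy birth-death species (production at rate k1, first-order decay at rate k2 per molecule). This is a minimal illustration in Python rather than the MATLAB collection the paper describes, and the rates are assumed values.

```python
import random

# Toy system: 0 -> A at rate k1; A -> 0 at rate k2 per molecule.
k1, k2 = 10.0, 1.0

def ode_mean(x0, t_end, dt=1e-3):
    """Deterministic reaction-rate equation dx/dt = k1 - k2*x (forward Euler)."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (k1 - k2 * x)
        t += dt
    return x

def gillespie(n0, t_end, rng):
    """One realisation of the CME via Gillespie's stochastic simulation algorithm."""
    n, t = n0, 0.0
    while True:
        a1, a2 = k1, k2 * n          # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)     # waiting time to the next reaction
        if t > t_end:
            return n
        n += 1 if rng.random() < a1 / a0 else -1

rng = random.Random(0)
det = ode_mean(0.0, 20.0)            # deterministic steady state, near k1/k2
avg = sum(gillespie(0, 20.0, rng) for _ in range(2000)) / 2000
```

For this linear system the ensemble average of the stochastic realisations agrees with the deterministic concentration; for nonlinear kinetics the two frameworks can diverge, which is what motivates stochastic simulation.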
Stochastic hyperfine interactions modeling library-Version 2
NASA Astrophysics Data System (ADS)
Zacate, Matthew O.; Evenson, William E.
2016-02-01
The stochastic hyperfine interactions modeling library (SHIML) provides a set of routines to assist in the development and application of stochastic models of hyperfine interactions. The library provides routines written in the C programming language that (1) read a text description of a model for fluctuating hyperfine fields, (2) set up the Blume matrix, upon which the evolution operator of the system depends, and (3) find the eigenvalues and eigenvectors of the Blume matrix so that theoretical spectra of experimental techniques that measure hyperfine interactions can be calculated. The optimized vector and matrix operations of the BLAS and LAPACK libraries are utilized. The original version of SHIML constructed and solved Blume matrices for methods that measure hyperfine interactions of nuclear probes in a single spin state. Version 2 provides additional support for methods that measure interactions on two different spin states such as Mössbauer spectroscopy and nuclear resonant scattering of synchrotron radiation. Example codes are provided to illustrate the use of SHIML to (1) generate perturbed angular correlation spectra for the special case of polycrystalline samples when anisotropy terms of higher order than A22 can be neglected and (2) generate Mössbauer spectra for polycrystalline samples for pure dipole or pure quadrupole transitions.
Probabilistic Prediction of Lifetimes of Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Gyekenyesi, John P.; Jadaan, Osama M.; Palfi, Tamas; Powers, Lynn; Reh, Stefan; Baker, Eric H.
2006-01-01
ANSYS/CARES/PDS is a software system that combines the ANSYS Probabilistic Design System (PDS) software with a modified version of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) Version 6.0 software. [A prior version of CARES/Life was reported in Program for Evaluation of Reliability of Ceramic Parts (LEW-16018), NASA Tech Briefs, Vol. 20, No. 3 (March 1996), page 28.] CARES/Life models effects of stochastic strength, slow crack growth, and stress distribution on the overall reliability of a ceramic component. The essence of the enhancement in CARES/Life 6.0 is the capability to predict the probability of failure using results from transient finite-element analysis. ANSYS PDS models the effects of uncertainty in material properties, dimensions, and loading on the stress distribution and deformation. ANSYS/CARES/PDS accounts for the effects of probabilistic strength, probabilistic loads, probabilistic material properties, and probabilistic tolerances on the lifetime and reliability of the component. Even failure probability becomes a stochastic quantity that can be tracked as a response variable. ANSYS/CARES/PDS enables tracking of all stochastic quantities in the design space, thereby enabling more precise probabilistic prediction of lifetimes of ceramic components.
Bounds on stochastic chemical kinetic systems at steady state
NASA Astrophysics Data System (ADS)
Dowdy, Garrett R.; Barton, Paul I.
2018-02-01
The method of moments has been proposed as a potential means to reduce the dimensionality of the chemical master equation (CME) appearing in stochastic chemical kinetics. However, attempts to apply the method of moments to the CME usually result in the so-called closure problem. Several authors have proposed moment closure schemes, which allow them to obtain approximations of quantities of interest, such as the mean molecular count for each species. However, these approximations have the dissatisfying feature that they come with no error bounds. This paper presents a fundamentally different approach to the closure problem in stochastic chemical kinetics. Instead of making an approximation to compute a single number for the quantity of interest, we calculate mathematically rigorous bounds on this quantity by solving semidefinite programs. These bounds provide a check on the validity of the moment closure approximations and are in some cases so tight that they effectively provide the desired quantity. In this paper, the bounded quantities of interest are the mean molecular count for each species, the variance in this count, and the probability that the count lies in an arbitrary interval. At present, we consider only steady-state probability distributions, intending to discuss the dynamic problem in a future publication.
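The closure problem discussed in this abstract can be made concrete with a case where no closure is needed: for a linear birth-death process the moment equations close exactly, so the steady-state mean and variance predicted by the method of moments can be checked against the truncated CME distribution. The rates below are assumed for illustration; the paper's semidefinite-programming bounds target nonlinear systems where this exactness fails.

```python
# Linear birth-death process: 0 -> A at rate k1; A -> 0 at rate k2 per molecule.
# Moment equations predict steady-state mean = variance = k1/k2 (Poisson).

k1, k2 = 6.0, 2.0
N = 60                                   # truncation of the state space

# Steady state of the CME via detailed balance: p[n+1]*k2*(n+1) = p[n]*k1
p = [1.0]
for n in range(N):
    p.append(p[-1] * k1 / (k2 * (n + 1)))
Z = sum(p)
p = [x / Z for x in p]                   # normalise

mean = sum(n * pn for n, pn in enumerate(p))
var = sum(n * n * pn for n, pn in enumerate(p)) - mean ** 2
# both should equal k1/k2 = 3 up to truncation error
```

For bimolecular reactions the equation for the mean involves the second moment, the second involves the third, and so on; the paper's contribution is to bound these unclosed quantities rigorously rather than approximate them.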
NASA Astrophysics Data System (ADS)
Ohdachi, Satoshi; Watanabe, Kiyomasa; Sakakibara, Satoru; Suzuki, Yasuhiro; Tsuchiya, Hayato; Ming, Tingfeng; Du, Xiaodi; LHD Experiment Group Team
2014-10-01
In the Large Helical Device (LHD), the plasma is surrounded by a so-called magnetic stochastic region, where the Kolmogorov length of the magnetic field lines is very short, from several tens to thousands of meters. A finite pressure gradient forms in this region, and MHD instabilities localized there are observed, since the edge region of the LHD is always unstable against the pressure-driven mode. The saturation level of the instabilities is therefore the key issue in evaluating the risk posed by this kind of MHD instability. The saturation level depends on the pressure gradient and on the magnetic Reynolds number; these results are similar to those for MHD modes in the closed-magnetic-surface region. The saturation level in the stochastic region is also affected by the stochasticity itself. The parameter dependence of the saturation level of the MHD activity in this region is discussed in detail. This work is supported by NIFS budget codes ULPP021 and 028, and is also partially supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research 26249144, and by the JSPS-NRF-NSFC A3 Foresight Program (NSFC No. 11261140328).
A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Lindsay; Zéphyr, Luckny; Cardell, Judith B.
The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell (buy) excess (necessary) energy from the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.
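The kind of decision-under-uncertainty this abstract describes can be sketched with a toy two-stage stochastic program: commit a first-stage energy purchase before renewable output is known, then pay scenario-dependent recourse costs for shortfall or spillage. The scenarios, costs, and grid search below are invented for illustration; the paper itself relies on large-scale decomposition and stochastic dual dynamic programming, not enumeration.

```python
# Toy two-stage stochastic program solved by enumerating first-stage decisions.
# (probability, net demand after renewables) pairs -- illustrative only.
scenarios = [(0.3, 2.0), (0.5, 5.0), (0.2, 8.0)]
buy_cost, shortfall_cost, spill_cost = 1.0, 4.0, 0.1

def expected_cost(x):
    """First-stage purchase cost plus expected second-stage (recourse) cost."""
    cost = buy_cost * x
    for prob, demand in scenarios:
        shortage = max(demand - x, 0.0)   # must be covered at a premium
        surplus = max(x - demand, 0.0)    # spilled or stored at small cost
        cost += prob * (shortfall_cost * shortage + spill_cost * surplus)
    return cost

# enumerate candidate first-stage purchases on a coarse grid
best_x = min((x * 0.1 for x in range(0, 101)), key=expected_cost)
```

Enumeration only works for one scalar decision and a handful of scenarios; with many stages and continuous uncertainty the scenario tree explodes, which is exactly why the decomposition methods mentioned in the abstract are needed.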
A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, C. Lindsay; Zéphyr, Luckny; Liu, Jialin
The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell (buy) excess (necessary) energy from the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.
Development of Optical Crystals for High Power and Tunable Visible and Infrared Light Generation
2015-02-11
ultra high chemical purity (5N), 95% isotopically enriched 6Li was purified in a multi-stage vacuum distillation process previously reported by Stowe et al.[4]. 6LiIn alloy was synthesized in a... quantum mechanics, it has been determined that atoms, molecules, and ions have discrete energy levels. Therefore there exist allowed atomic transitions
Fuel system for diesel engine with multi-stage heating
NASA Astrophysics Data System (ADS)
Ryzhov, Yu N.; Kuznetsov, Yu A.; Kolomeichenko, A. V.; Kuznetsov, I. S.; Solovyev, R. Yu; Sharifullin, S. N.
2017-09-01
The article describes a multistage-heated fuel system for the diesel engine of a construction tractor, allowing pure rapeseed oil to be used as diesel fuel. The paper determines the kinematic viscosity as a function of temperature and of the composition of the mixed fuel, supplements the existing recommendations on the use of mixed fuels based on vegetable oils, and presents a device that allows biofuels based on vegetable oils to be used as fuel for diesel engines.
Dynamically orthogonal field equations for stochastic flows and particle dynamics
2011-02-01
where uncertainty ‘lives’ as well as a system of Stochastic Differential Equations that defines how the uncertainty evolves in the time varying stochastic ... stochastic dynamical component that are both time and space dependent, we derive a system of field equations consisting of a Partial Differential Equation... a system of Stochastic Differential Equations that defines how the stochasticity evolves in the time varying stochastic subspace. These new
Handling Imbalanced Data Sets in Multistage Classification
NASA Astrophysics Data System (ADS)
López, M.
Multistage classification is a logical approach, based on a divide-and-conquer solution, for dealing with problems with a high number of classes. The classification problem is divided into several sequential steps, each one associated to a single classifier that works with subgroups of the original classes. In each level, the current set of classes is split into smaller subgroups of classes until they (the subgroups) are composed of only one class. The resulting chain of classifiers can be represented as a tree, which (1) simplifies the classification process by using fewer categories in each classifier and (2) makes it possible to combine several algorithms or use different attributes in each stage. Most of the classification algorithms can be biased in the sense of selecting the most populated class in overlapping areas of the input space. This can degrade a multistage classifier performance if the training set sample frequencies do not reflect the real prevalence in the population. Several techniques such as applying prior probabilities, assigning weights to the classes, or replicating instances have been developed to overcome this handicap. Most of them are designed for two-class (accept-reject) problems. In this article, we evaluate several of these techniques as applied to multistage classification and analyze how they can be useful for astronomy. We compare the results obtained by classifying a data set based on Hipparcos with and without these methods.
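One of the rebalancing techniques mentioned above, assigning weights to the classes, can be sketched per stage of a tree-structured classifier with simple inverse-frequency weights. The label sets and class names below are made up for illustration; this is not the article's actual pipeline or data.

```python
from collections import Counter

def balanced_weights(labels):
    """Inverse-frequency class weights: weight(c) = n_samples / (n_classes * count(c))."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

# Stage 1 of a hypothetical tree: separate "star" from everything else.
# The 9:1 imbalance would bias an unweighted classifier toward "star".
stage1 = ["star"] * 900 + ["other"] * 100
w1 = balanced_weights(stage1)

# Stage 2 splits the minority group further, with its own (milder) imbalance.
stage2 = ["galaxy"] * 60 + ["quasar"] * 40
w2 = balanced_weights(stage2)
```

Because each stage sees only its own subgroup of classes, the weights are recomputed per stage; a class that is rare globally can be well balanced within its subtree, which is one reason the multistage layout helps with imbalance.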
MULTI-STAGE DELIVERY NANO-PARTICLE SYSTEMS FOR THERAPEUTIC APPLICATIONS
Serda, Rita E.; Godin, Biana; Blanco, Elvin; Chiappini, Ciro; Ferrari, Mauro
2010-01-01
Background: The daunting task for drug molecules to reach pathological lesions has fueled rapid advances in Nanomedicine. The progressive evolution of nanovectors has led to the development of multi-stage delivery systems aimed at overcoming the numerous obstacles encountered by nanovectors on their journey to the target site. Scope of Review: This review summarizes major findings with respect to silicon-based drug delivery vectors for cancer therapeutics and imaging. Based on rational design, well established silicon technologies have been adapted for the fabrication of nanovectors with specific shapes, sizes, and porosities. These vectors are part of a multi-stage delivery system that contains multiple nano-components, each designed to achieve a specific task with the common goal of site-directed delivery of therapeutics. Major Conclusions: Quasi-hemispherical and discoidal silicon microparticles are superior to spherical particles with respect to margination in the blood, with particles of different shapes and sizes having unique distributions in vivo. Cellular adhesion and internalization of silicon microparticles is influenced by microparticle shape and surface charge, with the latter dictating binding of serum opsonins. Based on in vitro cell studies, the internalization of porous silicon microparticles by endothelial cells and macrophages is compatible with cellular morphology, intracellular trafficking, mitosis, cell cycle progression, cytokine release, and cell viability. In vivo studies support superior therapeutic efficacy of liposomal encapsulated siRNA when delivered in multi-stage systems compared to free nanoparticles. PMID:20493927
Mohanty, C R; Adapala, Sivaji; Meikap, B C
2009-06-15
Sulfur dioxide and other sulfur compounds are generated as primary pollutants by major industries such as sulfuric acid plants, copper smelters, and catalytic cracking units, and cause acid rain. To remove SO(2) from waste flue gas, a three-stage counter-current multi-stage fluidized bed adsorber was developed as desulfurization equipment and operated in the continuous bubbling fluidization regime for the two-phase system. This paper presents the desulfurization of gas mixtures by chemical sorption of sulfur dioxide on porous granular calcium oxide particles in the reactor at ambient temperature. The advantages of the multi-stage fluidized bed reactor are high mass transfer and long gas-solid residence time, which can enhance the removal of acid gas at low temperature by a dry method. Experiments were carried out in the bubbling fluidization regime, supported by visual observation. The effects of operating parameters such as sorbent (lime) flow rate, superficial gas velocity, and weir height on SO(2) removal efficiency in the multistage fluidized bed are reported. The results indicated that the removal efficiency for sulfur dioxide was 65% at a high solid flow rate (2.0 kg/h) with a lower gas velocity (0.265 m/s), a weir height of 70 mm, and an SO(2) concentration of 500 ppm at room temperature.
NASA Technical Reports Server (NTRS)
Shirron, Peter J.
2014-01-01
Adiabatic demagnetization refrigerators (ADR), based on the magnetocaloric effect, are solid-state coolers that were the first to achieve cooling well into the sub-kelvin regime. Although supplanted by more powerful dilution refrigerators in the 1960s, ADRs have experienced a revival due to the needs of the space community for cooling astronomical instruments and detectors to temperatures below 100 mK. The earliest of these were single-stage refrigerators using superfluid helium as a heat sink. Their modest cooling power (<1 µW at 60 mK[1]) was sufficient for the small (6x6) detector arrays[2], but recent advances in arraying and multiplexing technologies[3] are generating a need for higher cooling power (5-10 µW), and lower temperature (<30 mK). Single-stage ADRs have both practical and fundamental limits to their operating range, as mass grows very rapidly as the operating range is expanded. This has led to the development of new architectures that introduce multi-staging as a way to improve operating range, efficiency and cooling power. Multi-staging also enables ADRs to be configured for continuous operation, which greatly improves cooling power per unit mass. This paper reviews the current field of adiabatic demagnetization refrigeration, beginning with a description of the magnetocaloric effect and its application in single-stage systems, and then describing the challenges and capabilities of multi-stage and continuous ADRs.