Portfolio optimization with mean-variance model
NASA Astrophysics Data System (ADS)
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve a target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that minimizes portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization: it is an optimization model that minimizes the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio compositions of the stocks differ. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
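The minimum-variance portfolio for a given target return can be sketched in closed form by solving the KKT system of the equality-constrained quadratic program. The sketch below uses made-up weekly return figures for three hypothetical stocks, not the FBMKLCI data from the study:

```python
import numpy as np

# Hypothetical weekly mean returns and covariance for 3 stocks
# (illustrative numbers, not the study's data).
mu = np.array([0.002, 0.004, 0.003])
Sigma = np.array([[0.0010, 0.0002, 0.0001],
                  [0.0002, 0.0015, 0.0003],
                  [0.0001, 0.0003, 0.0012]])
target = 0.003  # target weekly rate of return

n = len(mu)
ones = np.ones(n)
# KKT system for: minimize w' Sigma w  s.t.  mu'w = target, 1'w = 1
A = np.block([[2 * Sigma, mu[:, None], ones[:, None]],
              [mu[None, :], np.zeros((1, 2))],
              [ones[None, :], np.zeros((1, 2))]])
b = np.concatenate([np.zeros(n), [target, 1.0]])
w = np.linalg.solve(A, b)[:n]

print("weights:", np.round(w, 4))
print("portfolio return:", float(mu @ w))
print("portfolio variance:", float(w @ Sigma @ w))
```

Any other weights meeting both constraints yield a variance at least as large, which is what "target return at minimum risk" means here.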
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
Ankirchner, Stefan; Dermoune, Azzouz
2011-08-15
The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. To find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones tend to infinity, we are able to solve the original mean-variance problem.
Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model
NASA Astrophysics Data System (ADS)
Deng, Guang-Feng; Lin, Woo-Tsong
This work presents Ant Colony Optimization (ACO), initially developed as a meta-heuristic for combinatorial optimization, for solving the cardinality-constrained Markowitz mean-variance portfolio model (a nonlinear mixed quadratic programming problem). To our knowledge, an efficient algorithmic solution for this problem has not been proposed until now, so using heuristic algorithms is imperative. Numerical solutions are obtained for five analyses of weekly price data for the period March 1992 to September 1997 for the following indices: Hang Seng 31 in Hong Kong, DAX 100 in Germany, FTSE 100 in the UK, S&P 100 in the USA and Nikkei 225 in Japan. The test results indicate that ACO is much more robust and effective than Particle Swarm Optimization (PSO), especially for low-risk investment portfolios.
Bacanin, Nebojsa; Tuba, Milan
2014-01-01
The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome its lack of exploration power during early iterations, we modified the algorithm and tested it on the standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved results. PMID:24991645
Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment.
Shakouri, Mahmoud; Lee, Hyun Woo
2016-03-01
The amount of electricity generated by photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be constructed that takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to the physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The Matlab code developed to construct optimized portfolios is also provided. The application of these files can be generalized to a variety of communities interested in investing in PV systems. PMID:26937458
Conversations across Meaning Variance
ERIC Educational Resources Information Center
Cordero, Alberto
2013-01-01
Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…
NASA Astrophysics Data System (ADS)
Davendralingam, Navindran
Conceptual design of aircraft and the airline network (routes) on which aircraft fly are inextricably linked to passenger-driven demand. Many factors influence passenger demand for various Origin-Destination (O-D) city pairs, including demographics, geographic location, seasonality, socio-economic factors and, naturally, the operations of directly competing airlines. The expansion of airline operations involves the identification of appropriate aircraft to meet projected future demand. The decisions made in incorporating and subsequently allocating these new aircraft to serve air travel demand affect the inherent risk and profit potential as predicted through the airline revenue management systems. Competition between airlines then translates to latent passenger observations of the routes served between O-D pairs and ticket pricing; this in effect reflexively drives future states of demand. This thesis addresses the integrated nature of aircraft design, airline operations and passenger demand, in order to maximize future expected profits as new aircraft are brought into service. The goal of this research is to develop an approach that utilizes aircraft design, airline network design and passenger demand as a unified framework to provide better integrated design solutions in order to maximize expected profits of an airline. This is investigated through two approaches. The first is a static model that poses the concurrent engineering paradigm above as an investment portfolio problem. Modern financial portfolio optimization techniques are used to leverage the risk of serving future projected demand using a yet-to-be-introduced aircraft against potentially generated future profits. Robust optimization methodologies are incorporated to mitigate model sensitivity and address estimation risks associated with such optimization techniques. The second extends the portfolio approach to include dynamic effects of an airline's operations. A dynamic programming approach is
On the Endogeneity of the Mean-Variance Efficient Frontier.
ERIC Educational Resources Information Center
Somerville, R. A.; O'Connell, Paul G. J.
2002-01-01
Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…
Mean-Variance Hedging on Uncertain Time Horizon in a Market with a Jump
Kharroubi, Idris; Lim, Thomas; Ngoupeyou, Armand
2013-12-15
In this work, we study the problem of mean-variance hedging with a random horizon T∧τ, where T is a deterministic constant and τ is a jump time of the underlying asset price process. We first formulate this problem as a stochastic control problem and relate it to a system of BSDEs with a jump. We then provide a verification theorem which gives the optimal strategy for the mean-variance hedging using the solution of the previous system of BSDEs. Finally, we prove that this system of BSDEs admits a solution via a decomposition approach coming from filtration enlargement theory.
Beurskens, Luuk (ECN-Energy Research Centre of the Netherlands); Jansen, Jaap C. (ECN-Energy Research Centre of the Netherlands); Awerbuch, Shimon Ph.D. (University of Sussex, Brighton, UK); Drennen, Thomas E.
2005-09-01
Energy planning represents an investment-decision problem. Investors commonly evaluate such problems using portfolio theory to manage risk and maximize portfolio performance under a variety of unpredictable economic outcomes. Energy planners need to similarly abandon their reliance on traditional, ''least-cost'' stand-alone technology cost estimates and instead evaluate conventional and renewable energy sources on the basis of their portfolio cost--their cost contribution relative to their risk contribution to a mix of generating assets. This report describes essential portfolio-theory ideas and discusses their application in the Western US region. The memo illustrates how electricity-generating mixes can benefit from additional shares of geothermal and other renewables. Compared to fossil-dominated mixes, efficient portfolios reduce generating cost while including greater renewables shares in the mix. This enhances energy security. Though counter-intuitive, the idea that adding more costly geothermal can actually reduce portfolio-generating cost is consistent with basic finance theory. An important implication is that in dynamic and uncertain environments, the relative value of generating technologies must be determined not by evaluating alternative resources, but by evaluating alternative resource portfolios. The optimal results for the Western US Region indicate that compared to the EIA target mixes, there exist generating mixes with larger geothermal shares at equal-or-lower expected cost and risk.
Mean-Variance Portfolio Selection for Defined-Contribution Pension Funds with Stochastic Salary
Zhang, Chubing
2014-01-01
This paper focuses on a continuous-time dynamic mean-variance portfolio selection problem of defined-contribution pension funds with stochastic salary, whose risk comes from both financial market and nonfinancial market. By constructing a special Riccati equation as a continuous (actually a viscosity) solution to the HJB equation, we obtain an explicit closed form solution for the optimal investment portfolio as well as the efficient frontier. PMID:24782667
Numerical solution of continuous-time mean-variance portfolio selection with nonlinear constraints
NASA Astrophysics Data System (ADS)
Yan, Wei; Li, Shurong
2010-03-01
An investment problem is considered with a dynamic mean-variance (M-V) portfolio criterion under discontinuous prices described by jump-diffusion processes. Some investment strategies are restricted in the study. This M-V portfolio with restrictions leads to a stochastic optimal control model. The corresponding stochastic Hamilton-Jacobi-Bellman equation of the problem with linear and nonlinear constraints is derived. Numerical algorithms for finding the optimal solution are presented in this article. Finally, a computational experiment illustrates the proposed methods by comparison with the M-V portfolio problem without constraints.
Continuous-Time Mean-Variance Portfolio Selection with Random Horizon
Yu, Zhiyong
2013-12-15
This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.
9 CFR 313.1 - Livestock pens, driveways and ramps.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Livestock pens, driveways and ramps... INSPECTION AND CERTIFICATION HUMANE SLAUGHTER OF LIVESTOCK § 313.1 Livestock pens, driveways and ramps. (a) Livestock pens, driveways and ramps shall be maintained in good repair. They shall be free from sharp...
Risk-sensitivity and the mean-variance trade-off: decision making in sensorimotor control
Nagengast, Arne J.; Braun, Daniel A.; Wolpert, Daniel M.
2011-01-01
Numerous psychophysical studies suggest that the sensorimotor system chooses actions that optimize the average cost associated with a movement. Recently, however, violations of this hypothesis have been reported in line with economic theories of decision-making that not only consider the mean payoff, but are also sensitive to risk, that is the variability of the payoff. Here, we examine the hypothesis that risk-sensitivity in sensorimotor control arises as a mean-variance trade-off in movement costs. We designed a motor task in which participants could choose between a sure motor action that resulted in a fixed amount of effort and a risky motor action that resulted in a variable amount of effort that could be either lower or higher than the fixed effort. By changing the mean effort of the risky action while experimentally fixing its variance, we determined indifference points at which participants chose equiprobably between the sure, fixed amount of effort option and the risky, variable effort option. Depending on whether participants accepted a variable effort with a mean that was higher, lower or equal to the fixed effort, they could be classified as risk-seeking, risk-averse or risk-neutral. Most subjects were risk-sensitive in our task consistent with a mean-variance trade-off in effort, thereby, underlining the importance of risk-sensitivity in computational models of sensorimotor control. PMID:21208966
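The indifference points described above fall out of a simple mean-variance cost model of effort, in which a risky effort is valued as its mean plus a risk weight times its variance. The sketch below uses illustrative numbers; the coefficient `lam` and the effort values are assumptions, not the study's fitted parameters:

```python
import numpy as np

def mv_cost(mean_effort, var_effort, lam):
    """Mean-variance cost of an effort lottery: mean + lam * variance.
    lam > 0 is risk-averse (variance adds cost), lam < 0 risk-seeking."""
    return mean_effort + lam * var_effort

fixed_effort = 10.0   # sure option (illustrative units)
var_risky = 4.0       # variance of the risky option, held fixed

def indifference_mean(lam, fixed=fixed_effort, var=var_risky):
    """Risky mean at which mv_cost(mean, var, lam) equals the sure cost."""
    return fixed - lam * var

for lam, label in [(0.5, "risk-averse"), (0.0, "risk-neutral"),
                   (-0.5, "risk-seeking")]:
    print(f"{label}: indifferent at risky mean effort {indifference_mean(lam)}")
```

A risk-averse participant is indifferent at a risky mean below the fixed effort; a risk-seeking one accepts a risky mean above it, matching the classification in the abstract.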
43 CFR 3815.7 - Mining claims subject to stock driveway withdrawals.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 43 Public Lands: Interior 2 2011-10-01 2011-10-01 false Mining claims subject to stock driveway... SUBJECT TO LOCATION Mineral Locations in Stock Driveway Withdrawals § 3815.7 Mining claims subject to stock driveway withdrawals. Mining claims on lands within stock driveway withdrawals, located prior...
Photocopy of original black-and-white silver gelatin print, TWELFTH STREET DRIVEWAY ...
Photocopy of original black-and-white silver gelatin print, TWELFTH STREET DRIVEWAY ENTRANCE, August 31, 1929, photographer Commercial Photo Company - Internal Revenue Service Headquarters Building, 1111 Constitution Avenue Northwest, Washington, District of Columbia, DC
FACILITY 89. FRONT OBLIQUE TAKEN FROM DRIVEWAY. VIEW FACING NORTHEAST. ...
FACILITY 89. FRONT OBLIQUE TAKEN FROM DRIVEWAY. VIEW FACING NORTHEAST. - U.S. Naval Base, Pearl Harbor, Naval Housing Area Makalapa, Junior Officers' Quarters Type K, Makin Place, & Halawa, Makalapa, & Midway Drives, Pearl City, Honolulu County, HI
7. View of south court and driveway toward main entrance; ...
7. View of south court and driveway toward main entrance; and parts of north and south wings of main building; facing east. - Mission Motel, South Court, 9235 MacArthur Boulevard, Oakland, Alameda County, CA
7. ELEVATION OF STREET (NORTH) FACADE FROM DRIVEWAY OF LOWELL'S ...
7. ELEVATION OF STREET (NORTH) FACADE FROM DRIVEWAY OF LOWELL'S FORMER RESIDENCE. NOTE BUILDERS VERTICALLY ALIGNED STEM OF BOATS WITH CORNER OF HOUSE BEHIND CAMERA POSITION. - Lowell's Boat Shop, 459 Main Street, Amesbury, Essex County, MA
2. View from the mansion formal entrance driveway toward the ...
2. View from the mansion formal entrance driveway toward the big meadow at the Billings Farm & Museum. The driveway is flanked by granite gateposts surmounted by wrought iron urn lamps. The view includes a manicured hemlock hedge (Tsuga canadensis) retained by a stone wall at left, and white birch (Betula species) under-planted with ferns at center. - Marsh-Billings-Rockefeller National Historical Park, 54 Elm Street, Woodstock, Windsor County, VT
Risk-Averse Multi-Armed Bandit Problems Under Mean-Variance Measure
NASA Astrophysics Data System (ADS)
Vakili, Sattar; Zhao, Qing
2016-09-01
The multi-armed bandit problems have been studied mainly under the measure of expected total reward accrued over a horizon of length $T$. In this paper, we address the issue of risk in multi-armed bandit problems and develop parallel results under the measure of mean-variance, a commonly adopted risk measure in economics and mathematical finance. We show that the model-specific regret and the model-independent regret in terms of the mean-variance of the reward process are lower bounded by $\Omega(\log T)$ and $\Omega(T^{2/3})$, respectively. We then show that variations of the UCB policy and the DSEE policy developed for the classic risk-neutral MAB achieve these lower bounds.
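A minimal sketch of the idea, under assumed Gaussian arms and a simplified confidence width (not the paper's exact policy or constants): play the arm whose empirical mean-variance, minus an exploration bonus, is smallest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gaussian arms as (mean, std). With rho = 1, arm 1 has the
# best (smallest) mean-variance value sigma^2 - rho * mu.
arms = [(1.0, 1.5), (0.8, 0.3), (0.5, 0.6)]
rho = 1.0     # risk tolerance: larger rho weighs the mean more
T = 5000

def empirical_mv(x, rho):
    """Empirical mean-variance of observed rewards: var - rho * mean."""
    return np.var(x) - rho * np.mean(x)

counts = np.zeros(len(arms))
samples = [[] for _ in arms]

for t in range(T):
    if t < len(arms):
        k = t  # initialize: pull each arm once
    else:
        # LCB-style index: empirical mean-variance minus a confidence
        # width shrinking with pulls (a sketch, not the paper's policy).
        idx = [empirical_mv(samples[i], rho)
               - np.sqrt(2 * np.log(t) / counts[i]) for i in range(len(arms))]
        k = int(np.argmin(idx))
    mu, sd = arms[k]
    samples[k].append(rng.normal(mu, sd))
    counts[k] += 1

best = min(range(len(arms)), key=lambda i: arms[i][1] ** 2 - rho * arms[i][0])
print("true best arm:", best, "pull counts:", counts)
```

The policy concentrates its pulls on the arm with the best true mean-variance value while sampling the others only logarithmically often.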
D'Acremont, Mathieu; Bossaerts, Peter
2008-12-01
When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured in terms of reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results of implementation of the paradigm are discussed.
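The two valuation rules can be contrasted on a single two-outcome gamble. The exponential utility, its risk parameter, and the mean-variance trade-off coefficient below are illustrative assumptions, not values from the article:

```python
import numpy as np

outcomes = np.array([10.0, -5.0])   # illustrative payoffs
probs = np.array([0.6, 0.4])

# Expected utility with a concave (risk-averse) utility u(x) = -exp(-a x):
# weight each state's utility by its probability and sum.
a = 0.1
eu = float(probs @ -np.exp(-a * outcomes))

# Mean-variance evaluation: trade off expected reward against variance.
mean = float(probs @ outcomes)
var = float(probs @ (outcomes - mean) ** 2)
b = 0.05
mv = mean - b * var

print("expected utility value :", eu)
print("mean-variance value    :", mv)
```

The expected-utility rule needs every state probability separately, while the mean-variance rule compresses the gamble into two summary statistics, which is why the latter facilitates learning.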
Continuous-time mean-variance portfolio selection with value-at-risk and no-shorting constraints
NASA Astrophysics Data System (ADS)
Yan, Wei
2012-01-01
An investment problem is considered with a dynamic mean-variance (M-V) portfolio criterion under discontinuous prices which follow jump-diffusion processes, in accordance with the actual prices of stocks and the normality and stability of the financial market. Short-selling of stocks is prohibited in this mathematical model. The corresponding stochastic Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented, and its solution is obtained based on the theory of stochastic LQ control and viscosity solutions. The efficient frontier and optimal strategies of the original dynamic M-V portfolio selection problem are also provided. The effects of the value-at-risk constraint on the efficient frontier are then illustrated. Finally, an example illustrating the discontinuous prices based on M-V portfolio selection is presented.
Full-Depth Asphalt Pavements for Parking Lots and Driveways.
ERIC Educational Resources Information Center
Asphalt Inst., College Park, MD.
The latest information for designing full-depth asphalt pavements for parking lots and driveways is covered in relationship to the continued increase in vehicle registration. It is based on The Asphalt Institute's Thickness Design Manual, Series No. 1 (MS-1), Seventh Edition, which covers all aspects of asphalt pavement thickness design in detail,…
5. View of Clovelley Farm tenant house from driveway area ...
5. View of Clovelley Farm tenant house from driveway area on north side of house, looking northwest to back gable addition (left) and north side wall of main block. - Clovelley Farm Tenant House, 4958 Paris Road (east side), Paris, Bourbon County, KY
Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model
Jurczyk, Jan; Eckrot, Alexander; Morgenstern, Ingo
2016-01-01
The world is still recovering from the financial crisis that peaked in September 2008. The triggering event was the bankruptcy of Lehman Brothers. To detect such turmoils, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to the systemic risks within markets by several studies in the aftermath of this crisis. We study 37 different US indices which cover almost all aspects of the US economy and show that monitoring an average investor's behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground states of the mean-variance model along the efficient frontier, bound to real-world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign. PMID:27351482
Armstrong, Kerry; Thunström, Hanna; Davey, Jeremy
2013-03-01
Slow-speed run-overs represent a major cause of injury and death among Australian children, with higher rates of incidents reported in Queensland than in the remaining Australian states. Yet little attention has been given to how caregivers develop their safety behaviour in and around the driveway setting. To address this gap, the current study aimed to develop a conceptual model of driveway child safety behaviours among caregivers of children aged 5 years or younger. Semi-structured interviews were conducted with 26 caregivers (25 females/1 male, mean age 33.24 years) from rural and metropolitan Queensland. To enable a comparison and validation of findings from the driveway, the study analysed both driveway and domestic safety behaviours. Domestic safety behaviours were categorised and validated against driveway safety behaviours, uncovering a process of risk appraisal and safety behaviour that was applicable in both settings (the Safety System Model). However, noteworthy differences between the domestic and driveway settings were uncovered. Unlike in the domestic setting, driveway risks were perceived as shifting according to the presence of moving vehicles, which resulted in inconsistent safety behaviours. While the findings require further validation, they have implications for the design and implementation of driveway run-over interventions. PMID:23298707
Atta Mills, Ebenezer Fiifi Emire; Yan, Dawen; Yu, Bo; Wei, Xinyuan
2016-01-01
We propose a consolidated risk measure based on variance and the safety-first principle in a mean-risk portfolio optimization framework. The safety-first approach to financial portfolio selection is modified and improved. Our proposed models are subjected to norm regularization to seek near-optimal, stable and sparse portfolios. We compare the cumulative wealth of our preferred proposed model to a benchmark, the S&P 500 index, over the same period. Our proposed portfolio strategies have better out-of-sample performance than the selected alternative portfolio rules in the literature and control the downside risk of the portfolio returns. PMID:27386363
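The norm-regularization idea can be sketched in its simplest ridge (L2) form, which the paper's safety-first models go well beyond: an L2 penalty on the minimum-variance problem simply shifts the covariance matrix, pulling the weights toward equal and stabilizing them. The data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
X = rng.normal(size=(60, n))          # synthetic return observations
Sigma = np.cov(X, rowvar=False)       # sample covariance

def min_var_weights(Sigma, gamma=0.0):
    """Minimum-variance weights with an L2 penalty:
    minimize w'Sigma w + gamma * ||w||^2  s.t.  sum(w) = 1.
    The penalty amounts to replacing Sigma with Sigma + gamma * I."""
    S = Sigma + gamma * np.eye(Sigma.shape[0])
    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(S, ones)
    return w / w.sum()

w_plain = min_var_weights(Sigma)
w_ridge = min_var_weights(Sigma, gamma=5.0)

# Regularization shrinks the spread of the weights (max minus min),
# which tends to improve out-of-sample stability.
print("unregularized weight spread:", round(float(np.ptp(w_plain)), 3))
print("regularized weight spread  :", round(float(np.ptp(w_ridge)), 3))
```

As gamma grows, the solution tends to the equally weighted portfolio; choosing gamma trades estimation stability against in-sample optimality.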
Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.
Shinzato, Takashi
2015-01-01
In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach. PMID:26225761
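The gap between the two quantities in the abstract's first sentence, the expected minimal risk versus the minimal expected risk, can be checked by simulation over random sample covariances. This is a generic convexity check on synthetic data, not the paper's replica-analysis setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n_assets, n_scenarios, trials = 5, 50, 2000

def min_risk(Sigma):
    """Minimal risk min_w w'Sigma w subject to sum(w) = 1,
    in closed form: 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(Sigma.shape[0])
    return 1.0 / float(ones @ np.linalg.solve(Sigma, ones))

per_realization = []
Sigma_sum = np.zeros((n_assets, n_assets))
for _ in range(trials):
    X = rng.normal(size=(n_scenarios, n_assets))  # random return scenarios
    Sigma = X.T @ X / n_scenarios                 # random sample covariance
    per_realization.append(min_risk(Sigma))       # optimize each realization
    Sigma_sum += Sigma

expected_minimal = float(np.mean(per_realization))  # E[ min_w w'Sigma w ]
minimal_expected = min_risk(Sigma_sum / trials)     # min_w w' E[Sigma] w

print("expected minimal risk:", round(expected_minimal, 4))
print("minimal expected risk:", round(minimal_expected, 4))
```

Optimizing each realization can only do better than the single strategy optimal for the average covariance, so the expected minimal risk is bounded above by the minimal expected risk, the distinction the abstract turns on.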
Risk modelling in portfolio optimization
NASA Astrophysics Data System (ADS)
Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi
2013-09-01
Risk management is very important in portfolio optimization. The mean-variance model has been used in portfolio optimization to minimize investment risk; its objective is to minimize the portfolio risk while achieving the target rate of return, with variance serving as the risk measure. The purpose of this study is to compare the portfolio composition and performance of the optimal mean-variance portfolio with those of the equally weighted portfolio, in which the proportions invested in each asset are equal. The results show that the compositions of the mean-variance optimal portfolio and the equally weighted portfolio differ. In addition, the mean-variance optimal portfolio performs better, yielding a higher performance ratio than the equally weighted portfolio.
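As an illustration of the comparison described above: when no target-return constraint is imposed, the global minimum-variance weights have a closed form, w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The sketch below uses a made-up three-asset covariance matrix, not the study's FBMKLCI data:

```python
import numpy as np

# Illustrative covariance matrix for three assets (synthetic numbers,
# not the FBMKLCI stock data used in the study).
cov = np.array([[0.040, 0.006, 0.010],
                [0.006, 0.025, 0.004],
                [0.010, 0.004, 0.030]])

ones = np.ones(len(cov))
inv = np.linalg.inv(cov)

# Closed-form global minimum-variance weights: w = Σ⁻¹1 / (1ᵀΣ⁻¹1)
w_mv = inv @ ones / (ones @ inv @ ones)
w_eq = ones / len(cov)          # equally weighted portfolio

var_mv = w_mv @ cov @ w_mv      # portfolio variance = wᵀΣw
var_eq = w_eq @ cov @ w_eq
print(var_mv <= var_eq)         # minimum-variance weights cannot be riskier
```

By construction, the minimum-variance portfolio's variance is never larger than the equally weighted portfolio's, which is consistent with the direction of the comparison reported in the abstract.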
Portfolio optimization with skewness and kurtosis
NASA Astrophysics Data System (ADS)
Lam, Weng Hoe; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi
2013-04-01
Mean and variance of return distributions are two important parameters of the mean-variance model in portfolio optimization. However, the mean-variance model becomes inadequate when asset returns are not normally distributed, so higher moments such as skewness and kurtosis cannot be ignored. Risk-averse investors prefer portfolios with high skewness and low kurtosis, which reduce the probability of negative rates of return. The objective of this study is to compare the portfolio compositions and performances of the mean-variance model and the mean-variance-skewness-kurtosis model using the polynomial goal programming approach. The results show that incorporating skewness and kurtosis changes the optimal portfolio compositions, and that the mean-variance-skewness-kurtosis model outperforms the mean-variance model because it takes these higher moments into consideration. The mean-variance-skewness-kurtosis model is therefore more appropriate for Malaysian investors in portfolio optimization.
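The higher moments the abstract refers to are standard sample statistics. A minimal sketch of how skewness and excess kurtosis are computed from a return series (the return values here are made-up, and population moments are used for simplicity):

```python
import numpy as np

def skewness(x):
    """Sample skewness: mean of standardized values cubed."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()           # population mean and std (ddof=0)
    return np.mean(((x - m) / s) ** 3)

def excess_kurtosis(x):
    """Sample kurtosis minus 3, so a normal distribution scores ~0."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return np.mean(((x - m) / s) ** 4) - 3.0

# Hypothetical weekly returns
returns = np.array([0.01, -0.02, 0.015, -0.005, 0.03, -0.04, 0.02])
print(skewness(returns), excess_kurtosis(returns))
```

Negative skewness and high excess kurtosis indicate heavier downside risk than the mean-variance model assumes, which is what motivates the four-moment model.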
Portfolio optimization using median-variance approach
NASA Astrophysics Data System (ADS)
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the data are normally distributed, which is generally not true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach accommodates both normal and non-normal data distributions. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach produces a lower risk for each level of return than the mean-variance approach.
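The appeal of the median over the mean under non-normal data can be shown in a few lines: a single extreme return drags the mean far more than the median. The return values below are illustrative, not from the Bursa Malaysia data:

```python
import numpy as np

# Five ordinary weekly returns, then one extreme crash week (made-up values).
clean = np.array([0.010, 0.012, 0.008, 0.011, 0.009])
with_outlier = np.append(clean, -0.40)

print(np.mean(clean), np.mean(with_outlier))      # mean flips sign
print(np.median(clean), np.median(with_outlier))  # median barely moves
```

The mean drops from 1% to roughly -5.8% after one outlier, while the median only shifts from 0.010 to 0.0095, which is why median-based location estimates are more robust for skewed or heavy-tailed return data.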
Algorithms for optimizing CT fluence control
NASA Astrophysics Data System (ADS)
Hsieh, Scott S.; Pelc, Norbert J.
2014-03-01
The ability to customize the incident x-ray fluence in CT via beam-shaping filters or mA modulation is known to improve image quality and/or reduce radiation dose. Previous work has shown that complete control of x-ray fluence (ray-by-ray fluence modulation) would further improve dose efficiency. While complete control of fluence is not currently possible, emerging concepts such as dynamic attenuators and inverse-geometry CT allow nearly complete control to be realized. Optimally using ray-by-ray fluence modulation requires solving a very high-dimensional optimization problem. Most optimization techniques fail or only provide approximate solutions. We present efficient algorithms for minimizing mean or peak variance given a fixed dose limit. The reductions in variance can easily be translated to reduction in dose, if the original variance met image quality requirements. For mean variance, a closed form solution is derived. The peak variance problem is recast as iterated, weighted mean variance minimization, and at each iteration it is possible to bound the distance to the optimal solution. We apply our algorithms in simulations of scans of the thorax and abdomen. Peak variance reductions of 45% and 65% are demonstrated in the abdomen and thorax, respectively, compared to a bowtie filter alone. Mean variance shows smaller gains (about 15%).
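The flavor of the closed-form mean-variance result can be sketched with a toy version of the problem: minimize the summed per-ray variance Σσᵢ²/fᵢ subject to a linear dose budget Σcᵢfᵢ = D. The Lagrange condition gives fᵢ ∝ σᵢ/√cᵢ. This is an assumed simplification with made-up σ and cost values, not the authors' full algorithm:

```python
import numpy as np

# Hypothetical per-ray noise scales σ² and per-ray dose costs c.
sigma2 = np.array([4.0, 1.0, 9.0, 2.0])   # variance contribution ∝ σ²/f
cost   = np.array([1.0, 2.0, 1.0, 3.0])   # dose contribution ∝ c·f
D = 10.0                                   # total dose budget

# Lagrange solution of  min Σ σ_i²/f_i  s.t.  Σ c_i f_i = D:
#   ∂/∂f_i [Σ σ²/f + λ Σ c f] = 0  →  f_i ∝ σ_i / sqrt(c_i)
f_opt = np.sqrt(sigma2 / cost)
f_opt *= D / (cost @ f_opt)               # rescale to meet the dose budget

f_uni = np.full_like(cost, D / cost.sum())  # uniform dose-feasible fluence

mean_var = lambda f: np.sum(sigma2 / f)
print(mean_var(f_opt) <= mean_var(f_uni))   # optimal allocation wins
```

Noisier rays (large σ) receive more fluence and expensive rays (large c) receive less, which is the qualitative behavior ray-by-ray modulation exploits.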
Large deviations and portfolio optimization
NASA Astrophysics Data System (ADS)
Sornette, Didier
Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends with a general functional integral formulation. A central point is that risk, usually treated as one-dimensional in the conventional mean-variance approach, has to be addressed through the full distribution of losses. Furthermore, the time horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use Cramér's theory of large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return, the role of large deviations in multiplicative processes, and the different optimal strategies for investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies aimed at controlling large risks are presented. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
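The distinction between average and typical return in a multiplicative process can be made concrete with a two-outcome toy model (the gross returns below are made-up): the arithmetic mean of the per-period factor governs the average wealth, while the geometric mean governs the typical trajectory, and the two can sit on opposite sides of 1.

```python
import math

# Two equally likely per-period gross returns (illustrative values).
up, down, p = 1.5, 0.6, 0.5

# Arithmetic mean of the factor drives the *average* wealth E[W_T].
avg_growth = p * up + (1 - p) * down

# Geometric mean (exp of E[log return]) drives the *typical* wealth.
typ_growth = math.exp(p * math.log(up) + (1 - p) * math.log(down))

print(avg_growth)  # 1.05 > 1: average wealth grows
print(typ_growth)  # ≈ 0.95 < 1: the typical trajectory shrinks
```

Here the average wealth grows 5% per period while the typical (median) trajectory shrinks, because rare lucky paths dominate the mean — exactly the large-deviation effect the abstract emphasizes.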
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
NASA Astrophysics Data System (ADS)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita
2014-06-01
Traditional portfolio optimization methods such as Markowitz's mean-variance model and the semi-variance model use static expected return and volatility risk estimated from historical data to generate an optimal portfolio. The resulting portfolio may not truly be optimal in practice because extreme maximum and minimum values in the data can strongly influence the expected return and volatility estimates. This paper considers the distributions of assets' returns and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, sectorial indices data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.
Belief Propagation Algorithm for Portfolio Optimization Problems
2015-01-01
The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models was first estimated using replica analysis by S. Ciliberti et al. [Eur. Phys. J. B 57, 175 (2007)]; however, an approximate derivation method for finding the optimal portfolio with respect to a given return set had not been developed. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm. PMID:26305462
Inverse Optimization: A New Perspective on the Black-Litterman Model.
Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch
2012-12-11
The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct "BL"-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new "BL"-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views. PMID:25382873
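The inverse-optimization idea has a well-known special case in the BL setting: given a covariance matrix Σ, a risk-aversion coefficient δ and observed weights w, the expected returns under which w is (unconstrained) mean-variance optimal are μ = δΣw. A minimal sketch with synthetic two-asset inputs, not the estimators proposed in the paper:

```python
import numpy as np

# Synthetic inputs: covariance Σ, observed market weights w, risk aversion δ.
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
w_mkt = np.array([0.6, 0.4])
delta = 2.5

# Reverse optimization: returns that make w_mkt mean-variance optimal.
mu_implied = delta * Sigma @ w_mkt

# Forward mean-variance step recovers the weights: w = (δΣ)⁻¹ μ.
w_back = np.linalg.solve(delta * Sigma, mu_implied)
print(np.allclose(w_back, w_mkt))  # True: the round trip is consistent
```

The MV-IO and RMV-IO estimators in the paper generalize this round trip beyond the unconstrained quadratic case; the snippet only shows the classical starting point.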
NASA Astrophysics Data System (ADS)
Morton de Lachapelle, David; Challet, Damien
2010-07-01
Despite the availability of very detailed data on financial markets, agent-based modeling is hindered by the lack of information about real trader behavior. This makes it impossible to validate agent-based models, which are thus reverse-engineering attempts. This work is a contribution towards building a set of stylized facts about the traders themselves. Using the client database of Swissquote Bank SA, the largest online Swiss broker, we find empirical relationships between turnover, account values and the number of assets in which a trader is invested. A theory based on simple mean-variance portfolio optimization that crucially includes variable transaction costs is able to reproduce faithfully the observed behaviors. We finally argue that our results bring to light the collective ability of a population to construct a mean-variance portfolio that takes into account the structure of transaction costs.
Kuo, Ting-chun; Mandal, Sandip; Yamauchi, Atsushi; Hsieh, Chih-hao
2016-05-01
Fishing is expected to alter the spatial heterogeneity of fishes. As an effective index to quantify spatial heterogeneity, the exponent b in Taylor's power law (V = aMb) measures how spatial variance (V) varies with changes in mean abundance (M) of a population, with larger b indicating higher spatial aggregation potential (i.e., more heterogeneity). Theory predicts b is related with life history traits, but empirical evidence is lacking. Using 50-yr spatiotemporal data from the California Current Ecosystem, we examined fishing and life history effects on Taylor's exponent by comparing spatial distributions of exploited and unexploited fishes living in the same environment. We found that unexploited species with smaller size and generation time exhibit larger b, supporting theoretical prediction. In contrast, this relationship in exploited species is much weaker, as the exponents of large exploited species were higher than unexploited species with similar traits. Our results suggest that fishing may increase spatial aggregation potential of a species, likely through degrading their size/age structure. Results of moving-window cross-correlation analyses on b vs. age structure indices (mean age and age evenness) for some exploited species corroborate our findings. Furthermore, through linking our findings to other fundamental ecological patterns (occupancy-abundance and size-abundance relationships), we provide theoretical arguments for the usefulness of monitoring the exponent b for management purposes. We propose that age/size-truncated species might have lower recovery rate in spatial occupancy, and the spatial variance-mass relationship of a species might be non-linear. Our findings provide theoretical basis explaining why fishery management strategy should be concerned with changes to the age and spatial structure of exploited fishes. PMID:27349101
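The exponent b in Taylor's power law V = aM^b is estimated as the slope of log V against log M. A minimal sketch on synthetic abundance data with a known exponent (the values of a and b below are assumptions for illustration):

```python
import numpy as np

# Synthetic population obeying Taylor's law V = a·M^b exactly.
a_true, b_true = 0.5, 1.8
M = np.array([1.0, 2.0, 5.0, 10.0, 50.0])   # mean abundances
V = a_true * M ** b_true                     # spatial variances

# b is the slope of the log-log regression of V on M.
b_hat, log_a_hat = np.polyfit(np.log(M), np.log(V), 1)
print(round(b_hat, 3))  # → 1.8
```

With real survey data the points scatter around the line, and b is read off the fitted slope; larger b indicates higher spatial aggregation potential, as the abstract notes.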
Dynamics of mean-variance-skewness of cumulative crop yield impact temporal yield variance
Technology Transfer Automated Retrieval System (TEKTRAN)
Production risk associated with cropping systems influences farmers’ decisions to adopt a new management practice or a production system. Cumulative yield (CY), temporal yield variance (TYV) and coefficient of variation (CV) were used to assess the risk associated with adopting combinations of new m...
Estimating synaptic parameters from mean, variance, and covariance in trains of synaptic responses.
Scheuss, V; Neher, E
2001-01-01
Fluctuation analysis of synaptic transmission using the variance-mean approach has been restricted in the past to steady-state responses. Here we extend this method to short repetitive trains of synaptic responses, during which the response amplitudes are not stationary. We consider intervals between trains, long enough so that the system is in the same average state at the beginning of each train. This allows analysis of ensemble means and variances for each response in a train separately. Thus, modifications in synaptic efficacy during short-term plasticity can be attributed to changes in synaptic parameters. In addition, we provide practical guidelines for the analysis of the covariance between successive responses in trains. Explicit algorithms to estimate synaptic parameters are derived and tested by Monte Carlo simulations on the basis of a binomial model of synaptic transmission, allowing for quantal variability, heterogeneity in the release probability, and postsynaptic receptor saturation and desensitization. We find that the combined analysis of variance and covariance is advantageous in yielding an estimate for the number of release sites, which is independent of heterogeneity in the release probability under certain conditions. Furthermore, it allows one to calculate the apparent quantal size for each response in a sequence of stimuli. PMID:11566771
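In the simplest binomial release model underlying the variance-mean approach (no quantal variability or heterogeneity — a deliberate simplification of the models analyzed above), mean = N·p·q and variance = N·p·(1-p)·q², so the variance is a parabola in the mean: var = q·mean − mean²/N. Fitting that parabola recovers the quantal size q and the number of release sites N. The parameter values are made-up:

```python
import numpy as np

# Assumed "true" synaptic parameters for the sketch.
N_true, q_true = 10, 0.8                      # release sites, quantal size
p = np.array([0.1, 0.3, 0.5, 0.7, 0.9])      # release prob. varies over a train

mean = N_true * p * q_true                    # mean = N·p·q
var = N_true * p * (1 - p) * q_true ** 2      # var  = N·p·(1-p)·q²

# Fit var = c2·mean² + c1·mean + c0  →  q = c1,  N = −1/c2
c2, c1, c0 = np.polyfit(mean, var, 2)
print(round(c1, 3), round(-1 / c2, 3))  # → 0.8 10.0
```

With noiseless data the fit is exact; with real ensemble estimates the parabola is fit by least squares and the paper's covariance analysis additionally corrects for heterogeneity in release probability.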
Robust Portfolio Optimization Using Pseudodistances
2015-01-01
The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948
He, L; Huang, G H; Lu, H W
2010-04-15
Solving groundwater remediation optimization problems based on proxy simulators can yield optimal solutions that differ from the "true" solutions of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), together with an associated solution method, for simultaneously addressing the modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This differs from previous modeling efforts, which focused on uncertainty in physical parameters (e.g., soil porosity), whereas this work addresses uncertainty in the mathematical simulator (arising from model residuals). Compared to existing modeling approaches in which only parameter uncertainty is considered, the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering system designers a confidence level for optimal remediation strategies, and reducing the computational cost of the optimization process.
Optimal trading strategies—a time series approach
NASA Astrophysics Data System (ADS)
Bebbington, Peter A.; Kühn, Reimer
2016-05-01
Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate against these are investigated. Finally we apply our framework to real world data.
Optimal decision making on the basis of evidence represented in spike trains.
Zhang, Jiaxiang; Bogacz, Rafal
2010-05-01
Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values from gaussian distributions with the same variance across alternatives. In this article, we make a more realistic assumption that sensory evidence is represented in spike trains described by the Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, the neural circuits involving cortical integrators and basal ganglia can approximate the optimal decision procedures for two and multiple alternative choice tasks.
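The mean-variance relationship the model relies on is the defining property of Poisson counts: the variance equals the mean, so the Fano factor is 1. A quick numerical check with an assumed firing-rate parameter:

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 20.0                          # assumed mean spike count per window
counts = rng.poisson(rate, size=200_000)

# For Poisson spike counts, variance = mean, i.e. Fano factor = 1.
fano = counts.var() / counts.mean()
print(abs(fano - 1.0) < 0.05)        # sample estimate is close to 1
```

Gaussian evidence with a fixed variance across alternatives violates this scaling, which is why the Poisson representation is the more realistic assumption for sensory neurons.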
NASA Astrophysics Data System (ADS)
Sun, Xuelian; Liu, Zixian
2016-02-01
In this paper, a new estimator of correlation matrix is proposed, which is composed of the detrended cross-correlation coefficients (DCCA coefficients), to improve portfolio optimization. In contrast to Pearson's correlation coefficients (PCC), DCCA coefficients acquired by the detrended cross-correlation analysis (DCCA) method can describe the nonlinear correlation between assets, and can be decomposed in different time scales. These properties of DCCA make it possible to improve the investment effect and more valuable to investigate the scale behaviors of portfolios. The minimum variance portfolio (MVP) model and the Mean-Variance (MV) model are used to evaluate the effectiveness of this improvement. Stability analysis shows the effect of two kinds of correlation matrices on the estimation error of portfolio weights. The observed scale behaviors are significant to risk management and could be used to optimize the portfolio selection.
NASA Technical Reports Server (NTRS)
Laird, Philip
1992-01-01
We distinguish static and dynamic optimization of programs: whereas static optimization modifies a program before runtime and is based only on its syntactical structure, dynamic optimization is based on the statistical properties of the input source and examples of program execution. Explanation-based generalization is a commonly used dynamic optimization method, but its effectiveness as a speedup-learning method is limited, in part because it fails to separate the learning process from the program transformation process. This paper describes a dynamic optimization technique called a learn-optimize cycle that first uses a learning element to uncover predictable patterns in the program execution and then uses an optimization algorithm to map these patterns into beneficial transformations. The technique has been used successfully for dynamic optimization of pure Prolog.
Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei
2016-01-01
Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. On the basis of WorldView-2 high-resolution data and an optimal segmentation parameters method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of the bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor parameter and compact factor parameters were computed with the use of the control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762
Carver, Charles S; Scheier, Michael F
2014-06-01
Optimism is a cognitive construct (expectancies regarding future outcomes) that also relates to motivation: optimistic people exert effort, whereas pessimistic people disengage from effort. Study of optimism began largely in health contexts, finding positive associations between optimism and markers of better psychological and physical health. Physical health effects likely occur through differences in both health-promoting behaviors and physiological concomitants of coping. Recently, the scientific study of optimism has extended to the realm of social relations: new evidence indicates that optimists have better social connections, partly because they work harder at them. In this review, we examine the myriad ways this trait can benefit an individual, and our current understanding of the biological basis of optimism.
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2005-01-01
We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.
Sejnowski, Terrence J.; Poizner, Howard; Lynch, Gary; Gepshtein, Sergei; Greenspan, Ralph J.
2014-01-01
Human performance approaches that of an ideal observer and optimal actor in some perceptual and motor tasks. These optimal abilities depend on the capacity of the cerebral cortex to store an immense amount of information and to flexibly make rapid decisions. However, behavior only approaches these limits after a long period of learning while the cerebral cortex interacts with the basal ganglia, an ancient part of the vertebrate brain that is responsible for learning sequences of actions directed toward achieving goals. Progress has been made in understanding the algorithms used by the brain during reinforcement learning, which is an online approximation of dynamic programming. Humans also make plans that depend on past experience by simulating different scenarios, which is called prospective optimization. The same brain structures in the cortex and basal ganglia that are active online during optimal behavior are also active offline during prospective optimization. The emergence of general principles and algorithms for goal-directed behavior has consequences for the development of autonomous devices in engineering applications. PMID:25328167
Lee, John R.
1975-01-01
Optimal fluoridation has been defined as that fluoride exposure which confers maximal cariostasis with minimal toxicity and its values have been previously determined to be 0.5 to 1 mg per day for infants and 1 to 1.5 mg per day for an average child. Total fluoride ingestion and urine excretion were studied in Marin County, California, children in 1973 before municipal water fluoridation. Results showed fluoride exposure to be higher than anticipated and fulfilled previously accepted criteria for optimal fluoridation. Present and future water fluoridation plans need to be reevaluated in light of total environmental fluoride exposure. PMID:1130041
NASA Technical Reports Server (NTRS)
Patterson, Michael J.; Mohajeri, Kayhan
1991-01-01
The preliminary results of a test program to optimize a neutralizer design for 30 cm xenon ion thrusters are discussed. The impact of neutralizer geometry, neutralizer axial location, and local magnetic fields on neutralizer performance is discussed. The effect of neutralizer performance on overall thruster performance is quantified, for thruster operation in the 0.5-3.2 kW power range. Additionally, these data are compared to data published for other north-south stationkeeping (NSSK) and primary propulsion xenon ion thruster neutralizers.
SIAM conference on optimization
Not Available
1992-05-10
Abstracts are presented of 63 papers on the following topics: large-scale optimization, interior-point methods, algorithms for optimization, problems in control, network optimization methods, and parallel algorithms for optimization problems.
NASA Astrophysics Data System (ADS)
Allahverdyan, Armen E.; Hovhannisyan, Karen; Mahler, Guenter
2010-05-01
We study a refrigerator model which consists of two n-level systems interacting via a pulsed external field. Each system couples to its own thermal bath at temperatures Th and Tc, respectively (θ ≡ Tc/Th < 1). The refrigerator functions in two steps: thermally isolated interaction between the systems driven by the external field, and isothermal relaxation back to equilibrium. There is a complementarity between the power of heat transfer from the cold bath and the efficiency: the latter vanishes when the former is maximized, and vice versa. A reasonable compromise is achieved by optimizing the product of the heat power and efficiency over the Hamiltonian of the two systems. The efficiency is then found to be bounded from below by ζ_CA = 1/√(1−θ) − 1 (an analog of the Curzon-Ahlborn efficiency), besides being bounded from above by the Carnot efficiency ζ_C = 1/(1−θ) − 1. The lower bound is reached in the equilibrium limit θ → 1. The Carnot bound is reached (for a finite power and a finite amount of heat transferred per cycle) for ln n ≫ 1. If the above maximization is constrained by assuming homogeneous energy spectra for both systems, the efficiency is bounded from above by ζ_CA and converges to it for n ≫ 1.
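For reference, the two bounds can be written out explicitly; the Carnot form is the standard refrigerator coefficient of performance re-expressed in terms of θ (a textbook identity, stated here as a sketch rather than taken from the paper):

```latex
\[
\zeta_C = \frac{T_c}{T_h - T_c} = \frac{\theta}{1-\theta} = \frac{1}{1-\theta} - 1,
\qquad
\zeta_{CA} = \frac{1}{\sqrt{1-\theta}} - 1 \;\le\; \zeta \;\le\; \zeta_C .
\]
```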
RECOVERY ACT - Robust Optimization for Connectivity and Flows in Dynamic Complex Networks
Balasundaram, Balabhaskar; Butenko, Sergiy; Boginski, Vladimir; Uryasev, Stan
2013-12-25
to capture uncertainty and risk using appropriate probabilistic, statistical and optimization concepts. The main difficulty arising in addressing these issues is the dramatic increase in the computational complexity of the resulting optimization problems. This project studied novel models and methodologies for risk-averse network optimization: specifically, network design, network flows, and cluster detection problems under uncertainty. The approach taken was to incorporate a quantitative risk measure known as conditional value-at-risk that is widely used in financial applications. This approach presents a viable alternative modeling and optimization framework to chance-constrained optimization and mean-variance optimization, one that also facilitates the detection of risk-averse solutions.
NASA Astrophysics Data System (ADS)
Marec, J. P.
The optimization of rendezvous and transfer orbits is introduced. Optimal transfer is defined and propulsion system modeling is outlined. Parameter optimization, including the Hohmann transfer, is discussed. Optimal transfer in general, uniform, and central gravitational fields is covered. Interplanetary rendezvous is treated.
Optimization of composite structures
NASA Technical Reports Server (NTRS)
Stroud, W. J.
1982-01-01
Structural optimization is introduced and examples which illustrate potential problems associated with optimized structures are presented. Optimized structures may have very low load carrying ability for an off design condition. They tend to have multiple modes of failure occurring simultaneously and can, therefore, be sensitive to imperfections. Because composite materials provide more design variables than do metals, they allow for more refined tailoring and more extensive optimization. As a result, optimized composite structures can be especially susceptible to these problems.
Particle Swarm Optimization Toolbox
NASA Technical Reports Server (NTRS)
Grant, Michael J.
2010-01-01
The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm is based on this combination of traits from parents to provide a solution improved over either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers; its only purpose is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be numerical simulations, analytical functions, etc., since the specific detail of this function is of no concern to the optimizer. These algorithms were originally developed to support entry
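The single-objective PSO update described above can be sketched generically; this is a textbook PSO in Python, not the toolbox's MATLAB implementation, and the swarm size, inertia, and acceleration parameters are illustrative:

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over box bounds with a basic single-objective PSO."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimize the sphere function on [-5, 5]^2.
best, best_val = pso(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
```

A MOPSO variant would additionally maintain an archive of non-dominated positions to build the Pareto front.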
Ridzal, Danis
2007-03-01
Aristos is a Trilinos package for nonlinear continuous optimization, based on full-space sequential quadratic programming (SQP) methods. Aristos is specifically designed for the solution of large-scale constrained optimization problems in which the linearized constraint equations require iterative (i.e. inexact) linear solver techniques. Aristos' unique feature is an efficient handling of inexactness in linear system solves. Aristos currently supports the solution of equality-constrained convex and nonconvex optimization problems. It has been used successfully in the area of PDE-constrained optimization, for the solution of nonlinear optimal control, optimal design, and inverse problems.
Multidisciplinary Optimization for Aerospace Using Genetic Optimization
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Hahn, Edward E.; Herrera, Claudia Y.
2007-01-01
In support of the ARMD guidelines, NASA's Dryden Flight Research Center is developing a multidisciplinary design and optimization tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Optimization has made its way into many mainstream applications. For example, NASTRAN(TradeMark) has its solution sequence 200 for Design Optimization, and MATLAB(TradeMark) has an Optimization Toolbox. Other packages, such as the ZAERO(TradeMark) aeroelastic panel code and the CFL3D(TradeMark) Navier-Stokes solver, have no built-in optimizer. The goal of the tool development is to generate a central executive capable of using disparate software packages in a cross-platform network environment so as to quickly perform optimization and design tasks in a cohesive, streamlined manner. A provided figure (Figure 1) shows a typical set of tools and their relation to the central executive. Optimization can take place within each individual tool, or in a loop between the executive and the tool, or both.
NASA Astrophysics Data System (ADS)
Marec, J. P.
Techniques for the optimization (in terms of minimal mass loss) of spacecraft trajectories are developed. The optimal transfer is defined; a model of the propulsion system is presented; the two-impulse Hohmann transfer between coplanar circular orbits is shown to be the optimal trajectory for that case; and the problems of optimal transfer in general, uniform, and central gravitational fields are analyzed. A number of specific cases are examined and illustrated with diagrams and graphs.
Mikhalevich, V.S.; Sergienko, I.V.; Zadiraka, V.K.; Babich, M.D.
1994-11-01
This article examines some topics of optimization of computations, which have been discussed at 25 seminar-schools and symposia organized by the V.M. Glushkov Institute of Cybernetics of the Ukrainian Academy of Sciences since 1969. We describe the main directions in the development of computational mathematics and present some of our own results that reflect a certain design conception of speed-optimal and accuracy-optimal (or nearly optimal) algorithms for various classes of problems, as well as a certain approach to optimization of computer computations.
McGuire-Snieckus, Rebecca
2014-01-01
Optimism is generally accepted by psychiatrists, psychologists and other caring professionals as a feature of mental health. Interventions typically rely on cognitive-behavioural tools to encourage individuals to ‘stop negative thought cycles’ and to ‘challenge unhelpful thoughts’. However, evidence suggests that most individuals have persistent biases of optimism and that excessive optimism is not conducive to mental health. How helpful is it to facilitate optimism in individuals who are likely to exhibit biases of optimism already? By locating the cause of distress at the individual level and ‘unhelpful’ cognitions, does this minimise wider systemic social and economic influences on mental health? PMID:25237497
Optimization of parameterized lightpipes
NASA Astrophysics Data System (ADS)
Koshel, R. John
2007-01-01
Parameterization via the bend locus curve allows optimization of single-spherical-bend lightpipes. It takes into account the bend radii, the bend ratio, allowable volume, thickness, and other terms. Parameterization of the lightpipe allows the inclusion of a constrained optimizer to maximize performance of the lightpipe. The simplex method is used for optimization. The standard and optimal simplex methods are used to maximize the standard Lambertian transmission of the lightpipe. A second case presents analogous results when the ray-sample weighted, peak-to-average irradiance uniformity is included with the static Lambertian transmission. These results are compared to a study of the constrained merit space. Results show that both optimizers can locate the optimal solution, but the optimal simplex method accomplishes such with a reduced number of ray-trace evaluations.
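The constrained-simplex idea can be illustrated with a toy merit function. The merit below is a made-up stand-in for the raytrace-based transmission figure, and the volume constraint is folded in as a quadratic penalty so that an off-the-shelf Nelder-Mead simplex (SciPy's, assumed available) can drive it:

```python
from scipy.optimize import minimize

def merit(x):
    """Hypothetical lightpipe merit: maximize a 'transmission' surrogate
    subject to a volume-like limit, via penalty. Illustrative only."""
    bend_radius, bend_ratio = x
    transmission = 1.0 - (bend_radius - 2.0) ** 2 - 0.5 * (bend_ratio - 1.5) ** 2
    volume = bend_radius * bend_ratio              # invented volume proxy
    penalty = 100.0 * max(0.0, volume - 4.0) ** 2  # enforce volume <= 4
    return -transmission + penalty                 # minimize negative transmission

res = minimize(merit, x0=[1.0, 1.0], method="Nelder-Mead")
```

In the actual design problem, each merit evaluation would be a Monte Carlo ray trace, which is why the paper counts ray-trace evaluations when comparing optimizers.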
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2002-01-01
The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, and some others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
Wheeler, Ward C
2003-08-01
The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. PMID:14531408
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
Analog neural nonderivative optimizers.
Teixeira, M M; Zak, S H
1998-01-01
Continuous-time neural networks for solving convex nonlinear unconstrained programming problems without using gradient information of the objective function are proposed and analyzed. Thus, the proposed networks are nonderivative optimizers. First, networks for optimizing objective functions of one variable are discussed. Then, an existing one-dimensional optimizer is analyzed, and a new line search optimizer is proposed. It is shown that the proposed optimizer network is robust in the sense that it has disturbance rejection property. The network can be implemented easily in hardware using standard circuit elements. The one-dimensional net is used as a building block in multidimensional networks for optimizing objective functions of several variables. The multidimensional nets implement a continuous version of the coordinate descent method.
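A software analog of the multidimensional network can be sketched by pairing a one-dimensional minimizer with coordinate sweeps. The golden-section search below is a stand-in for the hardware line-search net, and the quadratic objective is illustrative:

```python
def line_min(f, lo=-10.0, hi=10.0, iters=60):
    """1-D golden-section search (software stand-in for the 1-D optimizer net)."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

def coordinate_descent(f, x, sweeps=20):
    """Minimize f by repeatedly line-searching along each coordinate,
    the continuous coordinate descent scheme the multidimensional net implements."""
    x = list(x)
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] = line_min(lambda t: f(x[:i] + [t] + x[i + 1:]))
    return x

# Convex quadratic with coupled variables; minimizer is (1.6, -2.4).
f = lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2 + 0.5 * v[0] * v[1]
x = coordinate_descent(f, [5.0, 5.0])
```

Note that, like the analog network, this scheme uses only function evaluations and no gradient information.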
Zhou, Zhi; de Bedout, Juan Manuel; Kern, John Michael; Biyik, Emrah; Chandra, Ramu Sharat
2013-01-22
A system for optimizing customer utility usage in a utility network of customer sites, each having one or more utility devices, where customer site information is communicated between each of the customer sites and an optimization server having software for optimizing customer utility usage, over one or more networks, including private and public networks. A customer site model for each of the customer sites is generated based upon the customer site information, and the customer utility usage is optimized based upon the customer site information and the customer site model. The optimization server can be hosted by an external source or within the customer site. In addition, the optimization processing can be partitioned between the customer site and an external source.
Homotopy optimization methods for global optimization.
Dunlavy, Daniel M.; O'Leary, Dianne P. (University of Maryland, College Park, MD)
2005-12-01
We define a new method for global optimization, the Homotopy Optimization Method (HOM). This method differs from previous homotopy and continuation methods in that its aim is to find a minimizer for each of a set of values of the homotopy parameter, rather than to follow a path of minimizers. We define a second method, called HOPE, by allowing HOM to follow an ensemble of points obtained by perturbation of previous ones. We relate this new method to standard methods such as simulated annealing and show under what circumstances it is superior. We present results of extensive numerical experiments demonstrating performance of HOM and HOPE.
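A minimal sketch of the HOM idea, under the assumption of a simple derivative-free local solver: minimize the blend h(x, t) = (1 - t) g(x) + t f(x) for a sequence of t values in [0, 1], warm-starting each solve from the previous minimizer. The local solver and test functions are illustrative, not those used in the paper:

```python
def minimize_local(f, x0, step=0.5, iters=200):
    """Crude derivative-free local minimizer (placeholder for any local solver)."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        improved = False
        for dx in (step, -step):
            if f(x + dx) < fx:
                x, fx = x + dx, f(x + dx)
                improved = True
        if not improved:
            step *= 0.5  # no progress: refine the step
    return x

def hom(g, f, x0, steps=10):
    """Homotopy Optimization Method sketch: find a minimizer at each value
    of the homotopy parameter t, warm-starting from the previous one."""
    x = x0
    for k in range(steps + 1):
        t = k / steps
        x = minimize_local(lambda z: (1 - t) * g(z) + t * f(z), x)
    return x

# Easy convex g deforms into a two-well target f (minima at x = +-4);
# tracking minimizers from g keeps HOM in the x = +4 basin.
g = lambda x: (x - 4.0) ** 2
f = lambda x: (x * x - 16.0) ** 2 / 64.0
x_star = hom(g, f, x0=0.0)
```

HOPE would run this with an ensemble of perturbed iterates at each t instead of a single point.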
Structural optimization using optimality criteria methods
NASA Technical Reports Server (NTRS)
Khot, N. S.; Berke, L.
1984-01-01
Optimality criteria methods take advantage of such concepts as those of statically determinate or indeterminate structures, and certain variational principles of structural dynamics, to develop efficient algorithms for the sizing of structures that are subjected to stiffness-related constraints. Some of the methods and iterative strategies developed over the last decade for calculation of the Lagrange multipliers in stress- and displacement-limited problems, as well as for satisfying the appropriate optimality criterion, are discussed. The application of these methods is illustrated by solving problems with stress and displacement constraints.
Conceptual design optimization study
NASA Technical Reports Server (NTRS)
Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.
1990-01-01
The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.
Smoothers for Optimization Problems
NASA Technical Reports Server (NTRS)
Arian, Eyal; Ta'asan, Shlomo
1996-01-01
We present a multigrid one-shot algorithm, and a smoothing analysis, for the numerical solution of optimal control problems which are governed by an elliptic PDE. The analysis provides a simple tool to determine a smoothing minimization process which is essential for multigrid application. Numerical results include optimal control of boundary data using different discretization schemes and an optimal shape design problem in 2D with Dirichlet boundary conditions.
Control and optimization system
Xinsheng, Lou
2013-02-12
A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
Optimizing qubit phase estimation
NASA Astrophysics Data System (ADS)
Chapeau-Blondeau, François
2016-08-01
The theory of quantum state estimation is exploited here to investigate the most efficient strategies for this task, especially targeting a complete picture identifying optimal conditions in terms of Fisher information, quantum measurement, and associated estimator. The approach is specified to estimation of the phase of a qubit in a rotation around an arbitrary given axis, equivalent to estimating the phase of an arbitrary single-qubit quantum gate, both in noise-free and then in noisy conditions. In noise-free conditions, we establish the possibility of defining an optimal quantum probe, optimal quantum measurement, and optimal estimator together capable of achieving the ultimate best performance uniformly for any unknown phase. With arbitrary quantum noise, we show that in general the optimal solutions are phase dependent and require adaptive techniques for practical implementation. However, for the important case of the depolarizing noise, we again establish the possibility of a quantum probe, quantum measurement, and estimator uniformly optimal for any unknown phase. In this way, for qubit phase estimation, without and then with quantum noise, we characterize the phase-independent optimal solutions when they generally exist, and also identify the complementary conditions where the optimal solutions are phase dependent and only adaptively implementable.
Optimal Limited Contingency Planning
NASA Technical Reports Server (NTRS)
Meuleau, Nicolas; Smith, David E.
2003-01-01
For a given problem, the optimal Markov policy over a finite horizon is a conditional plan containing a potentially large number of branches. However, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning. It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a partially observable Markov decision process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases.
Algorithms for bilevel optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.
NASA Astrophysics Data System (ADS)
Dharmaseelan, Anoop; Adistambha, Keyne D.
2015-05-01
Fuel cost accounts for 40 percent of the operating cost of an airline. Fuel cost can be minimized by planning flights on optimized routes. The routes can be optimized by searching for the best connections based on the cost function defined by the airline. The most common algorithm used to optimize route search is Dijkstra's. Dijkstra's algorithm produces a static result and the time taken for the search is relatively long. This paper experiments with a new algorithm to optimize route search which combines the principles of simulated annealing and genetic algorithms. The experimental route-search results presented are shown to be computationally fast and accurate compared with timings from a genetic algorithm. The new algorithm is well suited to the random routing feature that is highly sought by many regional operators.
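As an illustration of the simulated-annealing half of such a hybrid, the following is a minimal sketch that anneals a visit order over a hypothetical symmetric connection-cost matrix. The cost matrix, the segment-reversal move, and the cooling schedule are assumptions for illustration, not details taken from the paper, and the genetic-algorithm component is omitted.

```python
import math
import random

def route_cost(route, cost):
    """Total cost of a route over a pairwise connection-cost matrix."""
    return sum(cost[a][b] for a, b in zip(route, route[1:]))

def anneal_route(cost, start_temp=10.0, cooling=0.995, steps=5000, seed=0):
    """Simulated annealing over visit orders; the origin (index 0) stays fixed."""
    rng = random.Random(seed)
    n = len(cost)
    route = list(range(n))
    best, best_cost = list(route), route_cost(route, cost)
    t = start_temp
    for _ in range(steps):
        i, j = sorted(rng.sample(range(1, n), 2))
        cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]  # reverse a segment
        delta = route_cost(cand, cost) - route_cost(route, cost)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if delta < 0 or rng.random() < math.exp(-delta / t):
            route = cand
        if route_cost(route, cost) < best_cost:
            best, best_cost = list(route), route_cost(route, cost)
        t *= cooling
    return best, best_cost
```

A hybrid in the paper's spirit would use a genetic algorithm's population and crossover on top of this acceptance rule.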
Optimal TCSC placement for optimal power flow
NASA Astrophysics Data System (ADS)
Lakdja, Fatiha; Zohra Gherbi, Fatima; Berber, Redouane; Boudjella, Houari
2012-11-01
Very few publications have focused on the mathematical modeling of Flexible Alternating Current Transmission Systems (FACTS) devices in optimal power flow analysis. A Thyristor Controlled Series Capacitor (TCSC) model has been proposed, and the model has been implemented in a successive QP framework. The mathematical models for TCSC have been established, and the Optimal Power Flow (OPF) problem with these FACTS devices is solved by Newton's method. This article employs the Newton-based OPF-TCSC solver of a MATLAB simulator, so it is essential to understand the development of OPF and the suitability of Newton-based algorithms for solving the OPF-TCSC problem. The proposed concept was tested and validated with TCSC in a twenty-six-bus test system. Results show that TCSC can be used to relieve congestion in the system and that the investment in TCSC can be recovered, with a new and original idea of integration.
Optimal control computer programs
NASA Technical Reports Server (NTRS)
Kuo, F.
1992-01-01
The solution of the optimal control problem, even with low-order dynamical systems, can usually strain the analytical ability of most engineers. The understanding of this subject matter would therefore be greatly enhanced if a software package existed that could simulate simple generic problems. Surprisingly, despite a great abundance of commercially available control software, few, if any, packages address optimal control in its most generic form. The purpose of this paper is, therefore, to present a simple computer program that will perform simulations of optimal control problems that arise from the first necessary condition and Pontryagin's maximum principle.
Thermophotovoltaic Array Optimization
S Burger; E Brown; K Rahner; L Danielson; J Openlander; J Vell; D Siganporia
2004-07-29
A systematic approach to thermophotovoltaic (TPV) array design and fabrication was used to optimize the performance of a 192-cell TPV array. The systematic approach began with cell selection criteria that ranked cells and then matched cell characteristics to maximize power output. Following cell selection, optimization continued with an array packaging design and fabrication techniques that introduced negligible electrical interconnect resistance and minimal parasitic losses while maintaining original cell electrical performance. This paper describes the cell selection and packaging aspects of array optimization as applied to fabrication of a 192-cell array.
2014-05-13
ROL provides interfaces to and implementations of algorithms for gradient-based unconstrained and constrained optimization. ROL can be used to optimize the response of any client simulation code that evaluates scalar-valued response functions. If the client code can provide gradient information for the response function, ROL will take advantage of it, resulting in faster runtimes. ROL's interfaces are matrix-free; in other words, ROL only uses evaluations of scalar-valued and vector-valued functions. ROL can be used to solve optimal design problems and inverse problems based on a variety of simulation software.
Contingency contractor optimization.
Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Durfee, Justin David.; Jones, Dean A.; Martin, Nathaniel; Detry, Richard Joseph; Nanco, Alan Stewart; Nozick, Linda Karen
2013-10-01
The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military personnel, Department of Defense (DoD) civilians, and contractors that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower use rules. An additional feature allows the model to account for the variability of the Total Force Mix when there is uncertainty in mission requirements.
Contingency contractor optimization.
Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Detry, Richard Joseph; Durfee, Justin David.; Jones, Dean A.; Martin, Nathaniel; Nanco, Alan Stewart; Nozick, Linda Karen
2013-06-01
The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military personnel, Department of Defense (DoD) civilians, and contractors that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower use rules. An additional feature allows the model to account for the variability of the Total Force Mix when there is uncertainty in mission requirements.
NASA Astrophysics Data System (ADS)
Furniss, S. G.
1989-10-01
While an SSTO with airbreathing propulsion for initial acceleration may greatly reduce future payload launch costs, such vehicles exhibit extreme sensitivity to design assumptions; the process of vehicle optimization is, accordingly, a difficult one. Attention is presently given to the role in optimization of the design mission, fuselage geometry, and the means employed to furnish adequate pitch and directional control. The requirements influencing wing design and scaling are also discussed. The Saenger and Hotol designs are the illustrative cases noted in this generalizing consideration of the SSTO-optimization process.
Library for Nonlinear Optimization
2001-10-09
OPT++ is a C++ object-oriented library for nonlinear optimization. This incorporates an improved implementation of an existing capability and two new algorithmic capabilities based on existing journal articles and freely available software.
Alicia Hofler; Pavel Evtushenko
2007-07-03
Injector gun design is an iterative process where the designer optimizes a few nonlinearly interdependent beam parameters to achieve the required beam quality for a particle accelerator. Few tools exist to automate the optimization process and thoroughly explore the parameter space. The challenging beam requirements of new accelerator applications such as light sources and electron cooling devices drive the development of RF and SRF photo injectors. A genetic algorithm (GA) has been successfully used to optimize DC photo injector designs at Cornell University [1] and Jefferson Lab [2]. We propose to apply GA techniques to the design of RF and SRF gun injectors. In this paper, we report on the initial phase of the study where we model and optimize a system that has been benchmarked with beam measurements and simulation.
General shape optimization capability
NASA Technical Reports Server (NTRS)
Chargin, Mladen K.; Raasch, Ingo; Bruns, Rudolf; Deuermeyer, Dawson
1991-01-01
A method is described for calculating shape sensitivities, within MSC/NASTRAN, in a simple manner without resort to external programs. The method uses natural design variables to define the shape changes in a given structure. Once the shape sensitivities are obtained, the shape optimization process is carried out in a manner similar to property optimization processes. The capability of this method is illustrated by two examples: the shape optimization of a cantilever beam with holes, loaded by a point load at the free end (with the shape of the holes and the thickness of the beam selected as the design variables), and the shape optimization of a connecting rod subjected to several different loading and boundary conditions.
A. S. Hofler; P. Evtushenko; M. Krasilnikov
2007-08-01
Injector gun design is an iterative process where the designer optimizes a few nonlinearly interdependent beam parameters to achieve the required beam quality for a particle accelerator. Few tools exist to automate the optimization process and thoroughly explore the parameter space. The challenging beam requirements of new accelerator applications such as light sources and electron cooling devices drive the development of RF and SRF photo injectors. RF and SRF gun design is further complicated because the bunches are space charge dominated and require additional emittance compensation. A genetic algorithm has been successfully used to optimize DC photo injector designs for Cornell* and Jefferson Lab**, and we propose studying how the genetic algorithm techniques can be applied to the design of RF and SRF gun injectors. In this paper, we report on the initial phase of the study where we model and optimize gun designs that have been benchmarked with beam measurements and simulation.
Topology optimized microbioreactors.
Schäpper, Daniel; Lencastre Fernandes, Rita; Lantz, Anna Eliasson; Okkels, Fridolin; Bruus, Henrik; Gernaey, Krist V
2011-04-01
This article presents the fusion of two hitherto unrelated fields--microbioreactors and topology optimization. The basis for this study is a rectangular microbioreactor with homogeneously distributed immobilized brewer's yeast cells (Saccharomyces cerevisiae) that produce a recombinant protein. Topology optimization is then used to change the spatial distribution of cells in the reactor in order to optimize for maximal product flow out of the reactor. This distribution accounts for potentially negative effects of, for example, by-product inhibition. We show that the theoretical improvement in productivity is at least fivefold compared with the homogeneous reactor. The improvements obtained by applying topology optimization are largest where either nutrition is scarce or inhibition effects are pronounced.
TOOLKIT FOR ADVANCED OPTIMIZATION
2000-10-13
The TAO project focuses on the development of software for large-scale optimization problems. TAO uses an object-oriented design to create a flexible toolkit with a strong emphasis on the reuse of external tools where appropriate. Our design enables bi-directional connection to lower-level linear algebra support (for example, parallel sparse matrix data structures) as well as higher-level application frameworks. The Toolkit for Advanced Optimization (TAO) is aimed at the solution of large-scale optimization problems on high-performance architectures. Our main goals are portability, performance, scalable parallelism, and an interface independent of the architecture. TAO is suitable for both single-processor and massively-parallel architectures. The current version of TAO has algorithms for unconstrained and bound-constrained optimization.
Modeling using optimization routines
NASA Technical Reports Server (NTRS)
Thomas, Theodore
1995-01-01
Modeling using mathematical optimization dynamics is a design tool used in magnetic suspension system development. MATLAB software is used to calculate minimum cost and other desired constraints. The parameters to be measured are programmed into mathematical equations. MATLAB will calculate answers for each set of inputs; the inputs cover the boundary limits of the design. A Magnetic Suspension System using Electromagnets Mounted in a Planar Array is a design system that makes use of optimization modeling.
Kawase, Mitsuhiro
2009-11-22
The zipped file contains a directory of data and routines used in the NNMREC turbine depth optimization study (Kawase et al., 2011), and calculation results thereof. For further info, please contact Mitsuhiro Kawase at kawase@uw.edu. Reference: Mitsuhiro Kawase, Patricia Beba, and Brian Fabien (2011), Finding an Optimal Placement Depth for a Tidal In-Stream Conversion Device in an Energetic, Baroclinic Tidal Channel, NNMREC Technical Report.
NASA Astrophysics Data System (ADS)
Wecker, Dave; Hastings, Matthew B.; Troyer, Matthias
2016-08-01
We study a variant of the quantum approximate optimization algorithm [E. Farhi, J. Goldstone, and S. Gutmann, arXiv:1411.4028] with a slightly different parametrization and a different objective: rather than looking for a state which approximately solves an optimization problem, our goal is to find a quantum algorithm that, given an instance of the maximum 2-satisfiability problem (MAX-2-SAT), will produce a state with high overlap with the optimal state. Using a machine learning approach, we chose a "training set" of instances and optimized the parameters to produce a large overlap for the training set. We then tested these optimized parameters on a larger instance set. As a training set, we used a subset of the hard instances studied by Crosson, Farhi, C. Y.-Y. Lin, H.-H. Lin, and P. Shor (CFLLS) (arXiv:1401.7320). When tested on the full set, the parameters that we find produce a significantly larger overlap than the optimized annealing times of CFLLS. Testing on other random instances from 20 to 28 bits continues to show improvement over annealing, with the improvement being most notable on the hardest instances. Further tests on instances of MAX-3-SAT also showed improvement on the hardest instances. This algorithm may be a possible application for near-term quantum computers with limited coherence times.
Thermoacoustic Refrigerator's Stack Optimization
NASA Astrophysics Data System (ADS)
El-Fawal, Mawahib Hassan; Mohd-Ghazali, Normah; Yaacob, Mohd. Shafik; Darus, Amer Nordin
2010-06-01
The standing-wave thermoacoustic refrigerator, which uses sound to pump heat, has developed rapidly during the past four decades. It is regarded as a new, promising, and environmentally benign alternative to conventional vapor-compression refrigerators, although it is not yet competitive in terms of the coefficient of performance (COP). The aim of this paper is thus to enhance the thermoacoustic refrigerator's stack performance through optimization. A computational optimization procedure for thermoacoustic stack design was fully developed. The procedure was designed to achieve the optimal coefficient of performance based on most of the design and operating parameters. The cooling load and acoustic power governing equations were set up assuming linear thermoacoustic theory. The method of Lagrange multipliers was used as the optimization tool to solve the governing equations. Numerical results of the developed design procedure are presented. The results showed that the stack design parameters are the most significant parameters for the optimal overall performance. The coefficient of performance obtained improves by about 48.8% over published experimental optimization methods. The results are in good agreement with past established studies.
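The Lagrange-multiplier technique used here can be illustrated on a generic equality-constrained quadratic program solved through its KKT system. This is a simplification: the paper's actual cooling-load and acoustic-power equations are not reproduced, and the matrices in the test below are placeholders.

```python
def solve_linear(M, rhs):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(rhs)
    A = [row[:] + [r] for row, r in zip(M, rhs)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n + 1):
                A[i][j] -= f * A[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def lagrange_qp(Q, c, A, b):
    """min 1/2 x'Qx - c'x  subject to  Ax = b, via the stationarity of the
    Lagrangian L(x, lam) = 1/2 x'Qx - c'x + lam'(Ax - b), i.e. the KKT system
    [Q A'; A 0][x; lam] = [c; b]."""
    n, m = len(Q), len(A)
    K = [Q[i] + [A[j][i] for j in range(m)] for i in range(n)]
    K += [A[j] + [0.0] * m for j in range(m)]
    sol = solve_linear(K, list(c) + list(b))
    return sol[:n], sol[n:]  # primal solution and multipliers
```

For example, minimizing 1/2(x1^2 + x2^2) subject to x1 + x2 = 10 gives x = (5, 5).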
Cyclone performance and optimization
Leith, D.
1990-09-15
The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using the revised performance theory. This work is important because its successful completion will aid the technology for combustion of coal in pressurized fluidized beds. This quarter, an empirical model for predicting pressure drop across a cyclone was developed through a statistical analysis of pressure drop data for 98 cyclone designs. The model is shown to perform better than the pressure drop models of First (1950), Alexander (1949), Barth (1956), Stairmand (1949), and Shepherd-Lapple (1940). This model is used with the efficiency model of Iozia and Leith (1990) to develop an optimization curve which predicts the minimum pressure drop and the dimension ratios of the optimized cyclone for a given aerodynamic cut diameter, d50. The effect of variation in cyclone height, cyclone diameter, and flow on the optimization curve is determined. The optimization results are used to develop a design procedure for optimized cyclones. 37 refs., 10 figs., 4 tabs.
Regularizing portfolio optimization
NASA Astrophysics Data System (ADS)
Still, Susanne; Kondor, Imre
2010-07-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
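A minimal sketch of the idea, using portfolio variance rather than the paper's expected-shortfall measure: adding an L2 penalty eta*||w||^2 to the minimum-variance problem under the budget constraint sum(w) = 1 has the closed form w proportional to (Sigma + eta*I)^(-1) 1, and a larger eta (the "diversification pressure") pushes the weights toward the equal-weight portfolio. The covariance matrix in the test is a placeholder.

```python
def solve_spd(M, rhs):
    """Naive Gaussian elimination (no pivoting; adequate for small SPD systems)."""
    n = len(rhs)
    A = [row[:] + [r] for row, r in zip(M, rhs)]
    for k in range(n):
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n + 1):
                A[i][j] -= f * A[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def regularized_min_variance(cov, eta):
    """Weights minimizing w'(Sigma + eta*I)w subject to sum(w) = 1.
    Closed form: solve (Sigma + eta*I) w = 1, then normalize to the budget."""
    n = len(cov)
    M = [[cov[i][j] + (eta if i == j else 0.0) for j in range(n)] for i in range(n)]
    w = solve_spd(M, [1.0] * n)
    s = sum(w)
    return [wi / s for wi in w]
```

With eta = 0 this is the classical minimum-variance portfolio; as eta grows, the weights flatten, which is the stabilizing effect the paper describes.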
φq-field theory for portfolio optimization: “fat tails” and nonlinear correlations
NASA Astrophysics Data System (ADS)
Sornette, D.; Simonetti, P.; Andersen, J. V.
2000-08-01
Physics and finance are both fundamentally based on the theory of random walks (and their generalizations to higher dimensions) and on the collective behavior of large numbers of correlated variables. The archetypal example of this situation in finance is the portfolio optimization problem, in which one desires to diversify over a set of possibly dependent assets to optimize the return and minimize the risks. The standard mean-variance solution introduced by Markowitz and its subsequent developments is basically a mean-field Gaussian solution. It has severe limitations for practical applications due to the strongly non-Gaussian structure of distributions and the nonlinear dependence between assets. Here, we present in detail a general analytical characterization of the distribution of returns for a portfolio constituted of assets whose returns are described by an arbitrary joint multivariate distribution. To this end, we introduce a nonlinear transformation that maps the returns onto Gaussian variables whose covariance matrix provides a new measure of dependence between the non-normal returns, generalizing the covariance matrix into a nonlinear covariance matrix. This nonlinear covariance matrix is tailored to the specific fat-tail structure of the underlying marginal distributions, thus ensuring stability and good conditioning. The portfolio distribution is then obtained as the solution of a mapping to a so-called φq field theory in particle physics, of which we offer an extensive treatment using Feynman diagrammatic techniques and large deviation theory, and which we illustrate in detail for multivariate Weibull distributions. The interaction (non-mean-field) structure in this field theory is a direct consequence of the non-Gaussian nature of the distribution of asset price returns. We find that minimizing the portfolio variance (i.e. the relatively “small” risks) may often increase the large risks, as measured by higher normalized cumulants. Extensive
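The Gaussian-mapping idea described above can be approximated with a rank-based sketch: push each marginal through its empirical CDF and then the inverse normal CDF, and take covariances of the mapped variables. This is an illustration only; the paper's construction uses the analytical marginal distributions rather than ranks, and the series below are placeholders.

```python
from statistics import NormalDist, mean

def gaussian_map(series):
    """Rank-transform a return series, then push through the inverse normal CDF.
    Assumes no tied values in the series."""
    T = len(series)
    order = sorted(range(T), key=lambda i: series[i])
    ranks = [0] * T
    for r, i in enumerate(order):
        ranks[i] = r + 1  # ranks 1..T
    nd = NormalDist()
    return [nd.inv_cdf(r / (T + 1.0)) for r in ranks]

def nonlinear_covariance(a, b):
    """Covariance of the Gaussian-mapped series: insensitive to fat tails and
    to monotonic nonlinear distortions of the marginals."""
    ga, gb = gaussian_map(a), gaussian_map(b)
    ma, mb = mean(ga), mean(gb)
    return sum((x - ma) * (y - mb) for x, y in zip(ga, gb)) / (len(ga) - 1)
```

Because the mapping depends only on ranks, a series and any monotonic transform of it (e.g. x and x^3) yield identical mapped variables, which is the sense in which the measure captures dependence beyond linear correlation.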
Optimization of Metronidazole Emulgel
Rao, Monica; Sukre, Girish; Aghav, Sheetal; Kumar, Manmeet
2013-01-01
The purpose of the present study was to develop and optimize an emulgel system for MTZ (metronidazole), a poorly water-soluble drug. Pseudoternary phase diagrams were developed for various microemulsion formulations composed of Capmul 908 P, Acconon MC8-2, and propylene glycol. The emulgel was optimized using a three-factor, two-level factorial design; the independent variables selected were Capmul 908 P, surfactant mixture (Acconon MC8-2), and gelling agent, and the dependent variables (responses) were the cumulative amount of drug permeated across the dialysis membrane in 24 h (Y1) and spreadability (Y2). Mathematical equations and response surface plots were used to relate the dependent and independent variables. Regression equations were generated for responses Y1 and Y2. The statistical validity of the polynomials was established, and optimized formulation factors were selected. Validation of the optimization study with 3 confirmatory runs indicated a high degree of prognostic ability of response surface methodology. An emulgel system of MTZ was developed and optimized using a 2^3 factorial design and could provide an effective treatment against topical infections. PMID:26555982
1998-07-01
GenOpt is a generic optimization program for nonlinear, constrained optimization. For evaluating the objective function, any simulation program that communicates over text files can be coupled to GenOpt without code modification. No analytic properties of the objective function are used by GenOpt. Optimization algorithms and numerical methods can be implemented in a library and shared among users. GenOpt offers an interface between the optimization algorithm and its kernel to make the implementation of new algorithms fast and easy. Different algorithms for constrained and unconstrained minimization can be added to a library. Algorithms for approximating derivatives and performing line search will be implemented. The objective function is evaluated as a black-box function by an external simulation program. The kernel of GenOpt deals with data I/O, result storage and reporting, the interface to the external simulation program, and error handling. An abstract optimization class offers methods to interface the GenOpt kernel and the optimization algorithm library.
Optimization of Heat Exchangers
Ivan Catton
2010-10-01
The objective of this research is to develop tools to design and optimize heat exchangers (HE) and compact heat exchangers (CHE) for intermediate-loop heat transport systems found in the very high temperature reactor (VHTR) and other Generation IV designs by addressing heat transfer surface augmentation and conjugate modeling. To optimize a heat exchanger, a fast-running model must be created that allows multiple designs to be compared quickly. To model a heat exchanger, volume averaging theory (VAT) is used. VAT allows the conservation of mass, momentum, and energy to be solved point by point in a 3-dimensional computer model of a heat exchanger. The end product of this project is a computer code that can predict an optimal configuration for a heat exchanger given only a few constraints (input fluids, size, cost, etc.). Because the VAT computer code can model characteristics (pumping power, temperatures, and cost) of heat exchangers more quickly than traditional CFD or experiment, every geometric parameter can be optimized simultaneously. Using design of experiments (DOE) and genetic algorithms (GA) to optimize the results of the computer code will improve heat exchanger design.
NASA Technical Reports Server (NTRS)
Demmel, J.; Lafferriere, G.
1989-01-01
Consideration is given to the problem of optimal force distribution among three point fingers holding a planar object. A scheme that reduces the nonlinear optimization problem to an easily solved generalized eigenvalue problem is proposed. This scheme generalizes and simplifies results of Ji and Roth (1988). The generalizations include all possible geometric arrangements and extensions to three dimensions and to the case of variable coefficients of friction. For the two-dimensional case with constant coefficients of friction, it is proved that, except for some special cases, the optimal grasping forces (in the sense of minimizing the dependence on friction) are those for which the angles with the corresponding normals are all equal (in absolute value).
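The reduction to a generalized eigenvalue problem A x = lambda B x can be illustrated in the 2x2 case by expanding det(A - lambda*B) = 0 directly, which gives the quadratic det(B) lambda^2 - m lambda + det(A) = 0 with a mixed coefficient m. The matrices in the test are placeholders, not the grasp matrices of the paper.

```python
import math

def gen_eig_2x2(A, B):
    """Eigenvalues of A x = lambda B x for 2x2 matrices, via the characteristic
    quadratic det(B)*lambda^2 - m*lambda + det(A) = 0 (assumes det(B) != 0 and
    real eigenvalues)."""
    detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    detB = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    # coefficient of lambda in det(A - lambda*B)
    m = (A[0][0] * B[1][1] + A[1][1] * B[0][0]
         - A[0][1] * B[1][0] - A[1][0] * B[0][1])
    disc = math.sqrt(m * m - 4.0 * detB * detA)
    return sorted([(m - disc) / (2.0 * detB), (m + disc) / (2.0 * detB)])
```

For larger systems the same problem is usually handed to a generalized eigensolver rather than expanded symbolically.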
Optimal symmetric flight studies
NASA Technical Reports Server (NTRS)
Weston, A. R.; Menon, P. K. A.; Bilimoria, K. D.; Cliff, E. M.; Kelley, H. J.
1985-01-01
Several topics in optimal symmetric flight of airbreathing vehicles are examined. In one study, an approximation scheme designed for onboard real-time energy management of climb-dash is developed and calculations for a high-performance aircraft presented. In another, a vehicle model intermediate in complexity between energy and point-mass models is explored and some quirks in optimal flight characteristics peculiar to the model uncovered. In yet another study, energy-modelling procedures are re-examined with a view to stretching the range of validity of zeroth-order approximation by special choice of state variables. In a final study, time-fuel tradeoffs in cruise-dash are examined for the consequences of nonconvexities appearing in the classical steady cruise-dash model. Two appendices provide retrospective looks at two early publications on energy modelling and related optimal control theory.
McMordie Stoughton, Kate; Duan, Xiaoli; Wendel, Emily M.
2013-08-26
This technology evaluation was prepared by Pacific Northwest National Laboratory on behalf of the U.S. Department of Energy’s Federal Energy Management Program (FEMP). The technology evaluation assesses techniques for optimizing reverse osmosis (RO) systems to increase RO system performance and water efficiency. This evaluation provides a general description of RO systems, the influence of RO systems on water use, and key areas where RO systems can be optimized to reduce water and energy consumption. The evaluation is intended to help facility managers at Federal sites understand the basic concepts of the RO process and system optimization options, enabling them to make informed decisions during the system design process for either new projects or recommissioning of existing equipment. This evaluation is focused on commercial-sized RO systems generally treating more than 80 gallons per hour.
Johnson, E.A.; Leung, C.; Schira, J.J.
1983-03-01
A closed-loop timing optimization control for an internal combustion engine, closed about the instantaneous rotational velocity of the engine's crankshaft, is disclosed herein. The optimization control computes, from the instantaneous rotational velocity of the engine's crankshaft, a signal indicative of the angle at which the crankshaft has a maximum rotational velocity for the torque impulses imparted to the engine's crankshaft by the burning of an air/fuel mixture in each of the engine's combustion chambers, and generates a timing correction signal for each of the engine's combustion chambers. The timing correction signals, applied to the engine timing control, modify the time at which the ignition signals, injection signals, or both are generated such that the rotational velocity of the engine's crankshaft has a maximum value at a predetermined angle for each torque impulse generated, optimizing the conversion of combustion energy to rotational torque.
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
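The rule evaluation inside such a fuzzy logic controller can be sketched with triangular membership functions and weighted-average defuzzification; the optimization step described in the patent would then tune the membership breakpoints and output levels against the simulated machine model. The one-input rule base below is a hypothetical example, not the controller of the patent.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to b and falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_output(error, params):
    """One-input/one-output controller: 'negative', 'zero' and 'positive' error
    sets fire rules mapping to output levels; defuzzify by weighted average of
    the rule activations (centroid of singletons)."""
    (neg, zero, pos), levels = params
    mu = [tri(error, *neg), tri(error, *zero), tri(error, *pos)]
    s = sum(mu)
    return sum(m * l for m, l in zip(mu, levels)) / s if s else 0.0
```

An optimizer in the patent's spirit would simulate the controller in a feedback loop with the fitted machine model and adjust `params` to minimize a tracking cost.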
Optimization of Combinatorial Mutagenesis
NASA Astrophysics Data System (ADS)
Parker, Andrew S.; Griswold, Karl E.; Bailey-Kellogg, Chris
Protein engineering by combinatorial site-directed mutagenesis evaluates a portion of the sequence space near a target protein, seeking variants with improved properties (stability, activity, immunogenicity, etc.). In order to improve the hit-rate of beneficial variants in such mutagenesis libraries, we develop methods to select optimal positions and corresponding sets of the mutations that will be used, in all combinations, in constructing a library for experimental evaluation. Our approach, OCoM (Optimization of Combinatorial Mutagenesis), encompasses both degenerate oligonucleotides and specified point mutations, and can be directed accordingly by requirements of experimental cost and library size. It evaluates the quality of the resulting library by one- and two-body sequence potentials, averaged over the variants. To ensure that it is not simply recapitulating extant sequences, it balances the quality of a library with an explicit evaluation of the novelty of its members. We show that, despite dealing with a combinatorial set of variants, in our approach the resulting library optimization problem is actually isomorphic to single-variant optimization. By the same token, this means that the two-body sequence potential results in an NP-hard optimization problem. We present an efficient dynamic programming algorithm for the one-body case and a practically efficient integer programming approach for the general two-body case. We demonstrate the effectiveness of our approach in designing libraries for three different case study proteins targeted by previous combinatorial libraries - a green fluorescent protein, a cytochrome P450, and a beta lactamase. We found that OCoM worked quite efficiently in practice, requiring only 1 hour even for the massive design problem of selecting 18 mutations to generate 10^7 variants of a 443-residue P450. We demonstrate the general ability of OCoM in enabling the protein engineer to explore and evaluate trade-offs between quality and
NASA Astrophysics Data System (ADS)
Klesh, Andrew T.
This dissertation studies optimal exploration, defined as the collection of information about given objects of interest by a mobile agent (the explorer) using imperfect sensors. The key aspects of exploration are kinematics (which determine how the explorer moves in response to steering commands), energetics (which determine how much energy is consumed by motion and maneuvers), informatics (which determine the rate at which information is collected) and estimation (which determines the states of the objects). These aspects are coupled by the steering decisions of the explorer. We seek to improve exploration by finding trade-offs amongst these couplings and the components of exploration: the Mission, the Path and the Agent. A comprehensive model of exploration is presented that, on the one hand, accounts for these couplings and, on the other, is simple enough to allow analysis. This model is utilized to pose and solve several exploration problems where an objective function is to be minimized. Specific functions to be considered are the mission duration and the total energy. These exploration problems are formulated as optimal control problems and necessary conditions for optimality are obtained in the form of two-point boundary value problems. An analysis of these problems reveals characteristics of optimal exploration paths. Several regimes are identified for the optimal paths, including the Watchtower, Solar and Drag regimes, and several non-dimensional parameters are derived that determine the appropriate regime of travel. The so-called Power Ratio is shown to predict the qualitative features of the optimal paths, provide a metric to evaluate an aircraft's design and determine an aircraft's capability for flying perpetually. Optimal exploration system drivers are identified that provide perspective as to the importance of these various regimes of flight. A bank-to-turn solar-powered aircraft flying at constant altitude on Mars is used as a specific platform for
Distributed Optimization System
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2004-11-30
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Terascale Optimal PDE Simulations
David Keyes
2009-07-28
The Terascale Optimal PDE Solvers (TOPS) Integrated Software Infrastructure Center (ISIC) was created to develop and implement algorithms and support scientific investigations performed by DOE-sponsored researchers. These simulations often involve the solution of partial differential equations (PDEs) on terascale computers. The TOPS Center researched, developed and deployed an integrated toolkit of open-source, optimal complexity solvers for the nonlinear partial differential equations that arise in many DOE application areas, including fusion, accelerator design, global climate change and reactive chemistry. The algorithms created as part of this project were also designed to reduce current computational bottlenecks by orders of magnitude on terascale computers, enabling scientific simulation on a scale heretofore impossible.
NASA Astrophysics Data System (ADS)
Ouaknin, Gaddiel; Laachi, Nabil; Delaney, Kris; Fredrickson, Glenn; Gibou, Frederic
2016-03-01
Directed self-assembly using block copolymers for positioning vertical interconnect access in integrated circuits relies on the proper shape of a confined domain in which polymers will self-assemble into the targeted design. Finding that shape, i.e., solving the inverse problem, is currently mainly based on trial and error approaches. We introduce a level-set based algorithm that makes use of a shape optimization strategy coupled with self-consistent field theory to solve the inverse problem in an automated way. It is shown that optimal shapes are found for different targeted topologies with accurate placement and distances between the different components.
Optimal Quantum Phase Estimation
Dorner, U.; Smith, B. J.; Lundeen, J. S.; Walmsley, I. A.; Demkowicz-Dobrzanski, R.; Banaszek, K.; Wasilewski, W.
2009-01-30
By using a systematic optimization approach, we determine quantum states of light with definite photon number leading to the best possible precision in optical two-mode interferometry. Our treatment takes into account the experimentally relevant situation of photon losses. Our results thus reveal the benchmark for precision in optical interferometry. Although this bound is generally worse than the Heisenberg limit, we show that the obtained precision beats the standard quantum limit, thus leading to a significant improvement compared to classical interferometers. We furthermore discuss alternative states and strategies to the optimized states which are easier to generate at the cost of only slightly lower precision.
Space-vehicle trajectories - Optimization
NASA Astrophysics Data System (ADS)
Marec, J. P.
The application of control-theory optimization techniques to the motion of powered vehicles in space is discussed in an analytical review. Problems addressed include the definition of optimal orbital transfer; propulsion-system modeling; parametric optimization and the Hohmann transfer; optimal transfer in general, uniform, and central gravitational fields; and interplanetary rendezvous. Typical numerical results are presented in graphs and briefly characterized.
Toward Optimal Transport Networks
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Kincaid, Rex K.; Vargo, Erik P.
2008-01-01
Strictly evolutionary approaches to improving the air transport system, a highly complex network of interacting systems, no longer suffice in the face of demand that is projected to double or triple in the near future. Thus evolutionary approaches should be augmented with active design methods. The ability to actively design, optimize and control a system presupposes the existence of predictive modeling and reasonably well-defined functional dependences between the controllable variables of the system and objective and constraint functions for optimization. Following recent advances in the studies of the effects of network topology structure on dynamics, we investigate the performance of dynamic processes on transport networks as a function of the first nontrivial eigenvalue of the network's Laplacian, which, in turn, is a function of the network's connectivity and modularity. The last two characteristics can be controlled and tuned via optimization. We consider design optimization problem formulations. We have developed a flexible simulation of network topology coupled with flows on the network for use as a platform for computational experiments.
ERIC Educational Resources Information Center
Homan, Michael; Worley, Penny
This course syllabus describes methods for optimizing online searching, using as an example searching on the National Library of Medicine (NLM) online system. Four major activities considered are the online interview, query analysis and search planning, online interaction, and post-search analysis. Within the context of these activities, concepts…
NASA Astrophysics Data System (ADS)
Huang, Siendong
2009-11-01
The nonlocality of quantum states on a bipartite system 𝒜+ℬ is tested by comparing probabilistic outcomes of two local observables of different subsystems. For a fixed observable A of the subsystem 𝒜, its optimal approximate double A' on the other system ℬ is defined such that the probabilistic outcomes of A' are almost similar to those of the fixed observable A. The case of σ-finite standard von Neumann algebras is considered and the optimal approximate double A' of an observable A is explicitly determined. The connection between optimal approximate doubles and quantum correlations is explained. Inspired by quantum states with perfect correlation, like Einstein-Podolsky-Rosen states and Bohm states, the nonlocality power of an observable A for general quantum states is defined as the similarity that the outcomes of A look like the properties of the subsystem ℬ corresponding to A'. As an application of optimal approximate doubles, the maximal Bell correlation of a pure entangled state on ℬ(ℂ²)⊗ℬ(ℂ²) is found explicitly.
Optimization of digital designs
NASA Technical Reports Server (NTRS)
Whitaker, Sterling R. (Inventor); Miles, Lowell H. (Inventor)
2009-01-01
An application specific integrated circuit is optimized by translating a first representation of its digital design to a second representation. The second representation includes multiple syntactic expressions that admit a representation of a higher-order function of base Boolean values. The syntactic expressions are manipulated to form a third representation of the digital design.
Fourier Series Optimization Opportunity
ERIC Educational Resources Information Center
Winkel, Brian
2008-01-01
This note discusses the introduction of Fourier series as an immediate application of optimization of a function of more than one variable. Specifically, it is shown how the study of Fourier series can be motivated to enrich a multivariable calculus class. This is done through discovery learning and use of technology wherein students build the…
ERIC Educational Resources Information Center
Cody, Martin L.
1974-01-01
Discusses the optimality of natural selection, ways of testing for optimum solutions to problems of time- or energy-allocation in nature, optimum patterns in spatial distribution and diet breadth, and how best to travel over a feeding area so that food intake is maximized. (JR)
Optimal ciliary beating patterns
NASA Astrophysics Data System (ADS)
Vilfan, Andrej; Osterman, Natan
2011-11-01
We introduce a measure for energetic efficiency of single or collective biological cilia. We define the efficiency of a single cilium as Q²/P, where Q is the volume flow rate of the pumped fluid and P is the dissipated power. For ciliary arrays, we define it as (ρQ)²/(ρP), with ρ denoting the surface density of cilia. We then numerically determine the optimal beating patterns according to this criterion. For a single cilium, optimization leads to curly, somewhat counterintuitive patterns. But when looking at a densely ciliated surface, the optimal patterns become remarkably similar to what is observed in microorganisms like Paramecium. The optimal beating pattern then consists of a fast effective stroke and a slow sweeping recovery stroke. Metachronal waves lead to a significantly higher efficiency than synchronous beating. Efficiency also increases with an increasing density of cilia up to the point where crowding becomes a problem. We finally relate the pumping efficiency of cilia to the swimming efficiency of a spherical microorganism and show that the experimentally estimated efficiency of Paramecium is surprisingly close to the theoretically possible optimum.
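The two efficiency definitions above can be written out directly. This minimal sketch only encodes the definitions; the flow rate, power, and density values are invented for illustration.

```python
def single_cilium_efficiency(Q, P):
    """Q**2 / P: Q is the volume flow rate, P the dissipated power."""
    return Q**2 / P

def array_efficiency(rho, Q, P):
    """(rho*Q)**2 / (rho*P): rho is the surface density of cilia,
    Q and P are per-cilium flow rate and dissipated power."""
    return (rho * Q) ** 2 / (rho * P)

# For fixed per-cilium Q and P, the array measure grows linearly with rho,
# consistent with efficiency rising with density until crowding sets in.
print(single_cilium_efficiency(2.0, 8.0))   # 0.5
print(array_efficiency(10.0, 2.0, 8.0))     # 5.0
```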
Optimizing Conferencing Freeware
ERIC Educational Resources Information Center
Baggaley, Jon; Klaas, Jim; Wark, Norine; Depow, Jim
2005-01-01
The increasing range of options provided by two popular conferencing freeware products, "Yahoo Messenger" and "MSN Messenger," are discussed. Each tool contains features designed primarily for entertainment purposes, which can be customized for use in online education. This report provides suggestions for optimizing the educational potential of…
Accelerating Lead Compound Optimization.
Poh, Alissa
2016-04-01
Chemists at The Scripps Research Institute in La Jolla, CA, and Pfizer's La Jolla Laboratories have devised a new way to rapidly synthesize strained-ring structures, which are increasingly favored to optimize potential drugs. With this method, strain-release amination, Pfizer researchers were able to produce sufficient quantities of a particular structure they needed to evaluate a promising cancer drug candidate.
ERIC Educational Resources Information Center
Simmons, Joseph P.; Massey, Cade
2012-01-01
Is optimism real, or are optimistic forecasts just cheap talk? To help answer this question, we investigated whether optimistic predictions persist in the face of large incentives to be accurate. We asked National Football League football fans to predict the winner of a single game. Roughly half (the partisans) predicted a game involving their…
ERIC Educational Resources Information Center
Rebilas, Krzysztof
2013-01-01
Consider a skier who goes down a takeoff ramp, attains a speed "V", and jumps, attempting to land as far as possible down the hill below (Fig. 1). At the moment of takeoff the angle between the skier's velocity and the horizontal is [alpha]. What is the optimal angle [alpha] that makes the jump the longest possible for the fixed magnitude of the…
Optimization of Systran System.
ERIC Educational Resources Information Center
Toma, Peter P.; And Others
This report describes an optimization phase of the SYSTRAN (System Translation) machine translation technique. The most distinctive characteristic of SYSTRAN is the absence of pre-editing; the program reads tapes containing raw and unedited Russian texts, carries out dictionary and table lookups, performs all syntactic analysis procedures, and…
Optimization in Cardiovascular Modeling
NASA Astrophysics Data System (ADS)
Marsden, Alison L.
2014-01-01
Fluid mechanics plays a key role in the development, progression, and treatment of cardiovascular disease. Advances in imaging methods and patient-specific modeling now reveal increasingly detailed information about blood flow patterns in health and disease. Building on these tools, there is now an opportunity to couple blood flow simulation with optimization algorithms to improve the design of surgeries and devices, incorporating more information about the flow physics in the design process to augment current medical knowledge. In doing so, a major challenge is the need for efficient optimization tools that are appropriate for unsteady fluid mechanics problems, particularly for the optimization of complex patient-specific models in the presence of uncertainty. This article reviews the state of the art in optimization tools for virtual surgery, device design, and model parameter identification in cardiovascular flow and mechanobiology applications. In particular, it reviews trade-offs between traditional gradient-based methods and derivative-free approaches, as well as the need to incorporate uncertainties. Key future challenges are outlined, which extend to the incorporation of biological response and the customization of surgeries and devices for individual patients.
Optimal GENCO bidding strategy
NASA Astrophysics Data System (ADS)
Gao, Feng
Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example: Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, as well as Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation. The stochastic property makes evolutionary algorithms robust and adaptive enough to solve a non-convex optimization problem. This research implements GA, EP, and PS algorithms for economic dispatch with Combined Cycle units, and makes a comparison with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants the ability to build optimal bidding strategies based on Microeconomics analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model, which can be applied to a multiple period situation. The equilibrium condition using discrete time optimal control is then developed for fuel resource constraints. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators to understand market performance and make better decisions. A traditional optimization model may not be enough to consider the distributed
(Too) optimistic about optimism: the belief that optimism improves performance.
Tenney, Elizabeth R; Logg, Jennifer M; Moore, Don A
2015-03-01
A series of experiments investigated why people value optimism and whether they are right to do so. In Experiments 1A and 1B, participants prescribed more optimism for someone implementing decisions than for someone deliberating, indicating that people prescribe optimism selectively, when it can affect performance. Furthermore, participants believed optimism improved outcomes when a person's actions had considerable, rather than little, influence over the outcome (Experiment 2). Experiments 3 and 4 tested the accuracy of this belief; optimism improved persistence, but it did not improve performance as much as participants expected. Experiments 5A and 5B found that participants overestimated the relationship between optimism and performance even when their focus was not on optimism exclusively. In summary, people prescribe optimism when they believe it has the opportunity to improve the chance of success; unfortunately, people may be overly optimistic about just how much optimism can do.
NASA Astrophysics Data System (ADS)
Spagnolie, Saverio E.; Lauga, Eric
2010-03-01
Motile eukaryotic cells propel themselves in viscous fluids by passing waves of bending deformation down their flagella. An infinitely long flagellum achieves a hydrodynamically optimal low-Reynolds number locomotion when the angle between its local tangent and the swimming direction remains constant along its length. Optimal flagella therefore adopt the shape of a helix in three dimensions (smooth) and that of a sawtooth in two dimensions (nonsmooth). Physically, biological organisms (or engineered microswimmers) must expend internal energy in order to produce the waves of deformation responsible for the motion. Here we propose a physically motivated derivation of the optimal flagellum shape. We determine analytically and numerically the shape of the flagellar wave which leads to the fastest swimming for a given appropriately defined energetic expenditure. Our novel approach is to define an energy which includes not only the work against the surrounding fluid, but also (1) the energy stored elastically in the bending of the flagellum, (2) the energy stored elastically in the internal sliding of the polymeric filaments which are responsible for the generation of the bending waves (microtubules), and (3) the viscous dissipation due to the presence of an internal fluid. This approach regularizes the optimal sawtooth shape for two-dimensional deformation at the expense of a small loss in hydrodynamic efficiency. The optimal waveforms of finite-size flagella are shown to depend on a competition between rotational motions and bending costs, and we observe a surprising bias toward half-integer wave numbers. Their final hydrodynamic efficiencies are above 6%, significantly larger than those of swimming cells, therefore indicating available room for further biological tuning.
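The geometric statement above, that the optimal three-dimensional shape keeps the angle between the local tangent and the swimming direction constant, can be checked numerically for a helix. The radius a and pitch parameter b below are arbitrary illustration values, not quantities from the paper.

```python
import numpy as np

# Unit tangent of the helix (a*cos t, a*sin t, b*t); the swimming
# direction is taken along the z-axis.
a, b = 1.0, 0.5
t = np.linspace(0.0, 4.0 * np.pi, 200)

tangent = np.stack([-a * np.sin(t), a * np.cos(t), np.full_like(t, b)], axis=1)
tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)

# Angle (degrees) between the tangent and the z-axis: constant along the
# curve, equal to arccos(b / sqrt(a**2 + b**2)).
psi = np.degrees(np.arccos(tangent[:, 2]))
print(psi.min(), psi.max())
```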
An optimal structural design algorithm using optimality criteria
NASA Technical Reports Server (NTRS)
Taylor, J. E.; Rossow, M. P.
1976-01-01
An algorithm for optimal design is given which incorporates several of the desirable features of both mathematical programming and optimality criteria, while avoiding some of the undesirable features. The algorithm proceeds by approaching the optimal solution through the solutions of an associated set of constrained optimal design problems. The solutions of the constrained problems are recognized at each stage through the application of optimality criteria based on energy concepts. Two examples are described in which the optimal member size and layout of a truss is predicted, given the joint locations and loads.
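As a hedged illustration of criterion-driven resizing, the classic stress-ratio rule a_i ← a_i·(σ_i/σ_allow) can be sketched for a statically determinate truss, where member forces N_i do not depend on the areas. This shows only the flavor of an optimality-criteria iteration; the algorithm in the abstract uses more general, energy-based criteria, and the member forces and allowable stress here are invented numbers.

```python
def fully_stressed_areas(forces, sigma_allow, a0=1.0, iters=5):
    """Iteratively resize member areas toward the allowable stress."""
    areas = [a0] * len(forces)
    for _ in range(iters):
        stresses = [abs(N) / a for N, a in zip(forces, areas)]
        areas = [a * s / sigma_allow for a, s in zip(areas, stresses)]
    return areas

# Each area converges to |N| / sigma_allow (here in a single step, since
# the forces are fixed):
print(fully_stressed_areas([1000.0, -500.0], sigma_allow=250.0))  # [4.0, 2.0]
```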
Optimization of combinatorial mutagenesis.
Parker, Andrew S; Griswold, Karl E; Bailey-Kellogg, Chris
2011-11-01
Protein engineering by combinatorial site-directed mutagenesis evaluates a portion of the sequence space near a target protein, seeking variants with improved properties (e.g., stability, activity, immunogenicity). In order to improve the hit-rate of beneficial variants in such mutagenesis libraries, we develop methods to select optimal positions and corresponding sets of the mutations that will be used, in all combinations, in constructing a library for experimental evaluation. Our approach, OCoM (Optimization of Combinatorial Mutagenesis), encompasses both degenerate oligonucleotides and specified point mutations, and can be directed accordingly by requirements of experimental cost and library size. It evaluates the quality of the resulting library by one- and two-body sequence potentials, averaged over the variants. To ensure that it is not simply recapitulating extant sequences, it balances the quality of a library with an explicit evaluation of the novelty of its members. We show that, despite dealing with a combinatorial set of variants, in our approach the resulting library optimization problem is actually isomorphic to single-variant optimization. By the same token, this means that the two-body sequence potential results in an NP-hard optimization problem. We present an efficient dynamic programming algorithm for the one-body case and a practically-efficient integer programming approach for the general two-body case. We demonstrate the effectiveness of our approach in designing libraries for three different case study proteins targeted by previous combinatorial libraries--a green fluorescent protein, a cytochrome P450, and a beta lactamase. We found that OCoM worked quite efficiently in practice, requiring only 1 hour even for the massive design problem of selecting 18 mutations to generate 10⁷ variants of a 443-residue P450. We demonstrate the general ability of OCoM in enabling the protein engineer to explore and evaluate trade-offs between quality and
Optimal Electric Utility Expansion
1989-10-10
SAGE-WASP is designed to find the optimal generation expansion policy for an electrical utility system. New units can be automatically selected from a user-supplied list of expansion candidates which can include hydroelectric and pumped storage projects. The existing system is modeled. The calculational procedure takes into account user restrictions to limit generation configurations to an area of economic interest. The optimization program reports whether the restrictions acted as a constraint on the solution. All expansion configurations considered are required to pass a user-supplied reliability criterion. The discount rate and escalation rate are treated separately for each expansion candidate and for each fuel type. All expenditures are separated into local and foreign accounts, and a weighting factor can be applied to foreign expenditures.
Cyclone performance and optimization
Leith, D.
1990-06-15
The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using revised performance theory. This work is important because its successful completion will aid in the technology for combustion of coal in pressurized, fluidized beds. During the past quarter, we have nearly completed modeling work that employs the flow field measurements made during the past six months. In addition, we have begun final work using the results of this project to develop improved design methods for cyclones. This work involves optimization using the Iozia-Leith efficiency model and the Dirgo pressure drop model. This work will be completed this summer. 9 figs.
NEMO Oceanic Model Optimization
NASA Astrophysics Data System (ADS)
Epicoco, I.; Mocavero, S.; Murli, A.; Aloisio, G.
2012-04-01
NEMO is an oceanic model used by the climate community for stand-alone or coupled experiments. Its parallel implementation, based on MPI, limits the exploitation of the emerging computational infrastructures at peta and exascale, due to the weight of communications. As a case study we considered the MFS configuration developed at INGV with a resolution of 1/16° tailored to the Mediterranean Basin. The work is focused on the analysis of the code on the MareNostrum cluster and on the optimization of critical routines. The first performance analysis of the model aimed at establishing how much the computational performance is influenced by the GPFS file system or the local disks, and which domain decomposition is best. The results highlight that the exploitation of local disks can reduce the wall clock time by up to 40% and that the best performance is achieved with a 2D decomposition when the local domain has a square shape. A deeper performance analysis highlights that the obc_rad, dyn_spg and tra_adv routines are the most time-consuming. The obc_rad routine implements the evaluation of the open boundaries and was the first to be optimized. The communication pattern implemented in the obc_rad routine has been redesigned: before the optimizations all processes were involved in the communication, but only the processes on the boundaries hold data that actually needs to be exchanged. Moreover, the data along the vertical levels are "packed" and sent with only one MPI_send invocation. The overall efficiency increases compared with the original version, as does the parallel speed-up. The execution time was reduced by about 33.81%. The second phase of optimization involved the SOR solver routine, implementing the Red-Black Successive-Over-Relaxation method. The high frequency of data exchange among processes accounts for most of the overall communication time. The number of communication is
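The "pack, then send once" redesign of the boundary exchange described above can be illustrated schematically: the boundary column is copied across all vertical levels into one contiguous buffer and handed to a single send call, instead of one send per level. The array shapes and the stand-in mpi_send() are invented for this sketch; NEMO's actual arrays and MPI calls differ.

```python
import numpy as np

nlev, nj = 31, 128                    # vertical levels, points along the boundary
field = np.random.rand(nlev, nj, 64)  # local 3-D field: (levels, y, x)

# Pack the open-boundary column (here x = 0) for all levels at once
buffer = np.ascontiguousarray(field[:, :, 0]).ravel()

sent = []
def mpi_send(buf):                    # stand-in for one MPI_Send invocation
    sent.append(buf.copy())

mpi_send(buffer)                      # one message of nlev * nj values,
                                      # instead of nlev separate sends
print(len(sent), sent[0].size)        # 1 3968
```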
Córdova, Natalia; Yee, Debbie; Barto, Andrew G.; Niv, Yael; Botvinick, Matthew M.
2014-01-01
Human behavior has long been recognized to display hierarchical structure: actions fit together into subtasks, which cohere into extended goal-directed activities. Arranging actions hierarchically has well established benefits, allowing behaviors to be represented efficiently by the brain, and allowing solutions to new tasks to be discovered easily. However, these payoffs depend on the particular way in which actions are organized into a hierarchy, the specific way in which tasks are carved up into subtasks. We provide a mathematical account for what makes some hierarchies better than others, an account that allows an optimal hierarchy to be identified for any set of tasks. We then present results from four behavioral experiments, suggesting that human learners spontaneously discover optimal action hierarchies. PMID:25122479
Optimizing Thomson's jumping ring
NASA Astrophysics Data System (ADS)
Tjossem, Paul J. H.; Brost, Elizabeth C.
2011-04-01
The height to which rings will jump in a Thomson jumping ring apparatus is the central question posed by this popular lecture demonstration. We develop a simple time-averaged inductive-phase-lag model for the dependence of the jump height on the ring material, its mass, and temperature and apply it to measurements of the jump height for a set of rings made by slicing copper and aluminum alloy pipe into varying lengths. The data confirm a peak jump height that grows, narrows, and shifts to smaller optimal mass when the rings are cooled to 77 K. The model explains the ratio of the cooled/warm jump heights for a given ring, the reduction in optimal mass as the ring is cooled, and the shape of the mass resonance. The ring that jumps the highest is found to have a characteristic resistance equal to the inductive reactance of the set of rings.
Heliostat cost optimization study
NASA Astrophysics Data System (ADS)
von Reeken, Finn; Weinrebe, Gerhard; Keck, Thomas; Balz, Markus
2016-05-01
This paper presents a methodology for a heliostat cost optimization study. First, different variants of small, medium-sized and large heliostats are designed. Then the respective costs, tracking and optical quality are determined. For the calculation of optical quality a structural model of the heliostat is programmed and analyzed using finite element software. The costs are determined based on inquiries and from experience with similar structures. Eventually the levelised electricity costs for a reference power tower plant are calculated. Before each annual simulation run the heliostat field is optimized. Calculated LCOEs are then used to identify the most suitable option(s). Finally, the conclusions and findings of this extensive cost study are used to define the concept of a new cost-efficient heliostat called 'Stellio'.
Combinatorial optimization games
Deng, X.; Ibaraki, Toshihide; Nagamochi, Hiroshi
1997-06-01
We introduce a general integer programming formulation for a class of combinatorial optimization games, which immediately allows us to improve the algorithmic result for finding imputations in the core (an important solution concept in cooperative game theory) of the network flow game on simple networks by Kalai and Zemel. An interesting result is a general theorem that the core for this class of games is nonempty if and only if a related linear program has an integer optimal solution. We study the properties for this mathematical condition to hold for several interesting problems, and apply them to resolve algorithmic and complexity issues for their cores along the following lines: decide whether the core is empty; if the core is nonempty, find an imputation in the core; given an imputation x, test whether x is in the core. We also explore the properties of totally balanced games in this succinct formulation of cooperative games.
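The last of the three algorithmic questions above, testing whether a given imputation x lies in the core, is easy to sketch for a toy game by brute-force enumeration of coalitions: x is in the core if it distributes the grand coalition's value and no coalition receives less than its own value. The 3-player characteristic function below is invented for illustration and is unrelated to the network flow games studied in the paper.

```python
from itertools import combinations

players = (1, 2, 3)
v = {(1,): 0, (2,): 0, (3,): 0,
     (1, 2): 4, (1, 3): 4, (2, 3): 4, (1, 2, 3): 9}

def in_core(x):
    # efficiency: the grand coalition's value is fully distributed
    if sum(x.values()) != v[players]:
        return False
    # coalition rationality: no coalition can do better on its own
    return all(sum(x[i] for i in S) >= v[S]
               for r in range(1, len(players) + 1)
               for S in combinations(players, r))

print(in_core({1: 3, 2: 3, 3: 3}))   # True: every pair receives 6 >= 4
print(in_core({1: 7, 2: 1, 3: 1}))   # False: coalition (2, 3) receives 2 < 4
```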
Goldman, A. J.
2006-01-01
Dr. Christoph Witzgall, the honoree of this Symposium, can count among his many contributions to applied mathematics and mathematical operations research a body of widely-recognized work on the optimal location of facilities. The present paper offers to non-specialists a sketch of that field and its evolution, with emphasis on areas most closely related to Witzgall’s research at NBS/NIST. PMID:27274920
Hydraulic fracture design optimization
Lee, Tae-Soo; Advani, S.H.
1992-01-01
This research and development investigation, sponsored by US DOE and the oil and gas industry, extends previously developed hydraulic fracture geometry models and applied energy related characteristic time concepts towards the optimal design and control of hydraulic fracture geometries. The primary objective of this program is to develop rational criteria, by examining the associated energy rate components during the hydraulic fracture evolution, for the formulation of stimulation treatment design along with real-time fracture configuration interpretation and control.
Trajectory Optimization: OTIS 4
NASA Technical Reports Server (NTRS)
Riehl, John P.; Sjauw, Waldy K.; Falck, Robert D.; Paris, Stephen W.
2010-01-01
The latest release of the Optimal Trajectories by Implicit Simulation (OTIS4) allows users to simulate and optimize aerospace vehicle trajectories. With OTIS4, one can seamlessly generate optimal trajectories and parametric vehicle designs simultaneously. New features also allow OTIS4 to solve non-aerospace continuous time optimal control problems. The inputs and outputs of OTIS4 have been updated extensively from previous versions. Inputs now make use of object-oriented constructs, including one called a metastring. Metastrings use a greatly improved calculator and common nomenclature to reduce the user's workload. They allow for more flexibility in specifying vehicle physical models, boundary conditions, and path constraints. The OTIS4 calculator supports common mathematical functions, Boolean operations, and conditional statements. This allows users to define their own variables for use as outputs, constraints, or objective functions. The user-defined outputs can directly interface with other programs, such as spreadsheets, plotting packages, and visualization programs. Internally, OTIS4 has more explicit and implicit integration procedures, including high-order collocation methods, the pseudo-spectral method, and several variations of multiple shooting. Users may switch easily between the various methods. Several unique numerical techniques, such as automated variable scaling and implicit integration grid refinement, support the integration methods. OTIS4 is also significantly more user-friendly than previous versions. The installation process is nearly identical on various platforms, including Microsoft Windows, Apple OS X, and Linux operating systems. Cross-platform scripts also help make the execution of OTIS and post-processing of data easier. OTIS4 is supplied free by NASA and is subject to ITAR (International Traffic in Arms Regulations) restrictions. Users must have a Fortran compiler, and a Python interpreter is highly recommended.
Optimal Centroid Position Estimation
Candy, J V; McClay, W A; Awwal, A S; Ferguson, S W
2004-07-23
The alignment of high energy laser beams for potential fusion experiments demands high precision and accuracy from the underlying positioning algorithms. This paper discusses the feasibility of employing online optimal position estimators in the form of model-based processors to achieve the desired results. Here we discuss the modeling, development, implementation and processing of model-based processors applied to both simulated and actual beam line data.
Optimizing parallel reduction operations
Denton, S.M.
1995-06-01
A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use and maximum size influence the final implementation. This paper (1) Defines reduction syntax and compares with traditional concurrent methods; (2) Defines classes of reduction operations; (3) Develops analysis of classes for optimized concurrency; (4) Incorporates reductions into Sisal 1.2 and Sisal 90; (5) Evaluates performance and size of the implementations.
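The property the classification above turns on, that an associative reduction can be evaluated as a combining tree (whose levels can run in parallel) rather than as a strict left-to-right fold, can be sketched as follows. This is a generic Python illustration, not Sisal, where the reduction is implicit in loop expressions:

```python
from functools import reduce

def tree_reduce(op, values):
    """Combine values pairwise in rounds; valid whenever op is associative,
    which is what lets a compiler run each round's combines concurrently."""
    vals = list(values)
    if not vals:
        raise ValueError("empty reduction")
    while len(vals) > 1:
        # one parallelizable round: combine adjacent pairs
        paired = [op(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:
            paired.append(vals[-1])  # odd element carries over
        vals = paired
    return vals[0]

# Sequential reference: the left-to-right order a naive loop implies.
print(tree_reduce(lambda a, b: a + b, range(10)))   # 45
print(reduce(lambda a, b: a + b, range(10)))        # 45
```

For a non-associative operator the two orders can disagree, which is why the analysis of reduction classes matters before parallelizing.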
Flood Bypass Capacity Optimization
NASA Astrophysics Data System (ADS)
Siclari, A.; Hui, R.; Lund, J. R.
2015-12-01
Large river flows can damage adjacent flood-prone areas by exceeding river channel and levee capacities. Particularly large floods are difficult to contain with leveed river banks alone. Flood bypasses often can efficiently reduce flood risks: excess river flow is diverted over a weir to bypasses, which incur much less damage and cost. Additional benefits of bypasses include ecosystem protection, agriculture, groundwater recharge and recreation. Constructing or expanding a bypass incurs costs in land purchases, easements, and levee setbacks. Accounting for such benefits and costs, this study develops a simple mathematical model for optimizing flood bypass capacity using benefit-cost and risk analysis. Application to the Yolo Bypass, an existing bypass along the Sacramento River in California, estimates the optimal capacity that economically reduces flood damage and increases various benefits, especially for agriculture. Land availability is likely to limit bypass expansion. Compensation for landowners could relax such limitations. Other economic values could affect the optimal results, which are shown by sensitivity analysis on major parameters. By including land geography in the model, locations of promising capacity expansions can be identified.
NASA Technical Reports Server (NTRS)
Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)
2002-01-01
The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
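The external penalty function idea described above can be sketched generically: the constrained problem is replaced by a sequence of unconstrained ones with a growing penalty on constraint violation. This is an illustrative sketch of the technique, not the BIGDOT implementation; the test problem and solver parameters below are invented:

```python
def penalty_minimize(f, gs, x0, r0=1.0, growth=10.0, outer=4, inner=400):
    """Exterior penalty method: replace  min f(x) s.t. g_i(x) <= 0  by a
    sequence of unconstrained problems  min f(x) + r*sum(max(0, g_i(x))^2),
    increasing the penalty weight r after each pass."""
    h = 1e-6

    def phi(x, r):
        return f(x) + r * sum(max(0.0, g(x)) ** 2 for g in gs)

    x, r = list(x0), r0
    for _ in range(outer):
        for _ in range(inner):
            # central-difference gradient of the penalized objective
            grad = []
            for i in range(len(x)):
                xp, xm = x[:], x[:]
                xp[i] += h
                xm[i] -= h
                grad.append((phi(xp, r) - phi(xm, r)) / (2 * h))
            gg = sum(c * c for c in grad)
            if gg < 1e-18:
                break
            # backtracking (Armijo) line search keeps steps stable as r grows
            t, fx = 1.0, phi(x, r)
            while t > 1e-14 and phi([xi - t * c for xi, c in zip(x, grad)], r) > fx - 0.5 * t * gg:
                t *= 0.5
            x = [xi - t * c for xi, c in zip(x, grad)]
        r *= growth
    return x

# Example: min x^2 + y^2 subject to x + y >= 1, i.e. g(x) = 1 - x - y <= 0.
# The constrained optimum is (0.5, 0.5).
x = penalty_minimize(lambda v: v[0] ** 2 + v[1] ** 2,
                     [lambda v: 1.0 - v[0] - v[1]],
                     [0.0, 0.0])
print([round(c, 3) for c in x])
```

The memory appeal of the approach is visible even in this sketch: only the current point, gradient, and a scalar penalty weight are stored, with no active-set or Hessian data structures.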
NASA Astrophysics Data System (ADS)
Wymant, Chris
2012-12-01
In supersymmetric models, a large average stop mass MS is well known to both boost the lightest Higgs boson mass mh and also make radiative electroweak symmetry breaking unnaturally tuned. The case of “maximal mixing,” where the stop trilinear mixing term At is set to give At^2/MS^2 = 6, allows the stops to be as light as possible for a given mh. Here we make the distinction between minimal MS and optimal naturalness, showing that the latter occurs for less-than-maximal mixing. Lagrange-constrained optimization reveals that the two coincide closely in the Minimal Supersymmetric Standard Model (MSSM)—optimally we have 5
Structural optimization of framed structures using generalized optimality criteria
NASA Technical Reports Server (NTRS)
Kolonay, R. M.; Venkayya, Vipperla B.; Tischler, V. A.; Canfield, R. A.
1989-01-01
The application of a generalized optimality criteria to framed structures is presented. The optimality conditions, Lagrangian multipliers, resizing algorithm, and scaling procedures are all represented as a function of the objective and constraint functions along with their respective gradients. The optimization of two plane frames under multiple loading conditions subject to stress, displacement, generalized stiffness, and side constraints is presented. These results are compared to those found by optimizing the frames using a nonlinear mathematical programming technique.
Structural optimization of large structural systems by optimality criteria methods
NASA Technical Reports Server (NTRS)
Berke, Laszlo
1992-01-01
The fundamental concepts of the optimality criteria method of structural optimization are presented. The effect of the separability properties of the objective and constraint functions on the optimality criteria expressions is emphasized. The single constraint case is treated first, followed by the multiple constraint case with a more complex evaluation of the Lagrange multipliers. Examples illustrate the efficiency of the method.
Optimal Temporal Risk Assessment
Balci, Fuat; Freestone, David; Simen, Patrick; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip
2011-01-01
Time is an essential feature of most decisions, because the reward earned from decisions frequently depends on the temporal statistics of the environment (e.g., on whether decisions must be made under deadlines). Accordingly, evolution appears to have favored a mechanism that predicts intervals in the seconds to minutes range with high accuracy on average, but significant variability from trial to trial. Importantly, the subjective sense of time that results is sufficiently imprecise that maximizing rewards in decision-making can require substantial behavioral adjustments (e.g., accumulating less evidence for a decision in order to beat a deadline). Reward maximization in many daily decisions therefore requires optimal temporal risk assessment. Here, we review the temporal decision-making literature, conduct secondary analyses of relevant published datasets, and analyze the results of a new experiment. The paper is organized in three parts. In the first part, we review literature and analyze existing data suggesting that animals take account of their inherent behavioral variability (their “endogenous timing uncertainty”) in temporal decision-making. In the second part, we review literature that quantitatively demonstrates nearly optimal temporal risk assessment with sub-second and supra-second intervals using perceptual tasks (with humans and mice) and motor timing tasks (with humans). We supplement this section with original research that tested human and rat performance on a task that requires finding the optimal balance between two time-dependent quantities for reward maximization. This optimal balance in turn depends on the level of timing uncertainty. Corroborating the reviewed literature, humans and rats exhibited nearly optimal temporal risk assessment in this task. In the third section, we discuss the role of timing uncertainty in reward maximization in two-choice perceptual decision-making tasks and review literature that implicates timing uncertainty
Ames Optimized TCA Configuration
NASA Technical Reports Server (NTRS)
Cliff, Susan E.; Reuther, James J.; Hicks, Raymond M.
1999-01-01
Configuration design at Ames was carried out with the SYN87-SB (single block) Euler code using a 193 x 49 x 65 C-H grid. The Euler solver is coupled to the constrained (NPSOL) and the unconstrained (QNMDIF) optimization packages. Since the single block grid is able to model only wing-body configurations, the nacelle/diverter effects were included in the optimization process by SYN87's option to superimpose the nacelle/diverter interference pressures on the wing. These interference pressures were calculated using the AIRPLANE code. AIRPLANE is an Euler solver that uses an unstructured tetrahedral mesh and is capable of computations about arbitrary complete configurations. In addition, the buoyancy effects of the nacelle/diverters were also included in the design process by imposing the pressure field obtained during the design process onto the triangulated surfaces of the nacelle/diverter mesh generated by AIRPLANE. The interference pressures and nacelle buoyancy effects are added to the final forces after each flow field calculation. Full details of the (recently enhanced) ghost nacelle capability are given in a related talk. The pseudo nacelle corrections were greatly improved during this design cycle. During the Ref H and Cycle 1 design activities, the nacelles were only translated and pitched. In the cycle 2 design effort the nacelles can translate vertically, and pitch to accommodate the changes in the lower surface geometry. The diverter heights (between their leading and trailing edges) were modified during design as the shape of the lower wing changed, with the drag of the diverter changing accordingly. Both adjoint and finite difference gradients were used during optimization. The adjoint-based gradients were found to give good direction in the design space for configurations near the starting point, but as the design approached a minimum, the finite difference gradients were found to be more accurate. Use of finite difference gradients was limited by the
Multiobjective optimization of temporal processes.
Song, Zhe; Kusiak, Andrew
2010-06-01
This paper presents a dynamic predictive-optimization framework of a nonlinear temporal process. Data-mining (DM) and evolutionary strategy algorithms are integrated in the framework for solving the optimization model. DM algorithms learn dynamic equations from the process data. An evolutionary strategy algorithm is then applied to solve the optimization problem guided by the knowledge extracted by the DM algorithm. The concept presented in this paper is illustrated with the data from a power plant, where the goal is to maximize the boiler efficiency and minimize the limestone consumption. This multiobjective optimization problem can be either transformed into a single-objective optimization problem through preference aggregation approaches or into a Pareto-optimal optimization problem. The computational results have shown the effectiveness of the proposed optimization framework. PMID:19900853
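The two ways of handling multiple objectives mentioned above, aggregating preferences into a single objective versus keeping the Pareto-optimal set, can be illustrated on a handful of candidate solutions. The efficiency and limestone figures below are invented for illustration, not power-plant data:

```python
def pareto_front(points):
    """Keep the nondominated points (maximize efficiency, minimize limestone)."""
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] <= p[1] and q != p for q in points)]

def weighted_best(points, w_eff=0.7, w_lime=0.3):
    """Preference aggregation: collapse the two objectives into one score."""
    return max(points, key=lambda p: w_eff * p[0] - w_lime * p[1])

# (boiler efficiency, limestone consumption) for four candidate settings
cands = [(0.90, 5.0), (0.88, 3.0), (0.85, 2.0), (0.80, 4.0)]
print(sorted(pareto_front(cands)))  # (0.80, 4.0) is dominated and drops out
print(weighted_best(cands))
```

The weighted-sum form returns one operating point for a fixed preference, while the Pareto front leaves the final trade-off to the decision maker, which is exactly the choice the framework above offers.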
Combinatorial optimization in foundry practice
NASA Astrophysics Data System (ADS)
Antamoshkin, A. N.; Masich, I. S.
2016-04-01
A multicriteria mathematical model of foundry production capacity planning is proposed in the paper. The model is formulated in terms of pseudo-Boolean optimization theory. Different search optimization methods were used to solve the resulting problem.
Taking Stock of Unrealistic Optimism
Shepperd, James A.; Klein, William M. P.; Waters, Erika A.; Weinstein, Neil D.
2015-01-01
Researchers have used terms such as unrealistic optimism and optimistic bias to refer to concepts that are similar but not synonymous. Drawing from three decades of research, we critically discuss how researchers define unrealistic optimism and we identify four types that reflect different measurement approaches: unrealistic absolute optimism at the individual and group level and unrealistic comparative optimism at the individual and group level. In addition, we discuss methodological criticisms leveled against research on unrealistic optimism and note that the criticisms are primarily relevant to only one type—the group form of unrealistic comparative optimism. We further clarify how the criticisms are not nearly as problematic even for unrealistic comparative optimism as they might seem. Finally, we note boundary conditions on the different types of unrealistic optimism and reflect on five broad questions that deserve further attention. PMID:26045714
ERIC Educational Resources Information Center
Reivich, Karen
2010-01-01
Dictionary definitions of optimism encompass two related concepts. The first of these is a hopeful disposition or a conviction that good will ultimately prevail. The second, broader conception of optimism refers to the belief, or the inclination to believe, that the world is the best of all possible worlds. In psychological research, optimism has…
Optimal Test Construction. Research Report.
ERIC Educational Resources Information Center
Veldkamp, Bernard P.
This paper discusses optimal test construction, which deals with the selection of items from a pool to construct a test that performs optimally with respect to the objective of the test and simultaneously meets all test specifications. Optimal test construction problems can be formulated as mathematical decision models. Algorithms and heuristics…
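A toy version of such a decision model, with invented item parameters, shows the 0-1 selection structure; a real test-assembly problem would hand this to a mixed-integer programming solver rather than brute-force enumeration:

```python
from itertools import combinations

# Tiny 0-1 model of optimal test assembly (illustrative numbers): pick 3 items
# from a pool to maximize total information at the target ability level,
# subject to covering every content area at least once.
pool = [  # (item id, information at target ability, content area)
    (1, 0.9, "algebra"), (2, 0.7, "algebra"), (3, 0.8, "geometry"),
    (4, 0.4, "geometry"), (5, 0.6, "stats"), (6, 0.5, "stats"),
]

def feasible(test):
    return {area for _, _, area in test} == {"algebra", "geometry", "stats"}

best = max((t for t in combinations(pool, 3) if feasible(t)),
           key=lambda t: sum(info for _, info, _ in t))
print(sorted(item_id for item_id, _, _ in best))  # [1, 3, 5]
```

With three items and three mandatory areas the constraint forces one item per area, so the optimum simply takes the most informative item in each area.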
Metacognitive Control and Optimal Learning
ERIC Educational Resources Information Center
Son, Lisa K.; Sethi, Rajiv
2006-01-01
The notion of optimality is often invoked informally in the literature on metacognitive control. We provide a precise formulation of the optimization problem and show that optimal time allocation strategies depend critically on certain characteristics of the learning environment, such as the extent of time pressure, and the nature of the uptake…
A Primer on Unrealistic Optimism
Shepperd, James A.; Waters, Erika; Weinstein, Neil D.; Klein, William M. P.
2014-01-01
People display unrealistic optimism in their predictions for countless events, believing that their personal future outcomes will be more desirable than can possibly be true. We summarize the vast literature on unrealistic optimism by focusing on four broad questions: What is unrealistic optimism; when does it occur; why does it occur; and what are its consequences. PMID:26089606
Optimality criteria: A basis for multidisciplinary design optimization
NASA Astrophysics Data System (ADS)
Venkayya, V. B.
1989-01-01
This paper presents a generalization of what is frequently referred to in the literature as the optimality criteria approach in structural optimization. This generalization includes a unified presentation of the optimality conditions, the Lagrangian multipliers, and the resizing and scaling algorithms in terms of the sensitivity derivatives of the constraint and objective functions. The by-product of this generalization is the derivation of a set of simple nondimensional parameters which provides significant insight into the behavior of the structure as well as the optimization algorithm. A number of important issues, such as, active and passive variables, constraints and three types of linking are discussed in the context of the present derivation of the optimality criteria approach. The formulation as presented in this paper brings multidisciplinary optimization within the purview of this extremely efficient optimality criteria approach.
Multicriteria VMAT optimization
Craft, David; McQuaid, Dualta; Wala, Jeremiah; Chen, Wei; Salari, Ehsan; Bortfeld, Thomas
2012-02-15
Purpose: To make the planning of volumetric modulated arc therapy (VMAT) faster and to explore the tradeoffs between planning objectives and delivery efficiency. Methods: A convex multicriteria dose optimization problem is solved for an angular grid of 180 equi-spaced beams. This allows the planner to navigate the ideal dose distribution Pareto surface and select a plan of desired target coverage versus organ at risk sparing. The selected plan is then made VMAT deliverable by a fluence map merging and sequencing algorithm, which combines neighboring fluence maps based on a similarity score and then delivers the merged maps together, simplifying delivery. Successive merges are made as long as the dose distribution quality is maintained. The complete algorithm is called VMERGE. Results: VMERGE is applied to three cases: a prostate, a pancreas, and a brain. In each case, the selected Pareto-optimal plan is matched almost exactly with the VMAT merging routine, resulting in a high quality plan delivered with a single arc in less than 5 min on average. Conclusions: VMERGE offers significant improvements over existing VMAT algorithms. The first is the multicriteria planning aspect, which greatly speeds up planning time and allows the user to select the plan, which represents the most desirable compromise between target coverage and organ at risk sparing. The second is the user-chosen epsilon-optimality guarantee of the final VMAT plan. Finally, the user can explore the tradeoff between delivery time and plan quality, which is a fundamental aspect of VMAT that cannot be easily investigated with current commercial planning systems.
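A greatly simplified sketch of the merging idea, repeatedly averaging the most similar neighboring fluence maps until merging would hurt quality, is shown below. The similarity score and threshold here are assumptions made for illustration, not the published VMERGE criteria:

```python
def similarity(a, b):
    # assumed score for this sketch: negative L1 distance between maps
    return -sum(abs(x - y) for x, y in zip(a, b))

def vmerge(maps, threshold=-2.0):
    """Greedy sketch of VMERGE-style merging: average the most similar pair
    of neighboring fluence maps while the score stays above a threshold."""
    maps = [m[:] for m in maps]
    while len(maps) > 1:
        i = max(range(len(maps) - 1),
                key=lambda k: similarity(maps[k], maps[k + 1]))
        if similarity(maps[i], maps[i + 1]) < threshold:
            break  # further merging would degrade the plan too much
        merged = [(x + y) / 2 for x, y in zip(maps[i], maps[i + 1])]
        maps[i:i + 2] = [merged]
    return maps

# four toy fluence maps; the first three are near-identical, the last is not
beams = [[1.0, 2.0, 0.0], [1.1, 2.0, 0.1], [1.0, 1.9, 0.0], [5.0, 0.0, 3.0]]
print(len(vmerge(beams)))  # 3 similar maps collapse to 1; 2 maps remain
```

Loosening the threshold trades plan quality for fewer maps and hence faster delivery, which is the tradeoff the paper lets the user explore.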
Optimizing your reception area.
Lachter, Jesse; Raldow, Ann; Molin, Niki
2012-01-01
Through the optimization of reception areas (waiting rooms), physicians can improve the medical experiences of their patients. A qualitative investigation identified issues relevant to improving the quality of the reception area and was used to develop a thorough questionnaire. Most patients were satisfied with accessibility, reception area conditions, and performance of doctors and nurses. The main reasons for dissatisfaction were due to remediable points. No correlations were found between patient satisfaction and age, sex, or religion. A 36-item checklist for satisfaction with reception areas is offered as a useful tool for health quality self-assessment.
Constructing optimal entanglement witnesses
NASA Astrophysics Data System (ADS)
Chruściński, Dariusz; Pytel, Justyna; Sarbicki, Gniewomir
2009-12-01
We provide a class of indecomposable entanglement witnesses. In 4×4 case, it reproduces the well-known Breuer-Hall witness. We prove that these witnesses are optimal and atomic, i.e., they are able to detect the “weakest” quantum entanglement encoded into states with positive partial transposition. Equivalently, we provide a construction of indecomposable atomic maps in the algebra of 2k×2k complex matrices. It is shown that their structural physical approximations give rise to entanglement breaking channels. This result supports recent conjecture by Korbicz [Phys. Rev. A 78, 062105 (2008)].
Optimization of radiation protection
Lochard, J.
1981-07-01
The practical and theoretical problems raised by the optimization of radiological protection merit a review of decision-making methods, their relevance, and the way in which they are used in order to better determine what role they should play in the decision-making process. Following a brief summary of the theoretical background of the cost-benefit analysis, we examine the methodological choices implicit in the model presented in the International Commission on Radiological Protection Publication No. 26 and, particularly, the consequences of the theory that the level of radiation protection, the benefits, and the production costs of an activity can be treated separately.
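In the cost-benefit framing sketched above, optimization reduces to minimizing the sum of the protection cost and the monetized radiation detriment. The sketch below uses an invented cost model and alpha value purely for illustration of that structure:

```python
import math

# Illustrative cost-benefit optimization in the spirit of ICRP Publication 26:
# choose the protection level minimizing (protection cost) + (monetized dose).
# The cost functions and ALPHA below are made up for this example.
ALPHA = 1000.0                 # assumed monetary value per unit collective dose

def collective_dose(t):        # dose falls off with shield thickness t
    return 20.0 * math.exp(-t / 2.0)

def protection_cost(t):        # thicker shielding costs more
    return 500.0 * t

def total_cost(t):
    return protection_cost(t) + ALPHA * collective_dose(t)

# grid search over thickness 0..10; optimum is where marginal protection
# cost equals the marginal monetized dose reduction
best_t = min((i * 0.01 for i in range(1001)), key=total_cost)
print(round(best_t, 2))
```

Analytically the optimum satisfies protection_cost'(t) = -ALPHA * collective_dose'(t), here 500 = 10000 * exp(-t/2), giving t = 2 ln 20, about 5.99, which the grid search reproduces.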
Design Optimization Toolkit: Users' Manual
Aguilo Valentin, Miguel Alejandro
2014-07-01
The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods suited for gradient- and nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed with a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.
An Improved Cockroach Swarm Optimization
Obagbuwa, I. C.; Adewumi, A. O.
2014-01-01
A hunger component is introduced to the existing cockroach swarm optimization (CSO) algorithm to improve its searching ability and population diversity. The original CSO was modelled with three components: chase-swarming, dispersion, and ruthless; an additional hunger component, modelled using a partial differential equation (PDE) method, is included in this paper, yielding an improved cockroach swarm optimization (ICSO). The performance of the proposed algorithm is tested on well-known benchmarks and compared with the existing CSO, modified cockroach swarm optimization (MCSO), roach infestation optimization (RIO), and hungry roach infestation optimization (HRIO). The comparison results show clearly that the proposed algorithm outperforms the existing algorithms. PMID:24959611
Synthesizing optimal waste blends
Narayan, V.; Diwekar, W.M.; Hoza, M.
1996-10-01
Vitrification of tank wastes to form glass is a technique that will be used for the disposal of high-level waste at Hanford. Process and storage economics show that minimizing the total number of glass logs produced is the key to keeping cost as low as possible. The amount of glass produced can be reduced by blending of the wastes. The optimal way to combine the tanks to minimize the volume of glass can be determined from a discrete blend calculation. However, this problem results in a combinatorial explosion as the number of tanks increases. Moreover, the property constraints make this problem highly nonconvex, where many algorithms get trapped in local minima. In this paper the authors examine the use of different combinatorial optimization approaches to solve this problem. A two-stage approach using a combination of simulated annealing and nonlinear programming (NLP) is developed. The results of different methods, such as the heuristics approach based on human knowledge and judgment, the mixed integer nonlinear programming (MINLP) approach with GAMS, and branch and bound with a lower bound derived from the structure of the given blending problem, are compared with this coupled simulated annealing and NLP approach.
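A minimal sketch of the simulated annealing stage, applied to a toy blend-assignment problem with invented waste amounts (not Hanford data), illustrates how such a first stage escapes the local minima of a nonconvex discrete problem:

```python
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=4000, seed=1):
    """Generic simulated annealing: accept uphill moves with probability
    exp(-delta/T) so the search can escape local minima, cooling T slowly."""
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy stand-in for the blend problem: assign 12 "tank" waste amounts to 3
# blends so the blend totals are as even as possible (a proxy for minimizing
# the worst-case glass volume). Amounts are illustrative only.
waste = [9, 7, 7, 6, 5, 5, 4, 3, 3, 2, 2, 1]

def cost(assign):
    totals = [0.0] * 3
    for amt, b in zip(waste, assign):
        totals[b] += amt
    return max(totals) - min(totals)

def neighbor(assign, rng):
    y = list(assign)
    y[rng.randrange(len(y))] = rng.randrange(3)  # move one tank to another blend
    return y

best, fbest = anneal(cost, neighbor, [0] * 12)
print(fbest)
```

In the full two-stage approach described above, a continuous NLP refinement would then polish the discrete assignment found by annealing.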
Diffusion with optimal resetting
NASA Astrophysics Data System (ADS)
Evans, Martin R.; Majumdar, Satya N.
2011-10-01
We consider the mean time to absorption by an absorbing target of a diffusive particle with the addition of a process whereby the particle is reset to its initial position with rate r. We consider several generalizations of the model of Evans and Majumdar (2011 Phys. Rev. Lett. 106 160601): (i) a space-dependent resetting rate r(x); (ii) resetting to a random position z drawn from a resetting distribution P(z); and (iii) a spatial distribution for the absorbing target P_T(x). As an example of (i) we show that the introduction of a non-resetting window around the initial position can reduce the mean time to absorption provided that the initial position is sufficiently far from the target. We address the problem of optimal resetting, that is, minimizing the mean time to absorption for a given target distribution. For an exponentially decaying target distribution centred at the origin we show that a transition in the optimal resetting distribution occurs as the target distribution narrows.
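The existence of an optimal resetting rate is easy to reproduce numerically. The sketch below uses a discrete random walk as a stand-in for diffusion with Poissonian resetting; all parameters are chosen only for illustration:

```python
import random

def mfpt_with_reset(reset_prob, start=5, trials=2000, seed=7):
    """Monte Carlo mean first-passage time of a 1-D random walk to an
    absorbing target at 0, where at each step the walker is reset to its
    start position with probability reset_prob (a discrete stand-in for
    resetting with rate r)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x, t = start, 0
        while x != 0:
            if rng.random() < reset_prob:
                x = start                  # reset event
            else:
                x += rng.choice((-1, 1))   # diffusive step
            t += 1
        total += t
    return total / trials

# Resetting too rarely lets the walker wander far from the target;
# resetting too often never lets it arrive. A moderate rate is fastest.
for p in (0.001, 0.05, 0.3):
    print(p, round(mfpt_with_reset(p), 1))
```

Without any resetting the mean first-passage time of this walk diverges, so even a small resetting rate renders it finite, and an intermediate rate minimizes it.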
Bower, Stanley
2011-12-31
A 5.0L V8 twin-turbocharged direct injection engine was designed, built, and tested for the purpose of assessing, in the F-Series pickup, the fuel economy and performance of the Dual Fuel engine concept and of an E85-optimized FFV engine. Additionally, production 3.5L gasoline turbocharged direct injection (GTDI) EcoBoost engines were converted to Dual Fuel capability and used to evaluate the cold start emissions and fuel system robustness of the Dual Fuel engine concept. Project objectives were: to develop a roadmap to demonstrate a minimized fuel economy penalty for an F-Series FFV truck with a highly boosted, high compression ratio spark ignition engine optimized to run with ethanol fuel blends up to E85; to reduce FTP 75 energy consumption by 15%-20% compared to an equally powered vehicle with a current production gasoline engine; and to meet ULEV emissions, with a stretch target of ULEV II / Tier II Bin 4. All project objectives were met or exceeded.
MAGEE,GLEN I.
2000-08-03
Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
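One representative optimization of this kind, replacing bitwise Galois-field multiplication with log/antilog table lookups, is sketched below. This is a generic GF(2^8) illustration of the technique, not the AURA project code:

```python
# GF(2^8) arithmetic with the primitive polynomial x^8+x^4+x^3+x^2+1 (0x11d),
# a field commonly used in Reed-Solomon codecs.

def gf_mul_bitwise(a, b):
    """Straightforward shift-and-xor multiply: the slow reference path."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d        # reduce modulo the field polynomial
        b >>= 1
    return r

# Precompute log/antilog tables once; each multiply then costs two lookups
# and one add, the kind of table-lookup optimization that cuts encoder CPU
# time dramatically.
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x = gf_mul_bitwise(x, 2)  # 2 generates the multiplicative group here
for i in range(255, 512):
    EXP[i] = EXP[i - 255]     # doubled table avoids a modulo in the hot path

def gf_mul_table(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

# The fast path must agree with the reference implementation everywhere.
assert all(gf_mul_table(a, b) == gf_mul_bitwise(a, b)
           for a in range(256) for b in range(256))
print("tables verified")
```

The exhaustive check at the end reflects a practical rule for this kind of optimization work: every fast path is validated against the slow reference before it replaces it.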
Optimal Synchronizability of Bearings
NASA Astrophysics Data System (ADS)
Araújo, N. A. M.; Seybold, H.; Baram, R. M.; Herrmann, H. J.; Andrade, J. S., Jr.
2013-02-01
Bearings are mechanical dissipative systems that, when perturbed, relax toward a synchronized (bearing) state. Here we find that bearings can be perceived as physical realizations of complex networks of oscillators with asymmetrically weighted couplings. Accordingly, these networks can exhibit optimal synchronization properties through fine-tuning of the local interaction strength as a function of node degree [Motter, Zhou, and Kurths, Phys. Rev. E 71, 016116 (2005)]. We show that, in analogy, the synchronizability of bearings can be maximized by counterbalancing the number of contacts and the inertia of their constituting rotor disks through the mass-radius relation, m ~ r^α, with an optimal exponent α = α× which converges to unity for a large number of rotors. Under this condition, and regardless of the presence of a long-tailed distribution of disk radii composing the mechanical system, the average participation per disk is maximized and the energy dissipation rate is homogeneously distributed among elementary rotors.
Polynomial optimization techniques for activity scheduling. Optimization based prototype scheduler
NASA Technical Reports Server (NTRS)
Reddy, Surender
1991-01-01
Polynomial optimization techniques for activity scheduling (an optimization-based prototype scheduler) are presented in the form of viewgraphs. The following subject areas are covered: agenda; need and viability of polynomial-time techniques for SNC (Space Network Control); an intrinsic characteristic of the SN scheduling problem; expected characteristics of the schedule; the optimization-based scheduling approach; single-resource algorithms; decomposition of multiple-resource problems; prototype capabilities, characteristics, and test results; computational characteristics; some features of the prototyped algorithms; and some related GSFC references.
Particle swarm optimization for complex nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Alexandridis, Alex; Famelis, Ioannis Th.; Tsitouras, Charalambos
2016-06-01
This work presents the application of a technique belonging to evolutionary computation, namely particle swarm optimization (PSO), to complex nonlinear optimization problems. To be more specific, a PSO optimizer is set up and applied to the derivation of Runge-Kutta pairs for the numerical solution of initial value problems. The effect of critical PSO operational parameters on the performance of the proposed scheme is thoroughly investigated.
Ultimate open pit stochastic optimization
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Caron, Josiane
2013-02-01
Classical open pit optimization (the maximum closure problem) is performed on block estimates, without directly considering the uncertainty of the block grades. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach, and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than with the classical or simulated pit. The main factor controlling the relative gain of the stochastic optimization over the classical approach and the simulated pit is shown to be the information level, as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase with both the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
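The core distinction between optimizing on expected grades versus expected profits can be illustrated with a toy calculation. Because the profit of a block is a nonlinear (convex, cutoff-type) function of its grade, the profit of the expected grade understates the expected profit taken over conditional simulations. All of the numbers below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-block economics (illustrative, not from the paper)
price, mining_cost, treatment_cost = 50.0, 10.0, 25.0

def profit(grade):
    """Profit of mining one block: treat it as ore only if that pays."""
    ore_value = price * grade - treatment_cost - mining_cost
    waste_value = -mining_cost
    # max(ore, waste) is a convex, nonlinear function of grade
    return np.maximum(ore_value, waste_value)

# Conditional simulations of the block grade (stand-ins for geostatistical sims)
grades = rng.lognormal(mean=-0.5, sigma=0.8, size=10_000)

classical = profit(grades.mean())    # profit of the expected grade
stochastic = profit(grades).mean()   # expected profit over the simulations

# Jensen's inequality for convex profit: E[profit(g)] >= profit(E[g]),
# so the stochastic valuation is never smaller than the classical one.
print(classical, stochastic)
```

The gap between the two numbers is exactly the effect the abstract describes: the stochastic pit is computed on `profit(grades).mean()` per block, the classical pit on `profit(grades.mean())`.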
Richard, Morgiane; Fryett, Matthew; Miller, Samantha; Booth, Ian; Grebogi, Celso; Moura, Alessandro
2012-01-01
DNA within cells is subject to damage from various sources. Organisms have evolved a number of mechanisms to repair DNA damage. The activity of repair enzymes carries its own risk, however, because the repair of two nearby lesions may lead to the breakup of DNA and result in cell death. We propose a mathematical theory of the damage and repair process in the important scenario where lesions are caused in bursts. We use this model to show that there is an optimum level of repair enzymes within cells which optimises the cell's response to damage. This optimal level is explained as the best trade-off between fast repair and a low probability of causing double-stranded breaks. We derive our results analytically and test them using stochastic simulations, and compare our predictions with current biological knowledge. PMID:21945337
Optimality in Data Assimilation
NASA Astrophysics Data System (ADS)
Nearing, Grey; Yatheendradas, Soni
2016-04-01
It costs a lot more to develop and launch an earth-observing satellite than it does to build a data assimilation system. As such, we propose that it is important to understand the efficiency of our assimilation algorithms at extracting information from remote sensing retrievals. To address this, we propose that it is necessary to adopt a completely general definition of "optimality" that explicitly acknowledges all differences between the parametric constraints of our assimilation algorithm (e.g., Gaussianity, partial linearity, Markovian updates) and the true nature of the environmental system and observing system. In fact, it is not only possible, but incredibly straightforward, to measure the optimality (in this more general sense) of any data assimilation algorithm as applied to any intended model or natural system. We measure the information content of remote sensing data conditional on the fact that we are already running a model and then measure the actual information extracted by data assimilation. The ratio of the two is an efficiency metric, and optimality is defined as occurring when the data assimilation algorithm is perfectly efficient at extracting information from the retrievals. We measure the information content of the remote sensing data in a way that, unlike triple collocation, does not rely on any a priori presumed relationship (e.g., linear) between the retrieval and the ground truth but, like triple collocation, is insensitive to the spatial mismatch between point-based measurements and grid-scale retrievals. This theory and method are therefore suitable for use with both dense and sparse validation networks. Additionally, the method we propose is *constructive* in the sense that it provides guidance on how to improve data assimilation systems. All data assimilation strategies can be reduced to approximations of Bayes' law, and we measure the fractions of total information loss that are due to individual assumptions or approximations in the
Cyclone performance and optimization
Leith, D.
1989-06-15
The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using the revised performance theory. This work is important because its successful completion will aid in the technology for combustion of coal in pressurized, fluidized beds. We have now received all the equipment necessary for the flow visualization studies described over the last two progress reports. We have begun more detailed studies of the gas flow pattern within cyclones, as detailed below. We have also begun studies of the effect of particle concentration on cyclone performance. This work is critical to the application of our results to commercial operations.
DENSE MEDIA CYCLONE OPTIMIZATION
Gerald H. Luttrell
2002-01-14
During the past quarter, float-sink analyses were completed for four of seven circuits evaluated in this project. According to the commercial laboratory, the analyses for the remaining three sites will be finished by mid February 2002. In addition, it was necessary to repeat several of the float-sink tests to resolve problems identified during the analysis of the experimental data. In terms of accomplishments, a website is being prepared to distribute project findings and software to the public. This site will include (i) an operator's manual for HMC operation and maintenance (already available in hard copy), (ii) an expert system software package for evaluating and optimizing HMC performance (in development), and (iii) a spreadsheet-based process model for plant designers (in development). Several technology transfer activities were also carried out, including the publication of project results in proceedings and the training of plant operators via workshops.
Desalination Plant Optimization
Wilson, J. V.
1992-10-01
MSF21 and VTE21 perform design and costing calculations for multistage flash evaporator (MSF) and multieffect vertical tube evaporator (VTE) desalination plants. An optimization capability is available, if desired. The MSF plant consists of a recovery section, reject section, brine heater, and associated buildings and equipment. Operating costs and direct and indirect capital costs for plant, buildings, site, and intakes are calculated. Computations are based on the first and last stages of each section and a typical middle recovery stage. As a result, the program runs rapidly but does not give stage by stage parameters. The VTE plant consists of vertical tube effects, multistage flash preheater, condenser, and brine heater and associated buildings and equipment. Design computations are done for each vertical tube effect, but preheater computations are based on the first and last stages and a typical middle stage.
Optimized nanoporous materials.
Braun, Paul V.; Langham, Mary Elizabeth; Jacobs, Benjamin W.; Ong, Markus D.; Narayan, Roger J.; Pierson, Bonnie E.; Gittard, Shaun D.; Robinson, David B.; Ham, Sung-Kyoung; Chae, Weon-Sik; Gough, Dara V.; Wu, Chung-An Max; Ha, Cindy M.; Tran, Kim L.
2009-09-01
Nanoporous materials have maximum practical surface areas for electrical charge storage; every point in an electrode is within a few atoms of an interface at which charge can be stored. Metal-electrolyte interfaces make best use of surface area in porous materials. However, ion transport through long, narrow pores is slow. We seek to understand and optimize the tradeoff between capacity and transport. Modeling and measurements of nanoporous gold electrodes have allowed us to determine design principles, including the fact that these materials can deplete salt from the electrolyte, increasing resistance. We have developed fabrication techniques to demonstrate architectures inspired by these principles that may overcome identified obstacles. A key concept is that electrodes should be as close together as possible; this is likely to involve an interpenetrating pore structure. However, this may prove extremely challenging to fabricate at the finest scales; a hierarchically porous structure can be a worthy compromise.
DENSE MEDIA CYCLONE OPTIMIZATION
Gerald H. Luttrell
2002-04-11
The test data obtained from the Baseline Assessment that compares the performance of the density traces to that of different sizes of coal particles is now complete. The experimental results show that the tracer data can indeed be used to accurately predict HMC performance. The following conclusions were drawn: (i) the tracer curve is slightly sharper than the curve for the coarsest size fraction of coal (probably due to the greater resolution of the tracer technique), (ii) the Ep increases with decreasing coal particle size, and (iii) the Ep values are not excessively large for the well-maintained HMC circuits. The major problems discovered were associated with improper apex-to-vortex finder ratios and particle hang-up due to media segregation. Only one plant yielded test data that were typical of a fully optimized level of performance.
Gogol, Manfred
2015-08-01
Stress is a stimulus or incident which has an exogenic or endogenic influence on an organism and leads to a biological and/or psychological adaptation by the organism. Stressors can be differentiated by their temporal impact (e.g. acute, chronic or acute on chronic), strength and quality. The consequences of stress exposure and adaptation can be measured at the cellular level and as (sub)clinical manifestations, where this process can be biologically seen as a continuum. Over the course of life there is an accumulation of stress incidents resulting in a diminution of the capability for adaptation and repair mechanisms. By means of various interventions it is possible to improve the individual capability for adaptation, but it is not currently possible to definitively disentangle alterations due to ageing from the development of diseases. As a consequence the term "healthy ageing" should be replaced by the concept of "optimal ageing". PMID:26208575
Optimal Foraging by Zooplankton
NASA Astrophysics Data System (ADS)
Garcia, Ricardo; Moss, Frank
2007-03-01
We describe experiments with several species of the zooplankton, Daphnia, while foraging for food. They move in sequences: hop-pause-turn-hop etc. While we have recorded hop lengths, hop times, pause times and turning angles, our focus is on histograms representing the distributions of the turning angles. We find that different species, including adults and juveniles, move with similar turning angle distributions described by exponential functions. Random walk simulations and a theory based on active Brownian particles indicate a maximum in food gathering efficiency at an optimal width of the turning angle distribution. Foraging takes place within a fixed size food patch during a fixed time. We hypothesize that the exponential distributions were selected for survival over evolutionary time scales.
NASA Astrophysics Data System (ADS)
Rebilas, Krzysztof
2013-02-01
Consider a skier who goes down a takeoff ramp, attains a speed V, and jumps, attempting to land as far as possible down the hill below (Fig. 1). At the moment of takeoff the angle between the skier's velocity and the horizontal is α. What is the optimal angle α that makes the jump the longest possible for the fixed magnitude of the velocity V? Of course, in practice, this is a very sophisticated problem; the skier's range depends on a variety of complex factors in addition to V and α. However, if we ignore these and assume the jumper is in free fall between the takeoff ramp and the landing point below, the problem becomes an exercise in kinematics that is suitable for introductory-level students. The solution is presented here.
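Under the idealization the abstract describes (free fall onto a straight landing slope inclined at an angle β below the horizontal), the landing distance can be maximized over the takeoff angle numerically and compared with the classical closed-form optimum α = 45° − β/2. The speed and slope angle below are illustrative assumptions, not values from the article:

```python
import numpy as np

g = 9.81                   # m/s^2
V = 25.0                   # takeoff speed, m/s (illustrative)
beta = np.radians(35.0)    # slope angle below the horizontal (assumed)

def slope_range(alpha):
    """Distance along a straight slope for a projectile launched at angle alpha.

    Standard kinematics result for range on a plane inclined at -beta.
    """
    return 2 * V**2 * np.cos(alpha) * np.sin(alpha + beta) / (g * np.cos(beta)**2)

# Numeric scan over takeoff angles between 0 and 90 degrees
alphas = np.radians(np.linspace(0.0, 90.0, 90001))
alpha_num = alphas[np.argmax(slope_range(alphas))]

# Analytic optimum: alpha* = 45 deg - beta/2, independent of V
alpha_ana = np.pi / 4 - beta / 2

print(np.degrees(alpha_num), np.degrees(alpha_ana))
```

Setting the derivative of cos α · sin(α + β) to zero gives cos(2α + β) = 0, hence α = 45° − β/2; the numeric scan recovers the same angle.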
Public optimism towards nanomedicine
Bottini, Massimo; Rosato, Nicola; Gloria, Fulvia; Adanti, Sara; Corradino, Nunziella; Bergamaschi, Antonio; Magrini, Andrea
2011-01-01
Background Previous benefit–risk perception studies and social experiences have clearly demonstrated that any emerging technology platform that ignores benefit–risk perception by citizens might jeopardize its public acceptability and further development. The aim of this survey was to investigate the Italian judgment on nanotechnology and which demographic and heuristic variables were most influential in shaping public perceptions of the benefits and risks of nanotechnology. Methods In this regard, we investigated the role of four demographic (age, gender, education, and religion) and one heuristic (knowledge) predisposing factors. Results The present study shows that gender, education, and knowledge (but not age and religion) influenced the Italian perception of how nanotechnology will (positively or negatively) affect some areas of everyday life in the next twenty years. Furthermore, the picture that emerged from our study is that Italian citizens, despite minimal familiarity with nanotechnology, showed optimism towards nanotechnology applications, especially those related to health and medicine (nanomedicine). The high regard for nanomedicine was tied to the perception of risks associated with environmental and societal implications (division among social classes and increased public expenses) rather than health issues. However, more highly educated people showed greater concern for health issues but this did not decrease their strong belief about the benefits that nanotechnology would bring to medical fields. Conclusion The results reported here suggest that public optimism towards nanomedicine appears to justify increased scientific effort and funding for medical applications of nanotechnology. It also obligates toxicologists, politicians, journalists, entrepreneurs, and policymakers to establish a more responsible dialog with citizens regarding the nature and implications of this emerging technology platform. PMID:22267931
Optimal packings of superballs
NASA Astrophysics Data System (ADS)
Jiao, Y.; Stillinger, F. H.; Torquato, S.
2009-04-01
Dense hard-particle packings are intimately related to the structure of low-temperature phases of matter and are useful models of heterogeneous materials and granular media. Most studies of the densest packings in three dimensions have considered spherical shapes, and it is only more recently that nonspherical shapes (e.g., ellipsoids) have been investigated. Superballs (whose shapes are defined by |x1|^{2p} + |x2|^{2p} + |x3|^{2p} ≤ 1) provide a versatile family of convex particles (p ≥ 0.5) with both cubic-like and octahedral-like shapes, as well as concave particles (0 < p < 0.5). [...] optimal ones. The maximal packing density as a function of p is nonanalytic at the sphere point (p = 1) and increases dramatically as p moves away from unity. Two more nontrivial nonanalytic behaviors occur at pc* = 1.1509... and po* = ln3/ln4 = 0.7924... for "cubic" and "octahedral" superballs, respectively, where different Bravais lattice packings possess the same densities. The packing characteristics determined by the broken rotational symmetry of superballs are similar to but richer than their two-dimensional "superdisk" counterparts [Y. Jiao et al., Phys. Rev. Lett. 100, 245504 (2008)] and are distinctly different from those of ellipsoid packings. Our candidate optimal superball packings provide a starting point to quantify the equilibrium phase behavior of superball systems, which should deepen our understanding of the statistical thermodynamics of nonspherical-particle systems.
OPTIMAL NETWORK TOPOLOGY DESIGN
NASA Technical Reports Server (NTRS)
Yuen, J. H.
1994-01-01
This program was developed as part of a research study on the topology design and performance analysis for the Space Station Information System (SSIS) network. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. It is intended that this new design technique consider all important performance measures explicitly and take into account the constraints due to various technical feasibilities. In the current program, technical constraints are taken care of by the user properly forming the starting set of candidate components (e.g. nonfeasible links are not included). As subsets are generated, they are tested to see if they form an acceptable network by checking that all requirements are satisfied. Thus the first acceptable subset encountered gives the cost-optimal topology satisfying all given constraints. The user must sort the set of "feasible" link elements in increasing order of their costs. The program prompts the user for the following information for each link: 1) cost, 2) connectivity (number of stations connected by the link), and 3) the stations connected by that link. Unless instructed to stop, the program generates all possible acceptable networks in increasing order of their total costs. The program is written only to generate topologies that are simply connected. Tests on reliability, delay, and other performance measures are discussed in the documentation, but have not been incorporated into the program. This program is written in PASCAL for interactive execution and has been implemented on an IBM PC series computer operating under PC DOS. The disk contains source code only. This program was developed in 1985.
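A minimal sketch of the enumerate-and-test idea (not the original PASCAL program) might look as follows: subsets of candidate links are visited in increasing order of total cost, and the first subset that connects all stations is, by construction, the cost-optimal topology. The link list, costs, and station numbering are made up for illustration:

```python
from itertools import chain, combinations

# Hypothetical candidate links: (cost, station_a, station_b), stations 0..3
links = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3), (5, 1, 3)]

def connected(n_stations, chosen):
    """Union-find check that the chosen links connect all stations."""
    parent = list(range(n_stations))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for _, a, b in chosen:
        parent[find(a)] = find(b)
    return len({find(s) for s in range(n_stations)}) == 1

def cheapest_topology(links, n_stations):
    # Enumerate every nonempty subset of links, visit them in increasing
    # total cost, and return the first subset forming a connected network.
    subsets = chain.from_iterable(
        combinations(links, k) for k in range(1, len(links) + 1))
    for design in sorted(subsets, key=lambda d: sum(c for c, _, _ in d)):
        if connected(n_stations, design):
            return design
    return None

best = cheapest_topology(links, 4)
print(best)  # a spanning set of links with minimal total cost (7 here)
```

This brute-force enumeration is exponential in the number of links, which matches the abstract's caveat that the technique suits networks with many constraints but not too many components.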
Optimal inverse functions created via population-based optimization.
Jennings, Alan L; Ordóñez, Raúl
2014-06-01
Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem. PMID:24235281
Four-body trajectory optimization
NASA Technical Reports Server (NTRS)
Pu, C. L.; Edelbaum, T. N.
1974-01-01
A comprehensive optimization program has been developed for computing fuel-optimal trajectories between the earth and a point in the sun-earth-moon system. It presents methods for generating fuel-optimal two-impulse trajectories which may originate at the earth or a point in space, and fuel-optimal three-impulse trajectories between two points in space. The extrapolation of the state vector and the computation of the state transition matrix are accomplished by the Stumpff-Weiss method. The cost and constraint gradients are computed analytically in terms of the terminal state and the state transition matrix. The 4-body Lambert problem is solved by using the Newton-Raphson method. An accelerated gradient projection method is used to optimize a 2-impulse trajectory with a terminal constraint. Davidon's variance method is used both in the accelerated gradient projection method and in the outer loop of a 3-impulse trajectory optimization problem.
The Structural Optimization of Trees
NASA Astrophysics Data System (ADS)
Mattheck, C.; Bethge, K.
1998-01-01
Optimization methods are presented for engineering design based on the axiom of uniform stress. The principle of adaptive growth which biological structures use to minimize stress concentrations has been incorporated into a computer-aided optimization (CAO) procedure. Computer-aided optimization offers the advantage of three-dimensional optimization for the purpose of designing more fatigue-resistant components without mathematical sophistication. Another method, called computer-aided internal optimization (CAIO), optimizes the performance of fiber-composite materials by aligning the fiber distribution with the force flow, again mimicking the structure of trees. The lines of force flow, so-called principal stress trajectories, are not subject to shear stresses. Avoiding shear stresses in the technical components can lead to an increase in maximum load capacity. By the use of a new testing device strength distributions in trees can be determined and explained based on a new mechanical wood model.
Optimal management strategies in variable environments: Stochastic optimal control methods
Williams, B.K.
1985-01-01
Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both
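The optimization model described, a finite-state, finite-action, infinite-horizon discounted Markov decision process, is commonly solved by successive approximation (value iteration). The sketch below uses made-up transition and yield data, not the shrub production model itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy problem: 4 plant-vigor states x 3 defoliation levels
n_states, n_actions = 4, 3
# P[a, s, t]: stochastic transitions (climatic variability); R[s, a]: yield
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))
gamma = 0.95  # discount factor on future biomass yields

def value_iteration(P, R, gamma, tol=1e-10):
    """Solve the infinite-horizon discounted MDP by successive approximation."""
    V = np.zeros(P.shape[1])
    while True:
        # Q[s, a] = immediate yield + discounted expected future value
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)  # values, optimal defoliation policy
        V = V_new

V, policy = value_iteration(P, R, gamma)
print(policy)  # one optimal defoliation action per vigor state
```

Because the Bellman update is a γ-contraction, the iteration converges to the unique optimal value function regardless of the starting guess, which is what makes the finite-state, finite-action formulation in the abstract tractable.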
GAPS IN SUPPORT VECTOR OPTIMIZATION
STEINWART, INGO; HUSH, DON; SCOVEL, CLINT; LIST, NICOLAS
2007-01-29
We show that the stopping criteria used in many support vector machine (SVM) algorithms working on the dual can be interpreted as primal optimality bounds which in turn are known to be important for the statistical analysis of SVMs. To this end we revisit the duality theory underlying the derivation of the dual and show that in many interesting cases primal optimality bounds are the same as known dual optimality bounds.
Optimal Reconfiguration of Tetrahedral Formations
NASA Technical Reports Server (NTRS)
Huntington, Geoffrey; Rao, Anil V.; Hughes, Steven P.
2004-01-01
The problem of minimum-fuel formation reconfiguration for the Magnetospheric Multi-Scale (MMS) mission is studied. This reconfiguration trajectory optimization problem can be posed as a nonlinear optimal control problem. In this research, this optimal control problem is solved using a spectral collocation method called the Gauss pseudospectral method. The objective of this research is to provide highly accurate minimum-fuel solutions to the MMS formation reconfiguration problem and to gain insight into the underlying structure of fuel-optimal trajectories.
Structural Optimization in automotive design
NASA Technical Reports Server (NTRS)
Bennett, J. A.; Botkin, M. E.
1984-01-01
Although mathematical structural optimization has been an active research area for twenty years, there has been relatively little penetration into the design process. Experience indicates that often this is due to the traditional layout-analysis design process. In many cases, optimization efforts have been outgrowths of analysis groups which are themselves appendages to the traditional design process. As a result, optimization is often introduced into the design process too late to have a significant effect because many potential design variables have already been fixed. A series of examples are given to indicate how structural optimization has been effectively integrated into the design process.
Structural optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.
1983-01-01
A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers to work concurrently on the same large problem.
Stochastic Optimization of Complex Systems
Birge, John R.
2014-03-20
This project focused on methodologies for the solution of stochastic optimization problems based on relaxation and penalty methods, Monte Carlo simulation, parallel processing, and inverse optimization. The main results of the project were the development of a convergent method for the solution of models that include expectation constraints as in equilibrium models, improvement of Monte Carlo convergence through the use of a new method of sample batch optimization, the development of new parallel processing methods for stochastic unit commitment models, and the development of improved methods in combination with parallel processing for incorporating automatic differentiation methods into optimization.
NASA Astrophysics Data System (ADS)
Inanloo, B.
2011-12-01
The Caspian Sea is considered to be the largest inland body of water in the world, located between the Caucasus Mountains and Central Asia. It has been the source of contentious international conflict among the five littoral states that now border the sea: Azerbaijan, Iran, Kazakhstan, Russia, and Turkmenistan. The conflict over the legal status of this international body of water arose in the aftermath of the breakup of the Soviet Union in 1991. Since then the parties have been negotiating without reaching agreement either on the ownership of the waters or on the oil and natural gas beneath them. The number of stakeholders involved, the unusual characteristics of the Caspian Sea in considering it as a lake or a sea, and the large number of external parties interested in the valuable resources of the Sea have made this conflict complex and unique. This paper applies methods to find the best allocation schemes, considering the acceptability and stability of the selected solution, to share the Caspian Sea and its resources fairly and efficiently. Although there are several allocation methods for solving such problems, most of them seek a socially optimal solution that satisfies the majority of criteria or decision makers, while in practice, especially in multi-nation problems, such a solution may not be stable or acceptable to all parties. Hence, there is a need for a method that considers the stability and acceptability of solutions in order to find a solution with a high chance of being agreed upon. Application of some distance-based methods to the Caspian Sea conflict provides policy insights useful for finding solutions that can resolve the dispute. In this study, we use methods such as Goal Programming and Compromise Programming, and, to account for the stability of solutions, the logic of the Power Index is used to find a division rule that is stable among negotiators. The results of this study show that the
RNA based evolutionary optimization.
Schuster, P
1993-12-01
. Evolutionary optimization of two-letter sequences is thus more difficult than optimization in the world of natural RNA sequences with four bases. This fact might explain the usage of four bases in the genetic language of nature. Finally we study the mapping from RNA sequences into secondary structures and explore the topology of RNA shape space. We find that 'neutral paths' connecting neighbouring sequences with identical structures go very frequently through entire sequence space. Sequences folding into common structures are found everywhere in sequence space. (ABSTRACT TRUNCATED AT 400 WORDS)
Acoustic Radiation Optimization Using the Particle Swarm Optimization Algorithm
NASA Astrophysics Data System (ADS)
Jeon, Jin-Young; Okuma, Masaaki
The present paper describes a fundamental study on structural bending design to reduce noise using a new evolutionary population-based heuristic algorithm called the particle swarm optimization algorithm (PSOA). The particle swarm optimization algorithm is a parallel evolutionary computation technique proposed by Kennedy and Eberhart in 1995. This algorithm is based on social behavior models of bird flocking, fish schooling, and other phenomena investigated by zoologists. Optimal structural design problems to reduce noise are highly nonlinear, so most conventional methods are difficult to apply. The present paper investigates the applicability of PSOA to such problems. Optimal bending design of a vibrating plate using PSOA is performed in order to minimize noise radiation. PSOA can be effectively applied to such nonlinear acoustic radiation optimization.
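As an illustration of the update rule such population-based searches use, here is a minimal global-best PSO sketch in Python (the objective function, dimension, and coefficient values are illustrative assumptions, not the structural design problem of the paper):

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize f over a box with a basic global-best particle swarm."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best-known position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best-known position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:                # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:               # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)
best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=4)  # 4-D sphere test function
```

In an acoustic application the objective f would be a noise-radiation measure evaluated by a structural model, with each particle encoding candidate plate design variables.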
Optimized System Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Longman, Richard W.
1999-01-01
In system identification, one usually cares most about finding a model whose outputs are as close as possible to the true system outputs when the same input is applied to both. However, most system identification algorithms do not minimize this output error. Often they minimize model equation error instead, as in typical least-squares fits using a finite-difference model, and it is seen here that this distinction is significant. Here, we develop a set of system identification algorithms that minimize output error for multi-input/multi-output and multi-input/single-output systems. This is done with sequential quadratic programming iterations on the nonlinear least-squares problems, with an eigendecomposition to handle indefinite second partials. This optimization minimizes a nonlinear function of many variables, and hence can converge to local minima. To handle this problem, we start the iterations from the OKID (Observer/Kalman Identification) algorithm result. Not only has OKID proved very effective in practice, but it also minimizes the output error of an observer, which has the property that as the data set gets large, it converges to minimizing the criterion of interest here. Hence, it is a particularly good starting point for the nonlinear iterations. Examples show that the methods developed here eliminate the bias, often observed with other system identification methods, of either over-estimating or under-estimating the damping of vibration modes in lightly damped structures.
Sweeping Jet Optimization Studies
NASA Technical Reports Server (NTRS)
Melton, LaTunia Pack; Koklu, Mehti; Andino, Marlyn; Lin, John C.; Edelman, Louis
2016-01-01
Progress on experimental efforts to optimize sweeping jet actuators for active flow control (AFC) applications with large adverse pressure gradients is reported. Three sweeping jet actuator configurations, with the same orifice size but different internal geometries, were installed on the flap shoulder of an unswept, NACA 0015 semi-span wing to investigate how the output produced by a sweeping jet interacts with the separated flow and the mechanisms by which the flow separation is controlled. For this experiment, the flow separation was generated by deflecting the wing's 30% chord trailing edge flap to produce an adverse pressure gradient. Steady and unsteady pressure data, Particle Image Velocimetry data, and force and moment data were acquired to assess the performance of the three actuator configurations. The actuator with the largest jet deflection angle, at the pressure ratios investigated, was the most efficient at controlling flow separation on the flap of the model. Oil flow visualization studies revealed that the flow field controlled by the sweeping jets was more three-dimensional than expected. The results presented also show that the actuator spacing was appropriate for the pressure ratios examined.
Optimal Phase Oscillatory Network
NASA Astrophysics Data System (ADS)
Follmann, Rosangela
2013-03-01
Important topics such as preventive detection of epidemics, collective self-organization, information flow, and systemic robustness in clusters are typical examples of processes that can be studied in the context of the theory of complex networks. This emerging theory, which has recently attracted much interest, involves the synchronization of dynamical systems associated with the nodes, or vertices, of a network. Studies have shown that synchronization in oscillatory networks depends not only on the individual dynamics of each element, but also on the combination of the topology of the connections as well as on the properties of the interactions of these elements. Moreover, the response of the network to small damage, caused at strategic points, can enhance the global performance of the whole network. In this presentation we explore an optimal phase oscillatory network altered by an additional term in the coupling function. The application to an associative-memory network shows improvement in correct information retrieval as well as an increase in storage capacity. The inclusion of small deviations on the nodes, when solutions are attracted to a false state, results in additional enhancement of the performance of the associative-memory network. Supported by FAPESP - Sao Paulo Research Foundation, grant number 2012/12555-4
Cyclone performance and optimization
Leith, D.
1989-03-15
The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using the revised performance theory. This work is important because its successful completion will aid the technology for combustion of coal in pressurized, fluidized beds. This quarter, we were hampered somewhat by slow delivery of the bubble generation system and arc lighting system placed on order last fall. This equipment is necessary to map the flow field within cyclones using the techniques described in last quarter's report. Using the bubble generator, we completed this quarter a study of the "natural length" of cyclones of 18 different configurations, each configuration operated at five different gas flows. Results suggest that the equation by Alexander for natural length is incorrect; natural length as measured with the bubble generation system is always below the bottom of the cyclones regardless of the cyclone configuration or gas flow, within the limits of the experimental cyclones tested. This finding is important because natural length is a term in equations used to predict cyclone efficiency. 1 tab.
Powers, Tom
2013-09-01
This work describes preliminary results of a new software tool that allows one to vary parameters and understand the effects on the optimized costs of construction plus 10-year operations of an SRF linac, the associated cryogenic facility, and controls, where operations includes the cost of the electrical utilities but not the labor or other costs. It derives from collaborative work done several years ago with staff from the Accelerator Science and Technology Centre, Daresbury, UK, while they were in the process of developing a conceptual design for the New Light Source project.[1] The initial goal was to convert a spreadsheet format to a graphical interface with the ability to sweep different parameter sets. The tools also allow one to compare the costs of the different facets of the machine design and operations so as to better understand the tradeoffs. The work was first published in an ICFA Beam Dynamics Newsletter.[2] More recent additions to the software include the ability to save and restore input parameters as well as to adjust the Qo versus E parameters in order to explore the potential cost savings associated with doing so. Additionally, program changes now allow one to model the costs associated with a linac that makes use of an energy recovery mode of operation.
Induction technology optimization code
Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.
1992-08-21
A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. The Induction Technology Optimization Study (ITOS) was undertaken to examine viable combinations of a linear induction accelerator and a relativistic klystron (RK) for high power microwave production. It is proposed that microwaves from the RK will power a high-gradient accelerator structure for linear collider development. Previous work indicates that the RK will require a nominal 3-MeV, 3-kA electron beam with a 100-ns flat top. The proposed accelerator-RK combination will be a high average power system capable of sustained microwave output at a 300-Hz pulse repetition frequency. The ITOS code models many combinations of injector, accelerator, and pulse power designs that will supply an RK with the beam parameters described above.
Naclerio, R M
1998-12-01
Full and accurate diagnosis of allergic rhinitis is important as a basis for treatment decisions, as many nasal disorders have similar signs and symptoms. Optimal allergen avoidance is the starting point of treatment, so causative allergens need to be identified. Oral antihistamines are effective in relieving the majority of symptoms of allergic rhinitis and allergic conjunctivitis, but provide only partial relief from nasal congestion. Topical alpha-adrenergic decongestants help to relieve congestion, but prolonged use leads to rhinitis medicamentosa. Systemic decongestants are less effective than topical agents and their use is limited by systemic and central side-effects. The value of leukotriene antagonists has yet to be fully evaluated. Intranasal ipratropium bromide helps to control watery secretions, and an aerosol may be more effective than an aqueous solution. Topical glucocorticosteroids, such as triamcinolone, are the most potent and effective agents available for treating allergic rhinitis. The available evidence indicates that there is very little systemic absorption. Sodium cromoglycate is effective in allergic rhinitis, though less so than topical steroids, and has the least adverse effects among the antiallergic agents. Immunotherapy can be effective and may be indicated in individuals who cannot avoid the causative allergen. Special considerations apply to the treatment of allergic rhinitis in elderly or pregnant patients. Finally, patients with long-standing allergic conditions should be re-assessed regularly.
Boiler modeling optimizes sootblowing
Piboontum, S.J.; Swift, S.M.; Conrad, R.S.
2005-10-01
Controlling the cleanliness and limiting the fouling and slagging of heat transfer surfaces are absolutely necessary to optimize boiler performance. The traditional way to clean heat-transfer surfaces is by sootblowing using air, steam, or water at regular intervals. But with the advent of fuel-switching strategies, such as switching to PRB coal to reduce a plant's emissions, the control of heating surface cleanliness has become more problematic for many owners of steam generators. Boiler modeling can help solve that problem. The article describes Babcock & Wilcox's Powerclean modeling system which consists of heating surface models that produce real-time cleanliness indexes. The Heat Transfer Manager (HTM) program is the core of the system, which can be used on any make or model of boiler. A case study is described to show how the system was successfully used at the 1,350 MW Unit 2 of the American Electric Power's Rockport Power Plant in Indiana. The unit fires a blend of eastern bituminous and Powder River Basin coal. 5 figs.
Industrial cogeneration optimization program
Not Available
1980-01-01
The purpose of this program was to identify up to 10 good near-term opportunities for cogeneration in 5 major energy-consuming industries which produce food, textiles, paper, chemicals, and refined petroleum; select, characterize, and optimize cogeneration systems for these identified opportunities to achieve maximum energy savings for minimum investment using currently available components of cogenerating systems; and to identify technical, institutional, and regulatory obstacles hindering the use of industrial cogeneration systems. The analysis methods used and results obtained are described. Plants with fuel demands from 100,000 Btu/h to 3 x 10^6 Btu/h were considered. It was concluded that the major impediments to industrial cogeneration are financial, e.g., high capital investment and high charges by electric utilities during short-term cogeneration facility outages. In the plants considered an average energy savings from cogeneration of 15 to 18% compared to separate generation of process steam and electric power was calculated. On a national basis for the 5 industries considered, this extrapolates to saving 1.3 to 1.6 quads per yr or between 630,000 to 750,000 bbl/d of oil. Properly applied, federal activity can do much to realize a substantial fraction of this potential by lowering the barriers to cogeneration and by stimulating wider implementation of this technology. (LCL)
Optimizing management of glycaemia.
Chatterjee, Sudesna; Khunti, Kamlesh; Davies, Melanie J
2016-06-01
The global epidemic of type 2 diabetes (T2DM) continues largely unabated due to an increasingly sedentary lifestyle and obesogenic environment. A cost-effective patient-centred approach, incorporating glucose-lowering therapy and modification of cardiovascular risk factors, could help prevent the inevitable development and progression of macrovascular and microvascular complications. Glycaemic optimization requires patient structured education, self-management and empowerment, and psychological support along with early and proactive use of glucose lowering therapies, which should be delivered in a system of care as shown by the Chronic Care Model. From diagnosis, intensive glycaemic control and individualised care is aimed at reducing complications. In older people, the goal is maintaining quality of life and minimizing morbidity, especially as overtreatment increases hypoglycaemia risk. Maintaining durable glycaemic control is challenging and complex to achieve without hypoglycaemia, weight gain and other significant adverse effects. Overcoming patient and physician barriers can help ensure adequate treatment initiation and intensification. Cardiovascular safety studies with newer glucose-lowering agents are now mandatory, with a sodium glucose co-transporter-2 inhibitor (empagliflozin), and two glucagon like peptide-1 receptor agonists (liraglutide and semaglutide) being the first to demonstrate superior CV outcomes compared with placebo. PMID:27432074
Query Evaluation: Strategies and Optimizations.
ERIC Educational Resources Information Center
Turtle, Howard; Flood, James
1995-01-01
Discusses two query evaluation strategies used in large text retrieval systems: (1) term-at-a-time; and (2) document-at-a-time. Describes optimization techniques that can reduce query evaluation costs. Presents simulation results that compare the performance of these optimization techniques when applied to natural language query evaluation. (JMV)
Aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Murman, E. M.; Chapman, G. T.
1983-01-01
The procedure of using numerical optimization methods coupled with computational fluid dynamic (CFD) codes for the development of an aerodynamic design is examined. Several approaches that replace wind tunnel tests, develop pressure distributions and derive designs, or fulfill preset design criteria are presented. The method of Aerodynamic Design by Numerical Optimization (ADNO) is described and illustrated with examples.
Supply-Chain Optimization Template
NASA Technical Reports Server (NTRS)
Quiett, William F.; Sealing, Scott L.
2009-01-01
The Supply-Chain Optimization Template (SCOT) is an instructional guide for identifying, evaluating, and optimizing (including re-engineering) aerospace- oriented supply chains. The SCOT was derived from the Supply Chain Council s Supply-Chain Operations Reference (SCC SCOR) Model, which is more generic and more oriented toward achieving a competitive advantage in business.
Optimizing Medical Kits for Spaceflight
NASA Technical Reports Server (NTRS)
Keenan, A. B.; Foy, Millennia; Myers, G.
2014-01-01
The Integrated Medical Model (IMM) is a probabilistic model that estimates medical event occurrences and mission outcomes for different mission profiles. IMM simulation outcomes describing the impact of medical events on the mission may be used to optimize the allocation of resources in medical kits. Efficient allocation of medical resources, subject to certain mass and volume constraints, is crucial to ensuring the best outcomes of in-flight medical events. We implement a new approach to this medical kit optimization problem. METHODS We frame medical kit optimization as a modified knapsack problem and implement an algorithm utilizing a dynamic programming technique. Using this algorithm, optimized medical kits were generated for 3 different mission scenarios with the goal of minimizing the probability of evacuation and maximizing the Crew Health Index (CHI) for each mission, subject to mass and volume constraints. Simulation outcomes using these kits were also compared to outcomes using kits optimized by an earlier approach. RESULTS The optimized medical kits generated by the algorithm described here resulted in predicted mission outcomes that more closely approached the unlimited-resource scenario for Crew Health Index (CHI) than the earlier implementation under all optimization priorities. Furthermore, the approach described here improves on it in reducing evacuation probability when the optimization priority is minimizing the probability of evacuation. CONCLUSIONS This algorithm provides an efficient, effective means to objectively allocate medical resources for spaceflight missions using the Integrated Medical Model.
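The knapsack framing above can be sketched with a standard 0/1 dynamic program (item names, masses, and benefit scores below are invented placeholders; the actual IMM work uses simulation-derived outcome metrics and both mass and volume constraints):

```python
def knapsack_kit(items, mass_budget):
    """0/1 knapsack: choose the item subset maximizing total benefit
    within an integer mass budget. items: list of (name, mass, benefit)."""
    n = len(items)
    best = [0] * (mass_budget + 1)                  # best[m] = max benefit at budget m
    keep = [[False] * (mass_budget + 1) for _ in range(n)]
    for i, (_, mass, benefit) in enumerate(items):
        for m in range(mass_budget, mass - 1, -1):  # descend so each item is used once
            if best[m - mass] + benefit > best[m]:
                best[m] = best[m - mass] + benefit
                keep[i][m] = True
    chosen, m = [], mass_budget                     # trace back the chosen items
    for i in range(n - 1, -1, -1):
        if keep[i][m]:
            chosen.append(items[i][0])
            m -= items[i][1]
    return best[mass_budget], chosen[::-1]

value, kit = knapsack_kit(
    [("bandage", 2, 3), ("meds", 3, 4), ("splint", 4, 5), ("monitor", 5, 8)],
    mass_budget=9)
```

A volume constraint would add a second budget dimension to the table; the abstract's "modified" knapsack presumably also handles probabilistic outcome measures, which this sketch omits.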
Optimal Distinctiveness Signals Membership Trust.
Leonardelli, Geoffrey J; Loyd, Denise Lewin
2016-07-01
According to optimal distinctiveness theory, sufficiently small minority groups are associated with greater membership trust, even among members otherwise unknown, because the groups are seen as optimally distinctive. This article elaborates on the prediction's motivational and cognitive processes and tests whether sufficiently small minorities (defined by relative size; for example, 20%) are associated with greater membership trust relative to mere minorities (45%), and whether such trust is a function of optimal distinctiveness. Two experiments, examining observers' perceptions of minority and majority groups and using minimal groups and (in Experiment 2) a trust game, revealed greater membership trust in minorities than majorities. In Experiment 2, participants also preferred joining minorities over more powerful majorities. Both effects occurred only when minorities were 20% rather than 45%. In both studies, perceptions of optimal distinctiveness mediated effects. Discussion focuses on the value of relative size and optimal distinctiveness, and when membership trust manifests. PMID:27140657
Optimized layout generator for microgyroscope
NASA Astrophysics Data System (ADS)
Tay, Francis E.; Li, Shifeng; Logeeswaran, V. J.; Ng, David C.
2000-10-01
This paper presents an optimized out-of-plane microgyroscope layout generator using AutoCAD R14 and MS Excel as a first attempt at automating the design of resonant micro-inertial sensors. The out-of-plane microgyroscope with a two-degree-of-freedom lumped parameter model was chosen as the synthesis topology. An analytical model for open-loop operation has been derived for the gyroscope performance characteristics. Functional performance parameters such as sensitivity are ensured to be satisfied while simultaneously optimizing a design objective such as minimum area. A single algorithm optimizes the microgyroscope dimensions while simultaneously maximizing or minimizing the objective functions: maximum sensitivity and minimum area. The multi-criteria objective function and optimization methodology were implemented using the Generalized Reduced Gradient algorithm. For data conversion a DXF to GDS converter was used. The optimized theoretical design performance parameters show good agreement with finite element analysis.
Optimal dynamic detection of explosives
Moore, David Steven; Mcgrane, Shawn D; Greenfield, Margo T; Scharff, R J; Rabitz, Herschel A; Roslund, J
2009-01-01
The detection of explosives is a notoriously difficult problem, especially at stand-off distances, due to their (generally) low vapor pressure, environmental and matrix interferences, and packaging. We are exploring optimal dynamic detection to exploit the best capabilities of recent advances in laser technology and recent discoveries in optimal shaping of laser pulses for control of molecular processes to significantly enhance the standoff detection of explosives. The core of the ODD-Ex technique is the introduction of optimally shaped laser pulses to simultaneously enhance sensitivity of explosives signatures while reducing the influence of noise and the signals from background interferents in the field (increase selectivity). These goals are being addressed by operating in an optimal nonlinear fashion, typically with a single shaped laser pulse inherently containing within it coherently locked control and probe sub-pulses. With sufficient bandwidth, the technique is capable of intrinsically providing orthogonal broad spectral information for data fusion, all from a single optimal pulse.
Hansborough, L.; Hamm, R.; Stovall, J.; Swenson, D.
1980-01-01
PIGMI (Pion Generator for Medical Irradiations) is a compact linear proton accelerator design, optimized for pion production and cancer treatment use in a hospital environment. Technology developed during a four-year PIGMI Prototype experimental program allows the design of smaller, less expensive, and more reliable proton linacs. A new type of low-energy accelerating structure, the radio-frequency quadrupole (RFQ), has been tested; it produces an exceptionally good-quality beam and allows the use of a simple 30-kV injector. Average axial electric-field gradients of over 9 MV/m have been demonstrated in a drift-tube linac (DTL) structure. Experimental work is underway to test the disk-and-washer (DAW) structure, another new type of accelerating structure for use in the high-energy coupled-cavity linac (CCL). Sufficient experimental and developmental progress has been made to closely define an actual PIGMI. It will consist of a 30-kV injector, an RFQ linac to a proton energy of 2.5 MeV, a DTL linac to 125 MeV, and a CCL linac to the final energy of 650 MeV. The total length of the accelerator is 133 meters. The RFQ and DTL will be driven by a single 440-MHz klystron; the CCL will be driven by six 1320-MHz klystrons. The peak beam current is 28 mA. The beam pulse length is 60 μs at a 60-Hz repetition rate, resulting in a 100-μA average beam current. The total cost of the accelerator is estimated to be approx. $10 million.
Optimality criteria solution strategies in multiple constraint design optimization
NASA Technical Reports Server (NTRS)
Levy, R.; Parzynski, W.
1981-01-01
Procedures and solution strategies are described to solve the conventional structural optimization problem using the Lagrange multiplier technique. The multipliers, obtained through solution of an auxiliary nonlinear optimization problem, lead to optimality criteria to determine the design variables. It is shown that this procedure is essentially equivalent to an alternative formulation using a dual-method Lagrangian function objective. Although the mathematical formulations are straightforward, successful applications and computational efficiency depend upon execution procedure strategies. Strategies examined, with application examples, include selection of active constraints, move limits, line search procedures, and side constraint boundaries.
Optimizing WFIRST Coronagraph Science
NASA Astrophysics Data System (ADS)
Macintosh, Bruce
We propose an in-depth scientific investigation that will define how the WFIRST coronagraphic instrument will discover and characterize nearby planetary systems and how it will use observations of planets and disks to probe the diversity of their compositions, dynamics, and formation. Given the enormous diversity of known planetary systems, it is not enough to optimize a coronagraph mission plan for the characterization of solar system analogs. Instead, we must design a mission to characterize a wide variety of planets, from gas and ice giant planets at a range of separations to mid-sized planets with no analogs in our solar system. We must consider updated planet distributions based on the results of the Kepler mission, long-term radial velocity (RV) surveys and updated luminosity distributions of exo-zodiacal dust from interferometric thermal infrared surveys of nearby stars. The properties of all these objects must be informed by our best models of planets and disks, and the process of using WFIRST observations to measure fundamental planetary properties such as composition must derive from rigorous methods. Our team brings a great depth of expertise to inform and accomplish these and all of the other tasks enumerated in the SIT proposal call. We will perform end-to-end modeling that starts with model spectra of planets and images of disks, simulates WFIRST data using these models, accounts for geometries of specific star / planet / disk systems, and incorporates detailed instrument performance models. We will develop and implement data analysis techniques to extract well-calibrated astrophysical signals from complex data, and propose observing plans that maximize the mission's scientific yield. We will work with the community to build observing programs and target lists, inform them of WFIRST's capabilities, and supply simulated scientific observations for data challenges. Our work will be informed by the experience we have gained from building and observing with
Optimal Protocols and Optimal Transport in Stochastic Thermodynamics
NASA Astrophysics Data System (ADS)
Aurell, Erik; Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo
2011-06-01
Thermodynamics of small systems has become an important field of statistical physics. Such systems are driven out of equilibrium by a control, and the question is naturally posed how such a control can be optimized. We show that optimization problems in small system thermodynamics are solved by (deterministic) optimal transport, for which very efficient numerical methods have been developed, and of which there are applications in cosmology, fluid mechanics, logistics, and many other fields. We show, in particular, that minimizing expected heat released or work done during a nonequilibrium transition in finite time is solved by the Burgers equation and mass transport by the Burgers velocity field. Our contribution hence considerably extends the range of solvable optimization problems in small system thermodynamics.
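In the notation commonly used in this literature (a hedged sketch, not equations reproduced from the paper), the optimal protocol transports the system's probability density ρ along a velocity field v obeying the inviscid Burgers equation:

```latex
\partial_t v + (v \cdot \nabla) v = 0 \quad \text{(Burgers equation)}, \qquad
\partial_t \rho + \nabla \cdot (\rho\, v) = 0 \quad \text{(mass transport)},
```

so that minimizing the expected heat released or work done over a finite-time transition becomes a deterministic optimal transport (Monge-Kantorovich) problem for ρ.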
Aircraft technology portfolio optimization using ant colony optimization
NASA Astrophysics Data System (ADS)
Villeneuve, Frederic J.; Mavris, Dimitri N.
2012-11-01
Technology portfolio selection is a combinatorial optimization problem often faced with a large number of combinations and technology incompatibilities. The main research question addressed in this article is to determine if Ant Colony Optimization (ACO) is better suited than Genetic Algorithms (GAs) and Simulated Annealing (SA) for technology portfolio optimization when incompatibility constraints between technologies are present. Convergence rate, capability to find optima, and efficiency in handling of incompatibilities are the three criteria of comparison. The application problem consists of finding the best technology portfolio from 29 aircraft technologies. The results show that ACO and GAs converge faster and find optima more easily than SA, and that ACO can optimize portfolios with technology incompatibilities without using penalty functions. This latter finding paves the way for more use of ACO when the number of constraints increases, such as in the technology and concept selection for complex engineering systems.
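A minimal sketch of how an ACO constructor can respect incompatibility constraints by feasibility filtering instead of penalty functions (the technology values, pheromone update rule, and all parameters here are illustrative assumptions, not the 29-technology study):

```python
import random

def aco_portfolio(values, incompatible, k, n_ants=20, iters=100, rho=0.1, seed=1):
    """Select up to k items maximizing total value, never pairing items whose
    index pair appears in `incompatible` (a set of frozenset({i, j}) pairs)."""
    random.seed(seed)
    n = len(values)
    tau = [1.0] * n                               # pheromone level per item
    best_set, best_val = [], float("-inf")
    for _ in range(iters):
        for _ant in range(n_ants):
            chosen, candidates = [], list(range(n))
            while candidates and len(chosen) < k:
                # Sample the next item with probability ~ pheromone * value.
                weights = [tau[j] * values[j] for j in candidates]
                i = random.choices(candidates, weights=weights)[0]
                chosen.append(i)
                # Feasibility filter: drop the chosen item and anything
                # incompatible with it, so no penalty term is ever needed.
                candidates = [j for j in candidates
                              if j != i and frozenset((i, j)) not in incompatible]
            val = sum(values[i] for i in chosen)
            if val > best_val:
                best_set, best_val = chosen, val
        tau = [(1.0 - rho) * t for t in tau]      # evaporation
        for i in best_set:                        # reinforce best-so-far portfolio
            tau[i] += 1.0
    return sorted(best_set), best_val

portfolio, total = aco_portfolio(
    values=[5.0, 4.0, 3.0, 2.0, 1.0],
    incompatible={frozenset((0, 1))},             # items 0 and 1 cannot co-occur
    k=3)
```

Because infeasible items are simply removed from the candidate list, every constructed portfolio is feasible by construction, which is the property the article credits for ACO's advantage over penalty-based GA and SA formulations.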
A Novel Particle Swarm Optimization Algorithm for Global Optimization
Wang, Chun-Feng; Liu, Kui
2016-01-01
Particle Swarm Optimization (PSO) is a recently developed optimization method which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of the algorithm, a chaotic search is adopted around the best solution of the current iteration. To verify the performance of the algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms. PMID:26955387
Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems
NASA Astrophysics Data System (ADS)
Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao
Nonlinear programming is an important branch of operational research, and has been successfully applied to various real-life problems. In this paper, a new approach called the Social Emotional Optimization Algorithm (SEOA) is used to solve this problem; it is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.
Metabolism at Evolutionary Optimal States.
Rabbers, Iraes; van Heerden, Johan H; Nordholt, Niclas; Bachmann, Herwig; Teusink, Bas; Bruggeman, Frank J
2015-01-01
Metabolism is generally required for cellular maintenance and for the generation of offspring under conditions that support growth. The rates, yields (efficiencies), adaptation time and robustness of metabolism are therefore key determinants of cellular fitness. For biotechnological applications and our understanding of the evolution of metabolism, it is necessary to figure out how the functional system properties of metabolism can be optimized, via adjustments of the kinetics and expression of enzymes, and by rewiring metabolism. The trade-offs that can occur during such optimizations then indicate fundamental limits to evolutionary innovations and bioengineering. In this paper, we review several theoretical and experimental findings about mechanisms for metabolic optimization. PMID:26042723
MPQC: Performance Analysis and Optimization
Sarje, Abhinav; Williams, Samuel; Bailey, David
2012-11-30
MPQC (Massively Parallel Quantum Chemistry) is a widely used computational quantum chemistry code. It is capable of performing a number of computations commonly occurring in quantum chemistry. In order to achieve better performance of MPQC, in this report we present a detailed performance analysis of this code. We then perform loop and memory access optimizations, and measure performance improvements by comparing the performance of the optimized code with that of the original MPQC code. We observe that the optimized MPQC code achieves a significant improvement in the performance through a better utilization of vector processing and memory hierarchies.
Adaptive approximation models in optimization
Voronin, A.N.
1995-05-01
The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.
An Efficient Chemical Reaction Optimization Algorithm for Multiobjective Optimization.
Bechikh, Slim; Chaabani, Abir; Ben Said, Lamjed
2015-10-01
Recently, a new metaheuristic called chemical reaction optimization was proposed. This search algorithm, inspired by chemical reactions launched during collisions, inherits several features from other metaheuristics such as simulated annealing and particle swarm optimization. This fact has made it, nowadays, one of the most powerful search algorithms in solving mono-objective optimization problems. In this paper, we propose a multiobjective variant of chemical reaction optimization, called nondominated sorting chemical reaction optimization, in an attempt to exploit chemical reaction optimization features in tackling problems involving multiple conflicting criteria. Since our approach is based on nondominated sorting, one of the main contributions of this paper is the proposal of a new quasi-linear average time complexity quick nondominated sorting algorithm; thereby making our multiobjective algorithm efficient from a computational cost viewpoint. The experimental comparisons against several other multiobjective algorithms on a variety of benchmark problems involving various difficulties show the effectiveness and the efficiency of this multiobjective version in providing a well-converged and well-diversified approximation of the Pareto front.
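The paper's quasi-linear quick nondominated sort is not reproduced in the abstract, so the sketch below shows the classic fast nondominated sorting (Deb et al.'s NSGA-II scheme) that such variants improve upon; minimization is assumed on every objective:

```python
def nondominated_sort(points):
    """Classic O(M*N^2) fast nondominated sort: count how many points
    dominate each point, then peel off fronts of zero-count points."""
    n = len(points)

    def dominates(p, q):
        # p dominates q: no worse everywhere, strictly better somewhere.
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))

    dominated = [[] for _ in range(n)]  # indices each point dominates
    count = [0] * n                     # how many points dominate index i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated[i].append(j)
            elif dominates(points[j], points[i]):
                count[i] += 1
    fronts, front = [], [i for i in range(n) if count[i] == 0]
    while front:
        fronts.append(front)
        nxt = []
        for i in front:
            for j in dominated[i]:
                count[j] -= 1
                if count[j] == 0:
                    nxt.append(j)
        front = nxt
    return fronts

fronts = nondominated_sort([(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)])
print(fronts)  # → [[0, 1, 2], [3], [4]]
```

The Pareto front here is the first returned list; lowering the cost of exactly this bookkeeping is the paper's main algorithmic contribution.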
Optimality principles in sensorimotor control.
Todorov, Emanuel
2004-09-01
The sensorimotor system is a product of evolution, development, learning and adaptation, which work on different time scales to improve behavioral performance. Consequently, many theories of motor function are based on 'optimal performance': they quantify task goals as cost functions, and apply the sophisticated tools of optimal control theory to obtain detailed behavioral predictions. The resulting models, although not without limitations, have explained more empirical phenomena than any other class. Traditional emphasis has been on optimizing desired movement trajectories while ignoring sensory feedback. Recent work has redefined optimality in terms of feedback control laws, and focused on the mechanisms that generate behavior online. This approach has allowed researchers to fit previously unrelated concepts and observations into what may become a unified theoretical framework for interpreting motor function. At the heart of the framework is the relationship between high-level goals, and the real-time sensorimotor control strategies most suitable for accomplishing those goals.
Montenegro-Johnson, Thomas D; Lauga, Eric
2014-06-01
Propulsion at microscopic scales is often achieved through propagating traveling waves along hairlike organelles called flagella. Taylor's two-dimensional swimming sheet model is frequently used to provide insight into problems of flagellar propulsion. We derive numerically the large-amplitude wave form of the two-dimensional swimming sheet that yields optimum hydrodynamic efficiency: the ratio of the squared swimming speed to the rate-of-working of the sheet against the fluid. Using the boundary element method, we show that the optimal wave form is a front-back symmetric regularized cusp that is 25% more efficient than the optimal sine wave. This optimal two-dimensional shape is smooth, qualitatively different from the kinked form of Lighthill's optimal three-dimensional flagellum, not predicted by small-amplitude theory, and different from the smooth circular-arc-like shape of active elastic filaments. PMID:25019709
Energy Criteria for Resource Optimization
ERIC Educational Resources Information Center
Griffith, J. W.
1973-01-01
Resource optimization in building design is based on the total system over its expected useful life. Alternative environmental systems can be evaluated in terms of resource costs and goal effectiveness. (Author/MF)
Data Understanding Applied to Optimization
NASA Technical Reports Server (NTRS)
Buntine, Wray; Shilman, Michael
1998-01-01
The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that such problems can only be resolved by increasingly smarter problem-specific knowledge, possibly for use in some general purpose algorithms. Visualization and data analysis offers an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.
Nonlinear optimization for stochastic simulations.
Johnson, Michael M.; Yoshimura, Ann S.; Hough, Patricia Diane; Ammerlahn, Heidi R.
2003-12-01
This report describes research targeting development of stochastic optimization algorithms and their application to mission-critical optimization problems in which uncertainty arises. The first section of this report covers the enhancement of the Trust Region Parallel Direct Search (TRPDS) algorithm to address stochastic responses and the incorporation of the algorithm into the OPT++ optimization library. The second section describes the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of systems analysis tools and motivates the use of stochastic optimization techniques in such non-deterministic simulations. The third section details a batch programming interface designed to facilitate criteria-based or algorithm-driven execution of system-of-system simulations. The fourth section outlines the use of the enhanced OPT++ library and batch execution mechanism to perform systems analysis and technology trade-off studies in the WMD detection and response problem domain.
Two concepts of therapeutic optimism.
Jansen, Lynn A
2011-09-01
Researchers and ethicists have long been concerned about the expectations for direct medical benefit expressed by participants in early phase clinical trials. Early work on the issue considered the possibility that participants misunderstand the purpose of clinical research or that they are misinformed about the prospects for medical benefit from these trials. Recently, however, attention has turned to the possibility that research participants are simply expressing optimism or hope about their participation in these trials. The ethical significance of this therapeutic optimism remains unclear. This paper argues that there are two distinct phenomena that can be associated with the term 'therapeutic optimism'-one is ethically benign and the other is potentially worrisome. Distinguishing these two phenomena is crucial for understanding the nature and ethical significance of therapeutic optimism. The failure to draw a distinction between these phenomena also helps to explain why different writers on the topic often speak past one another.
Techniques for shuttle trajectory optimization
NASA Technical Reports Server (NTRS)
Edge, E. R.; Shieh, C. J.; Powers, W. F.
1973-01-01
The application of recently developed function-space Davidon-type techniques to the shuttle ascent trajectory optimization problem is discussed along with an investigation of the recently developed PRAXIS algorithm for parameter optimization. At the outset of this analysis, the major deficiency of the function-space algorithms was their potential storage problems. Since most previous analyses of the methods were with relatively low-dimension problems, no storage problems were encountered. However, in shuttle trajectory optimization, storage is a problem, and this problem was handled efficiently. Topics discussed include: the shuttle ascent model and the development of the particular optimization equations; the function-space algorithms; the operation of the algorithm and typical simulations; variable final-time problem considerations; and a modification of Powell's algorithm.
Habitat Design Optimization and Analysis
NASA Technical Reports Server (NTRS)
SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.
2006-01-01
Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.
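The "survival of the fittest" loop the abstract refers to can be sketched with a toy single-objective genetic algorithm; the habitat tool's multi-objective formulation is far more elaborate, and the operators, rates, and OneMax fitness below are assumptions for illustration:

```python
import random

def genetic_search(fitness, n_bits=20, pop_size=30, gens=60, seed=3):
    """Toy GA sketch: tournament selection, one-point crossover, and
    bit-flip mutation over fixed-length bitstrings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def tourney():
            # Fitter of two random individuals survives to reproduce.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tourney(), tourney()
            cut = rng.randrange(1, n_bits)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_bits):             # per-bit mutation
                if rng.random() < 1.0 / n_bits:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax stands in for a real wall-design objective: maximize the bit count.
best = genetic_search(sum)
print(sum(best))
```

In the habitat tool, the bitstring would instead encode material choices and thicknesses, and the scalar fitness would be replaced by the multi-objective structural, thermal, radiation, and meteoroid criteria.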
Optimal solar sail planetocentric trajectories
NASA Technical Reports Server (NTRS)
Sackett, L. L.
1977-01-01
The analysis of the solar sail planetocentric optimal trajectory problem is described. A computer program was produced to calculate optimal trajectories for a limited performance analysis. A square sail model is included and some consideration is given to a heliogyro sail model. Orbit to a subescape point and orbit to orbit transfer are considered. Trajectories about the four inner planets can be calculated and shadowing, oblateness, and solar motion may be included. Equinoctial orbital elements are used to avoid the classical singularities, and the method of averaging is applied to increase computational speed. Solution of the two-point boundary value problem which arises from the application of optimization theory is accomplished with a Newton procedure. Time optimal trajectories are emphasized, but a penalty function has been considered to prevent trajectories which intersect a planet's surface.
Geometric optimization of thermal systems
NASA Astrophysics Data System (ADS)
Alebrahim, Asad Mansour
2000-10-01
The work in chapter 1 extends to three dimensions and to convective heat transfer the constructal method of minimizing the thermal resistance between a volume and one point. In the first part, the heat flow mechanism is conduction, and the heat generating volume is occupied by low conductivity material (k0) and high conductivity inserts (kp) that are shaped as constant-thickness disks mounted on a common stem of kp material. In the second part, the interstitial spaces once occupied by k0 material are bathed by forced convection. The internal and external geometric aspect ratios of the elemental volume and the first assembly are optimized numerically subject to volume constraints. Chapter 2 presents the constrained thermodynamic optimization of a cross-flow heat exchanger with ram air on the cold side, which is used in the environmental control systems of aircraft. Optimized geometric features such as the ratio of channel spacings and flow lengths are reported. It is found that the optimized features are relatively insensitive to changes in other physical parameters of the installation and relatively insensitive to the additional irreversibility due to discharging the ram-air stream into the atmosphere, emphasizing the robustness of the thermodynamic optimum. In chapter 3, the problem of maximizing exergy extraction from a hot stream by distributing streams over a heat transfer surface is studied. In the first part, the cold stream is compressed in an isothermal compressor, expanded in an adiabatic turbine, and discharged into the ambient. In the second part, the cold stream is compressed in an adiabatic compressor. Both designs are optimized with respect to the capacity-rate imbalance of the counter-flow and the pressure ratio maintained by the compressor. This study shows the tradeoff between simplicity and increased performance, and outlines the path for further conceptual work on the extraction of exergy from a hot stream that is being cooled gradually.
Numerical Optimization Using Computer Experiments
NASA Technical Reports Server (NTRS)
Trosset, Michael W.; Torczon, Virginia
1997-01-01
Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
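The surrogate-guided loop can be sketched in one dimension. A Gaussian radial-basis-function interpolant stands in here for kriging, and the kernel, length scale, grid density, and test function are all assumptions, not the authors' method:

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting for small dense systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def surrogate_search(f, lo, hi, rounds=10, scale=0.5):
    """Surrogate-guided minimization sketch: interpolate the known samples,
    let the surrogate's grid minimizer pick the next expensive evaluation."""
    xs = [lo, 0.5 * (lo + hi), hi]
    ys = [f(x) for x in xs]
    phi = lambda r: math.exp(-(r / scale) ** 2)
    for _ in range(rounds):
        w = solve([[phi(xi - xj) for xj in xs] for xi in xs], ys)
        s = lambda x: sum(wi * phi(x - xi) for wi, xi in zip(w, xs))
        grid = [lo + (hi - lo) * k / 400 for k in range(401)]
        x_new = min(grid, key=s)
        if min(abs(x_new - xi) for xi in xs) < 1e-9:
            break  # surrogate minimum already sampled: stop refining
        xs.append(x_new)
        ys.append(f(x_new))
    return xs[ys.index(min(ys))]

best = surrogate_search(lambda x: (x - 0.7) ** 2, 0.0, 2.0)
print(best, (best - 0.7) ** 2)
```

The expensive objective is evaluated only where the cheap surrogate looks promising, which is the economy the abstract's kriging-plus-grid-search approach is after.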
CENTRAL PLATEAU REMEDIATION OPTIMIZATION STUDY
BERGMAN, T. B.; STEFANSKI, L. D.; SEELEY, P. N.; ZINSLI, L. C.; CUSACK, L. J.
2012-09-19
The Central Plateau remediation optimization study was conducted to develop an optimal sequence of remediation activities implementing the CERCLA decision on the Central Plateau. The study defines a sequence of activities that results in an effective use of resources from a strategic perspective when considering equipment procurement and staging, workforce mobilization/demobilization, workforce leveling, workforce skill-mix, and other remediation/disposition project execution parameters.
Methods to optimize selective hyperthermia
NASA Astrophysics Data System (ADS)
Cowan, Thomas M.; Bailey, Christopher A.; Liu, Hong; Chen, Wei R.
2003-07-01
Laser immunotherapy, a novel therapy for breast cancer, utilizes selective photothermal interaction to raise the temperature of tumor tissue above the cell damage threshold. Photothermal interaction is achieved with intratumoral injection of a laser absorbing dye followed by non-invasive laser irradiation. When tumor heating is used in combination with immunoadjuvant to stimulate an immune response, anti-tumor immunity can be achieved. In our study, gelatin phantom simulations were used to optimize therapy parameters such as laser power, laser beam radius, and dye concentration to achieve maximum heating of target tissue with the minimum heating of non-targeted tissue. An 805-nm diode laser and indocyanine green (ICG) were used to achieve selective photothermal interactions in a gelatin phantom. Spherical gelatin phantoms containing ICG were used to simulate the absorption-enhanced target tumors, which were embedded inside gelatin without ICG to simulate surrounding non-targeted tissue. Different laser powers and dye concentrations were used to treat the gelatin phantoms. The temperature distributions in the phantoms were measured, and the data were used to determine the optimal parameters used in selective hyperthermia (laser power and dye concentration for this case). The method involves an optimization coefficient, which is proportional to the difference between temperatures measured in targeted and non-targeted gel. The coefficient is also normalized by the difference between the most heated region of the target gel and the least heated region. A positive optimization coefficient signifies a greater temperature increase in targeted gelatin when compared to non-targeted gelatin, and therefore, greater selectivity. Comparisons were made between the optimization coefficients for varying laser powers in order to demonstrate the effectiveness of this method in finding an optimal parameter set. Our experimental results support the proposed use of an optimization
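Reading the abstract's verbal definition literally, the coefficient might be computed as below; the exact formula and the temperature values are assumptions, since the paper's definition is only paraphrased here:

```python
def selectivity_coefficient(target_temps, nontarget_temps):
    """Optimization coefficient as paraphrased in the abstract (assumed
    form): the target/non-target mean temperature difference, normalized
    by the spread between the most and least heated target regions."""
    mean = lambda v: sum(v) / len(v)
    spread = max(target_temps) - min(target_temps)
    return (mean(target_temps) - mean(nontarget_temps)) / spread

# Hypothetical temperature rises (degrees C) from a phantom measurement:
coef = selectivity_coefficient([12.0, 10.5, 9.0], [3.0, 2.5])
print(coef)  # positive: targeted gel heated more than non-targeted gel
```

A larger positive value means both stronger and more uniform selective heating, which is what the parameter comparison in the study ranks.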
SWOC: Spectral Wavelength Optimization Code
NASA Astrophysics Data System (ADS)
Ruchti, G. R.
2016-06-01
SWOC (Spectral Wavelength Optimization Code) determines the wavelength ranges that provide the optimal amount of information to achieve the required science goals for a spectroscopic study. It computes a figure-of-merit for different spectral configurations using a user-defined list of spectral features, and, utilizing a set of flux-calibrated spectra, determines the spectral regions showing the largest differences among the spectra.
Optimal BLS: Optimizing transit-signal detection for Keplerian dynamics
NASA Astrophysics Data System (ADS)
Ofir, Aviv
2015-08-01
Transit surveys, both ground- and space-based, have already accumulated a large number of light curves that span several years. We optimize the search for transit signals for both detection and computational efficiency by assuming that the searched systems can be described by Keplerian dynamics, and by propagating the effects of different system parameters to the detection parameters. Importantly, we mainly consider the information content of the transit signal and not any specific algorithm, and use BLS (Kovács, Zucker, & Mazeh 2002) just as a specific example. We show that the frequency information content of the light curve is primarily determined by the duty cycle of the transit signal, and thus the optimal frequency sampling is found to be cubic and not linear. Further optimization is achieved by considering duty-cycle dependent binning of the phased light curve. By using the (standard) BLS, one is either fairly insensitive to long-period planets or less sensitive to short-period planets and computationally slower by a significant factor of ~330 (for a 3 yr long dataset). We also show how the physical system parameters, such as the host star's size and mass, directly affect transit detection. This understanding can then be used to optimize the search for every star individually. By considering Keplerian dynamics explicitly rather than implicitly, one can optimally search the transit signal parameter space. The presented Optimal BLS enhances the detectability of both very short and very long period planets, while allowing such searches to be done with much reduced resources and time. The Matlab/Octave source code for Optimal BLS is made available.
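The core phase-fold-and-bin operation that BLS builds on can be sketched as follows; this is a bare illustration of the statistic, not the paper's optimized sampling, and the synthetic light curve parameters are assumptions:

```python
def bls_statistic(times, flux, period, n_bins=20):
    """Minimal phase-fold-and-bin transit statistic in the spirit of BLS
    (the real algorithm also fits transit phase and duration): fold at the
    trial period, bin the phases, and return the depth between the highest
    and lowest bin means."""
    bins = [[] for _ in range(n_bins)]
    for t, f in zip(times, flux):
        phase = (t / period) % 1.0
        bins[min(int(phase * n_bins), n_bins - 1)].append(f)
    means = [sum(b) / len(b) for b in bins if b]
    return max(means) - min(means)

# Hypothetical light curve: 1% deep box transit, ~4% duty cycle, period 2.5.
times = [0.02 * i for i in range(5000)]
flux = [0.99 if (t % 2.5) < 0.1 else 1.0 for t in times]

deep = bls_statistic(times, flux, 2.5)     # true period: transit stacks up
shallow = bls_statistic(times, flux, 1.7)  # wrong period: transit smears out
print(deep, shallow)
```

Folding at the true period concentrates all in-transit points into one phase bin; at a wrong period they smear across bins and the statistic collapses, which is why the frequency grid spacing (the paper's cubic sampling result) matters so much.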
Unrealistic Optimism: East and West?
Joshi, Mary Sissons; Carter, Wakefield
2013-01-01
Following Weinstein’s (1980) pioneering work, many studies established that people have an optimistic bias concerning future life events. At first, the bulk of research was conducted using populations in North America and Northern Europe, the optimistic bias was thought of as universal, and little attention was paid to cultural context. However, construing unrealistic optimism as a form of self-enhancement, some researchers noted that it was far less common in East Asian cultures. The current study extends enquiry to a different non-Western culture. Two hundred and eighty-seven middle-aged and middle-income participants (200 in India, 87 in England) rated 11 positive and 11 negative events in terms of the chances of each event occurring in “their own life,” and the chances of each event occurring in the lives of “people like them.” Comparative optimism was shown for bad events, with Indian participants showing higher levels of optimism than English participants. The position regarding comparative optimism for good events was more complex. In India those of higher socioeconomic status (SES) were optimistic, while those of lower SES were on average pessimistic. Overall, English participants showed neither optimism nor pessimism for good events. The results, whose clinical relevance is discussed, suggest that the expression of unrealistic optimism is shaped by an interplay of culture and socioeconomic circumstance. PMID:23407689
Efficient computation of optimal actions.
Todorov, Emanuel
2009-07-14
Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress--as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant.
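The claim that "the problem becomes linear" can be made concrete with the first-exit form of this framework: the desirability z(s) = exp(-v(s)) satisfies the linear fixed point z = exp(-q) * (P z) under the passive dynamics P, so no per-state search over actions is needed. The chain-world, costs, and iteration count below are toy assumptions:

```python
import math

def optimal_cost_to_go(passive, state_cost, terminal_cost, iters=500):
    """Solve the linear desirability equation z = exp(-q) * (P z) by simple
    iteration, with z fixed at exp(-q_T) in terminal states, then recover
    the optimal cost-to-go v(s) = -log z(s)."""
    n = len(state_cost)
    z = [math.exp(-terminal_cost[s]) if s in terminal_cost else 1.0
         for s in range(n)]
    for _ in range(iters):
        z = [z[s] if s in terminal_cost else
             math.exp(-state_cost[s]) * sum(p * z[sp] for sp, p in passive[s])
             for s in range(n)]
    return [-math.log(zs) for zs in z]

# Toy chain 0-1-2-3: symmetric passive random walk, state 3 is terminal.
passive = {0: [(0, 0.5), (1, 0.5)],
           1: [(0, 0.5), (2, 0.5)],
           2: [(1, 0.5), (3, 0.5)]}
v = optimal_cost_to_go(passive, [0.1, 0.1, 0.1, 0.0], {3: 0.0})
print(v)  # cost-to-go decreases monotonically toward the terminal state
```

The optimal controlled transition probabilities then follow in closed form, proportional to p(s'|s) z(s'), which is the sense in which exhaustive search over actions is avoided.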
Optimized quadrature surface coil designs
Kumar, Ananda; Bottomley, Paul A.
2008-01-01
Background Quadrature surface MRI/MRS detectors comprised of circular loop and figure-8 or butterfly-shaped coils offer improved signal-to-noise-ratios (SNR) compared to single surface coils, and reduced power and specific absorption rates (SAR) when used for MRI excitation. While the radius of the optimum loop coil for performing MRI at depth d in a sample is known, the optimum geometry for figure-8 and butterfly coils is not. Materials and methods The geometries of figure-8 and square butterfly detector coils that deliver the optimum SNR are determined numerically by the electromagnetic method of moments. Figure-8 and loop detectors are then combined to create SNR-optimized quadrature detectors whose theoretical and experimental SNR performance are compared with a novel quadrature detector comprised of a strip and a loop, and with two overlapped loops optimized for the same depth at 3 T. The quadrature detection efficiency and local SAR during transmission for the three quadrature configurations are analyzed and compared. Results The SNR-optimized figure-8 detector has loop radius r8 ∼ 0.6d, so r8/r0 ∼ 1.3 in an optimized quadrature detector at 3 T. The optimized butterfly coil has side length ∼ d and crossover angle of ≥ 150° at the center. Conclusions These new design rules for figure-8 and butterfly coils optimize their performance as linear and quadrature detectors. PMID:18057975
Optimal lattice-structured materials
Messner, Mark C.
2016-07-09
This paper describes a method for optimizing the mesostructure of lattice-structured materials. These materials are periodic arrays of slender members resembling efficient, lightweight macroscale structures like bridges and frame buildings. Current additive manufacturing technologies can assemble lattice structures with length scales ranging from nanometers to millimeters. Previous work demonstrates that lattice materials have excellent stiffness- and strength-to-weight scaling, outperforming natural materials. However, there are currently no methods for producing optimal mesostructures that consider the full space of possible 3D lattice topologies. The inverse homogenization approach for optimizing the periodic structure of lattice materials requires a parameterized, homogenized material model describing the response of an arbitrary structure. This work develops such a model, starting with a method for describing the long-wavelength, macroscale deformation of an arbitrary lattice. The work combines the homogenized model with a parameterized description of the total design space to generate a parameterized model. Finally, the work describes an optimization method capable of producing optimal mesostructures. Several examples demonstrate the optimization method. One of these examples produces an elastically isotropic, maximally stiff structure, here called the isotruss, that arguably outperforms the anisotropic octet truss topology.
Pyomo : Python Optimization Modeling Objects.
Siirola, John; Laird, Carl Damon; Hart, William Eugene; Watson, Jean-Paul
2010-11-01
The Python Optimization Modeling Objects (Pyomo) package [1] is an open source tool for modeling optimization applications within Python. Pyomo provides an object-oriented approach to optimization modeling, and it can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. While Pyomo provides a capability that is commonly associated with algebraic modeling languages such as AMPL, AIMMS, and GAMS, Pyomo's modeling objects are embedded within a full-featured high-level programming language with a rich set of supporting libraries. Pyomo leverages the capabilities of the Coopr software library [2], which integrates Python packages (including Pyomo) for defining optimizers, modeling optimization applications, and managing computational experiments. A central design principle within Pyomo is extensibility. Pyomo is built upon a flexible component architecture [3] that allows users and developers to readily extend the core Pyomo functionality. Through these interface points, extensions and applications can have direct access to an optimization model's expression objects. This facilitates the rapid development and implementation of new modeling constructs as well as high-level solution strategies (e.g. using decomposition- and reformulation-based techniques). In this presentation, we will give an overview of the Pyomo modeling environment and model syntax, and present several extensions to the core Pyomo environment, including support for Generalized Disjunctive Programming (Coopr GDP), Stochastic Programming (PySP), a generic Progressive Hedging solver [4], and a tailored implementation of Benders decomposition.
Optimal control of motorsport differentials
NASA Astrophysics Data System (ADS)
Tremlett, A. J.; Massaro, M.; Purdy, D. J.; Velenis, E.; Assadian, F.; Moore, A. P.; Halley, M.
2015-12-01
Modern motorsport limited slip differentials (LSD) have evolved to become highly adjustable, allowing the torque bias that they generate to be tuned in the corner entry, apex and corner exit phases of typical on-track manoeuvres. The task of finding the optimal torque bias profile under such varied vehicle conditions is complex. This paper presents a nonlinear optimal control method which is used to find the minimum time optimal torque bias profile through a lane change manoeuvre. The results are compared to traditional open and fully locked differential strategies, in addition to considering related vehicle stability and agility metrics. An investigation into how the optimal torque bias profile changes with reduced track-tyre friction is also included in the analysis. The optimal LSD profile was shown to give a performance gain over its locked differential counterpart in key areas of the manoeuvre where a quick direction change is required. The methodology proposed can be used to find both optimal passive LSD characteristics and as the basis of a semi-active LSD control algorithm.
Displacement based multilevel structural optimization
NASA Technical Reports Server (NTRS)
Striz, Alfred G.
1995-01-01
Multidisciplinary design optimization (MDO) is expected to play a major role in the competitive transportation industries of tomorrow, i.e., in the design of aircraft and spacecraft, of high speed trains, boats, and automobiles. All of these vehicles require maximum performance at minimum weight to keep fuel consumption low and conserve resources. Here, MDO can deliver mathematically based design tools to create systems with optimum performance subject to the constraints of disciplines such as structures, aerodynamics, controls, etc. Although some applications of MDO are beginning to surface, the key to a widespread use of this technology lies in the improvement of its efficiency. This aspect is investigated here for the MDO subset of structural optimization, i.e., for the weight minimization of a given structure under size, strength, and displacement constraints. Specifically, finite element based multilevel optimization of structures (here, statically indeterminate trusses and beams for proof of concept) is performed. In the system level optimization, the design variables are the coefficients of assumed displacement functions, and the load unbalance resulting from the solution of the stiffness equations is minimized. Constraints are placed on the deflection amplitudes and the weight of the structure. In the subsystems level optimizations, the weight of each element is minimized under the action of stress constraints, with the cross sectional dimensions as design variables. This approach is expected to prove very efficient, especially for complex structures, since the design task is broken down into a large number of small and efficiently handled subtasks, each with only a small number of variables. This partitioning will also allow for the use of parallel computing, first, by sending the system and subsystems level computations to two different processors, ultimately, by performing all subsystems level optimizations in a massively parallel manner on separate processors.
A novel metaheuristic for continuous optimization problems: Virus optimization algorithm
NASA Astrophysics Data System (ADS)
Liang, Yun-Chia; Rodolfo Cuevas Juarez, Josue
2016-01-01
A novel metaheuristic for continuous optimization problems, named the virus optimization algorithm (VOA), is introduced and investigated. VOA is an iterative, population-based method that imitates the behaviour of viruses attacking a living cell. The number of viruses grows at each replication and is controlled by an immune system (a so-called 'antivirus') to prevent the explosive growth of the virus population. The viruses are divided into two classes (strong and common) to balance the exploitation and exploration effects. The performance of the VOA is validated through a set of eight benchmark functions, which are also subject to rotation and shifting effects to test its robustness. Extensive comparisons were conducted with over 40 well-known metaheuristic algorithms and their variations, such as artificial bee colony, artificial immune system, differential evolution, evolutionary programming, evolutionary strategy, genetic algorithm, harmony search, invasive weed optimization, memetic algorithm, particle swarm optimization and simulated annealing. The results showed that the VOA is a viable solution for continuous optimization.
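The general scheme described above (replication, strong/common classes, antivirus culling) can be sketched in a few lines. This is a generic, hedged reconstruction on a toy 2-D sphere function, not the authors' exact algorithm; all parameters are invented for illustration.

```python
# Hedged sketch of a virus-style population search on the 2-D sphere function.
# A generic reconstruction of the scheme described above (replication,
# strong/common members, antivirus culling), not the authors' exact algorithm.
import random

random.seed(0)

def sphere(p):
    return sum(v * v for v in p)

pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
for it in range(100):
    sigma = 1.0 * 0.95 ** it                     # step size shrinks over time
    clones = []
    for i, member in enumerate(pop):
        step = sigma * (0.3 if i < 3 else 1.0)   # "strong" members exploit,
        for _ in range(5):                       # "common" members explore
            clones.append([v + random.gauss(0, step) for v in member])
    # "antivirus": cull the population back to a fixed size, keeping the best
    pop = sorted(pop + clones, key=sphere)[:10]

best = sphere(pop[0])
```

The culling step is what prevents the explosive population growth the abstract mentions; elitist selection guarantees the best value never worsens.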
Schedule path optimization for adiabatic quantum computing and optimization
NASA Astrophysics Data System (ADS)
Zeng, Lishan; Zhang, Jun; Sarovar, Mohan
2016-04-01
Adiabatic quantum computing and optimization have garnered much attention recently as possible models for achieving a quantum advantage over classical approaches to optimization and other special purpose computations. Both techniques are probabilistic in nature and the minimum gap between the ground state and first excited state of the system during evolution is a major factor in determining the success probability. In this work we investigate a strategy for increasing the minimum gap and success probability by introducing intermediate Hamiltonians that modify the evolution path between initial and final Hamiltonians. We focus on an optimization problem relevant to recent hardware implementations and present numerical evidence for the existence of a purely local intermediate Hamiltonian that achieves the optimum performance in terms of pushing the minimum gap to one of the end points of the evolution. As a part of this study we develop a convex optimization formulation of the search for optimal adiabatic schedules that makes this computation more tractable, and which may be of independent interest. We further study the effectiveness of random intermediate Hamiltonians on the minimum gap and success probability, and empirically find that random Hamiltonians have a significant probability of increasing the success probability, but only by a modest amount.
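The gap-widening idea can be illustrated on a toy two-level system: compare the minimum spectral gap along H(s) = (1-s)H0 + sH1 with and without an intermediate term s(1-s)H_int. This is a hedged sketch, not the paper's construction; the matrices are chosen by hand for the illustration.

```python
# Toy two-level illustration: an intermediate Hamiltonian can raise the
# minimum gap along the adiabatic path.  Matrices invented for illustration.
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H0, H1, H_int = -X, -Z, -X          # H_int chosen by hand for this toy case

def min_gap(H_mid):
    gaps = []
    for s in np.linspace(0.0, 1.0, 201):
        H = (1 - s) * H0 + s * H1 + s * (1 - s) * H_mid
        evals = np.linalg.eigvalsh(H)       # sorted eigenvalues
        gaps.append(evals[1] - evals[0])
    return min(gaps)

gap_plain = min_gap(np.zeros((2, 2)))   # sqrt(2), attained at s = 0.5
gap_boost = min_gap(H_int)              # larger minimum gap
```

For this example the plain path has minimum gap 2·sqrt((1-s)² + s²) at s = 0.5, i.e. sqrt(2), while the intermediate term lifts the minimum to sqrt(3).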
Optimal singular control with applications to trajectory optimization
NASA Technical Reports Server (NTRS)
Vinh, N. X.
1977-01-01
A comprehensive discussion of the problem of singular control is presented. Singular control enters an optimal trajectory when the so-called switching function vanishes identically over a finite time interval. Using the concept of domain of maneuverability, the problem of optimal switching is analyzed. Criteria for the optimal direction of switching are presented. The switching, or junction, between nonsingular and singular subarcs is examined in detail. Several theorems concerning the necessary, and also sufficient, conditions for smooth junction are presented. The concepts of quasi-linear control and linearized control are introduced. They are designed for the purpose of obtaining approximate solutions for the difficult Euler-Lagrange type of optimal control in the case where the control is nonlinear.
Noncooperatively optimized tolerance: decentralized strategic optimization in complex systems.
Vorobeychik, Yevgeniy; Mayo, Jackson R; Armstrong, Robert C; Ruthruff, Joseph R
2011-09-01
We introduce noncooperatively optimized tolerance (NOT), a game theoretic generalization of highly optimized tolerance (HOT), which we illustrate in the forest fire framework. As the number of players increases, NOT retains features of HOT, such as robustness and self-dissimilar landscapes, but also develops features of self-organized criticality. The system retains considerable robustness even as it becomes fractured, due in part to emergent cooperation between players, and at the same time exhibits increasing resilience against changes in the environment, giving rise to intermediate regimes where the system is robust to a particular distribution of adverse events, yet not very fragile to changes. PMID:21981540
Four-body trajectory optimization. [fuel optimal computer programs
NASA Technical Reports Server (NTRS)
Pu, C. L.; Edelbaum, T. N.
1975-01-01
The two methods which are suitable for use in a 4-body trajectory optimization program are both multiconic methods. They include an approach due to Wilson (1970) and to Byrnes and Hooper (1970) and a procedure developed by Stumpff and Weiss (1968). The various steps in a trajectory optimization program are discussed, giving attention to variable step integration, the correction of errors by quadrature formulas, questions of two-impulse transfer, three-impulse transfer, and two examples which illustrate the implementation of the computational approaches.
Optimization of dish solar collectors
NASA Technical Reports Server (NTRS)
Jaffe, L. D.
1983-01-01
Methods for optimizing parabolic dish solar collectors and the consequent effects of various optical, thermal, mechanical, and cost variables are examined. The most important performance optimization is adjusting the receiver aperture to maximize collector efficiency. Other parameters that can be adjusted to optimize efficiency include focal length, and, if a heat engine is used, the receiver temperature. The efficiency maxima associated with focal length and receiver temperature are relatively broad; it may, accordingly, be desirable to design somewhat away from the maxima. Performance optimization is sensitive to the slope and specularity errors of the concentrator. Other optical and thermal variables affecting optimization are the reflectance and blocking factor of the concentrator, the absorptance and losses of the receiver, and, if a heat engine is used, the shape of the engine efficiency versus temperature curve. Performance may sometimes be improved by use of an additional optical element (a secondary concentrator) or a receiver window if the errors of the primary concentrator are large or the receiver temperature is high. Previously announced in STAR as N83-19224
Optimal design of solidification processes
NASA Technical Reports Server (NTRS)
Dantzig, Jonathan A.; Tortorelli, Daniel A.
1991-01-01
An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.
Pareto optimal pairwise sequence alignment.
DeRonne, Kevin W; Karypis, George
2013-01-01
Sequence alignment using evolutionary profiles is a commonly employed tool when investigating a protein. Many profile-profile scoring functions have been developed for use in such alignments, but there has not yet been a comprehensive study of Pareto optimal pairwise alignments for combining multiple such functions. We show that the problem of generating Pareto optimal pairwise alignments has an optimal substructure property, and develop an efficient algorithm for generating Pareto optimal frontiers of pairwise alignments. All possible sets of two, three, and four profile scoring functions are used from a pool of 11 functions and applied to 588 pairs of proteins in the ce_ref data set. The performance of the best objective combinations on ce_ref is also evaluated on an independent set of 913 protein pairs extracted from the BAliBASE RV11 data set. Our dynamic-programming-based heuristic approach produces approximated Pareto optimal frontiers of pairwise alignments that contain comparable alignments to those on the exact frontier, but on average in less than 1/58th the time in the case of four objectives. Our results show that the Pareto frontiers contain alignments whose quality is better than the alignments obtained by single objectives. However, the task of identifying a single high-quality alignment among those in the Pareto frontier remains challenging.
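A basic building block of the approach described above is the non-domination filter: given a set of alignments scored under several objectives, keep only those not dominated by another. The full algorithm combines such frontiers inside a dynamic program; this hedged sketch shows only the filter, on invented bi-objective scores.

```python
# Hedged sketch: filtering scored candidates down to their Pareto frontier
# when every objective is maximized.  The scores below are invented; the
# paper's dynamic program merges such frontiers over alignment subproblems.
def pareto_front(points):
    """Return the points not dominated by any other (maximize all coordinates)."""
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b)) and
                any(x > y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

scores = [(3, 1), (2, 2), (1, 3), (2, 1), (1, 1)]
front = pareto_front(scores)   # (2, 1) and (1, 1) are dominated
```

The optimal-substructure property the paper proves is what lets frontiers of subproblems be combined without losing any point of the full frontier.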
Theory of Optimal Human Motion
NASA Astrophysics Data System (ADS)
Chan, Albert Loongtak
1990-01-01
This thesis presents optimal theories for punching and running. The first is a theory of the optimal karate punch in terms of the duration and the speed of the punch. This theory is solved and compared with experimental data. The theory incorporates the force-velocity equation (Hill's equation) and Wilkie's equation for elbow flexion in determining the optimal punch. The time T and the final speed of the punch are dependent on a few physiological parameters for arm muscles. The theoretical punch agrees fairly well with our experiments and other independent experiments. Second, a theory of optimal running is presented, solved and compared with world track records. The theory is similar to Keller's theory for running (1973) except that the power consumed by a runner is assumed to be proportional to the runner's speed v, P = Hv, whereas Keller took P = constant. There are differential equations for velocity and energy, two initial conditions and two constraint inequalities, involving a total of four free parameters. Optimal control techniques are used to solve this problem and minimize the running time T given the race distance D. The resultant predicted times T agree well with the records and the parameter values are consistent with independent physiological measurements.
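The running model described above is of Keller's type with the modified power assumption P = Hv. A sketch of the formulation, with symbols reconstructed from the description (the propulsive force f, its bound F, the resistance time constant τ, and the metabolic supply rate σ are assumptions), is:

```latex
% Keller-type formulation with the modified power assumption P = Hv
% (symbols and constraints reconstructed from the description above):
\begin{align*}
  \dot{v}(t) &= f(t) - \frac{v(t)}{\tau}, & 0 \le f(t) &\le F,\\
  \dot{E}(t) &= \sigma - H\,v(t), & E(t) &\ge 0,\\
  \min_{f}\; T &\quad \text{subject to} \quad \int_0^T v(t)\,dt = D .
\end{align*}
```

The four free parameters mentioned in the abstract plausibly correspond to F, τ, σ and H in such a formulation.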
On optimal velocity during cycling.
Maroński, R
1994-02-01
This paper focuses on the solution of two problems related to cycling. One is to determine the velocity as a function of distance which minimizes the cyclist's energy expenditure in covering a given distance in a set time. The other is to determine the velocity as a function of the distance which minimizes time for fixed energy expenditure. To solve these problems, an equation of motion for the cyclist riding over arbitrary terrain is written using Newton's second law. This equation is used to evaluate either energy expenditure or time, and the minimization problems are solved using an optimal control formulation in conjunction with the method of Miele [Optimization Techniques with Applications to Aerospace Systems, pp. 69-98 (1962) Academic Press, New York]. Solutions to both optimal control problems are the same. The solutions are illustrated through two examples. In one example where the relative wind velocity is zero, the optimal cruising velocity is constant regardless of terrain. In the second, where the relative wind velocity fluctuates, the optimal cruising velocity varies.
Machine Translation Evaluation and Optimization
NASA Astrophysics Data System (ADS)
Dorr, Bonnie; Olive, Joseph; McCary, John; Christianson, Caitlin
The evaluation of machine translation (MT) systems is a vital field of research, both for determining the effectiveness of existing MT systems and for optimizing the performance of MT systems. This part describes a range of different evaluation approaches used in the GALE community and introduces evaluation protocols and methodologies used in the program. We discuss the development and use of automatic, human, task-based and semi-automatic (human-in-the-loop) methods of evaluating machine translation, focusing on the use of human-mediated translation error rate (HTER) as the evaluation standard used in GALE. We discuss the workflow associated with the use of this measure, including post-editing, quality control, and scoring. We document the evaluation tasks, data, protocols, and results of recent GALE MT Evaluations. In addition, we present a range of different approaches for optimizing MT systems on the basis of different measures. We outline the requirements and specific problems when using different optimization approaches and describe how the characteristics of different MT metrics affect the optimization. Finally, we describe novel recent and ongoing work on the development of fully automatic MT evaluation metrics that have the potential to substantially improve the effectiveness of evaluation and optimization of MT systems.
Optimizing Stellarators for Turbulent Transport
Mynick, H. E.; Pomphrey, N.; Xanthopoulos, P.
2010-05-27
Up to now, the term "transport-optimized" stellarators has meant optimized to minimize neoclassical transport, while the task of also mitigating turbulent transport, usually the dominant transport channel in such designs, has not been addressed, due to the complexity of plasma turbulence in stellarators. Here, we demonstrate that stellarators can also be designed to mitigate their turbulent transport, by making use of two powerful numerical tools not available until recently, namely gyrokinetic codes valid for 3D nonlinear simulations, and stellarator optimization codes. A first proof-of-principle configuration is obtained, reducing the level of ion temperature gradient turbulent transport from the NCSX baseline design by a factor of about 2.5.
Fuel consumption in optimal control
NASA Technical Reports Server (NTRS)
Redmond, Jim; Silverberg, Larry
1992-01-01
A method has been developed for comparing three optimal control strategies based on fuel consumption. A general cost function minimization procedure was developed by applying two theorems associated with convex sets. Three cost functions associated with control saturation, pseudofuel, and absolute fuel are introduced and minimized. The first two cost functions led to the bang-bang and continuous control strategies, and the minimization of absolute fuel led to an impulsive strategy. The three control strategies were implemented on two elementary systems and a comparison of fuel consumption was made. The impulse control strategy consumes significantly less fuel than the continuous and bang-bang control strategies. This comparison suggests a potential for fuel savings in higher-order systems using impulsive control strategies. However, since exact solutions to fuel-optimal control for large-order systems are difficult if not impossible to achieve, the alternative is to develop near-optimal control strategies.
Optimal randomized scheduling by replacement
Saias, I.
1996-05-01
In the replacement scheduling problem, a system is composed of n processors drawn from a pool of p. The processors can become faulty while in operation and faulty processors never recover. A report is issued whenever a fault occurs. This report states only the existence of a fault but does not indicate its location. Based on this report, the scheduler can reconfigure the system and choose another set of n processors. The system operates satisfactorily as long as, upon report of a fault, the scheduler chooses n non-faulty processors. We provide a randomized protocol maximizing the expected number of faults the system can sustain before the occurrence of a crash. The optimality of the protocol is established by considering a closely related dual optimization problem. The game-theoretic technical difficulties that we solve in this paper are very general and encountered whenever proving the optimality of a randomized algorithm in parallel and distributed computation.
Optimality, reduction and collective motion
Justh, Eric W.; Krishnaprasad, P. S.
2015-01-01
The planar self-steering particle model of agents in a collective gives rise to dynamics on the N-fold direct product of SE(2), the rigid motion group in the plane. Assuming a connected, undirected graph of interaction between agents, we pose a family of symmetric optimal control problems with a coupling parameter capturing the strength of interactions. The Hamiltonian system associated with the necessary conditions for optimality is reducible to a Lie–Poisson dynamical system possessing interesting structure. In particular, the strong coupling limit reveals additional (hidden) symmetry, beyond the manifest one used in reduction: this enables explicit integration of the dynamics, and demonstrates the presence of a ‘master clock’ that governs all agents to steer identically. For finite coupling strength, we show that special solutions exist with steering controls proportional across the collective. These results suggest that optimality principles may provide a framework for understanding imitative behaviours observed in certain animal aggregations. PMID:27547087
Integrated solar energy system optimization
NASA Astrophysics Data System (ADS)
Young, S. K.
1982-11-01
The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations like the TRNSYS program. Examples are provided for optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalinization plant, and a design analysis for a solar powered greenhouse.
Genetic Optimization of Optical Nanoantennas
NASA Astrophysics Data System (ADS)
Forestiere, Carlo; Pasquale, Alyssa; Capretti, Antonio; Lee, Sylvanus; Miano, Giovanni; Tamburrino, Antonello; Dal Negro, Luca
2012-02-01
Metal nanostructures can act as plasmonic nanoantennas (PNAs) due to their unique ability to concentrate light over sub-wavelength spatial regions. However, engineering the optimum PNA with respect to a given quality factor or objective function remains a challenging task. We propose a novel design strategy for PNAs that couples a genetic algorithm (GA) optimization tool to the analytical multi-particle Mie theory. The positions and radii of metallic nanosphere clusters are found by requiring maximum electric field enhancement at a given focus point. Within the optimization process we introduced several constraints in order to guarantee the physical realizability of the tailored nanostructure with electron-beam lithography (EBL). Our GA optimization results unveil the central role of radiative coupling in the design of PNAs and open up exciting new pathways in the engineering of metal nanostructures. Samples were fabricated using EBL, and surface-enhanced Raman scattering measurements were performed, confirming the theoretical predictions.
Excitation optimization for damage detection
Bement, Matthew T; Bewley, Thomas R
2009-01-01
A technique is developed to answer the important question: 'Given limited system response measurements and ever-present physical limits on the level of excitation, what excitation should be provided to a system to make damage most detectable?' Specifically, a method is presented for optimizing excitations that maximize the sensitivity of output measurements to perturbations in damage-related parameters estimated with an extended Kalman filter. This optimization is carried out in a computationally efficient manner using adjoint-based optimization and causes the innovations term in the extended Kalman filter to be larger in the presence of estimation errors, which leads to a better estimate of the damage-related parameters in question. The technique is demonstrated numerically on a nonlinear 2 DOF system, where a significant improvement in the damage-related parameter estimation is observed.
Optimal segmentation and packaging process
Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.
1999-01-01
A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
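The "maximize packaging density under container limits" subproblem is a bin-packing problem. A classic illustrative heuristic for it is first-fit decreasing; this sketch is not the invention's algorithm, only a standard stand-in with invented item sizes.

```python
# Illustrative only: first-fit-decreasing bin packing, a classic heuristic
# for packing items into the fewest containers of fixed capacity.  It is not
# the segmentation/packaging algorithm of the process described above.
def first_fit_decreasing(sizes, capacity):
    bins = []                        # each bin is a list of item sizes
    for size in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)       # item fits in an existing container
                break
        else:
            bins.append([size])      # no existing container fits: open a new one
    return bins

packed = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)   # 2 containers
```

In the real process the "capacity" constraint would also cover container geometry and radiation limits, which is why a full 3-D simulation is used rather than a one-dimensional heuristic like this.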
Accelerating optimization by tracing valley
NASA Astrophysics Data System (ADS)
Li, Qing-Xiao; He, Rong-Qiang; Lu, Zhong-Yi
2016-06-01
We propose an algorithm to accelerate optimization when an objective function locally resembles a long narrow valley. In such a case, a conventional optimization algorithm usually wanders with too many tiny steps in the valley. The new algorithm approximates the valley bottom locally by a parabola that is obtained by fitting a set of successive points generated recently by a conventional optimization method. Then large steps are taken along the parabola, accompanied by fine adjustment to trace the valley bottom. The effectiveness of the new algorithm has been demonstrated by accelerating the Newton trust-region minimization method and the Levenberg-Marquardt method on the nonlinear fitting problem in exact diagonalization dynamical mean-field theory and on the classic minimization problem of the Rosenbrock function. Speedups of many times have been achieved for both problems, showing the high efficiency of the new algorithm.
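The core step can be sketched directly: fit a parabola to recent iterates and extrapolate a large step along the fit. This is a hedged sketch of the idea, not the authors' implementation; the sample points below lie on the Rosenbrock valley y = x², so the fit recovers it exactly.

```python
# Hedged sketch of the core idea: approximate the valley bottom by a parabola
# fitted to recent iterates, then propose a large step along the fit.  The
# points below lie on the Rosenbrock valley y = x^2 for illustration.
import numpy as np

recent = np.array([[0.50, 0.25], [0.55, 0.3025], [0.60, 0.36],
                   [0.65, 0.4225], [0.70, 0.49]])        # recent (x, y) iterates
coeffs = np.polyfit(recent[:, 0], recent[:, 1], deg=2)   # fit y = a x^2 + b x + c

# take a large step along the fitted valley instead of a tiny descent step
x_next = recent[-1, 0] + 0.5
y_next = np.polyval(coeffs, x_next)
```

In the full algorithm this extrapolated point would then be refined by the underlying method (e.g. a trust-region step), which is the "fine adjustment" the abstract mentions.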
Optimal shapes for best draining
NASA Astrophysics Data System (ADS)
Sherwood, J. D.
2009-11-01
The container shape that minimizes the volume of draining fluid remaining on the walls of the container after it has been emptied from its base is determined. The film of draining fluid is assumed to wet the walls of the container, and is sufficiently thin so that its curvature may be neglected. Surface tension is ignored. The initial value problem for the thickness of a film of Newtonian fluid is studied, and is shown to lead asymptotically to a similarity solution. From this, and from equivalent solutions for power-law fluids, the volume of the residual film is determined. The optimal container shape is not far from hemispherical, to minimize the surface area, but has a conical base to promote draining. The optimal shape for an axisymmetric mixing vessel, with a hole at the center of its base for draining, is also optimal when inverted, in the manner of a washed wine glass left upside down to drain.
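The similarity solution the abstract refers to is, for the standard case of gravity drainage on a vertical wall with curvature and surface tension neglected, the classic thin-film result (symbols assumed: film thickness h, viscosity μ, density ρ, distance x down the wall):

```latex
% Standard thin-film drainage on a vertical wall (curvature and surface
% tension neglected), consistent with the description above; symbols assumed:
\begin{align*}
  \frac{\partial h}{\partial t}
  + \frac{\rho g h^{2}}{\mu}\,\frac{\partial h}{\partial x} = 0,
  \qquad\text{with similarity solution}\qquad
  h(x,t) = \sqrt{\frac{\mu x}{\rho g t}} .
\end{align*}
```

Substituting the similarity form back into the kinematic-wave equation verifies it term by term, and integrating h over the wall gives the residual film volume that the optimal shape minimizes.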
Maneuver Optimization through Simulated Annealing
NASA Astrophysics Data System (ADS)
de Vries, W.
2011-09-01
We developed an efficient method for satellite maneuver optimization. It is based on a Monte Carlo (MC) approach in combination with Simulated Annealing. The former component enables us to consider all imaginable trajectories possible given the current satellite position and its available thrust, while the latter approach ensures that we reliably find the best global optimization solution. Furthermore, this optimization setup is eminently scalable. It runs efficiently on the current multi-core generation of desktop computers, but is equally at home on massively parallel high performance computers (HPC). The baseline method for desktops uses a modified two-body propagator that includes the lunar gravitational force, and corrects for nodal and apsidal precession. For the HPC environment, on the other hand, we can include all the necessary components for a full force-model propagation: higher gravitational moments, atmospheric drag, solar radiation pressure, etc. A typical optimization scenario involves an initial orbit and a destination orbit / trajectory, a time period under consideration, and an available amount of thrust. After selecting a particular optimization (e.g., least amount of fuel, shortest maneuver), the program will determine when and in what direction to burn by what amount. Since we are considering all possible trajectories, we are not constrained to any particular transfer method (e.g., Hohmann transfers). Indeed, in some cases gravitational slingshots around the Earth turn out to be the best result. The paper will describe our approach in detail, its complement of optimizations for single- and multi-burn sequences, and some in-depth examples. In particular, we highlight an example where it is used to analyze a sequence of maneuvers after the fact, as well as showcase its utility as a planning and analysis tool for future maneuvers.
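The simulated-annealing core of such a method is the Metropolis accept/reject rule with a cooling temperature. This generic sketch replaces the paper's maneuver cost and orbital propagation with a toy one-dimensional cost; everything here is illustrative.

```python
# Generic simulated-annealing sketch (Metropolis rule with cooling).  The
# maneuver cost and orbital propagation of the paper are replaced by a toy
# 1-D cost; all parameters are invented for illustration.
import math
import random

random.seed(1)

def cost(x):                          # toy stand-in for delta-v / maneuver time
    return (x - 2.0) ** 2

x = -4.0                              # initial "maneuver plan"
best_x, best_c = x, cost(x)
temperature = 2.0
for step in range(2000):
    candidate = x + random.uniform(-0.5, 0.5)
    delta = cost(candidate) - cost(x)
    # always accept improvements; sometimes accept worse moves while hot
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    if cost(x) < best_c:
        best_x, best_c = x, cost(x)
    temperature *= 0.995              # cooling schedule
```

The Monte Carlo component of the paper's method corresponds to generating candidate trajectories; the cooling ensures the search settles on a global optimum rather than the nearest local one.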
Interaction prediction optimization in multidisciplinary design optimization problems.
Meng, Debiao; Zhang, Xiaoling; Huang, Hong-Zhong; Wang, Zhonglai; Xu, Huanwei
2014-01-01
The distributed strategy of Collaborative Optimization (CO) is suitable for large-scale engineering systems. However, CO converges poorly when the coupling between disciplines is high-dimensional. Furthermore, the discipline objectives cannot be considered in each discipline optimization problem. In this paper, one large-scale systems control strategy, the interaction prediction method (IPM), is introduced to enhance CO. IPM was originally used to control subsystems and coordinate the production process in large-scale systems. We combine the strategy of IPM with CO and propose the Interaction Prediction Optimization (IPO) method to solve MDO problems. As a hierarchical strategy, there are a system level and a subsystem level in IPO. The interaction design variables (including shared design variables and linking design variables) are operated at the system level and assigned to the subsystem level as design parameters. Each discipline objective is considered and optimized at the subsystem level simultaneously. The values of design variables are transported between system level and subsystem level. The compatibility constraints are replaced with the enhanced compatibility constraints to reduce the dimension of design variables in compatibility constraints. Two examples are presented to show the potential application of IPO for MDO.
Optimal sensor placement in structural health monitoring using discrete optimization
NASA Astrophysics Data System (ADS)
Sun, Hao; Büyüköztürk, Oral
2015-12-01
The objective of optimal sensor placement (OSP) is to obtain a sensor layout that gives as much information about the dynamic system as possible in structural health monitoring (SHM). The process of OSP can be formulated as a discrete minimization (or maximization) problem with the sensor locations as the design variables, conditional on the constraint of a given sensor number. In this paper, we propose a discrete optimization scheme based on the artificial bee colony algorithm to solve the OSP problem after first transforming it into an integer optimization problem. A modal assurance criterion-oriented objective function is investigated to measure the utility of a sensor configuration in the optimization process based on the modal characteristics of a reduced order model. The reduced order model is obtained using an iterated improved reduced system technique. The constraint is handled by a penalty term added to the objective function. Three examples, including a 27-bar truss bridge, a 21-storey building at the MIT campus and the 610 m high Canton Tower, are investigated to test the applicability of the proposed algorithm to OSP. In addition, the proposed OSP algorithm is experimentally validated on a physical laboratory structure which is a three-story two-bay steel frame instrumented with triaxial accelerometers. Results indicate that the proposed method is efficient and can be potentially used in OSP in practical SHM.
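The modal assurance criterion (MAC) underlying the objective discussed above compares two mode shapes: it equals 1 for identical shapes and 0 for orthogonal ones. A minimal sketch with invented mode-shape vectors:

```python
# Sketch of the modal assurance criterion (MAC) that underlies the
# MAC-oriented objective described above; the vectors are illustrative.
import numpy as np

def mac(phi_i, phi_j):
    """MAC between two mode shapes: 1 = identical shape, 0 = orthogonal."""
    return np.abs(phi_i @ phi_j) ** 2 / ((phi_i @ phi_i) * (phi_j @ phi_j))

phi_1 = np.array([1.0, 2.0, 3.0])
phi_2 = np.array([3.0, 0.0, -1.0])    # orthogonal to phi_1

m_self = mac(phi_1, phi_1)    # 1.0: a shape matches itself perfectly
m_cross = mac(phi_1, phi_2)   # 0.0: orthogonal shapes share no information
```

A MAC-oriented OSP objective rewards sensor layouts whose measured mode shapes stay mutually distinguishable, i.e. whose off-diagonal MAC values are small.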
Is optimism optimal? Functional causes of apparent behavioural biases.
Houston, Alasdair I; Trimmer, Pete C; Fawcett, Tim W; Higginson, Andrew D; Marshall, James A R; McNamara, John M
2012-02-01
We review the use of the terms 'optimism' and 'pessimism' to characterize particular types of behaviour in non-human animals. Animals can certainly behave as though they are optimistic or pessimistic with respect to specific motivations, as documented by an extensive range of examples in the literature. However, in surveying such examples we find that these terms are often poorly defined and are liable to lead to confusion. Furthermore, when considering behaviour within the framework of optimal decision theory using appropriate currencies, it is often misleading to describe animals as optimistic or pessimistic. There are two common misunderstandings. First, some apparent cases of biased behaviour result from misidentifying the currencies and pay-offs the animals should be maximising. Second, actions that do not maximise short-term pay-offs have sometimes been described as optimistic or pessimistic when in fact they are optimal in the long term; we show how such situations can be understood from the perspective of bandit models. Rather than describing suboptimal, unrealistic behaviour, the terms optimism and pessimism are better restricted to informal usage. Our review highlights the importance of choosing the relevant currency when attempting to predict the action of natural selection.
Optimal flow for brown trout: Habitat - prey optimization.
Fornaroli, Riccardo; Cabrini, Riccardo; Sartori, Laura; Marazzi, Francesca; Canobbio, Sergio; Mezzanotte, Valeria
2016-10-01
The correct definition of ecosystem needs is essential in order to guide policy and management strategies to optimize the increasing use of freshwater by human activities. Commonly, the assessment of the optimal or minimum flow rates needed to preserve ecosystem functionality has been done by habitat-based models that define a relationship between in-stream flow and habitat availability for various species of fish. We propose a new approach for the identification of optimal flows using the limiting factor approach and the evaluation of basic ecological relationships, considering the appropriate spatial scale for different organisms. We developed density-environment relationships for three different life stages of brown trout that show the limiting effects of hydromorphological variables at habitat scale. In our analyses, we found that the factors limiting the densities of trout were water velocity, substrate characteristics and refugia availability. For all the life stages, the selected models considered simultaneously two variables and implied that higher velocities provided a less suitable habitat, regardless of other physical characteristics and with different patterns. We used these relationships within habitat based models in order to select a range of flows that preserve most of the physical habitat for all the life stages. We also estimated the effect of varying discharge flows on macroinvertebrate biomass and used the obtained results to identify an optimal flow maximizing habitat and prey availability. PMID:27320735
Optimal singular control with applications to trajectory optimization
NASA Technical Reports Server (NTRS)
Vinh, N. X.
1979-01-01
The switching conditions are expressed explicitly in terms of the derivatives of the Hamiltonians at the two ends of the switching. A new expression of the Kelley-Contensou necessary condition for the optimality of a singular arc is given. Some examples illustrating the application of the theory are presented.
Thermodynamic Metrics and Optimal Paths
Sivak, David; Crooks, Gavin
2012-05-08
A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
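The central quantities of this formalism can be sketched as follows; the notation is assumed rather than quoted from the paper. A friction tensor is built from equilibrium fluctuations of the forces conjugate to the control parameters, it assigns a mean excess work to any slow protocol, and the associated thermodynamic length bounds that dissipation, with near-optimal protocols traversing the length at constant speed.

```latex
% Friction tensor from equilibrium force fluctuations at fixed control \lambda
\zeta_{ij}(\boldsymbol{\lambda}) \;=\; \beta \int_0^{\infty}
  \bigl\langle \delta X_i(t)\,\delta X_j(0) \bigr\rangle_{\boldsymbol{\lambda}}\, dt

% Mean excess (dissipated) work of a slow protocol \lambda(t), t \in [0, \tau]
\langle W_{\mathrm{ex}} \rangle \;=\; \int_0^{\tau}
  \dot{\boldsymbol{\lambda}}^{\mathsf T}\, \zeta(\boldsymbol{\lambda})\,
  \dot{\boldsymbol{\lambda}}\, dt

% Thermodynamic length and the dissipation bound it implies
\mathcal{L} \;=\; \int_0^{\tau}
  \sqrt{\dot{\boldsymbol{\lambda}}^{\mathsf T} \zeta(\boldsymbol{\lambda})
  \dot{\boldsymbol{\lambda}}}\; dt ,
\qquad
\langle W_{\mathrm{ex}} \rangle \;\ge\; \frac{\mathcal{L}^2}{\tau}
```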
An optimal repartitioning decision policy
NASA Technical Reports Server (NTRS)
Nicol, D. M.; Reynolds, P. F., Jr.
1986-01-01
A central problem in parallel processing is the determination of an effective partitioning of workload to processors. The effectiveness of any given partition depends on the stochastic nature of the workload. The problem of determining when and whether the stochastic behavior of the workload has changed enough to warrant the calculation of a new partition is treated. The problem is modeled as a Markov decision process, and an optimal decision policy is derived. Quantification of this policy is usually intractable. A heuristic policy which performs nearly optimally is investigated empirically. The results suggest that the detection of change is the predominant issue in this problem.
Design optimization of transonic airfoils
NASA Technical Reports Server (NTRS)
Joh, C.-Y.; Grossman, B.; Haftka, R. T.
1991-01-01
Numerical optimization procedures were considered for the design of airfoils in transonic flow based on the transonic small disturbance (TSD) and Euler equations. A sequential approximation optimization technique was implemented with an accurate approximation of the wave drag based on Nixon's coordinate straining approach. A modification of the Euler surface boundary conditions was implemented in order to efficiently compute design sensitivities without remeshing the grid. Two effective design procedures producing converged designs in approximately 10 global iterations were developed: interchanging the roles of the objective function and the constraint, and direct lift maximization with move limits, which were fixed absolute values of the design variables.
Distributed optimization system and method
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2003-06-10
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time-dependent sources, time-independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, or a multi-processor computer.
Configuration optimization of space structures
NASA Technical Reports Server (NTRS)
Felippa, Carlos; Crivelli, Luis A.; Vandenbelt, David
1991-01-01
The objective is to develop a computer aid for the conceptual/initial design of aerospace structures, allowing configurations and shape to be a priori design variables. The topics are presented in viewgraph form and include the following: Kikuchi's homogenization method; a classical shape design problem; homogenization method steps; a 3D mechanical component design example; forming a homogenized finite element; a 2D optimization problem; treatment of the volume inequality constraint; algorithms for the volume inequality constraint; objective function derivatives--taking advantage of design locality; stiffness variations; variations of potential; and schematics of the optimization problem.
Adaptive critics for dynamic optimization.
Kulkarni, Raghavendra V; Venayagamoorthy, Ganesh Kumar
2010-06-01
A novel action-dependent adaptive critic design (ACD) is developed for dynamic optimization. The proposed combination of a particle swarm optimization-based actor and a neural network critic is demonstrated through dynamic sleep scheduling of wireless sensor motes for wildlife monitoring. The objective of the sleep scheduler is to dynamically adapt the sleep duration to the node's battery capacity and the movement pattern of animals in its environment, in order to obtain uniformly spaced snapshots of an animal along its trajectory. Simulation results show that the sleep time of the node determined by the actor critic yields superior quality of sensory data acquisition and enhanced node longevity. PMID:20223635
Computational optimization and biological evolution.
Goryanin, Igor
2010-10-01
Modelling and optimization principles have become key concepts in many biological areas, especially in biochemistry. Definitions of objective function, fitness and co-evolution, although they differ between biology and mathematics, are similar in a general sense. Although successful in fitting models to experimental data and in some biochemical predictions, optimization and evolutionary computations should be developed further to make more accurate real-life predictions, and to deal not only with one organism in isolation but also with communities of symbiotic and competing organisms. One future goal will be to explain and predict evolution not only for organisms in shake flasks or fermenters, but for real competitive multispecies environments.
CHP Installed Capacity Optimizer Software
2004-11-30
The CHP Installed Capacity Optimizer is a Microsoft Excel spreadsheet application that determines the most economic amount of capacity of distributed generation and thermal utilization equipment (e.g., absorption chillers) to install for any user-defined set of load and cost data. Installing the optimum amount of capacity is critical to the life-cycle economic viability of a distributed generation/cooling heat and power (CHP) application. Using advanced optimization algorithms, the software accesses the loads, utility tariffs, equipment costs, etc., and provides to the user the most economic amount of system capacity to install.
Optimal Retirement with Increasing Longevity
Bloom, David E.; Canning, David; Moore, Michael
2014-01-01
We develop an optimizing life-cycle model of retirement with perfect capital markets. We show that longer healthy life expectancy usually leads to later retirement, but with an elasticity less than unity. We calibrate our model using data from the US and find that, over the last century, the effect of rising incomes, which promote early retirement, has dominated the effect of rising lifespans. Our model predicts continuing declines in the optimal retirement age, despite rising life expectancy, provided the rate of real wage growth remains as high as in the last century. PMID:24954970
Enhancing Polyhedral Relaxations for Global Optimization
ERIC Educational Resources Information Center
Bao, Xiaowei
2009-01-01
During the last decade, global optimization has attracted a lot of attention due to the increased practical need for obtaining global solutions and the success in solving many global optimization problems that were previously considered intractable. In general, the central question of global optimization is to find an optimal solution to a given…
Research on optimization-based design
NASA Astrophysics Data System (ADS)
Balling, R. J.; Parkinson, A. R.; Free, J. C.
1989-04-01
Research on optimization-based design is discussed. Illustrative examples are given for cases involving continuous optimization with discrete variables and optimization with tolerances. Approximation of computationally expensive and noisy functions; electromechanical actuator/control system design using decomposition; and application of knowledge-based systems and optimization to the design of a valve anti-cavitation device are among the topics covered.
Modular optimization code package: MOZAIK
NASA Astrophysics Data System (ADS)
Bekar, Kursat B.
This dissertation addresses the development of a modular optimization code package, MOZAIK, for geometric shape optimization problems in nuclear engineering applications. MOZAIK's first mission, determining the optimal shape of the D2O moderator tank for the current and new beam tube configurations for the Penn State Breazeale Reactor's (PSBR) beam port facility, is used to demonstrate its capabilities and test its performance. MOZAIK was designed as a modular optimization sequence including three primary independent modules: the initializer, the physics and the optimizer, each having a specific task. By using fixed interface blocks among the modules, the code attains its two most important characteristics: generic form and modularity. The benefit of this modular structure is that the contents of the modules can be switched depending on the requirements of accuracy, computational efficiency, or compatibility with the other modules. Oak Ridge National Laboratory's discrete ordinates transport code TORT was selected as the transport solver in the physics module of MOZAIK, and two different optimizers, Min-max and Genetic Algorithms (GA), were implemented in the optimizer module of the code package. A distributed memory parallelism was also applied to MOZAIK via MPI (Message Passing Interface) to execute the physics module concurrently on a number of processors for various states in the same search. Moreover, dynamic scheduling was enabled to enhance load balance among the processors while running MOZAIK's physics module thus improving the parallel speedup and efficiency. In this way, the total computation time consumed by the physics module is reduced by a factor close to M, where M is the number of processors. This capability also encourages the use of MOZAIK for shape optimization problems in nuclear applications because many traditional codes related to radiation transport do not have parallel execution capability. A set of computational models based on the
Shape Optimization of Swimming Sheets
Wilkening, J.; Hosoi, A.E.
2005-03-01
The swimming behavior of a flexible sheet which moves by propagating deformation waves along its body was first studied by G. I. Taylor in 1951. In addition to being of theoretical interest, this problem serves as a useful model of the locomotion of gastropods and various micro-organisms. Although the mechanics of swimming via wave propagation has been studied extensively, relatively little work has been done to define or describe optimal swimming by this mechanism. We carry out this objective for a sheet that is separated from a rigid substrate by a thin film of viscous Newtonian fluid. Using a lubrication approximation to model the dynamics, we derive the relevant Euler-Lagrange equations to optimize swimming speed and efficiency. The optimization equations are solved numerically using two different schemes: a limited memory BFGS method that uses cubic splines to represent the wave profile, and a multi-shooting Runge-Kutta approach that uses the Levenberg-Marquardt method to vary the parameters of the equations until the constraints are satisfied. The former approach is less efficient but generalizes nicely to the non-lubrication setting. For each optimization problem we obtain a one-parameter family of solutions that becomes singular in a self-similar fashion as the parameter approaches a critical value. We explore the validity of the lubrication approximation near this singular limit by monitoring higher order corrections to the zeroth order theory and by comparing the results with finite element solutions of the full Stokes equations.
Optimal timing in biological processes
Williams, B.K.; Nichols, J.D.
1984-01-01
A general approach for obtaining solutions to a class of biological optimization problems is provided. The general problem is one of determining the appropriate time to take some action, when the action can be taken only once during some finite time frame. The approach can also be extended to cover a number of other problems involving animal choice (e.g., mate selection, habitat selection). Returns (assumed to index fitness) are treated as random variables with time-specific distributions, and can be either observable or unobservable at the time action is taken. In the case of unobservable returns, the organism is assumed to base decisions on some ancillary variable that is associated with returns. Optimal policies are derived for both situations and their properties are discussed. Various extensions are also considered, including objective functions based on functions of returns other than the mean, nonmonotonic relationships between the observable variable and returns; possible death of the organism before action is taken; and discounting of future returns. A general feature of the optimal solutions for many of these problems is that an organism should be very selective (i.e., should act only when returns or expected returns are relatively high) at the beginning of the time frame and should become less and less selective as time progresses. An example of the application of optimal timing to a problem involving the timing of bird migration is discussed, and a number of other examples for which the approach is applicable are described.
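The qualitative result above, that the organism should be very selective early and progressively less selective as the time frame runs out, falls directly out of backward induction for observable returns. A minimal sketch, with an assumed stationary normal return distribution rather than the paper's model, and illustrative function names:

```python
import numpy as np

def stopping_thresholds(sample, T, n=200_000, seed=0):
    """Backward induction for a one-shot 'when to act' problem.

    sample(t, n, rng) draws n returns from the time-t distribution.
    Returns per-time thresholds: act at time t iff the observed return
    exceeds the expected value of continuing to wait."""
    rng = np.random.default_rng(seed)
    v = sample(T - 1, n, rng).mean()        # at the last step you must act
    thresholds = np.empty(T)
    thresholds[T - 1] = -np.inf
    for t in range(T - 2, -1, -1):
        thresholds[t] = v                   # value of continuing past t
        v = np.maximum(sample(t, n, rng), v).mean()
    return thresholds

# Stationary example: returns ~ N(0, 1) at every step, 6 opportunities.
thr = stopping_thresholds(lambda t, n, rng: rng.standard_normal(n), T=6)
```

Since the threshold at time t is the value of the remaining opportunities, and that value can only shrink as opportunities run out, the thresholds decrease monotonically, reproducing the declining selectivity described in the abstract.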
Wind Electrolysis: Hydrogen Cost Optimization
Saur, G.; Ramsden, T.
2011-05-01
This report describes a hydrogen production cost analysis of a collection of optimized central wind-based water electrolysis production facilities. The basic modeled wind electrolysis facility includes a number of low-temperature electrolyzers and a co-located wind farm encompassing a number of 3 MW wind turbines that provide electricity for the electrolyzer units.
Optimization for training neural nets.
Barnard, E
1992-01-01
Various techniques of optimizing criterion functions to train neural-net classifiers are investigated. These techniques include three standard deterministic techniques (variable metric, conjugate gradient, and steepest descent), and a new stochastic technique. It is found that the stochastic technique is preferable on problems with large training sets and that the convergence rates of the variable metric and conjugate gradient techniques are similar.
Optimizing Learning Through Effective Management.
ERIC Educational Resources Information Center
Mills, Earl S.; Wood, Duane R.
A model of an instructional program which uses principles of effective management to optimize learning for adult learners is described. The model is a result of the authors' work with the Institute for Personal and Career Development which is responsible for the external degree program of Central Michigan University. Most adult learners have…
Shape optimization of corrugated airfoils
NASA Astrophysics Data System (ADS)
Jain, Sambhav; Bhatt, Varun Dhananjay; Mittal, Sanjay
2015-12-01
The effect of corrugations on the aerodynamic performance of a Mueller C4 airfoil, placed at a 5° angle of attack and Re = 10,000, is investigated. A stabilized finite element method is employed to solve the incompressible flow equations in two dimensions. A novel parameterization scheme is proposed that enables representation of corrugations on the surface of the airfoil, and their spontaneous appearance in the shape optimization loop, if indeed they improve aerodynamic performance. Computations are carried out for different locations and numbers of corrugations, while holding their height fixed. The first corrugation causes an increase in lift and drag. Each of the later corrugations leads to a reduction in drag. Shape optimization of the Mueller C4 airfoil is carried out using various objective functions and optimization strategies, based on controlling airfoil thickness and camber. One of the optimal shapes leads to a 50% increase in lift coefficient and a 23% increase in aerodynamic efficiency compared to the Mueller C4 airfoil.
Robust, Optimal Subsonic Airfoil Shapes
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2014-01-01
A method has been developed to create an airfoil robust enough to operate satisfactorily in different environments. This method determines a robust, optimal, subsonic airfoil shape, beginning with an arbitrary initial airfoil shape, and imposes the necessary constraints on the design. Also, this method is flexible and extendible to a larger class of requirements and changes in constraints imposed.
Optimizing Requirements Decisions with KEYS
NASA Technical Reports Server (NTRS)
Jalali, Omid; Menzies, Tim; Feather, Martin
2008-01-01
Recent work with NASA's Jet Propulsion Laboratory has allowed for external access to five of JPL's real-world requirements models, anonymized to conceal proprietary information, but retaining their computational nature. Experimentation with these models, reported herein, demonstrates a dramatic speedup in the computations performed on them. These models have a well-defined goal: select mitigations that retire risks, which in turn increases the number of attainable requirements. Such a non-linear optimization is a well-studied problem. However, identification of not only (a) the optimal solution(s) but also (b) the key factors leading to them is less well studied. Our technique, called KEYS, shows a rapid way of simultaneously identifying the solutions and their key factors. KEYS improves on prior work by several orders of magnitude. Prior experiments with simulated annealing or treatment learning took tens of minutes to hours to terminate. KEYS runs much faster than that; e.g., for one model, KEYS ran 13,000 times faster than treatment learning (40 minutes versus 0.18 seconds). Processing these JPL models requires selecting the fewest mitigations while achieving the most requirements. With this paper, we challenge other members of the PROMISE community to improve on our results with other techniques.
Cochlear implant optimized noise reduction.
Mauger, Stefan J; Arora, Komal; Dawson, Pam W
2012-12-01
Noise-reduction methods have provided significant improvements in speech perception for cochlear implant recipients, whereas only quality improvements have been found for hearing aid recipients. Recent psychoacoustic studies have suggested changes to noise-reduction techniques specifically for cochlear implants, due to differences between the hearing of hearing aid recipients and cochlear implant recipients. An optimized noise-reduction method was developed with significantly increased temporal smoothing of the signal-to-noise ratio estimate and a more aggressive gain function compared to current noise-reduction methods. This optimized noise-reduction algorithm was tested with 12 cochlear implant recipients over four test sessions. Speech perception was assessed through speech-in-noise tests with three noise types: speech-weighted noise, 20-talker babble and 4-talker babble. A significant speech perception improvement using optimized noise reduction over standard processing was found in babble noise and speech-weighted noise, and over a current noise-reduction method in speech-weighted noise. Speech perception in quiet was not degraded. Listening quality testing for noise annoyance and overall preference found significant improvements over the standard processing and over a current noise-reduction method in speech-weighted and babble noise types. This optimized method has shown significant speech perception and quality improvements compared to the standard processing and a current noise-reduction method.
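A minimal sketch of the two ingredients described above, heavier temporal smoothing of the SNR estimate and a more aggressive gain function, assuming a one-pole smoother and a floored Wiener-style gain. The parameter values and function names are illustrative, not the paper's.

```python
import numpy as np

def smoothed_nr_gain(snr, alpha=0.95, gain_floor=0.1):
    """Per-frame noise-reduction gain from a linear-domain SNR estimate.

    alpha sets the temporal smoothing of the SNR track (larger = smoother,
    in the spirit of the heavier smoothing the study argues for); the
    Wiener-style gain with a low floor makes the suppression aggressive."""
    smoothed = np.empty(len(snr), dtype=float)
    s = float(snr[0])
    for i, x in enumerate(snr):
        s = alpha * s + (1.0 - alpha) * float(x)   # one-pole smoother
        smoothed[i] = s
    gain = smoothed / (smoothed + 1.0)             # Wiener gain SNR/(SNR+1)
    return np.maximum(gain, gain_floor)

# Toy SNR track: two high-SNR frames embedded in noise-dominated frames.
g = smoothed_nr_gain(np.array([0.1, 0.1, 10.0, 10.0, 0.1, 0.1]))
```

Because of the heavy smoothing, the gain ramps up gradually across the high-SNR frames instead of jumping instantly, which is the behavior change relative to conventional hearing-aid-style noise reduction that the study motivates.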
Soliton molecules: Experiments and optimization
Mitschke, Fedor
2014-10-06
Stable compound states of several fiber-optic solitons have recently been demonstrated. In the first experiment their shape was approximated, for want of a better description, by a sum of Gaussians. Here we discuss an optimization strategy which helps to find preferable shapes so that the generation of radiative background is reduced.
Aspects of illumination system optimization
NASA Astrophysics Data System (ADS)
Koshel, R. John
2004-09-01
This paper focuses on the facets of illumination system optimization, in particular the parameterization of objects, the number of rays that must be traced to properly sample their properties, and the optimization algorithm with its associated merit function. Non-interference ensures that the parameterized objects do not erroneously intersect each other or leave gaps during the steps of the optimization procedure. The required number of rays is based on a model developed for television cameras during their initial days of development. Using the signal-to-noise ratio, it provides the number of rays based on the desired contrast, feature size, and allowed error probability. A lightpipe is used to highlight the nuances of this model. The utility of using system symmetry to increase the effective ray count is also discussed. A modified simplex method of optimization is described. This algorithm provides quicker convergence than the standard simplex method, while remaining robust, accurate, and convergent. A previous example using a compound parabolic concentrator highlights the utility of this improvement.
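The camera-derived ray-count model itself is not reproduced here, but the generic Poisson-statistics version of the same argument can be sketched: the shot noise of the ray count in a feature-sized bin must stay below the contrast to be resolved, at the allowed error probability. The function and its constants are this sketch's assumptions, not the paper's model.

```python
import math
from statistics import NormalDist

def rays_required(contrast, frac_in_feature, p_error):
    """Total rays to trace so a feature is resolved above shot noise.

    contrast:        relative signal level to detect (e.g. 0.1 for 10%)
    frac_in_feature: fraction of all traced rays landing in the feature bin
    p_error:         allowed probability of reading noise as signal
    """
    z = NormalDist().inv_cdf(1.0 - p_error)      # one-sided z-score
    n_bin = (z / contrast) ** 2                  # keep 1/sqrt(n) below contrast
    return math.ceil(n_bin / frac_in_feature)    # scale up to the whole trace
```

For example, resolving a 10% contrast feature that collects 1% of the rays at a 5% error probability requires on the order of 3 x 10^4 rays; halving the contrast quadruples the budget, which is why ray counts grow quickly with the fidelity demanded of the merit function.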
Acoustic design by topology optimization
NASA Astrophysics Data System (ADS)
Dühring, Maria B.; Jensen, Jakob S.; Sigmund, Ole
2008-11-01
To bring down noise levels in human surroundings is an important issue, and a method to reduce noise by means of topology optimization is presented here. The acoustic field is modeled by the Helmholtz equation, and the topology optimization method is based on continuous material interpolation functions in the density and bulk modulus. The objective function is the squared sound pressure amplitude. First, room acoustic problems are considered and it is shown that the sound level can be reduced in a certain part of the room by an optimized distribution of reflecting material in a design domain along the ceiling, or by a distribution of absorbing and reflecting material along the walls. We obtain well-defined optimized designs for a single frequency or a frequency interval for both 2D and 3D problems when considering low frequencies. Second, it is shown that the method can be applied to design outdoor sound barriers in order to reduce the sound level in the shadow zone behind the barrier. Reductions of up to 10 dB for a single barrier, and almost 30 dB when using two barriers, are achieved compared to conventional sound barriers.
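The governing equation and a plausible interpolation scheme can be sketched in the notation below; the inverse-weighted interpolation form is an assumption in the spirit of the paper, not its exact functions.

```latex
% Acoustic pressure p(x) at angular frequency \omega (Helmholtz equation)
\nabla \cdot \Bigl( \frac{1}{\rho(\mathbf{x})} \nabla p \Bigr)
  + \frac{\omega^2}{K(\mathbf{x})}\, p = 0

% Continuous material interpolation with design field \xi \in [0, 1]
% (\xi = 0: air, \xi = 1: solid/reflecting material)
\frac{1}{\rho(\xi)} = \frac{1-\xi}{\rho_{\mathrm{air}}}
  + \frac{\xi}{\rho_{\mathrm{s}}},
\qquad
\frac{1}{K(\xi)} = \frac{1-\xi}{K_{\mathrm{air}}}
  + \frac{\xi}{K_{\mathrm{s}}}

% Objective: squared pressure amplitude over the region \Omega_o to be quieted
\min_{\xi}\; \Phi = \int_{\Omega_o} |p|^2 \, d\mathbf{x}
```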
Local optimization of energy systems
Lozano, M.A.; Valero, A.; Serra, L.
1996-12-31
Many thermal systems are very complex due to the number of components and/or their strong interdependence. This complexity makes the optimization of system design and operation difficult. The theory of Exergetic Cost is based on concepts such as resources, structure, efficiency and purpose (belonging to any theory of production) and on the Second Law. This paper shows how the theory of exergetic cost makes it possible to obtain the marginal costs (Lagrange multipliers) of the local resources consumed by a component. It also shows the advantage of the proposed Theory of Perturbations in describing the complexity of structural interactions in a straightforward way. This theory allows simple procedures to be formulated for the local optimization of components in a plant. Finally, strategies for the optimization of complex systems are shown. They are based on sequential optimization from component to component. This clear and efficient method comes from the fact that the authors now have an operative application of the Thermoeconomic Isolation Principle, which is applied here to thermal power plants.
Optimal Foraging in Semantic Memory
ERIC Educational Resources Information Center
Hills, Thomas T.; Jones, Michael N.; Todd, Peter M.
2012-01-01
Do humans search in memory using dynamic local-to-global search strategies similar to those that animals use to forage between patches in space? If so, do their dynamic memory search policies correspond to optimal foraging strategies seen for spatial foraging? Results from a number of fields suggest these possibilities, including the shared…
Optimal control of native predators
Martin, Julien; O'Connell, Allan F.; Kendall, William L.; Runge, Michael C.; Simons, Theodore R.; Waldstein, Arielle H.; Schulte, Shiloh A.; Converse, Sarah J.; Smith, Graham W.; Pinion, Timothy; Rikard, Michael; Zipkin, Elise F.
2010-01-01
We apply decision theory in a structured decision-making framework to evaluate how control of raccoons (Procyon lotor), a native predator, can promote the conservation of a declining population of American Oystercatchers (Haematopus palliatus) on the Outer Banks of North Carolina. Our management objective was to maintain Oystercatcher productivity above a level deemed necessary for population recovery while minimizing raccoon removal. We evaluated several scenarios including no raccoon removal, and applied an adaptive optimization algorithm to account for parameter uncertainty. We show how adaptive optimization can be used to account for uncertainties about how raccoon control may affect Oystercatcher productivity. Adaptive management can reduce this type of uncertainty and is particularly well suited for addressing controversial management issues such as native predator control. The case study also offers several insights that may be relevant to the optimal control of other native predators. First, we found that stage-specific removal policies (e.g., yearling versus adult raccoon removals) were most efficient if the reproductive values among stage classes were very different. Second, we found that the optimal control of raccoons would result in higher Oystercatcher productivity than the minimum levels recommended for this species. Third, we found that removing more raccoons initially minimized the total number of removals necessary to meet long term management objectives. Finally, if for logistical reasons managers cannot sustain a removal program by removing a minimum number of raccoons annually, managers may run the risk of creating an ecological trap for Oystercatchers.
Optimal Preprocessing Of GPS Data
NASA Technical Reports Server (NTRS)
Wu, Sien-Chong; Melbourne, William G.
1994-01-01
Improved technique for preprocessing data from Global Positioning System (GPS) receiver reduces processing time and number of data to be stored. Technique optimal in sense it maintains strength of data. Also sometimes increases ability to resolve ambiguities in numbers of cycles of received GPS carrier signals.
Optimal Preprocessing Of GPS Data
NASA Technical Reports Server (NTRS)
Wu, Sien-Chong; Melbourne, William G.
1994-01-01
Improved technique for preprocessing data from Global Positioning System receiver reduces processing time and number of data to be stored. Optimal in sense that it maintains strength of data. Also increases ability to resolve ambiguities in numbers of cycles of received GPS carrier signals.
Triangle interferometer and its optimization.
NASA Astrophysics Data System (ADS)
Zhang, Hui; Xu, Jiayan; Xiao, Jinhong; Wang, Zhengming
1991-12-01
The optimal configuration of the triangle for determining the ERP is an equilateral triangle or an isosceles right triangle. Analysing data observed by the Connected Element Interferometer at Green Bank shows that the accuracy of the ERP solved from observations on three baselines is much better than that solved from only two.
Optimization in Bilingual Language Use
ERIC Educational Resources Information Center
Bhatt, Rakesh M.
2013-01-01
Pieter Muysken's keynote paper, "Language contact outcomes as a result of bilingual optimization strategies", undertakes an ambitious project to theoretically unify different empirical outcomes of language contact, for instance, SLA, pidgins and Creoles, and code-switching. Muysken has dedicated a life-time to researching, rather…
Optimizing use of library technology.
Wink, Diane M; Killingsworth, Elizabeth K
2011-01-01
In this bimonthly series, the author examines how nurse educators can use the Internet and Web-based computer technologies such as search, communication, and collaborative writing tools; social networking and social bookmarking sites; virtual worlds; and Web-based teaching and learning programs. This article describes optimizing the use of library technology.
Optimal deployment of solar index
Croucher, Matt
2010-11-15
There is a growing trend, generally driven by state-specific renewable portfolio standards (RPS), toward increasing the share of renewable electricity generation within generation portfolios. While RPS assist with determining the composition of generation, they do not, for the most part, dictate the location of generation. Using data from various public sources, the authors create an optimal index for solar deployment. (author)
Global optimization of digital circuits
NASA Astrophysics Data System (ADS)
Flandera, Richard
1991-12-01
This thesis was divided into two tasks. The first task involved developing a parser which could translate a behavioral specification in Very High-Speed Integrated Circuits (VHSIC) Hardware Description Language (VHDL) into the format used by an existing digital circuit optimization tool, Boolean Reasoning In Scheme (BORIS). Since this tool is written in Scheme, a dialect of Lisp, the parser was also written in Scheme. The parser was implemented using Artez's modification of Earley's algorithm. Additionally, a VHDL tokenizer was implemented in Scheme and a portion of the VHDL grammar was converted into the format which the parser uses. The second task was the incorporation of intermediate functions into BORIS. The existing BORIS contains a recursive optimization system that optimizes digital circuits by using circuit outputs as inputs into other circuits. Intermediate functions provide a greater selection of functions to be used as circuit inputs. Using both intermediate functions and output functions, the costs of the circuits in the test set were reduced by 43 percent. This is a 10 percent reduction when compared to the existing recursive optimization system. Incorporating intermediate functions into BORIS required the development of an intermediate-function generator and a set of control methods to keep the computation time from increasing exponentially.
Multilevel algorithms for nonlinear optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.
Four-body trajectory optimization
NASA Technical Reports Server (NTRS)
Pu, C. L.; Edelbaum, T. N.
1973-01-01
A collection of typical three-body trajectories from the L1 libration point on the sun-earth line to the earth is presented. These trajectories in the sun-earth system are grouped into four distinct families which differ in transfer time and delta V requirements. Curves showing the variations of delta V with respect to transfer time, and typical two- and three-impulse primer vector histories, are included. The development of a four-body trajectory optimization program to compute fuel-optimal trajectories between the earth and a point in the sun-earth-moon system is also discussed. Methods for generating fuel-optimal two-impulse trajectories which originate at the earth or a point in space, and fuel-optimal three-impulse trajectories between two points in space, are presented. A brief qualitative comparison of these methods is given. An example of a four-body two-impulse transfer from the L1 libration point to the earth is included.
Optimization of Actuating Origami Networks
NASA Astrophysics Data System (ADS)
Buskohl, Philip; Fuchi, Kazuko; Bazzan, Giorgio; Joo, James; Reich, Gregory; Vaia, Richard
2015-03-01
Origami structures morph between 2D and 3D conformations along predetermined fold lines that efficiently program the form, function and mobility of the structure. By leveraging design concepts from action origami, a subset of origami art focused on kinematic mechanisms, reversible folding patterns for applications such as solar array packaging, tunable antennae, and deployable sensing platforms may be designed. However, the enormous size of the design space and the need to identify the requisite actuation forces within the structure place a severe limitation on design strategies based on intuition and geometry alone. The present work proposes a topology optimization method, using truss and frame element analysis, to distribute fold-line mechanical properties within a reference crease pattern. Known actuating patterns are placed within a reference grid and the optimizer adjusts the fold stiffness of the network to optimally connect them. Design objectives may include a target motion, stress level, or mechanical energy distribution. Results include the validation of known action origami structures and their optimal connectivity within a larger network. This design suite offers an important step toward the systematic incorporation of origami design concepts into novel, reconfigurable engineering devices. This research is supported under the Air Force Office of Scientific Research (AFOSR) funding, LRIR 13RQ02COR.
Multicriteria optimization informed VMAT planning.
Chen, Huixiao; Craft, David L; Gierga, David P
2014-01-01
We developed a patient-specific volumetric-modulated arc therapy (VMAT) optimization procedure using dose-volume histogram (DVH) information from multicriteria optimization (MCO) of intensity-modulated radiotherapy (IMRT) plans. The study included 10 patients with prostate cancer undergoing standard fractionation treatment, 10 patients with prostate cancer undergoing hypofractionation treatment, and 5 patients with head/neck cancer. MCO-IMRT plans using 20 and 7 treatment fields were generated for each patient on the RayStation treatment planning system (clinical version 2.5, RaySearch Laboratories, Stockholm, Sweden). The resulting DVH of the 20-field MCO-IMRT plan for each patient was used as the reference DVH, and the extracted point values of the resulting DVH of the MCO-IMRT plan were used as objectives and constraints for VMAT optimization. Weights of objectives or constraints of VMAT optimization or both were further tuned to generate the best match with the reference DVH of the MCO-IMRT plan. The final optimal VMAT plan quality was evaluated by comparison with MCO-IMRT plans based on homogeneity index, conformity number of planning target volume, and organ-at-risk sparing. The influence of gantry spacing, arc number, and delivery time on VMAT plan quality for different tumor sites was also evaluated. The resulting VMAT plan quality essentially matched the 20-field MCO-IMRT plan but with a shorter delivery time and fewer monitor units. VMAT plan quality of head/neck cancer cases improved using dual arcs whereas prostate cases did not. VMAT plan quality was improved by a fine gantry spacing of 2° for the head/neck cancer cases and the hypofractionation-treated prostate cancer cases but not for the standard fractionation-treated prostate cancer cases. MCO-informed VMAT optimization is a useful and valuable way to generate patient-specific optimal VMAT plans, though modification of the weights of objectives or constraints extracted from resulting DVH of MCO-IMRT or
Optimal control of hydroelectric facilities
NASA Astrophysics Data System (ADS)
Zhao, Guangzhi
This thesis considers a simple yet realistic model of pump-assisted hydroelectric facilities operating in a market with time-varying but deterministic power prices. Both deterministic and stochastic water inflows are considered. The fluid mechanical and engineering details of the facility are described by a model containing several parameters. We present a dynamic programming algorithm for optimizing either the total energy produced or the total cash generated by these plants. The algorithm allows us to give the optimal control strategy as a function of time and to see how this strategy, and the associated plant value, varies with water inflow and electricity price. We investigate various cases. For a single pumped storage facility experiencing deterministic power prices and water inflows, we investigate the varying behaviour for an oversimplified constant turbine- and pump-efficiency model with simple reservoir geometries. We then generalize this simple model to include more realistic turbine efficiencies, situations with more complicated reservoir geometry, and the introduction of dissipative switching costs between various control states. We find many results which reinforce our physical intuition about this complicated system as well as results which initially challenge, though later deepen, this intuition. One major lesson of this work is that the optimal control strategy does not differ much between two differing objectives of maximizing energy production and maximizing its cash value. We then turn our attention to the case of stochastic water inflows. We present a stochastic dynamic programming algorithm which can find an on-average optimal control in the face of this randomness. As the operator of a facility must be more cautious when inflows are random, the randomness destroys facility value. Following this insight we quantify exactly how much a perfect hydrological inflow forecast would be worth to a dam operator. In our final chapter we discuss the
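The backward-induction dynamic programming described above can be sketched for a toy pump-assisted facility. The discretized reservoir volume, unit pump/turbine steps, efficiency values, and price series below are illustrative assumptions, not the thesis's actual model:

```python
# Backward-induction sketch: state = discretized reservoir volume (units),
# actions each period = generate (-1 unit), idle (0), or pump (+1 unit).
def optimal_dispatch(prices, v0=5, v_max=10, eff_turbine=0.9, eff_pump=0.8):
    """Maximize cash over a deterministic price horizon by dynamic programming."""
    T = len(prices)
    value = [0.0] * (v_max + 1)  # value-to-go at the horizon: leftover water is worthless
    policy = []
    for t in reversed(range(T)):
        new_value, step = [0.0] * (v_max + 1), [0] * (v_max + 1)
        for v in range(v_max + 1):
            best, best_a = value[v], 0                       # idle
            if v >= 1:                                       # release one unit, sell energy
                cand = prices[t] * eff_turbine + value[v - 1]
                if cand > best:
                    best, best_a = cand, -1
            if v <= v_max - 1:                               # buy energy, pump one unit up
                cand = -prices[t] / eff_pump + value[v + 1]
                if cand > best:
                    best, best_a = cand, +1
            new_value[v], step[v] = best, best_a
        value = new_value
        policy.append(step)
    policy.reverse()
    # roll the optimal policy forward from the initial volume
    v, cash, actions = v0, 0.0, []
    for t in range(T):
        a = policy[t][v]
        if a == -1:
            cash += prices[t] * eff_turbine
        elif a == 1:
            cash -= prices[t] / eff_pump
        v += a
        actions.append(a)
    return actions, cash
```

With alternating cheap/expensive hours, the policy generates in the expensive hours and pumps only when a pump-generate cycle beats the round-trip efficiency loss, matching the intuition described in the abstract.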
Optimal load shedding and restoration
NASA Astrophysics Data System (ADS)
Xu, Ding
Load shedding is an emergency control action in power systems that can save systems from a wide-area blackout. Underfrequency load shedding, steady-state load shedding, and voltage load shedding are widely used in power systems. These methods utilize either the steady-state model or a simplified dynamic model to represent a power system. In this dissertation, a general optimal load shedding method that considers both the dynamic process and load distribution is proposed. The load shedding problem is then formulated as an optimization problem with the objective function of cost minimization. This objective function is subject to system, security, and operation constraints. The entire problem becomes an optimization problem with differential and nonlinear equations as constraints. To solve this problem, discretization is used to change the differential equations into algebraic equations. The original problem is thus reformulated as an optimization problem and can be solved by a standard mathematical program. The general idea is then applied to traditional power systems, deregulated power systems, power systems with distributed generation, and load restoration. In the traditional power system, the method shows that governor action, generation dynamics, disturbance location, and economic factors can be taken into consideration. In the deregulated power system, two power market models are developed and incorporated into the load shedding scheme. In power systems with multiple distributed generations, different disturbance cases are analyzed and models of different distributed generation are developed. The general idea is then applied. Finally, the load restoration problem is studied, and an optimization method is applied to it. This dissertation provides a comprehensive solution for the load shedding problem in power systems. The models developed in this research can also be used to study other power system problems.
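The discretization step, turning differential constraints into algebraic ones so a standard optimizer can handle them, can be illustrated on a one-machine toy problem. The simplified frequency equation, per-unit parameter values, threshold, and brute-force grid search below are illustrative assumptions standing in for the dissertation's full formulation and mathematical program:

```python
# Forward-Euler discretization turns the simplified frequency ODE
#   2H * df/dt = P_mech - (1 - s) * P_load      (per unit, damping ignored)
# into algebraic constraints f[k+1] = f[k] + dt * (P_mech - (1-s)*P_load) / (2H),
# so the shed fraction s can be chosen by an ordinary optimizer.
def frequency_trajectory(shed, p_mech=0.8, p_load=1.0, h=4.0, dt=0.1, steps=50):
    """Discretized per-unit frequency trajectory for a given shed fraction."""
    f, traj = 1.0, [1.0]
    for _ in range(steps):
        f = f + dt * (p_mech - (1.0 - shed) * p_load) / (2.0 * h)
        traj.append(f)
    return traj

def minimal_shed(f_min=0.99, grid=101):
    """Smallest shed fraction (cost proxy) whose discretized trajectory stays
    above f_min; a grid search stands in for the mathematical program."""
    for i in range(grid):
        s = i / (grid - 1)
        if min(frequency_trajectory(s)) >= f_min:
            return s
    return 1.0
```

The security constraint "frequency stays above f_min" is enforced pointwise at the discretization nodes, which is exactly how the continuous constraint enters the algebraic optimization problem.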
Optimization of multi-constrained structures based on optimality criteria
NASA Technical Reports Server (NTRS)
Rizzi, P.
1976-01-01
A weight-reduction algorithm is developed for the optimal design of structures subject to several multibehavioral inequality constraints. The structural weight is considered to depend linearly on the design variables. The algorithm incorporates a simple recursion formula derived from the Kuhn-Tucker necessary conditions for optimality, associated with a procedure to delete nonactive constraints based on the Gauss-Seidel iterative method for linear systems. A number of example problems are studied, including typical truss structures and simplified wings subject to static loads and with constraints imposed on stresses and displacements. For one of the latter structures, constraints on the fundamental natural frequency and flutter speed are also imposed. The results obtained show that the method is fast, efficient, and general when compared to other competing techniques. Extension of the method to include equality constraints and nonlinear merit functions is discussed.
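The flavor of a Kuhn-Tucker-derived resizing recursion can be shown on a minimal sizing problem: minimize weight sum(l_i * a_i) subject to one displacement constraint sum(c_i / a_i) <= d_max. The separable displacement model, the damping exponent eta, and the bisection for the Lagrange multiplier below are illustrative assumptions, not the paper's exact formula:

```python
import math

def displacement(a, coeffs):
    """Displacement model sum(c_i / a_i) for member areas a."""
    return sum(c / ai for c, ai in zip(coeffs, a))

def oc_iteration(a, lengths, coeffs, d_max, eta=0.5):
    """One resizing step a_i <- a_i * (lam*c_i/(l_i*a_i^2))**eta, with the
    multiplier lam found by bisection so the constraint is active."""
    def resized(lam):
        return [ai * (lam * c / (l * ai * ai)) ** eta
                for ai, c, l in zip(a, coeffs, lengths)]
    lo, hi = 1e-9, 1e9
    for _ in range(100):
        mid = math.sqrt(lo * hi)          # bisect in log space
        if displacement(resized(mid), coeffs) > d_max:
            lo = mid                      # members too small: raise the multiplier
        else:
            hi = mid
    return resized(hi)

def optimize(lengths, coeffs, d_max, iters=30):
    """Repeat the recursion from a uniform starting design."""
    a = [1.0] * len(lengths)
    for _ in range(iters):
        a = oc_iteration(a, lengths, coeffs, d_max)
    return a
```

At the fixed point the recursion satisfies the stationarity condition l_i = lam * c_i / a_i**2, i.e., the Kuhn-Tucker conditions for this single-constraint problem.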
Optimal feeding vs. optimal swimming of model ciliates
NASA Astrophysics Data System (ADS)
Michelin, Sebastien; Lauga, Eric
2011-11-01
To swim at low Reynolds numbers, micro-organisms create flow fields that modify the transport of nutrients around them, thereby impacting their feeding rate. When the nutrient is a passive scalar, the feeding rate of a given micro-swimmer greatly varies with the Péclet number (Pe), a relative measure of advection to diffusion in the nutrient transport that strongly depends on the nutrient species considered. Using an axisymmetric envelope model for ciliary locomotion and adjoint-based optimization, we determine the swimming (or possibly non-swimming) strokes maximizing the nutrient uptake by the micro-swimmer for a given energy cost. We show that, unlike the feeding rate, this optimal feeding stroke is essentially independent of the Péclet number (and, therefore, of the nutrient considered) and is identical to the stroke with maximum swimming efficiency.
Dynamic optimization identifies optimal programmes for pathway regulation in prokaryotes.
Bartl, Martin; Kötzing, Martin; Schuster, Stefan; Li, Pu; Kaleta, Christoph
2013-01-01
To survive in fluctuating environmental conditions, microorganisms must be able to quickly react to environmental challenges by upregulating the expression of genes encoding metabolic pathways. Here we show that protein abundance and protein synthesis capacity are key factors that determine the optimal strategy for the activation of a metabolic pathway. If protein abundance relative to protein synthesis capacity increases, the strategies shift from the simultaneous activation of all enzymes to the sequential activation of groups of enzymes and finally to a sequential activation of individual enzymes along the pathway. In the case of pathways with large differences in protein abundance, even more complex pathway activation strategies with a delayed activation of low abundance enzymes and an accelerated activation of high abundance enzymes are optimal. We confirm the existence of these pathway activation strategies as well as their dependence on our proposed constraints for a large number of metabolic pathways in several hundred prokaryotes.
Surface Navigation Using Optimized Waypoints and Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Birge, Brian
2013-01-01
The design priority for manned space exploration missions is almost always placed on human safety. Proposed manned surface exploration tasks (lunar, asteroid sample returns, Mars) have the possibility of astronauts traveling several kilometers away from a home base. Deviations from preplanned paths are expected while exploring. In a time-critical emergency situation, there is a need to develop an optimal home base return path. The return path may or may not be similar to the outbound path, and what defines optimal may change with, and even within, each mission. A novel path planning algorithm and prototype program was developed using biologically inspired particle swarm optimization (PSO) that generates an optimal path of traversal while avoiding obstacles. Applications include emergency path planning on lunar, Martian, and/or asteroid surfaces, generating multiple scenarios for outbound missions, Earth-based search and rescue, as well as human manual traversal and/or path integration into robotic control systems. The strategy allows for a changing environment, and can be re-tasked at will and run in real-time situations. Given a random extraterrestrial planetary or small body surface position, the goal was to find the fastest (or shortest) path to an arbitrary position such as a safe zone or geographic objective, subject to possibly varying constraints. The problem requires a workable solution 100% of the time, though it does not require the absolute theoretical optimum. Obstacles should be avoided, but if they cannot be, then the algorithm needs to be smart enough to recognize this and deal with it. With some modifications, it works with non-stationary error topologies as well.
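A stripped-down version of the PSO path-planning idea can be sketched as follows. The start/goal coordinates, single circular obstacle, penalty weight, waypoint count, and swarm hyperparameters are all illustrative assumptions, not the values used in the prototype:

```python
import random

START, GOAL = (0.0, 0.0), (10.0, 10.0)   # hypothetical base and objective
OBSTACLES = [(5.0, 5.0, 1.5)]            # (x, y, radius) no-go zones
N_WAY = 3                                # free waypoints per candidate path

def cost(flat):
    """Path length plus a heavy penalty for any waypoint inside an obstacle."""
    pts = [START] + [(flat[2*i], flat[2*i+1]) for i in range(N_WAY)] + [GOAL]
    length = sum(((x2 - x1)**2 + (y2 - y1)**2) ** 0.5
                 for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
    penalty = sum(100.0 for (x, y) in pts[1:-1] for (ox, oy, r) in OBSTACLES
                  if (x - ox)**2 + (y - oy)**2 < r * r)
    return length + penalty

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Standard global-best PSO over flattened waypoint coordinates."""
    rng = random.Random(seed)
    dim = 2 * N_WAY
    xs = [[rng.uniform(0, 10) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - x[d])
                            + c2 * rng.random() * (gbest[d] - x[d]))
                x[d] += vs[i][d]
            if cost(x) < cost(pbest[i]):
                pbest[i] = x[:]
        gbest = min(pbest + [gbest], key=cost)
    return gbest, cost(gbest)
```

Because only waypoints (not segments) are penalized here, a real planner would also test segment-obstacle intersections; the sketch shows only the swarm mechanics and the penalty idea.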
Modal test optimization using VETO (Virtual Environment for Test Optimization)
Klenke, S.E.; Reese, G.M.; Schoof, L.A.; Shierling, C.L.
1995-12-01
We present a software environment integrating analysis and test-based models to support optimal modal test design through a Virtual Environment for Test Optimization (VETO). The VETO assists analysis and test engineers in maximizing the value of each modal test. It is particularly advantageous for structural dynamics model reconciliation applications. The VETO enables an engineer to interact with a finite element model of a test object to optimally place sensors and exciters and to investigate the selection of data acquisition parameters needed to conduct a complete modal survey. Additionally, the user can evaluate the use of different types of instrumentation such as filters, amplifiers and transducers for which models are available in the VETO. The dynamic response of most of the virtual instruments (including the device under test) is modeled in the state space domain. Design of modal excitation levels and appropriate test instrumentation is facilitated by the VETO's ability to simulate such features as unmeasured external inputs, A/D quantization effects, and electronic noise. Measures of the quality of the experimental design, including the Modal Assurance Criterion and the Normal Mode Indicator Function, are available. The VETO also integrates tools such as Effective Independence and minamac to assist in selection of optimal sensor locations. The software is designed about three distinct modules: (1) a main controller and GUI written in C++, (2) a visualization model, taken from FEAVR, running under AVS, and (3) a state space model and time integration module, built in SIMULINK. These modules are designed to run as separate processes on interconnected machines. MATLAB's external interface library is used to provide transparent, bidirectional communication between the controlling program and the computational engine where all the time integration is performed.
A novel bee swarm optimization algorithm for numerical function optimization
NASA Astrophysics Data System (ADS)
Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush
2010-10-01
The optimization algorithms which are inspired by the intelligent behavior of honey bees are among the most recently introduced population based techniques. In this paper, a novel algorithm called bee swarm optimization, or BSO, and its two extensions for improving its performance are presented. The BSO is a population based optimization technique which is inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces different approaches such as repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing algorithms based on the intelligent behavior of honey bees on a set of well-known numerical test functions. The experimental results show that the BSO algorithms are effective and robust, produce excellent results, and outperform the other algorithms investigated in this comparison.
Modal test optimization using VETO (Virtual Environment for Test Optimization)
Klenke, S.E.; Reese, G.M.; Schoof, L.A.; Shierling, C.
1996-01-01
We present a software environment integrating analysis and test-based models to support optimal modal test design through a Virtual Environment for Test Optimization (VETO). A goal in developing this software tool is to provide test and analysis organizations with a capability of mathematically simulating the complete test environment in software. Derived models of test equipment, instrumentation and hardware can be combined within the VETO to provide the user with a unique analysis and visualization capability to evaluate new and existing test methods. The VETO assists analysis and test engineers in maximizing the value of each modal test. It is particularly advantageous for structural dynamics model reconciliation applications. The VETO enables an engineer to interact with a finite element model of a test object to optimally place sensors and exciters and to investigate the selection of data acquisition parameters needed to conduct a complete modal survey. Additionally, the user can evaluate the use of different types of instrumentation such as filters, amplifiers and transducers for which models are available in the VETO. The dynamic response of most of the virtual instruments (including the device under test) is modeled in the state space domain. Design of modal excitation levels and appropriate test instrumentation is facilitated by the VETO's ability to simulate such features as unmeasured external inputs, A/D quantization effects, and electronic noise. Measures of the quality of the experimental design, including the Modal Assurance Criterion and the Normal Mode Indicator Function, are available.
Reentry trajectory optimization and control
NASA Astrophysics Data System (ADS)
Strohmaier, P.; Kiefer, A.; Burkhardt, D.; Horn, K.
1990-06-01
There are several possible methods to increase the cross range capability of a winged reentry vehicle, for instance, skip trajectories, a powered cruise phase, or high lift/drag ratio flight. However, most of these alternative descent strategies have not yet been investigated sufficiently with respect to aero-thermodynamic effects and the design of the thermal protection system. This problem is treated by two different means. First, a nominal reentry trajectory is generated based on a phase concept, and then the same problem is solved again using a numerical optimization code to determine the control functions. The nominal reentry trajectory design presented first subdivides the total reentry trajectory into several segments with partially constant control/state parameters such as maximum heat flux and deceleration. The optimal conditions for a given segment can then be selected. In contrast, the parameterized optimization code selects the control functions freely. Both approaches consider a mass point simulation which uses realistic model assumptions for atmosphere, earth and gravity. Likewise, both approaches satisfy all flight regime limitations and boundary conditions such as thermal constraints throughout the flight path and specified speed and altitude at the final time. For the optimization of high-cross-range reentry trajectories, the cross range per total absorbed heat represents an appropriate cost function. The optimization code delivers quite a different flight strategy than that usually generated by the nominal reentry design program, first flying longer along the temperature boundary at the highest possible angle of attack (AOA) (utilizing higher average turn rates), and afterwards performing flare-dive segments to reduce heat flux and to increase range. Finally, the aspect of guiding the nominal or optimized reentry trajectory during a cross range flight is considered. The vertical guidance is performed with both angle-of-attack and roll-angle control. The
Integrated multidisciplinary design optimization of rotorcraft
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Mantay, Wayne R.
1989-01-01
The NASA/Army research plan for developing the logic elements for helicopter rotor design optimization by integrating appropriate disciplines and accounting for important interactions among the disciplines is discussed. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and an initial effort at defining the interdisciplinary coupling is summarized. Results are presented on the achievements made in the rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.
Chopped random-basis quantum optimization
Caneva, Tommaso; Calarco, Tommaso; Montangero, Simone
2011-08-15
In this work, we describe in detail the chopped random basis (CRAB) optimal control technique recently introduced to optimize time-dependent density matrix renormalization group simulations [P. Doria, T. Calarco, and S. Montangero, Phys. Rev. Lett. 106, 190501 (2011)]. Here, we study the efficiency of this control technique in optimizing different quantum processes and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using fewer resources. We propose the CRAB optimization as a general and versatile optimal control technique.
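The core CRAB idea, expanding the control field in a small randomized ("chopped") Fourier basis and handing the few coefficients to a gradient-free optimizer, can be illustrated on a classical steering problem. The double-integrator dynamics, the 5% frequency randomization, and the naive random search below are illustrative stand-ins for the quantum dynamics and the optimizer of the actual method:

```python
import math
import random

T, N_STEPS, N_FREQ = 1.0, 200, 3
rng = random.Random(0)
# CRAB-style basis: principal harmonics with small random frequency offsets
freqs = [2 * math.pi * (k + 1) * (1 + 0.1 * (rng.random() - 0.5)) / T
         for k in range(N_FREQ)]

def control(coeffs, t):
    """Control field u(t) in the truncated randomized Fourier basis."""
    a, b = coeffs[:N_FREQ], coeffs[N_FREQ:]
    return sum(a[k] * math.sin(freqs[k] * t) + b[k] * math.cos(freqs[k] * t)
               for k in range(N_FREQ))

def infidelity(coeffs):
    """Cost: miss distance of a double integrator driven by u(t),
    required to move from (x=0, v=0) to (x=1, v=0) at time T."""
    x = v = 0.0
    dt = T / N_STEPS
    for i in range(N_STEPS):
        u = control(coeffs, (i + 0.5) * dt)   # midpoint sampling of u(t)
        v += u * dt
        x += v * dt
    return (x - 1.0) ** 2 + v ** 2

def crab_optimize(iters=2000, step=0.5):
    """Gradient-free search over the handful of basis coefficients --
    the kind of low-dimensional optimization CRAB makes possible."""
    coeffs = [0.0] * (2 * N_FREQ)
    best = infidelity(coeffs)
    for _ in range(iters):
        trial = coeffs[:]
        k = rng.randrange(len(trial))
        trial[k] += step * (rng.random() - 0.5)
        f = infidelity(trial)
        if f < best:
            coeffs, best = trial, f
    return coeffs, best
```

The point of the construction is dimensionality: instead of optimizing u(t) at every time step, only 2 * N_FREQ coefficients are searched, which is what lets simple direct-search optimizers succeed.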
Design of optimal systolic arrays
Li, G.J.; Wah, B.W.
1985-01-01
Conventional design of systolic arrays is based on the mapping of an algorithm onto an interconnection of processing elements in a VLSI chip. This mapping is done in an ad hoc manner, and the resulting configuration usually represents a feasible but suboptimal design. In this paper, systolic arrays are characterized by three classes of parameters: the velocities of data flows, the spatial distributions of data, and the periods of computation. By relating these parameters in constraint equations that govern the correctness of the design, the design is formulated into an optimization problem. The size of the search space is a polynomial of the problem size, and a methodology to systematically search and reduce this space and to obtain the optimal design is proposed. Some examples of applying the method, including matrix multiplication, finite impulse response filtering, deconvolution, and triangular-matrix inversion, are given. 30 references.
Process optimization in optical fabrication
NASA Astrophysics Data System (ADS)
Faehnle, Oliver
2016-03-01
Predictable and stable fabrication processes are essential for reliable cost and quality management in optical fabrication technology. This paper reports on strategies to generate and control optimum sets of process parameters for, e.g., subaperture polishing of small optics (featuring clear apertures smaller than 2 mm). Emphasis is placed on distinguishing between machine and process optimization, demonstrating that it is possible to set up the ductile-mode grinding process by means other than controlling the critical depth of cut. Finally, a recently developed in situ testing technique is applied to monitor surface quality on-machine while abrasively working the surface under test, enabling online optimization of polishing processes that ultimately minimizes polishing time and fabrication cost.
Topology optimization of piezoelectric nanostructures
NASA Astrophysics Data System (ADS)
Nanthakumar, S. S.; Lahmer, Tom; Zhuang, Xiaoying; Park, Harold S.; Rabczuk, Timon
2016-09-01
We present an extended finite element formulation for piezoelectric nanobeams and nanoplates that is coupled with topology optimization to study the energy harvesting potential of piezoelectric nanostructures. The finite element model for the nanoplates is based on the Kirchhoff plate model, with a linear through-the-thickness distribution of electric potential. Based on the topology optimization, the largest enhancements in energy harvesting are found for closed circuit boundary conditions, though significant gains are also found for open circuit boundary conditions. Most interestingly, our results demonstrate the competition between surface elasticity, which reduces the energy conversion efficiency, and surface piezoelectricity, which enhances the energy conversion efficiency, in governing the energy harvesting potential of piezoelectric nanostructures.
[Optimization of radiological scoliosis assessment].
Enríquez, Goya; Piqueras, Joaquim; Catalá, Ana; Oliva, Glòria; Ruiz, Agustí; Ribas, Montserrat; Duran, Carmina; Rodrigo, Carlos; Rodríguez, Eugenia; Garriga, Victoria; Maristany, Teresa; García-Fontecha, César; Baños, Joan; Muchart, Jordi; Alava, Fernando
2014-07-01
Most cases of scoliosis are idiopathic (80%) and occur more frequently in adolescent girls. Plain radiography is the imaging method of choice, both for the initial study and for follow-up, but it has the disadvantage of using ionizing radiation, and the breasts are exposed to x-rays over these repeated examinations. The authors present a range of recommendations for optimizing radiographic technique, in both conventional and digital x-ray settings, to prevent unnecessary patient radiation exposure and to reduce the risk of breast cancer in patients with scoliosis. With analogue systems, leaded breast protectors should always be used, and with any radiographic equipment, analog or digital, the examination should be performed in the postero-anterior projection using optimized low-dose techniques. The ALARA (as low as reasonably achievable) principle should always be followed to achieve diagnostic-quality images with the lowest feasible dose. PMID:25128362
Optimal protocols for nonlocality distillation
Hoeyer, Peter; Rashid, Jibran
2010-10-15
Forster et al. recently showed that weak nonlocality can be amplified by giving the first protocol that distills a class of nonlocal boxes (NLBs) [Phys. Rev. Lett. 102, 120401 (2009)]. We first show that their protocol is optimal among all nonadaptive protocols. We next consider adaptive protocols. We show that the depth-2 protocol of Allcock et al. [Phys. Rev. A 80, 062107 (2009)] performs better than previously known adaptive depth-2 protocols for all symmetric NLBs. We present a depth-3 protocol that extends the known region of distillable NLBs. We give examples of NLBs for which each of the Forster et al., Allcock et al., and our protocols performs best. The understanding we develop is that there is no single optimal protocol for NLB distillation. The choice of which protocol to use depends on the noise parameters for the NLB.
Optimal design of airlift fermenters
Moresi, M.
1981-11-01
In this article, a model of a draft-tube airlift fermenter (ALF), based on perfect back-mixing of liquid and plug flow for gas bubbles, has been developed to optimize the design and operation of fermentation units at different working capacities. With reference to a whey fermentation by yeasts, the economic optimization has led to a slim ALF with an aspect ratio of about 15. As far as power expended per unit of oxygen transfer is concerned, the responses of the model are highly influenced by kLa. However, a safer use of the model has been suggested in order to assess the feasibility of the fermentation process under study.
Mullin, Gerard E
2010-12-01
Since the beginning of time, we have been searching for diets that satisfy our palates while simultaneously optimizing health and well-being. Every year, there are hundreds of new diet books on the market that make a wide range of promises but rarely deliver. Unfortunately, consumers are gullible and believe much of the marketing hype because they are desperately seeking ways to maximize their health. As a result, they continue to purchase these diet books, sending many of them all the way to the bestseller list. Because many of these meal plans are not sustainable and are questionable in their approaches, the consumer is ultimately left to continue searching, only able to choose from the newest "fad" promoted by publicists rather than being grounded in science. Thus, the search for the optimal diet continues to be the "holy grail" for many of us today, presenting a challenge for nutritionists and practitioners to provide sound advice to consumers. PMID:21139121
Optimization of Waste Disposal - 13338
Shephard, E.; Walter, N.; Downey, H.; Collopy, P.; Conant, J.
2013-07-01
From 2009 through 2011, remediation of areas of a former fuel cycle facility used for government contract work was conducted. Remediation efforts were focused on building demolition, underground pipeline removal, contaminated soil removal, and removal of contaminated sediments from portions of an on-site stream. Prior to the remediation field effort, planning and preparation for remediation (including strategic planning for waste characterization and disposal) were conducted during the design phase. During the remediation field effort, waste characterization and disposal practices were continuously reviewed and refined to optimize waste disposal. This paper discusses the strategic planning for waste characterization and disposal employed in the design phase and how it was continuously reviewed and refined to optimize efficiency. (authors)
Optimal piecewise locally linear modeling
NASA Astrophysics Data System (ADS)
Harris, Chris J.; Hong, Xia; Feng, M.
1999-03-01
Associative memory networks such as radial basis function, neurofuzzy, and fuzzy logic networks used for modelling nonlinear processes suffer from the curse of dimensionality (COD): as the input dimension increases, the parameterization, computation cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced for the construction of Delaunay input-space-partitioned optimal piecewise locally linear models, which overcomes the COD and generates locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture-of-experts network with a new fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is utilized to search for a globally optimal solution of the Delaunay input space partition. A benchmark nonlinear time series is used to demonstrate the new approach.
Optimization principles of dendritic structure
Cuntz, Hermann; Borst, Alexander; Segev, Idan
2007-01-01
Background Dendrites are the most conspicuous feature of neurons. However, the principles determining their structure are poorly understood. By employing cable theory and, for the first time, graph theory, we describe dendritic anatomy solely on the basis of optimizing synaptic efficacy with minimal resources. Results We show that dendritic branching topology can be well described by minimizing the path length from the neuron's dendritic root to each of its synaptic inputs while constraining the total length of wiring. Tapering of diameter toward the dendrite tip – a feature of many neurons – optimizes charge transfer from all dendritic synapses to the dendritic root while economizing on dendrite volume. As an example, we show how dendrites of fly neurons can be closely reconstructed based on these two principles alone. PMID:17559645
Integrated Energy System Dispatch Optimization
Firestone, Ryan; Stadler, Michael; Marnay, Chris
2006-06-16
On-site cogeneration of heat and electricity, thermal and electrical storage, and curtailing/rescheduling demand options are often cost-effective for commercial and industrial sites. This collection of equipment and responsive consumption can be viewed as an integrated energy system (IES). The IES can best meet the site's cost or environmental objectives when controlled in a coordinated manner. However, continuously determining this optimal IES dispatch is beyond the expectations for operators of smaller systems. A new algorithm is proposed in this paper to approximately solve the real-time dispatch optimization problem for a generic IES containing an on-site cogeneration system subject to random outages, limited curtailment opportunities, an intermittent renewable electricity source, and thermal storage. An example demonstrates how this algorithm can be used in simulation to estimate the value of IES components.
Asymptotically optimal topological quantum compiling.
Kliuchnikov, Vadym; Bocharov, Alex; Svore, Krysta M
2014-04-11
We address the problem of compiling quantum operations into braid representations for non-Abelian quasiparticles described by the Fibonacci anyon model. We classify the single-qubit unitaries that can be represented exactly by Fibonacci anyon braids and use the classification to develop a probabilistically polynomial algorithm that approximates any given single-qubit unitary to a desired precision by an asymptotically depth-optimal braid pattern. We extend our algorithm in two directions: to produce braids that allow only single-strand movement, called weaves, and to produce depth-optimal approximations of two-qubit gates. Our compiled braid patterns have depths that are 20 to 1000 times shorter than those output by prior state-of-the-art methods, for precisions ranging between 10^-10 and 10^-30. PMID:24765934
Optimal designs for copula models
Perrone, E.; Müller, W.G.
2016-01-01
Copula modelling has in the past decade become a standard tool in many areas of applied statistics. However, a largely neglected aspect concerns the design of related experiments: in particular, whether the estimation of copula parameters can be enhanced by optimizing experimental conditions, and how robust the parameter estimates for the model are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Some examples illustrate that in practical situations considerable gains in design efficiency can be achieved. A natural comparison between different copula models with respect to design efficiency is provided as well. PMID:27453616
Optimality Principles of Undulatory Swimming
NASA Astrophysics Data System (ADS)
Nangia, Nishant; Bale, Rahul; Patankar, Neelesh
2015-11-01
A number of dimensionless quantities derived from a fish's kinematic and morphological parameters have been used to describe the hydrodynamics of swimming. In particular, body/caudal fin swimmers have been found to swim within a relatively narrow range of these quantities in nature, e.g., Strouhal number or the optimal specific wavelength. It has been hypothesized or shown that these constraints arise due to maximization of swimming speed, efficiency, or cost of transport in certain domains of this large dimensionless parameter space. Using fully resolved simulations of undulatory patterns, we investigate the existence of various optimality principles in fish swimming. Using scaling arguments, we relate various dimensionless parameters to each other. Based on these findings, we make design recommendations on how kinematic parameters for a swimming robot or vehicle should be chosen. This work is supported by NSF Grants CBET-0828749, CMMI-0941674, CBET-1066575 and the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1324585.
Structural optimization: Challenges and opportunities
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1984-01-01
A review of developments in structural optimization techniques and their interface with growing computer capabilities is presented. Structural design steps comprise functional definition of an object, an evaluation phase wherein external influences are quantified, selection of the design concept, material, object geometry, and the internal layout, and quantification of the physical characteristics. Optimization of a fully stressed design is facilitated by use of nonlinear mathematical programming which permits automated definition of the physics of a problem. Design iterations terminate when convergence is acquired between mathematical and physical criteria. A constrained minimum algorithm has been formulated using an Augmented Lagrangian approach and a generalized reduced gradient to obtain fast convergence. Various approximation techniques are mentioned. The synergistic application of all the methods surveyed requires multidisciplinary teamwork during a design effort.
Optimal concentrations in nectar feeding
Kim, Wonjung; Gilet, Tristan; Bush, John W. M.
2011-01-01
Nectar drinkers must feed quickly and efficiently due to the threat of predation. While the sweetest nectar offers the greatest energetic rewards, the sharp increase of viscosity with sugar concentration makes it the most difficult to transport. We here demonstrate that the sugar concentration that optimizes energy transport depends exclusively on the drinking technique employed. We identify three nectar drinking techniques: active suction, capillary suction, and viscous dipping. For each, we deduce the dependence of the volume intake rate on the nectar viscosity and thus infer an optimal sugar concentration consistent with laboratory measurements. Our results provide the first rationale for why suction feeders typically pollinate flowers with lower sugar concentration nectar than their counterparts that use viscous dipping. PMID:21949358
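The trade-off described above can be sketched numerically. The exponential viscosity law and the flow exponents below are hypothetical placeholders (the paper derives technique-specific scalings from fluid mechanics); the sketch only illustrates how a stronger viscosity penalty pushes the optimal sugar concentration lower.

```python
import numpy as np

def viscosity(c, k=6.0):
    # Hypothetical exponential growth of nectar viscosity with sugar
    # mass fraction c (a stand-in for the real sugar-viscosity curve).
    return np.exp(k * c)

def optimal_concentration(flow_exponent):
    # Energy intake rate ~ c * Q, with the volume intake rate Q assumed
    # to scale as viscosity**(-flow_exponent); exponents are placeholders.
    c = np.linspace(0.01, 0.99, 981)
    intake = c * viscosity(c) ** (-flow_exponent)
    return c[np.argmax(intake)]

# A stronger viscosity penalty (suction-like) favors more dilute nectar
# than a weaker one (dipping-like).
c_suction = optimal_concentration(flow_exponent=1.0)
c_dipping = optimal_concentration(flow_exponent=0.5)
```

Under these assumed forms, the suction-like optimum sits well below the dipping-like one, mirroring the paper's observation about the flowers each feeder type tends to pollinate.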
Robust Optimization of Biological Protocols
Flaherty, Patrick; Davis, Ronald W.
2015-01-01
When conducting high-throughput biological experiments, it is often necessary to develop a protocol that is both inexpensive and robust. Standard approaches are either not cost-effective or arrive at an optimized protocol that is sensitive to experimental variations. We show here a novel approach that directly minimizes the cost of the protocol while ensuring the protocol is robust to experimental variation. Our approach uses a risk-averse conditional value-at-risk criterion in a robust parameter design framework. We demonstrate this approach on a polymerase chain reaction protocol and show that our improved protocol is less expensive than the standard protocol and more robust than a protocol optimized without consideration of experimental variation. PMID:26417115
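As a rough illustration of the risk-averse criterion, the sketch below computes a conditional value-at-risk (CVaR) over simulated protocol costs; the two cost distributions are invented for illustration and are not from the paper. A cheap-on-average but heavy-tailed protocol can lose to a slightly more expensive but robust one under CVaR.

```python
import numpy as np

def cvar(costs, alpha=0.9):
    # Conditional value-at-risk: mean of the worst (1 - alpha) tail.
    q = np.quantile(costs, alpha)
    return costs[costs >= q].mean()

rng = np.random.default_rng(0)

# Invented cost models for two protocol settings: A is cheaper on
# average but heavy-tailed under experimental variation; B costs more
# on average but is far more robust.
cost_a = 1.0 + rng.exponential(0.3, size=100_000)
cost_b = 1.5 + rng.normal(0.0, 0.1, size=100_000)

mean_a, mean_b = cost_a.mean(), cost_b.mean()
risk_a, risk_b = cvar(cost_a), cvar(cost_b)
# Under the risk-averse criterion, protocol B is preferred despite its
# higher average cost.
```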
Optimal intervention strategies for tuberculosis
NASA Astrophysics Data System (ADS)
Bowong, Samuel; Aziz Alaoui, A. M.
2013-06-01
This paper deals with the problem of optimal control of a deterministic model of tuberculosis (abbreviated as TB for tubercle bacillus). We first present and analyze an uncontrolled tuberculosis model which incorporates the essential biological and epidemiological features of the disease. The model is shown to exhibit the phenomenon of backward bifurcation, where a stable disease-free equilibrium co-exists with one or more stable endemic equilibria when the associated basic reproduction number is less than unity. Based on this continuous model, the tuberculosis control is formulated and solved as an optimal control problem, indicating how control terms on chemoprophylaxis and detection should be introduced in the population to reduce the number of individuals with active TB. Results provide a framework for designing cost-effective strategies for TB with two intervention methods.
Optimal transport and the placenta
Morgan, Simon; Xia, Qinglan; Salafia, Carolyn
2010-01-01
The goal of this paper is to investigate the expected effects of (i) placental size, (ii) placental shape, and (iii) the position of insertion of the umbilical cord on the work done by the foetal heart in pumping blood across the placenta. We use optimal transport theory and modeling to quantify the expected effects of these factors. Total transport cost and the shape factor contribution to cost are given by the optimal transport model. Total placental transport cost is highly correlated with birth weight, placenta weight, FPR and the metabolic scaling factor beta. The shape factor is also highly correlated with birth weight, and after adjustment for placental weight, is highly correlated with the metabolic scaling factor beta.
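A minimal sketch of the cord-insertion effect, under strong simplifications: villous sites are sampled uniformly over a circular placenta, and the transport cost is taken as the mean straight-line distance from the cord insertion point (the paper uses a full branching optimal transport model; the geometry and insertion positions here are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample villous sites uniformly over a unit-radius circular placenta;
# each site is assumed to receive an equal share of blood flow.
n = 50_000
theta = rng.uniform(0.0, 2.0 * np.pi, n)
radius = np.sqrt(rng.uniform(0.0, 1.0, n))   # uniform over the disk area
sites = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])

def transport_cost(insertion):
    # Crude proxy for pumping work: mean straight-line distance from the
    # cord insertion point to the supplied sites.
    return np.linalg.norm(sites - insertion, axis=1).mean()

central = transport_cost(np.array([0.0, 0.0]))     # central insertion
eccentric = transport_cost(np.array([0.8, 0.0]))   # marginal insertion
```

Even this crude proxy shows the expected ordering: an eccentric insertion raises the average transport distance relative to a central one.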
Formulation Optimization of Arecoline Patches
Wu, Pao-Chu; Tsai, Pi-Ju; Lin, Shin-Chen; Huang, Yaw-Bin
2014-01-01
The response surface methodology (RSM) including polynomial equations has been used to design an optimal patch formulation with appropriate adhesion and flux. The patch formulations were composed of different polymers, including Eudragit RS 100 (ERS), Eudragit RL 100 (ERL) and polyvinylpyrrolidone K30 (PVP), plasticizers (PEG 400), and drug. In addition, using terpenes as enhancers could increase the flux of the drug. Menthol showed the highest enhancement effect on the flux of arecoline. PMID:24707220
Optimization of reinforced concrete slabs
NASA Technical Reports Server (NTRS)
Ferritto, J. M.
1979-01-01
Reinforced concrete cells composed of concrete slabs and used to limit the effects of accidental explosions during hazardous explosives operations are analyzed. An automated design procedure which considers the dynamic nonlinear behavior of the reinforced concrete of arbitrary geometrical and structural configuration subjected to dynamic pressure loading is discussed. The optimum design of the slab is examined using an interior penalty function. The optimization procedure is presented and the results are discussed and compared with finite element analysis.
TEA laser gas mixture optimization
NASA Astrophysics Data System (ADS)
Lipchak, W. Michael; Luck, Clarence F.
1982-11-01
The topographical plot of an optimized parameter, such as pulse energy or peak power, on the gas mixture plane is presented as a useful aid in realizing optimum mixtures of helium, carbon dioxide, and nitrogen, for operation of CO2 TEA lasers. A method for generating such a plot is discussed and an example is shown. The potential benefits of this graphical technique are also discussed.
Optimization of Cylindrical Hall Thrusters
Yevgeny Raitses, Artem Smirnov, Erik Granstedt, and Nathaniel J. Fisch
2007-07-24
The cylindrical Hall thruster features high ionization efficiency, quiet operation, and ion acceleration in a large volume-to-surface ratio channel with performance comparable with the state-of-the-art annular Hall thrusters. These characteristics were demonstrated in low and medium power ranges. Optimization of miniaturized cylindrical thrusters led to performance improvements in the 50-200W input power range, including plume narrowing, increased thruster efficiency, reliable discharge initiation, and stable operation.
Optimal inference with chaotic dynamics
NASA Technical Reports Server (NTRS)
Harger, R. O.
1983-01-01
Nonlinear mappings that exhibit chaotic, seemingly random, evolution have appeal as models of dynamic systems. Their deterministic evolution, vis-a-vis Markov evolutions, results in much simpler optimal detection and estimation algorithms. The variation of a chaotic parameter (mu) results in diverse evolutions, suggesting a simple but rich source of model variations. For the specific mapping examined, this latter possibility is problematic due to the extreme sensitivity of the evolution to mu in the chaotic regime.
Optimizing Sustainable Geothermal Heat Extraction
NASA Astrophysics Data System (ADS)
Patel, Iti; Bielicki, Jeffrey; Buscheck, Thomas
2016-04-01
Geothermal heat, though renewable, can be depleted over time if the rate of heat extraction exceeds the natural rate of renewal. As such, the sustainability of a geothermal resource is typically viewed as preserving the energy of the reservoir by weighing heat extraction against renewability. But heat that is extracted from a geothermal reservoir is used to provide a service to society and an economic gain to the provider of that service. For heat extraction used for market commodities, sustainability entails balancing the rate at which the reservoir temperature renews with the rate at which heat is extracted and converted into economic profit. We present a model for managing geothermal resources that combines simulations of geothermal reservoir performance with natural resource economics in order to develop optimal heat mining strategies. Similar optimal control approaches have been developed for managing other renewable resources, like fisheries and forests. We used the Non-isothermal Unsaturated-saturated Flow and Transport (NUFT) model to simulate the performance of a sedimentary geothermal reservoir under a variety of geologic and operational situations. The results of NUFT are integrated into the optimization model to determine the extraction path over time that maximizes the net present profit given the performance of the geothermal resource. Results suggest that the discount rate that is used to calculate the net present value of economic gain is a major determinant of the optimal extraction path, particularly for shallower and cooler reservoirs, where the regeneration of energy due to the natural geothermal heat flux is a smaller percentage of the amount of energy that is extracted from the reservoir.
Optimization of radioactive waste storage.
Dellamano, José Claudio; Sordi, Gian-Maria A A
2007-02-01
In several countries, low-level radioactive wastes are treated and stored awaiting construction and operation of a final repository. In some cases, interim storage may be extended for decades requiring special attention regarding security issues. The International Atomic Energy Agency (IAEA) recommends segregation of wastes that may be exempted from interim storage or ultimate disposal. The paper presents a method to optimize the decision making process regarding exemption vs. interim storage or ultimate disposal of these wastes. PMID:17228185
Blanket optimization studies for Cascade
Meier, W.R.; Morse, E.C.
1985-02-28
A nonlinear, multivariable, blanket optimization technique is applied to the Cascade inertial confinement fusion reactor concept. The thickness of a two-zone blanket, which consists of a BeO multiplier region followed by a LiAlO2 breeding region, is minimized subject to constraints on the tritium breeding ratio, neutron leakage, and heat generation rate in Al/SiC tendons that support the chamber wall.
Switching strategies to optimize search
NASA Astrophysics Data System (ADS)
Shlesinger, Michael F.
2016-03-01
Search strategies are explored when the search time is fixed, success is probabilistic and the estimate for success can diminish with time if there is not a successful result. Under the time constraint the problem is to find the optimal time to switch a search strategy or search location. Several variables are taken into account, including cost, gain, rate of success if a target is present and the probability that a target is present.
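A toy version of the switching problem can be written down directly. Assuming an exponential detection model and hypothetical priors and rates (none of these values are from the paper), the optimal time to switch sites under a fixed total search time can be found by a grid search:

```python
import numpy as np

def success_probability(t_switch, T=10.0, p1=0.6, p2=0.4, lam1=0.5, lam2=0.3):
    # Target at site 1 with prior p1, at site 2 with prior p2; searching a
    # site for time t detects a present target with prob 1 - exp(-lam * t).
    found1 = p1 * (1.0 - np.exp(-lam1 * t_switch))
    found2 = p2 * (1.0 - np.exp(-lam2 * (T - t_switch)))
    return found1 + found2

t_grid = np.linspace(0.0, 10.0, 1001)
p_grid = success_probability(t_grid)
t_opt = t_grid[np.argmax(p_grid)]   # best time to switch from site 1 to 2
```

With these assumed rates, splitting the budget beats committing the entire search time to either site, and the interior optimum balances the marginal detection rates of the two sites.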
Multiscale optimization in neural nets.
Mjolsness, E; Garrett, C D; Miranker, W L
1991-01-01
One way to speed up convergence in a large optimization problem is to introduce a smaller, approximate version of the problem at a coarser scale and to alternate between relaxation steps for the fine-scale and coarse-scale problems. Such an optimization method for neural networks governed by quite general objective functions is presented. At the coarse scale, there is a smaller approximating neural net which, like the original net, is nonlinear and has a nonquadratic objective function. The transitions and information flow from fine to coarse scale and back do not disrupt the optimization, and the user need only specify a partition of the original fine-scale variables. Thus, the method can be applied easily to many problems and networks. There is generally about a fivefold improvement in estimated cost under the multiscale method. In the networks to which it was applied, a nontrivial speedup by a constant factor of between two and five was observed, independent of problem size. Further improvements in computational cost are very likely to be available, especially for problem-specific multiscale neural net methods.
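The fine/coarse alternation can be illustrated on a toy quadratic objective (the paper treats general nonquadratic neural net objectives; the partition, step sizes, and iteration counts here are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy objective: f(x) = 0.5 * x^T A x - b^T x, minimized at A^{-1} b.
n = 16
A = np.diag(np.linspace(1.0, 20.0, n))   # ill-conditioned diagonal system
b = rng.normal(size=n)

def grad(x):
    return A @ x - b

# Partition the 16 fine variables into 4 coarse blocks of 4 each; the
# prolongation P maps a coarse vector to a block-constant fine vector.
P = np.kron(np.eye(4), np.ones((4, 1)))
Ac = P.T @ A @ P                         # coarse-scale (restricted) system

x = np.zeros(n)
for _ in range(50):
    # Fine-scale relaxation: a few plain gradient steps.
    for _ in range(3):
        x -= 0.04 * grad(x)
    # Coarse-scale relaxation: solve the 4-variable restricted problem
    # min_y f(x + P y) exactly, then correct the fine iterate.
    rc = P.T @ grad(x)
    x -= P @ np.linalg.solve(Ac, rc)

f_final = 0.5 * x @ A @ x - b @ x
f_opt = -0.5 * b @ np.linalg.solve(A, b)
```

The coarse correction removes the block-constant component of the error in one shot, which is exactly the component that plain gradient relaxation reduces most slowly on this system.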
Optimization of Supersonic Transport Trajectories
NASA Technical Reports Server (NTRS)
Ardema, Mark D.; Windhorst, Robert; Phillips, James
1998-01-01
This paper develops a near-optimal guidance law for generating minimum fuel, time, or cost fixed-range trajectories for supersonic transport aircraft. The approach uses a choice of new state variables along with singular perturbation techniques to time-scale decouple the dynamic equations into multiple equations of single order (second order for the fast dynamics). Application of the maximum principle to each of the decoupled equations, as opposed to application to the original coupled equations, avoids the two point boundary value problem and transforms the problem from one of functional optimization to one of multiple function optimizations. It is shown that such an approach produces well known aircraft performance results such as minimizing the Breguet factor for minimum fuel consumption and the energy climb path. Furthermore, the new state variables produce a consistent calculation of flight path angle along the trajectory, eliminating one of the deficiencies in the traditional energy state approximation. In addition, jumps in the energy climb path are smoothed out by integration of the original dynamic equations at constant load factor. Numerical results performed for a supersonic transport design show that a pushover dive followed by a pullout at nominal load factors are sufficient maneuvers to smooth the jump.
Feasible optimality implies Hack's Law
NASA Astrophysics Data System (ADS)
Rigon, Riccardo; Rodriguez-Iturbe, Ignacio; Rinaldo, Andrea
1998-11-01
We analyze the elongation (the scaling properties of drainage area with mainstream length) in optimal channel networks (OCNs) obtained through different algorithms searching for the minimum of a functional computing the total energy dissipation of the drainage system. The algorithms have different capabilities to overcome the imprinting of initial and boundary conditions, and thus they have different chances of attaining the global optimum. We find that suboptimal shapes, i.e., dynamically accessible states characterized by locally stationary total potential energy, show the robust type of elongation that is consistently observed in nature. This suggestive and directly measurable property is not found in the so-called ground state, i.e., the global minimum, whose features, including elongation, are known exactly. The global minimum is shown to be too regular and symmetric to be dynamically accessible in nature, owing to features and constraints of erosional processes. Thus Hack's law is seen as a signature of feasible optimality thus yielding further support to the suggestion that optimality of the system as a whole explains the dynamic origin of fractal forms in nature.
Optimal Stopping with Information Constraint
Lempa, Jukka
2012-10-15
We study the optimal stopping problem proposed by Dupuis and Wang (Adv. Appl. Probab. 34:141-157, 2002). In this maximization problem of the expected present value of the exercise payoff, the underlying dynamics follow a linear diffusion. The decision maker is not allowed to stop at any time she chooses but rather on the jump times of an independent Poisson process. Dupuis and Wang (Adv. Appl. Probab. 34:141-157, 2002), solve this problem in the case where the underlying is a geometric Brownian motion and the payoff function is of American call option type. In the current study, we propose a mild set of conditions (covering the setup of Dupuis and Wang in Adv. Appl. Probab. 34:141-157, 2002) on both the underlying and the payoff and build and use a Markovian apparatus based on the Bellman principle of optimality to solve the problem under these conditions. We also discuss the interpretation of this model as optimal timing of an irreversible investment decision under an exogenous information constraint.
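A crude Monte Carlo sketch of the constrained stopping problem, with hypothetical parameters (the paper works with a general linear diffusion and solves the problem analytically): the underlying is a geometric Brownian motion, the payoff is of American call type, and stopping is allowed only at the arrival times of an independent Poisson process. With the drift set equal to the discount rate, a more patient threshold policy should outperform stopping at the first in-the-money arrival.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical parameters: drift mu equals the discount rate r, so the
# discounted price is a martingale and patience is (weakly) rewarded.
n_paths, max_jumps = 50_000, 80
S0, K, r, sigma, lam = 1.0, 1.1, 0.05, 0.3, 1.0
mu = r

gaps = rng.exponential(1.0 / lam, size=(n_paths, max_jumps))
times = np.cumsum(gaps, axis=1)                  # Poisson arrival times
z = rng.normal(size=(n_paths, max_jumps))
increments = (mu - 0.5 * sigma**2) * gaps + sigma * np.sqrt(gaps) * z
S = S0 * np.exp(np.cumsum(increments, axis=1))   # GBM sampled at arrivals

def policy_value(threshold):
    # Stop at the first Poisson arrival with S >= threshold; paths that
    # never reach the threshold within max_jumps arrivals pay nothing.
    rows = np.arange(n_paths)
    hit = S >= threshold
    first = np.argmax(hit, axis=1)
    ok = hit[rows, first]
    payoff = np.exp(-r * times[rows, first]) * (S[rows, first] - K)
    return np.where(ok, payoff, 0.0).mean()

v_greedy = policy_value(K)      # stop at the first in-the-money arrival
v_patient = policy_value(1.6)   # wait for a deeper in-the-money level
```

This only compares two ad hoc threshold policies; the paper's Bellman-principle machinery characterizes the truly optimal threshold.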
Optimizing imperfect cloaks to perfection.
Cai, Liang-Wu
2012-10-01
Transformation optics has been an essential tool for designing cloaking devices for electromagnetic and acoustic waves. All these designs have one requirement in common: material singularity. At the interface between the cloak and the cloaked region, some material properties have to approach infinity, while others approach zero. This paper attempts to answer a central question in physically realizing such cloaks: is material singularity a requirement for perfect cloaking? This paper demonstrates that, through optimization, perfect cloaking can be achieved using a layered cloak construction without material singularity. Two examples are used for this demonstration. In one example, the initial design is based on the Cummer-Schurig prescription for acoustic cloaking that requires mass-anisotropic material. The other example uses two isotropic layers to achieve the equivalent mass anisotropy of each anisotropic layer. During the optimization processes, only material properties of the cloaks' constituent layers are adjusted while the geometries remain unchanged. In both examples, the normalized total scattering cross section can be reduced to 0.002 (0.2%) or lower in numerical computations. The capabilities and other characteristics of the optimization in other tasks such as cloaking penetrable objects and isolating strong resonance in such objects are also explored. PMID:23039559
Radiation Shielding Optimization on Mars
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Mertens, Chris J.; Blattnig, Steve R.
2013-01-01
Future space missions to Mars will require radiation shielding to be optimized for deep space transit and an extended stay on the surface. In deep space, increased shielding levels and material optimization will reduce the exposure from most solar particle events (SPE) but are less effective at shielding against galactic cosmic rays (GCR). On the surface, the shielding provided by the Martian atmosphere greatly reduces the exposure from most SPE, and long-term GCR exposure is a primary concern. Previous work has shown that in deep space, additional shielding of common materials such as aluminum or polyethylene does not significantly reduce the GCR exposure. In this work, it is shown that on the Martian surface, almost any amount of aluminum shielding increases exposure levels for humans. The increased exposure levels are attributed to neutron production in the shield and Martian regolith as well as the electromagnetic cascade induced in the Martian atmosphere. This result is significant for optimization of vehicle and shield designs intended for the surface of Mars.
Architecture of optimal transport networks
NASA Astrophysics Data System (ADS)
Durand, Marc
2006-01-01
We analyze the structure of networks minimizing the global resistance to flow (or dissipative energy) with respect to two different constraints: fixed total channel volume and fixed total channel surface area. First, we show that channels must be straight and have uniform cross-sectional areas in such optimal networks. We then establish a relation between the cross-sectional areas of adjoining channels at each junction; this relation generalizes Murray’s law, originally established in the context of local optimization. We also establish a relation between the angles and cross-sectional areas of adjoining channels at each junction, which can be represented as a vectorial force balance equation, where the force weight depends on the channel cross-sectional area. A scaling law between the minimal resistance value and the total volume or surface area value is also derived from the analysis. Furthermore, we show that no more than three or four channels meet at each junction of optimal bidimensional networks, depending on the flow profile (e.g., Poiseuille-like or pluglike) and the considered constraint (fixed volume or surface area). In particular, we show that sources are directly connected to wells, without intermediate junctions, for minimal resistance networks preserving the total channel volume in case of plug flow regime. Finally, all these results are compared with the structure of natural networks.
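The classical (local) form of Murray's law can be checked numerically. The sketch below is a minimal illustration, assuming Poiseuille-like resistance and a fixed total volume (the function names and the specific flows are illustrative, not from the paper): at the optimum the radii scale as Q^(1/3), so a parent carrying the sum of two daughter flows satisfies r0^3 = r1^3 + r2^3.

```python
import math

def dissipation(radii, flows, lengths):
    # Poiseuille-like resistance: dissipation ~ sum_i L_i * Q_i^2 / r_i^4
    return sum(L * Q * Q / r ** 4 for r, Q, L in zip(radii, flows, lengths))

def volume(radii, lengths):
    return sum(math.pi * r * r * L for r, L in zip(radii, lengths))

def murray_radii(flows, lengths, V):
    # At the optimum, r_i is proportional to Q_i^(1/3); the prefactor is then
    # fixed by the total-volume constraint.
    base = [Q ** (1.0 / 3.0) for Q in flows]
    c = math.sqrt(V / sum(math.pi * b * b * L for b, L in zip(base, lengths)))
    return [c * b for b in base]

# Parent channel carrying Q = 2 splits into two daughters carrying Q = 1 each.
flows, lengths = [2.0, 1.0, 1.0], [1.0, 1.0, 1.0]
r_opt = murray_radii(flows, lengths, V=1.0)
```

Perturbing the optimal radii at fixed volume strictly increases the dissipation, which is the local-optimality statement the paper generalizes to whole networks.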
Optimization of Micromachined Photon Devices
Datskos, P.G.; Datskou, I.; Evans, B.M., III; Rajic, S.
1999-07-18
The Oak Ridge National Laboratory has been instrumental in developing ultraprecision technologies for the fabrication of optical devices. We are currently extending our ultraprecision capabilities to the design, fabrication, and testing of micro-optics and MEMS devices. Techniques have been developed in our lab for fabricating micro-devices using single point diamond turning and ion milling. The devices we fabricated can be used in micro-scale interferometry, micro-positioners, micro-mirrors, and chemical sensors. In this paper, we focus on the optimization of microstructure performance using finite element analysis and the experimental validation of those results. We also discuss the fabrication of such structures and the optical testing of the devices. The performance is simulated using finite element analysis to optimize geometric and material parameters. The parameters we studied include bimaterial coating thickness effects; device length, width, and thickness effects, as well as changes in the geometry itself. This optimization results in increased sensitivity of these structures to absorbed incoming energy, which is important for photon detection or micro-mirror actuation. We have investigated and tested multiple geometries. The devices were fabricated using focused ion beam milling, and their response was measured using a chopped photon source and laser triangulation techniques. Our results are presented and discussed.
Optimal segmentation and packaging process
Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.
1999-08-10
A process for improving packaging efficiency uses three dimensional, computer simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D and D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer simulated, facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded. 3 figs.
Combined control-structure optimization
NASA Technical Reports Server (NTRS)
Salama, M.; Milman, M.; Bruno, R.; Scheid, R.; Gibson, S.
1989-01-01
An approach for combined control-structure optimization keyed to enhancing early design trade-offs is outlined and illustrated by numerical examples. The approach employs a homotopic strategy and appears to be effective for generating families of designs that can be used in these early trade studies. Analytical results were obtained for classes of structure/control objectives with linear quadratic Gaussian (LQG) and linear quadratic regulator (LQR) costs. For these, researchers demonstrated that global optima can be computed for small values of the homotopy parameter. Conditions for local optima along the homotopy path were also given. Details of two numerical examples employing the LQR control cost were given showing variations of the optimal design variables along the homotopy path. The results of the second example suggest that introducing a second homotopy parameter relating the two parts of the control index in the LQG/LQR formulation might serve to enlarge the family of Pareto optima, but its effect on modifying the optimal structural shapes may be analogous to the original parameter lambda.
Modeling and optimization of cryopreservation.
Benson, James D.
2015-01-01
Modeling plays a critical role in understanding the biophysical processes behind cryopreservation. It facilitates understanding of the biophysical and some of the biochemical mechanisms of damage during all phases of cryopreservation including CPA equilibration, cooling, and warming. Modeling also provides a tool for optimization of cryopreservation protocols and has yielded a number of successes in this regard. While modern cryobiological modeling includes very detailed descriptions of the physical phenomena that occur during freezing, including ice growth kinetics and spatial gradients that define heat and mass transport models, here we reduce the complexity and approach only a small but classic subset of these problems. Namely, here we describe the process of building and using a mathematical model of a cell in suspension where spatial homogeneity is assumed for all quantities. We define the models that describe the critical cell quantities used to describe optimal and suboptimal protocols and then give an overview of classical methods of how to determine optimal protocols using these models. PMID:25428003
Response Surface Model Building and Multidisciplinary Optimization Using D-Optimal Designs
NASA Technical Reports Server (NTRS)
Unal, Resit; Lepsch, Roger A.; McMillin, Mark L.
1998-01-01
This paper discusses response surface methods for approximation model building and multidisciplinary design optimization. The response surface methods discussed are central composite designs, Bayesian methods and D-optimal designs. An over-determined D-optimal design is applied to a configuration design and optimization study of a wing-body launch vehicle. Results suggest that over-determined D-optimal designs may provide an efficient approach for approximation model building and for multidisciplinary design optimization.
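The D-optimality criterion itself is simple: choose design points to maximize det(X^T X) for the assumed model matrix X. The toy sketch below (an illustrative greedy construction for a one-factor straight-line model, not the paper's multidisciplinary setup) shows the criterion pushing all runs to the extremes of the design region:

```python
def d_criterion(xs):
    # det(X^T X) for the straight-line model y = b0 + b1*x, rows of X = (1, x)
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    return n * sxx - sx * sx

def greedy_d_optimal(candidates, n_runs):
    # Sequentially add the candidate point that most increases det(X^T X).
    design = []
    for _ in range(n_runs):
        design.append(max(candidates, key=lambda x: d_criterion(design + [x])))
    return design

candidates = [-1.0 + 0.1 * i for i in range(21)]   # grid on [-1, 1]
design = greedy_d_optimal(candidates, 4)
```

For a straight-line model the D-optimal design splits the runs between x = -1 and x = +1; real D-optimal construction handles many factors and model terms, but the determinant criterion is the same.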
Optimal scaling in ductile fracture
NASA Astrophysics Data System (ADS)
Fokoua Djodom, Landry
This work is concerned with the derivation of optimal scaling laws, in the sense of matching lower and upper bounds on the energy, for a solid undergoing ductile fracture. The specific problem considered concerns a material sample in the form of an infinite slab of finite thickness subjected to prescribed opening displacements on its two surfaces. The solid is assumed to obey deformation-theory of plasticity and, in order to further simplify the analysis, we assume isotropic rigid-plastic deformations with zero plastic spin. When hardening exponents are given values consistent with observation, the energy is found to exhibit sublinear growth. We regularize the energy through the addition of nonlocal energy terms of the strain-gradient plasticity type. This nonlocal regularization has the effect of introducing an intrinsic length scale into the energy. We also put forth a physical argument that identifies the intrinsic length and suggests a linear growth of the nonlocal energy. Under these assumptions, ductile fracture emerges as the net result of two competing effects: whereas the sublinear growth of the local energy promotes localization of deformation to failure planes, the nonlocal regularization stabilizes this process, thus resulting in an orderly progression towards failure and a well-defined specific fracture energy. The optimal scaling laws derived here show that ductile fracture results from localization of deformations to void sheets, and that it requires a well-defined energy per unit fracture area. In particular, fractal modes of fracture are ruled out under the assumptions of the analysis. The optimal scaling laws additionally show that ductile fracture is cohesive in nature, i.e., it obeys a well-defined relation between tractions and opening displacements. Finally, the scaling laws supply a link between micromechanical properties and macroscopic fracture properties. In particular, they reveal the relative roles that surface energy and microplasticity play.
Risk Analysis for Resource Planning Optimization
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming
2008-01-01
This paper describes a systems engineering approach to resource planning by integrating mathematical modeling and constrained optimization, empirical simulation, and theoretical analysis techniques to generate an optimal task plan in the presence of uncertainties.
HOPSPACK: Hybrid Optimization Parallel Search Package.
Gray, Genetha Anne.; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica L.
2008-12-01
In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.
Program Aids Analysis And Optimization Of Design
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Lamarsh, William J., II
1994-01-01
NETS/PROSSS (NETS Coupled With Programming System for Structural Synthesis) computer program developed to provide system for combining NETS (MSC-21588), neural-network application program, and CONMIN (Constrained Function Minimization, ARC-10836), optimization program. Enables user to reach nearly optimal design. Design then used as starting point in normal optimization process, possibly enabling user to converge to optimal solution in significantly fewer iterations. NETS/PROSSS written in C language and FORTRAN 77.
Simulated annealing algorithm for optimal capital growth
NASA Astrophysics Data System (ADS)
Luo, Yong; Zhu, Bo; Tang, Yong
2014-08-01
We investigate the problem of dynamic optimal capital growth of a portfolio. A general framework was developed in which one strives to maximize the expected logarithmic utility of the long-term growth rate. Exact optimization algorithms run into difficulties in this framework, which motivates applying a simulated annealing algorithm to optimize the capital growth of a given portfolio. Empirical results with real financial data indicate that the approach is promising for capital growth portfolios.
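A minimal sketch of the idea, assuming a long-only portfolio, historical return scenarios, and a simple geometric cooling schedule (all of these, including the step size and function names, are illustrative assumptions rather than the authors' algorithm): simulated annealing perturbs the weights, renormalizes them onto the simplex, and accepts worse moves with a temperature-dependent probability while tracking the best expected log growth seen.

```python
import math
import random

def log_growth(weights, scenarios):
    # Expected log growth: average over return scenarios of log(1 + w . r)
    total = 0.0
    for r in scenarios:
        total += math.log(1.0 + sum(w * ri for w, ri in zip(weights, r)))
    return total / len(scenarios)

def normalize(w):
    w = [max(x, 0.0) for x in w]                 # long-only: clip, then renormalize
    s = sum(w)
    return [x / s for x in w] if s > 0 else [1.0 / len(w)] * len(w)

def anneal(scenarios, n_assets, steps=5000, t0=0.1, seed=7):
    random.seed(seed)
    w = [1.0 / n_assets] * n_assets
    f = log_growth(w, scenarios)
    best_w, best_f = w, f
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9     # linear cooling to ~zero
        cand = normalize([x + random.gauss(0.0, 0.05) for x in w])
        fc = log_growth(cand, scenarios)
        if fc > f or random.random() < math.exp((fc - f) / temp):
            w, f = cand, fc
            if f > best_f:
                best_w, best_f = w, f
    return best_w, best_f
```

With two assets where the first wins 20% or loses 10% with equal probability and the second is riskless with zero return, the long-only optimum puts everything in the first asset, and the annealer should approach it.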
Global optimality of extremals: An example
NASA Technical Reports Server (NTRS)
Kreindler, E.; Newman, F.
1980-01-01
The question of the existence and location of Darboux points is crucial for minimally sufficient conditions for global optimality and for computation of optimal trajectories. A numerical investigation is presented of the Darboux points and their relationship with conjugate points for a problem of minimum fuel, constant velocity, and horizontal aircraft turns to capture a line. This simple second order optimal control problem shows that ignoring the possible existence of Darboux points may play havoc with the computation of optimal trajectories.
Optimizing Dynamical Network Structure for Pinning Control.
Orouskhani, Yasin; Jalili, Mahdi; Yu, Xinghuo
2016-04-12
Controlling dynamics of a network from any initial state to a final desired state has many applications in different disciplines from engineering to biology and social sciences. In this work, we optimize the network structure for pinning control. The problem is formulated as four optimization tasks: i) optimizing the locations of driver nodes, ii) optimizing the feedback gains, iii) optimizing simultaneously the locations of driver nodes and feedback gains, and iv) optimizing the connection weights. A newly developed population-based optimization technique (cat swarm optimization) is used as the optimization method. In order to verify the methods, we use both real-world networks, and model scale-free and small-world networks. Extensive simulation results show that the optimal placement of driver nodes significantly outperforms heuristic methods including placing drivers based on various centrality measures (degree, betweenness, closeness and clustering coefficient). The pinning controllability is further improved by optimizing the feedback gains. We also show that one can significantly improve the controllability by optimizing the connection weights.
Genetic algorithms - What fitness scaling is optimal?
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik; Quintana, Chris; Fuentes, Olac
1993-01-01
The problem of choosing the best scaling function is formulated as a mathematical optimization problem and solved under different optimality criteria. A list of functions that are optimal under different criteria is presented; it includes both functions empirically proven to be the best and new functions that may be worth trying.
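For context, one common empirical choice is linear fitness scaling in the style popularized by Goldberg: scale so the mean fitness is preserved and the best individual receives c times the mean, falling back so no individual goes negative. The sketch below is a generic illustration of that scheme, not one of the specific functions derived in this paper:

```python
def linear_scale(fitnesses, c=2.0):
    """Linear fitness scaling f' = a*f + b: preserve the mean fitness and map
    the best raw fitness to c times the mean, clipping at zero if needed."""
    avg = sum(fitnesses) / len(fitnesses)
    fmax, fmin = max(fitnesses), min(fitnesses)
    if fmax == avg:                      # all individuals equal: nothing to scale
        return list(fitnesses)
    a = (c - 1.0) * avg / (fmax - avg)
    b = avg * (1.0 - a)
    scaled = [a * f + b for f in fitnesses]
    if min(scaled) < 0:                  # fall back: pin the worst individual at 0
        a = avg / (avg - fmin)
        b = -a * fmin
        scaled = [a * f + b for f in fitnesses]
    return scaled
```

The point of the paper is precisely that choices like this c and the linear form can themselves be treated as objects of optimization.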
Educational Optimism among Parents: A Pilot Study
ERIC Educational Resources Information Center
Räty, Hannu; Kasanen, Kati
2016-01-01
This study explored parents' (N = 351) educational optimism in terms of their trust in the possibilities of school to develop children's intelligence. It was found that educational optimism could be depicted as a bipolar factor with optimism and pessimism on the opposing ends of the same dimension. Optimistic parents indicated more satisfaction…
Merits and limitations of optimality criteria method for structural optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Berke, Laszlo
1993-01-01
The merits and limitations of the optimality criteria (OC) method for the minimum weight design of structures subjected to multiple load conditions under stress, displacement, and frequency constraints were investigated by examining several numerical examples. The examples were solved utilizing the Optimality Criteria Design Code that was developed for this purpose at NASA Lewis Research Center. This OC code incorporates OC methods available in the literature with generalizations for stress constraints, fully utilized design concepts, and hybrid methods that combine both techniques. Salient features of the code include multiple choices for Lagrange multiplier and design variable update methods, design strategies for several constraint types, variable linking, displacement and integrated force method analyzers, and analytical and numerical sensitivities. The performance of the OC method, on the basis of the examples solved, was found to be satisfactory for problems with few active constraints or with small numbers of design variables. For problems with large numbers of behavior constraints and design variables, the OC method appears to follow a subset of active constraints that can result in a heavier design. The computational efficiency of OC methods appears to be similar to some mathematical programming techniques.
Optimal measurements for nonlocal correlations
NASA Astrophysics Data System (ADS)
Schwarz, Sacha; Stefanov, André; Wolf, Stefan; Montina, Alberto
2016-08-01
A problem in quantum information theory is to find the experimental setup that maximizes the nonlocality of correlations with respect to some suitable measure such as the violation of Bell inequalities. There are, however, some complications with Bell inequalities. First and foremost, it is infeasible to determine the whole set of Bell inequalities even for a few measurements, and thus infeasible to find the experimental setup maximizing their violation. Second, the Bell violation suffers from an ambiguity stemming from the choice of the normalization of the Bell coefficients. An alternative measure of nonlocality with a direct information-theoretic interpretation is the minimal amount of classical communication required for simulating nonlocal correlations. In the case of many instances simulated in parallel, the minimal communication cost per instance is called nonlocal capacity, and its computation can be reduced to a convex-optimization problem. This quantity can be computed for a higher number of measurements and turns out to be useful for finding the optimal experimental setup. Focusing on the bipartite case, we present a simple method for maximizing the nonlocal capacity over a given configuration space and, in particular, over a set of possible measurements, yielding the corresponding optimal setup. Furthermore, we show that there is a functional relationship between Bell violation and nonlocal capacity. The method is illustrated with numerical tests and compared with the maximization of the violation of CGLMP-type Bell inequalities on the basis of entangled two-qubit as well as two-qutrit states. Remarkably, the anomaly of nonlocality displayed by qutrits turns out to be even stronger if the nonlocal capacity is employed as a measure of nonlocality.
Optimizing High Level Waste Disposal
Dirk Gombert
2005-09-01
If society is ever to reap the potential benefits of nuclear energy, technologists must close the fuel cycle completely. A closed cycle equates to a continued supply of fuel and safe reactors, but also reliable and comprehensive closure of waste issues. High level waste (HLW) disposal in borosilicate glass (BSG) is based on 1970s era evaluations. This host matrix is very adaptable to sequestering a wide variety of radionuclides found in raffinates from spent fuel reprocessing. However, it is now known that the current system is far from optimal for disposal of the diverse HLW streams, and proven alternatives are available to reduce costs by billions of dollars. The basis for HLW disposal should be reassessed to consider extensive waste form and process technology research and development efforts, which have been conducted by the United States Department of Energy (USDOE), international agencies and the private sector. Matching the waste form to the waste chemistry and using currently available technology could increase the waste content in waste forms to 50% or more and double processing rates. Optimization of the HLW disposal system would accelerate HLW disposition and increase repository capacity. This does not necessarily require developing new waste forms; the emphasis should be on qualifying existing matrices to demonstrate protection equal to or better than the baseline glass performance. Nor does this proposed effort necessarily require developing new technology concepts. The emphasis is on demonstrating existing technology that is clearly better (reliability, productivity, cost) than current technology, and justifying its use in future facilities or retrofitted facilities. Higher waste processing and disposal efficiency can be realized by performing the engineering analyses and trade-studies necessary to select the most efficient methods for processing the full spectrum of wastes across the nuclear complex. This paper will describe technologies being
Phase unwrapping using discontinuity optimization
Flynn, T.J.
1998-03-01
In SAR interferometry, the periodicity of the phase must be removed using two-dimensional phase unwrapping. The goal of the procedure is to find a smooth surface in which large spatial phase differences, called discontinuities, are restricted to places where their presence is reasonable. The pioneering work of Goldstein et al. identified points of local unwrap inconsistency called residues, which must be connected by discontinuities. This paper presents an overview of recent work that treats phase unwrapping as a discrete optimization problem with the constraint that residues must be connected. Several algorithms use heuristic methods to reduce the total number of discontinuities. Constantini has introduced the weighted sum of discontinuity magnitudes as a criterion of unwrap error and shown how algorithms from optimization theory are used to minimize it. Pixels of low quality are given low weight to guide discontinuities away from smooth, high-quality regions. This method is generally robust, but if noise is severe it underestimates the steepness of slopes and the heights of peaks. This problem is mitigated by subtracting (modulo 2{pi}) a smooth estimate of the unwrapped phase from the data, then unwrapping the resulting residual phase. The unwrapped residual is added to the smooth estimate to produce the final unwrapped phase. The estimate can be computed by lowpass filtering of an existing unwrapped phase; this makes possible an iterative algorithm in which the result of each iteration provides the estimate for the next. An example illustrates the results of optimal discontinuity placement and the improvement from unwrapping of the residual phase.
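The residues that the discontinuities must connect are easy to compute directly: each 2x2 loop of the wrapped phase is integrated with every difference rewrapped, and a nonzero multiple of 2*pi marks a residue. The sketch below is a minimal illustration (array layout and names are assumptions, not any of the cited implementations); a phase vortex placed inside one loop produces a single +1 residue there.

```python
import math

def wrap(d):
    # rewrap a phase difference into [-pi, pi)
    return (d + math.pi) % (2.0 * math.pi) - math.pi

def residues(psi):
    """Residue charge of each 2x2 loop of a wrapped-phase array psi[row][col].
    A nonzero charge marks a point of local unwrap inconsistency."""
    rows, cols = len(psi), len(psi[0])
    out = [[0] * (cols - 1) for _ in range(rows - 1)]
    for i in range(rows - 1):
        for j in range(cols - 1):
            loop = (wrap(psi[i][j + 1] - psi[i][j])
                    + wrap(psi[i + 1][j + 1] - psi[i][j + 1])
                    + wrap(psi[i + 1][j] - psi[i + 1][j + 1])
                    + wrap(psi[i][j] - psi[i + 1][j]))
            out[i][j] = round(loop / (2.0 * math.pi))
    return out

# A phase vortex centred inside the (0, 0) loop yields a +1 residue there.
vortex = [[math.atan2(i - 0.5, j - 0.5) for j in range(4)] for i in range(4)]
```

Any valid unwrapping must route a discontinuity through this residue; the optimization methods surveyed here choose where, at minimum weighted discontinuity cost.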
Promoting Optimal Care in Childbirth
Lothian, Judith A.
2014-01-01
In 1996, the World Health Organization set out guidelines for normal birth. Since that time, birth in the United States has continued to be intervention intensive, the cesarean rate has skyrocketed, and maternal mortality, although low, is rising. At the same time, research continues to provide evidence for the benefits of supporting the normal physiologic process of labor and birth and the risks of interfering with this natural process. This article reviews the current state of U.S. maternity care and discusses research and advocacy efforts that address this issue. This article describes optimal care in childbirth and introduces the Lamaze International Six Healthy Birth Practices. PMID:25411536
Data Assimilation with Optimal Maps
NASA Astrophysics Data System (ADS)
El Moselhy, T.; Marzouk, Y.
2012-12-01
We present a new approach to Bayesian inference that entirely avoids Markov chain simulation and sequential importance resampling, by constructing a map that pushes forward the prior measure to the posterior measure. Existence and uniqueness of a suitable measure-preserving map is established by formulating the problem in the context of optimal transport theory. The map is written as a multivariate polynomial expansion and computed efficiently through the solution of a stochastic optimization problem. While our previous work [1] focused on static Bayesian inference problems, we now extend the map-based approach to sequential data assimilation, i.e., nonlinear filtering and smoothing. One scheme involves pushing forward a fixed reference measure to each filtered state distribution, while an alternative scheme computes maps that push forward the filtering distribution from one stage to the other. We compare the performance of these schemes and extend the former to problems of smoothing, using a map implementation of the forward-backward smoothing formula. Advantages of a map-based representation of the filtering and smoothing distributions include analytical expressions for posterior moments and the ability to generate arbitrary numbers of independent uniformly-weighted posterior samples without additional evaluations of the dynamical model. Perhaps the main advantage, however, is that the map approach inherently avoids issues of sample impoverishment, since it explicitly represents the posterior as the pushforward of a reference measure, rather than with a particular set of samples. The computational complexity of our algorithm is comparable to state-of-the-art particle filters. Moreover, the accuracy of the approach is controlled via the convergence criterion of the underlying optimization problem. We demonstrate the efficiency and accuracy of the map approach via data assimilation in
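The map idea has a closed form in the simplest setting, which makes a useful sanity check: for a scalar Gaussian prior and Gaussian observation, the optimal (monotone) transport map from prior to posterior is linear. The sketch below assumes that 1-D conjugate-Gaussian case only; the paper's maps are multivariate polynomial expansions found by stochastic optimization.

```python
import math

def gaussian_posterior(prior_mean, prior_var, obs, obs_var):
    # Conjugate Gaussian update for x ~ N(m0, v0), y = x + N(0, obs_var) noise
    v = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    m = v * (prior_mean / prior_var + obs / obs_var)
    return m, v

def transport_map(prior_mean, prior_var, post_mean, post_var):
    """Monotone linear map T pushing N(prior_mean, prior_var) forward to
    N(post_mean, post_var): shift the mean, rescale the spread."""
    a = math.sqrt(post_var / prior_var)
    return lambda x: post_mean + a * (x - prior_mean)
```

Pushing prior samples through T yields exact, uniformly-weighted posterior samples with no resampling step, which is the property the abstract highlights.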
Optimal breast cancer pathology manifesto.
Tot, T; Viale, G; Rutgers, E; Bergsten-Nordström, E; Costa, A
2015-11-01
This manifesto was prepared by a European Breast Cancer (EBC) Council working group and launched at the European Breast Cancer Conference in Glasgow on 20 March 2014. It sets out optimal technical and organisational requirements for a breast cancer pathology service, in the light of concerns about variability and lack of patient-centred focus. It is not a guideline about how pathology services should be performed. It is a call for all in the cancer community--pathologists, oncologists, patient advocates, health administrators and policymakers--to check that services are available that serve the needs of patients in a high quality, timely way. PMID:26283037
Trajectory Analysis and Optimization System
1996-06-04
TAOS is a general-purpose software tool capable of analyzing nearly any type of three degree-of-freedom point-mass, high-speed trajectory. Input files contain aerodynamic coefficients, propulsion data, and a trajectory description. The trajectory description divides the trajectory into segments, and within each segment, guidance rules provided by the user describe how the trajectory is computed. Output files contain tabulated trajectory information such as position, velocity, and acceleration. Parametric optimization provides a powerful method for satisfying mission-planning constraints, and trajectories involving more than one vehicle can be computed within a single problem.
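The segment-by-segment structure can be sketched with a toy integrator: a three degree-of-freedom point mass flown through one segment under a fixed guidance rule (here, constant thrust). This is a stand-in illustration only; the state layout, parameter names, and the flat-Earth/no-drag assumptions are not TAOS's.

```python
def fly_segment(state, thrust, mass, dt, steps, g=9.81):
    """Forward-Euler integration of one 3-DOF point-mass trajectory segment
    (flat Earth, no drag, constant thrust). Returns the end-of-segment state,
    which would seed the next segment under its own guidance rule."""
    x, y, z, vx, vy, vz = state
    ax = thrust[0] / mass
    ay = thrust[1] / mass
    az = thrust[2] / mass - g          # gravity acts along -z
    for _ in range(steps):
        x += vx * dt; y += vy * dt; z += vz * dt
        vx += ax * dt; vy += ay * dt; vz += az * dt
    return (x, y, z, vx, vy, vz)
```

Chaining such segments, each with its own guidance rule, and wrapping the chain in a parametric optimizer over the rule parameters mirrors the tool's overall workflow.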
Hubble Systems Optimize Hospital Schedules
NASA Technical Reports Server (NTRS)
2009-01-01
Don Rosenthal, a former Ames Research Center computer scientist who helped design the Hubble Space Telescope's scheduling software, co-founded Allocade Inc. of Menlo Park, California, in 2004. Allocade's OnCue software helps hospitals reclaim unused capacity and optimize constantly changing schedules for imaging procedures. After starting to use the software, one medical center soon reported noticeable improvements in efficiency, including a 12 percent increase in procedure volume, 35 percent reduction in staff overtime, and significant reductions in backlog and technician phone time. Allocade now offers versions for outpatient and inpatient magnetic resonance imaging (MRI), ultrasound, interventional radiology, nuclear medicine, Positron Emission Tomography (PET), radiography, radiography-fluoroscopy, and mammography.
Optimal broadcasting of mixed states
Dang Guifang; Fan Heng
2007-08-15
The N to M (M ≥ N) universal quantum broadcasting of mixed states ρ^⊗N is proposed for a qubit system. The broadcasting of mixed states is universal and optimal in the sense that the shrinking factor is independent of the input state and achieves the upper bound. The quantum broadcasting of mixed qubits is a generalization of the universal quantum cloning machine for identical pure input states. A pure-state decomposition of the identical mixed qubits ρ^⊗N is obtained.
Pilot-optimal augmentation synthesis
NASA Technical Reports Server (NTRS)
Schmidt, D. K.
1978-01-01
An augmentation synthesis method is presented that is usable in the absence of quantitative handling-qualities specifications yet explicitly includes design objectives based on pilot-rating concepts. The algorithm takes the unique approach of simultaneously solving for the stability augmentation system (SAS) gains, pilot equalization, and pilot-rating prediction via optimal control techniques. A simultaneous solution is required because the pilot model (gains, etc.) depends upon the augmented plant dynamics, and the augmentation is not known a priori. Another special feature is the use of the pilot's objective function (from which the pilot model evolves) to design the SAS.
New optimal quantum convolutional codes
NASA Astrophysics Data System (ADS)
Zhu, Shixin; Wang, Liqi; Kai, Xiaoshan
2015-04-01
One of the greatest challenges in proving the feasibility of quantum computers is protecting the quantum nature of information. Quantum convolutional codes are aimed at protecting a stream of quantum information in long-distance communication, and they are the correct generalization to the quantum domain of their classical analogs. In this paper, we construct several classes of quantum convolutional codes by employing classical constacyclic codes. These codes are optimal in the sense that they attain the Singleton bound for pure convolutional stabilizer codes.
Optimization of hydraulic turbine diffuser
NASA Astrophysics Data System (ADS)
Moravec, Prokop; Hliník, Juraj; Rudolf, Pavel
2016-03-01
The hydraulic turbine diffuser recovers pressure energy from the residual kinetic energy at the turbine runner outlet. The efficiency of this process is especially important for high-specific-speed turbines, where almost 50% of the available head is utilized within the diffuser. The pressure recovery coefficient can be significantly influenced by proper shaping of the diffuser. The present paper focuses on mathematical shape optimization methods coupled with CFD. The first method is based on the direct-search Nelder-Mead algorithm, while the second employs an adjoint solver and morphing. Results obtained with both methods are discussed and their advantages and disadvantages summarized.
Optimizing the neonatal thermal environment.
Sherman, Tami Irwin; Greenspan, Jay S; St Clair, Nancy; Touch, Suzanne M; Shaffer, Thomas H
2006-01-01
Devices used to maintain thermal stability in preterm infants have advanced over time, from the first incubator reported by Jean-Louis-Paul Denuce in 1857 to the latest Versalet Incuwarmer and Giraffe Omnibed devices today. Optimizing the thermal environment has proven significant for improving the chances of survival for small infants. Understanding the basic physiologic principles and current methodology of thermoregulation is important in the clinical care of these tiny infants. This article highlights principles of thermoregulation and the technologic advances that provide thermal support to our vulnerable infants.
Optimized microsystems-enabled photovoltaics
Cruz-Campa, Jose Luis; Nielson, Gregory N.; Young, Ralph W.; Resnick, Paul J.; Okandan, Murat; Gupta, Vipin P.
2015-09-22
Technologies pertaining to designing microsystems-enabled photovoltaic (MEPV) cells are described herein. A first restriction for a first parameter of an MEPV cell is received. Subsequently, a selection of a second parameter of the MEPV cell is received. Values for a plurality of parameters of the MEPV cell are computed such that the MEPV cell is optimized with respect to the second parameter, wherein the values for the plurality of parameters are computed based at least in part upon the restriction for the first parameter.
Magnetic design optimization using variable metrics
Davey, K.R.
1995-11-01
The optimal design of a magnet assembly for a magnetic levitated train is approached using a three step process. First, the key parameters within the objective performance index are computed for the variation range of the problem. Second, the performance index is fitted to a smooth polynomial involving products of the powers of all variables. Third, a constrained optimization algorithm is employed to predict the optimal choice of the variables. An assessment of the integrity of the optimization program is obtained by comparing the final optimized solution with that predicted by the field analysis in the final configuration. Additional field analysis is recommended around the final solution to fine tune the solution.
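The three-step procedure described above (sample the performance index, fit a smooth polynomial in products of powers of the variables, then run a constrained optimizer on the fit) can be sketched as follows. The two-variable performance index, its bounds, and the sample size are hypothetical stand-ins for the paper's actual magnet field analysis:

```python
import numpy as np
from scipy.optimize import minimize

# Step 1: sample the expensive performance index over the variation range
# (a hypothetical two-variable stand-in for the field analysis).
def performance_index(x):
    return (x[0] - 1.2) ** 2 + 2.0 * (x[1] - 0.7) ** 2 + 0.5 * x[0] * x[1]

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 2.0, size=(200, 2))
values = np.array([performance_index(s) for s in samples])

# Step 2: fit a smooth polynomial involving products of powers of the variables.
def features(x):
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2], axis=-1)

coeffs, *_ = np.linalg.lstsq(features(samples), values, rcond=None)
surrogate = lambda x: features(np.asarray(x)) @ coeffs

# Step 3: constrained optimization on the cheap polynomial surrogate.
res = minimize(surrogate, x0=[1.0, 1.0], bounds=[(0.0, 2.0), (0.0, 2.0)])

# Integrity check, as in the abstract: re-run the "field analysis" at the
# surrogate optimum and compare with the surrogate's prediction.
check = performance_index(res.x)
```

The final comparison mirrors the paper's recommendation to verify the fitted optimum against the full analysis before fine-tuning around it.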
Design optimization method for Francis turbine
NASA Astrophysics Data System (ADS)
Kawajiri, H.; Enomoto, Y.; Kurosawa, S.
2014-03-01
This paper presents a design optimization system coupled with CFD. The optimization algorithm of the system employs particle swarm optimization (PSO). Blade shape design is carried out with a NURBS curve defined by a series of control points. The system was applied to designing the stationary vanes and the runner of a higher-specific-speed Francis turbine. As the first step, single-objective optimization was performed on the stay vane profile; the second step was multi-objective optimization of the runner over a wide operating range. As a result, it was confirmed that the design system is useful for developing hydro turbines.
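The PSO algorithm named above can be sketched in its standard global-best form. Here a simple sphere function stands in for the CFD-evaluated objective, and the parameter values (inertia 0.7, acceleration coefficients 1.5) are typical textbook choices rather than the paper's:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()       # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_x, best_f = pso(lambda p: np.sum(p ** 2), dim=3)
```

In the paper's setting, `f` would evaluate a CFD run on the blade shape decoded from the NURBS control points.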
Integrated multidisciplinary design optimization of rotorcraft
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Mantay, Wayne R.
1989-01-01
The NASA/Army research plan for developing the logic elements for helicopter rotor design optimization by integrating appropriate disciplines and accounting for important interactions among the disciplines is discussed. The optimization formulation is described in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and an initial effort at defining the interdisciplinary coupling is summarized. Results are presented on the achievements made in the rotor dynamic optimization for vibration reduction, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.
Product Distributions for Distributed Optimization. Chapter 1
NASA Technical Reports Server (NTRS)
Bieniawski, Stefan R.; Wolpert, David H.
2004-01-01
With connections to bounded rational game theory, information theory and statistical mechanics, Product Distribution (PD) theory provides a new framework for performing distributed optimization. Furthermore, PD theory extends and formalizes Collective Intelligence, thus connecting distributed optimization to distributed Reinforcement Learning (RL). This paper provides an overview of PD theory and details an algorithm for performing optimization derived from it. The approach is demonstrated on two unconstrained optimization problems, one with discrete variables and one with continuous variables. To highlight the connections between PD theory and distributed RL, the results are compared with those obtained using distributed reinforcement-learning-inspired optimization approaches. The inter-relationship of the techniques is discussed.
Enhanced ant colony optimization for multiscale problems
NASA Astrophysics Data System (ADS)
Hu, Nan; Fish, Jacob
2016-03-01
The present manuscript addresses the issue of computational complexity of optimizing nonlinear composite materials and structures at multiple scales. Several solutions are detailed to meet the enormous computational challenge of optimizing nonlinear structures at multiple scales including: (i) enhanced sampling procedure that provides superior performance of the well-known ant colony optimization algorithm, (ii) a mapping-based meshing of a representative volume element that unlike unstructured meshing permits sensitivity analysis on coarse meshes, and (iii) a multilevel optimization procedure that takes advantage of possible weak coupling of certain scales. We demonstrate the proposed optimization procedure on elastic and inelastic laminated plates involving three scales.
Flat-plate photovoltaic array design optimization
NASA Technical Reports Server (NTRS)
Ross, R. G., Jr.
1980-01-01
An analysis is presented which integrates the results of specific studies in the areas of photovoltaic structural design optimization, optimization of array series/parallel circuit design, thermal design optimization, and optimization of environmental protection features. The analysis is based on minimizing the total photovoltaic system life-cycle energy cost including repair and replacement of failed cells and modules. This approach is shown to be a useful technique for array optimization, particularly when time-dependent parameters such as array degradation and maintenance are involved.
Automatic discovery of optimal classes
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John; Freeman, Don; Self, Matthew
1986-01-01
A criterion, based on Bayes' theorem, is described that defines the optimal set of classes (a classification) for a given set of examples. This criterion is transformed into an equivalent minimum-message-length criterion with an intuitive information interpretation. The criterion does not require that the number of classes be specified in advance; this is determined by the data. The minimum-message-length criterion includes the message length required to describe the classes, so there is a built-in bias against adding new classes unless they lead to a reduction in the message length required to describe the data. Unfortunately, the search space of possible classifications is too large to search exhaustively, so heuristic search methods, such as simulated annealing, are applied. Tutored learning and probabilistic prediction in particular cases are an important indirect result of optimal class discovery. Extensions to the basic class-induction program include the ability to combine category and real-valued data, hierarchical classes, independent classifications, and deciding for each class which attributes are relevant.
Image-driven mesh optimization
Lindstrom, P; Turk, G
2001-01-05
We describe a method of improving the appearance of a low vertex count mesh in a manner that is guided by rendered images of the original, detailed mesh. This approach is motivated by the fact that greedy simplification methods often yield meshes that are poorer than what can be represented with a given number of vertices. Our approach relies on edge swaps and vertex teleports to alter the mesh connectivity, and uses the downhill simplex method to simultaneously improve vertex positions and surface attributes. Note that this is not a simplification method--the vertex count remains the same throughout the optimization. At all stages of the optimization the changes are guided by a metric that measures the differences between rendered versions of the original model and the low vertex count mesh. This method creates meshes that are geometrically faithful to the original model. Moreover, the method takes into account more subtle aspects of a model such as surface shading or whether cracks are visible between two interpenetrating parts of the model.
The venom optimization hypothesis revisited.
Morgenstern, David; King, Glenn F
2013-03-01
Animal venoms are complex chemical mixtures that typically contain hundreds of proteins and non-proteinaceous compounds, resulting in a potent weapon for prey immobilization and predator deterrence. However, because venoms are protein-rich, they come with a high metabolic price tag. The metabolic cost of venom is sufficiently high to result in secondary loss of venom whenever its use becomes non-essential to survival of the animal. The high metabolic cost of venom leads to the prediction that venomous animals may have evolved strategies for minimizing venom expenditure. Indeed, various behaviors have been identified that appear consistent with frugality of venom use. This has led to formulation of the "venom optimization hypothesis" (Wigger et al. (2002) Toxicon 40, 749-752), also known as "venom metering", which postulates that venom is metabolically expensive and therefore used frugally through behavioral control. Here, we review the available data concerning economy of venom use by animals with either ancient or more recently evolved venom systems. We conclude that the convergent nature of the evidence in multiple taxa strongly suggests the existence of evolutionary pressures favoring frugal use of venom. However, there remains an unresolved dichotomy between this economy of venom use and the lavish biochemical complexity of venom, which includes a high degree of functional redundancy. We discuss the evidence for biochemical optimization of venom as a means of resolving this conundrum.
Approximating random quantum optimization problems
NASA Astrophysics Data System (ADS)
Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.
2013-06-01
We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.
Environmental statistics and optimal regulation
NASA Astrophysics Data System (ADS)
Sivak, David; Thomson, Matt
2015-03-01
The precision with which an organism can detect its environment, and the timescale for and statistics of environmental change, will affect the suitability of different strategies for regulating protein levels in response to environmental inputs. We propose a general framework--here applied to the enzymatic regulation of metabolism in response to changing nutrient concentrations--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, and the costs associated with enzyme production. We find: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
Optimization in fractional aircraft ownership
NASA Astrophysics Data System (ADS)
Septiani, R. D.; Pasaribu, H. M.; Soewono, E.; Fayalita, R. A.
2012-05-01
Fractional aircraft ownership is a new concept in flight ownership management in which each individual or corporation may own a fraction of an aircraft. In this system, the owners have the privilege to schedule their flights according to their needs. The fractional management company (FMC) manages all aspects of aircraft operations, including utilization of the FMC's aircraft in combination with outsourced aircraft. This gives the owners the right to enjoy the benefits of private aviation. However, an FMC faces complicated business requirements that neither commercial airlines nor charter airlines face. Here, optimization models are constructed to minimize the number of aircraft in order to maximize profit and to minimize the daily operating cost. In this paper, three demand scenarios are constructed to represent different flight operations from different types of fractional owners. The problems are formulated as an optimization of profit and of daily operational cost to find the optimum flight assignments satisfying the weekly and daily demand, respectively, from the owners. Numerical results are obtained by a genetic algorithm.
Path optimization for oil probe
NASA Astrophysics Data System (ADS)
Smith, O'Neil; Rahmes, Mark; Blue, Mark; Peter, Adrian
2014-05-01
We discuss a robust method for optimal oil-probe path planning inspired by medical imaging. Horizontal wells require three-dimensional steering made possible by the rotary steerable capabilities of the system, which allows the hole to intersect multiple target shale gas zones. Horizontal "legs" can be over a mile long; the longer the exposure length, the more oil and natural gas is drained and the faster it can flow. More oil and natural gas can be produced with fewer wells and less surface disturbance. Horizontal drilling can help producers tap oil and natural gas deposits under surface areas where a vertical well cannot be drilled, such as under developed or environmentally sensitive areas. Drilling creates well paths with multiple twists and turns in an attempt to hit multiple accumulations from a single well location. Our algorithm can be used to augment current state-of-the-art methods. Our goal is to obtain a 3D path with nodes describing the optimal route to the destination. The algorithm works with big data and reduces the cost of planning for probe insertion. Our solution may help increase the energy extracted relative to the input energy.
Optimal foraging in semantic memory.
Hills, Thomas T; Jones, Michael N; Todd, Peter M
2012-04-01
Do humans search in memory using dynamic local-to-global search strategies similar to those that animals use to forage between patches in space? If so, do their dynamic memory search policies correspond to optimal foraging strategies seen for spatial foraging? Results from a number of fields suggest these possibilities, including the shared structure of the search problems-searching in patchy environments-and recent evidence supporting a domain-general cognitive search process. To investigate these questions directly, we asked participants to recover from memory as many animal names as they could in 3 min. Memory search was modeled over a representation of the semantic search space generated from the BEAGLE memory model of Jones and Mewhort (2007), via a search process similar to models of associative memory search (e.g., Raaijmakers & Shiffrin, 1981). We found evidence for local structure (i.e., patches) in memory search and patch depletion preceding dynamic local-to-global transitions between patches. Dynamic models also significantly outperformed nondynamic models. The timing of dynamic local-to-global transitions was consistent with optimal search policies in space, specifically the marginal value theorem (Charnov, 1976), and participants who were more consistent with this policy recalled more items.
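The marginal value theorem invoked above says a forager should leave a patch when the instantaneous gain rate within the patch drops to the long-term average rate across the environment. A minimal numerical sketch, using a hypothetical diminishing-returns gain curve and travel time (not the paper's memory data):

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical within-patch gain curve with diminishing returns:
# gain(t) = G_MAX * (1 - exp(-LAM * t)), travel time TRAVEL between patches.
G_MAX, LAM, TRAVEL = 10.0, 1.0, 2.0

def gain(t):
    return G_MAX * (1.0 - np.exp(-LAM * t))

def marginal(t):
    # instantaneous gain rate, d(gain)/dt
    return G_MAX * LAM * np.exp(-LAM * t)

# Marginal value theorem: leave when the marginal gain rate equals the
# overall average rate gain(t) / (t + TRAVEL).
def mvt_residual(t):
    return marginal(t) - gain(t) / (t + TRAVEL)

t_opt = brentq(mvt_residual, 1e-6, 50.0)   # optimal patch residence time
```

In the memory-foraging analogy, `t_opt` corresponds to the moment when retrieval from the current semantic patch slows enough that switching to a new patch pays off.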
Evolutionary optimization of protein folding.
Debès, Cédric; Wang, Minglei; Caetano-Anollés, Gustavo; Gräter, Frauke
2013-01-01
Nature has shaped the makeup of proteins since their appearance ~3.8 billion years ago. However, the fundamental drivers of structural change responsible for the extraordinary diversity of proteins have yet to be elucidated. Here we explore whether protein evolution affects folding speed. We estimated folding times for the present-day catalog of protein domains directly from their size-modified contact order. These values were mapped onto an evolutionary timeline of domain appearance derived from a phylogenomic analysis of protein domains in 989 fully sequenced genomes. Our results show a clear overall increase of folding speed during evolution, with known ultra-fast downhill folders appearing rather late in the timeline. Remarkably, folding optimization depends on secondary structure. While alpha-folds showed a tendency to fold faster throughout evolution, beta-folds exhibited a trend of increasing folding time during the last ~1.5 billion years, beginning during the "big bang" of domain combinations. As a consequence, these domain structures are on average slow folders today. Our results suggest that fast and efficient folding of domains shaped the universe of protein structure. This finding supports the hypothesis that optimization of the kinetic and thermodynamic accessibility of the native fold reduces protein-aggregation propensities that hamper cellular functions. PMID:23341762
Optimal cue integration in ants.
Wystrach, Antoine; Mangan, Michael; Webb, Barbara
2015-10-01
In situations with redundant or competing sensory information, humans have been shown to perform cue integration, weighting different cues according to their certainty in a quantifiably optimal manner. Ants have been shown to merge the directional information available from their path integration (PI) and visual memory, but as yet it is not clear that they do so in a way that reflects the relative certainty of the cues. In this study, we manipulate the variance of the PI home vector by allowing ants (Cataglyphis velox) to run different distances and testing their directional choice when the PI vector direction is put in competition with visual memory. Ants show progressively stronger weighting of their PI direction as PI length increases. The weighting is quantitatively predicted by modelling the expected directional variance of home vectors of different lengths and assuming optimal cue integration. However, a subsequent experiment suggests ants may not actually compute an internal estimate of the PI certainty, but are using the PI home vector length as a proxy.
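The "quantifiably optimal" weighting referred to above is inverse-variance (minimum-variance) cue combination: each cue is weighted by its reliability, 1/variance. A minimal sketch with illustrative headings and variances (hypothetical numbers, and headings treated as linear quantities for simplicity, though real directions are circular):

```python
import numpy as np

def integrate_cues(estimates, variances):
    """Minimum-variance fusion of independent cues: weight each cue by its
    inverse variance, normalized so the weights sum to one."""
    estimates = np.asarray(estimates, float)
    variances = np.asarray(variances, float)
    w = (1.0 / variances) / np.sum(1.0 / variances)
    fused = np.sum(w * estimates)
    fused_var = 1.0 / np.sum(1.0 / variances)  # never worse than the best cue
    return fused, fused_var

# Hypothetical example: a long, noisy PI home vector (heading 30 deg,
# variance 9) versus a reliable visual memory (heading 10 deg, variance 1).
heading, var = integrate_cues([30.0, 10.0], [9.0, 1.0])
# Weights are 0.1 and 0.9, so the fused heading is 12.0 with variance 0.9.
```

This mirrors the experiment's prediction: as PI vector length (and hence PI variance) changes, the fused direction shifts smoothly between the two cues.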
Optimal Defaults and Active Decisions*
Carroll, Gabriel D.; Choi, James J.; Laibson, David; Madrian, Brigitte C.; Metrick, Andrew
2009-01-01
Defaults often have a large influence on consumer decisions. We identify an overlooked but practical alternative to defaults: requiring individuals to make an explicit choice for themselves. We study such “active decisions” in the context of 401(k) saving. We find that compelling new hires to make active decisions about 401(k) enrollment raises the initial fraction that enroll by 28 percentage points relative to a standard opt-in enrollment procedure, producing a savings distribution three months after hire that would take 30 months to achieve under standard enrollment. We also present a model of 401(k) enrollment and derive conditions under which the optimal enrollment regime is automatic enrollment (i.e., default enrollment), standard enrollment (i.e., default non-enrollment), or active decisions (i.e., no default and compulsory choice). Active decisions are optimal when consumers have a strong propensity to procrastinate and savings preferences are highly heterogeneous. Financial illiteracy, however, favors default enrollment over active decision enrollment. PMID:20041043
Constraint programming based biomarker optimization.
Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng
2015-01-01
Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data has proven to be one of the successful solutions. Most existing feature selection algorithms do not allow interactive input from users during the optimization process of feature selection. This study addresses that limitation by fixing a few user-specified features in the final selected feature subset and formulating these user-input features as constraints in a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs comparably to, or much better than, existing feature selection algorithms, even with constraints from both the literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet-lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
Optimal recovery of local truth
NASA Astrophysics Data System (ADS)
Rodriguez, C. C.
2001-05-01
Probability mass curves the data space with horizons! Let f be a multivariate probability density function with continuous second-order partial derivatives. Consider the problem of estimating the true value of f(z) > 0 at a single point z from n independent observations. It is shown that the fastest possible estimators (such as the k-nearest-neighbor and kernel estimators) have minimum asymptotic mean-square errors when the space of observations is thought of as conformally curved. The optimal metric is shown to be generated by the Hessian of f in the regions where the Hessian is definite. Thus, the peaks and valleys of f are surrounded by singular horizons when the Hessian changes signature from Riemannian to pseudo-Riemannian. Adaptive estimators based on the optimal variable metric show considerable theoretical and practical improvements over traditional methods. The formulas simplify dramatically when the dimension of the data space is 4. The similarities with general relativity are striking but possibly illusory at this point. However, these results suggest that nonparametric density estimation may have something new to say about current physical theory.
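For context, a plain k-nearest-neighbor density estimator under the usual flat Euclidean metric looks like the following; the abstract's claim is that the asymptotic mean-square error of such estimators improves when distances are instead measured in a metric generated by the Hessian of f. The sample size and k below are illustrative choices:

```python
from math import gamma, pi
import numpy as np

def knn_density(z, data, k):
    """k-nearest-neighbor density estimate at point z using the flat
    Euclidean metric: k / (n * volume of the ball reaching the kth neighbor)."""
    data = np.asarray(data, float)
    n, d = data.shape
    r_k = np.sort(np.linalg.norm(data - z, axis=1))[k - 1]
    vol = pi ** (d / 2) / gamma(d / 2 + 1) * r_k ** d  # d-ball volume
    return k / (n * vol)

rng = np.random.default_rng(1)
sample = rng.normal(size=(5000, 1))            # standard normal data
est = knn_density(np.array([0.0]), sample, k=50)
# The true density at 0 is 1/sqrt(2*pi), roughly 0.399.
```

Replacing the Euclidean norm here with a Hessian-generated (conformally curved) metric is exactly the adaptation the abstract advocates.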
Optimal Appearance Model for Visual Tracking.
Wang, Yuru; Jiang, Longkui; Liu, Qiaoyuan; Yin, Minghao
2016-01-01
Many studies argue that integrating multiple cues in an adaptive way increases tracking performance. However, what is the definition of adaptiveness and how to realize it remains an open issue. On the premise that the model with optimal discriminative ability is also optimal for tracking the target, this work realizes adaptiveness and robustness through the optimization of multi-cue integration models. Specifically, based on prior knowledge and current observation, a set of discrete samples are generated to approximate the foreground and background distribution. With the goal of optimizing the classification margin, an objective function is defined, and the appearance model is optimized by introducing optimization algorithms. The proposed optimized appearance model framework is embedded into a particle filter for a field test, and it is demonstrated to be robust against various kinds of complex tracking conditions. This model is general and can be easily extended to other parameterized multi-cue models. PMID:26789639
Optimal design of compact spur gear reductions
NASA Technical Reports Server (NTRS)
Savage, M.; Lattime, S. B.; Kimmel, J. A.; Coe, H. H.
1992-01-01
The optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in addition to gear mesh parameters. Designs for single mesh spur gear reductions are based on optimization of system life, system volume, and system weight including gears, support shafts, and the four bearings. The overall optimization allows component properties to interact, yielding the best composite design. A modified feasible directions search algorithm directs the optimization through a continuous design space. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for optimization. After finding the continuous optimum, the designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearings on the optimal configurations.
A survey of compiler optimization techniques
NASA Technical Reports Server (NTRS)
Schneck, P. B.
1972-01-01
Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at source-code level is also presented.
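As a small illustration of the architecture-independent, source-level category, the following sketch folds constant arithmetic subexpressions in Python source using the standard `ast` module. This is a toy pass for illustration, not the universal optimizer the survey reviews:

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Architecture-independent optimization example: replace arithmetic on
    constant operands with the computed constant, leaving the rest intact."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first (bottom-up)
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            try:
                value = eval(compile(ast.Expression(node), "<fold>", "eval"))
            except Exception:
                return node       # e.g. division by zero: leave for runtime
            return ast.copy_location(ast.Constant(value), node)
        return node

tree = ast.parse("y = 2 * 3 + x")
folded = ConstantFolder().visit(tree)
ast.fix_missing_locations(folded)
result = ast.unparse(folded)  # the constant subexpression 2 * 3 becomes 6
```

Machine-dependent optimizations, by contrast, would operate on the generated instructions and are not expressible at this level.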
Celik, Yuksel; Ulker, Erkan
2013-01-01
Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm developed by inspiration of the mating and fertilization process of honey bees, and is a kind of swarm intelligence optimization. In this study we propose an improved marriage in honey bees optimization (IMBO) that adds a Levy flight algorithm for the queen's mating flight and a neighborhood search for improving the worker drones. The IMBO algorithm's performance and its success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms.
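Levy flight steps of the kind used for the queen's mating flight are commonly generated with Mantegna's algorithm; the following is a sketch under our own choice of the stability index beta and scale conventions (the abstract does not give the authors' exact scheme):

```python
import math
import random

def levy_step(beta=1.5):
    """One heavy-tailed step via Mantegna's algorithm (beta in (1, 2))."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u = random.gauss(0, sigma)   # numerator ~ N(0, sigma^2)
    v = random.gauss(0, 1)       # denominator ~ N(0, 1)
    return u / abs(v) ** (1 / beta)

random.seed(0)
steps = [levy_step() for _ in range(1000)]
```

Most steps are small, but the heavy tail occasionally produces very long jumps, which is what lets the queen's flight escape local optima.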
Li, Zukui; Ding, Ran; Floudas, Christodoulos A.
2011-01-01
Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented.
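For the interval (box) uncertainty set, the robust counterpart of a single linear constraint has a simple closed form: a^T x <= b with each a_j in [a_j - d_j, a_j + d_j] becomes a^T x + d^T |x| <= b. A small sketch with our own toy numbers (not the paper's examples), checking the worst-case bound by sampling:

```python
import random

def robust_lhs(a_nom, delta, x):
    """Worst-case left-hand side over the interval uncertainty set."""
    return (sum(a * xi for a, xi in zip(a_nom, x))
            + sum(d * abs(xi) for d, xi in zip(delta, x)))

a_nom, delta = [2.0, -1.0, 0.5], [0.3, 0.2, 0.1]
x = [1.0, -2.0, 4.0]

worst = robust_lhs(a_nom, delta, x)

# every sampled realization of the uncertain coefficients stays below the bound
random.seed(1)
for _ in range(10000):
    a = [an + random.uniform(-d, d) for an, d in zip(a_nom, delta)]
    assert sum(ai * xi for ai, xi in zip(a, x)) <= worst + 1e-9
```

Replacing each uncertain constraint by this worst-case form yields a deterministic problem of the same class, which is why the robust counterpart of an LP stays an LP.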
Trajectory optimization for the National Aerospace Plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1993-01-01
The objective of this second phase research is to investigate the optimal ascent trajectory for the National Aerospace Plane (NASP) from runway take-off to orbital insertion and to address the unique problems associated with hypersonic flight trajectory optimization. The trajectory optimization problem for an aerospace plane is a highly challenging problem because of the complexity involved. Previous work has been successful in obtaining sub-optimal trajectories by using energy-state approximation and time-scale decomposition techniques. But it is known that the energy-state approximation is not valid in certain portions of the trajectory. This research aims at employing the full dynamics of the aerospace plane and emphasizing direct trajectory optimization methods. The major accomplishments of this research include the first-time development of an inverse dynamics approach in trajectory optimization, which enables us to generate optimal trajectories for the aerospace plane efficiently and reliably, and general analytical solutions to constrained hypersonic trajectories that have wide application in trajectory optimization as well as in guidance and flight dynamics. Optimal trajectories in abort landing and ascent augmented with rocket propulsion and thrust vectoring control were also investigated. Motivated by this study, a new global trajectory optimization tool using continuous simulated annealing and a nonlinear predictive feedback guidance law have been under investigation, and some promising results have been obtained, which may well lead to more significant development and application in the near future.
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly, traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation, genetic algorithms and differential evolution, to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES
S. BOETTCHER; A. PERCUS
2000-08-01
We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by "self-organized criticality," a concept introduced to describe emergent complexity in many physical systems. In contrast to genetic algorithms, which operate on an entire "gene pool" of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called "avalanches," ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
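The replace-the-worst-element rule can be sketched concretely. Below is a toy tau-EO applied to MAX-CUT; the graph, parameter values, and move rule are our illustrative choices, not the authors' implementation. Each vertex's "fitness" is the fraction of its edges currently cut; a vertex is selected with power-law probability over the fitness ranks (worst first) and flipped unconditionally:

```python
import random

def eo_maxcut(edges, n, tau=1.4, steps=2000, seed=0):
    """tau-EO for MAX-CUT on n vertices; returns the best cut size seen."""
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    cut = lambda: sum(side[u] != side[v] for u, v in edges)
    best = cut()
    for _ in range(steps):
        fit = [sum(side[i] != side[j] for j in adj[i]) / max(len(adj[i]), 1)
               for i in range(n)]
        order = sorted(range(n), key=lambda i: fit[i])   # worst first
        weights = [(k + 1) ** -tau for k in range(n)]    # rank k ~ k^(-tau)
        i = rng.choices(order, weights=weights)[0]
        side[i] ^= 1                                     # unconditional move
        best = max(best, cut())
    return best

# a 4-cycle is bipartite, so every edge can be cut
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
best = eo_maxcut(edges, 4)
```

Note there is no acceptance test and no cooling schedule; the single parameter tau controls how greedily the worst elements are targeted.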
On Optimal Bilinear Quadrilateral Meshes
D'Azevedo, E.
1998-10-26
The novelty of this work is in presenting interesting error properties of two types of asymptotically optimal quadrilateral meshes for bilinear approximation. The first type of mesh has an error equidistributing property where the maximum interpolation error is asymptotically the same over all elements. The second type has faster than expected super-convergence property for certain saddle-shaped data functions. The super-convergent mesh may be an order of magnitude more accurate than the error equidistributing mesh. Both types of mesh are generated by a coordinate transformation of a regular mesh of squares. The coordinate transformation is derived by interpreting the Hessian matrix of a data function as a metric tensor. The insights in this work may have application in mesh design near known corner or point singularities.
Hybrid Optimization Parallel Search PACKage
2009-11-10
HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
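The generating set search family can be illustrated by its simplest member, compass search: poll along the +/- coordinate directions and contract the step when no direction improves. This is a hedged sketch under our own simplifications (first-improvement polling, no constraints), not HOPSPACK's actual GSS code:

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Derivative-free minimization by coordinate polling with contraction."""
    x, fx = list(x0), f(x0)
    n = len(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(n):
            for s in (+step, -step):
                y = x[:]
                y[i] += s
                fy = f(y)
                if fy < fx:            # accept the first improving poll point
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                # contract the pattern
            if step < tol:
                break
    return x, fx

# no derivatives needed: works on a nonsmooth objective
f = lambda x: abs(x[0] - 1) + (x[1] + 2) ** 2
x, fx = compass_search(f, [0.0, 0.0])
```

Each poll is an independent function evaluation, which is exactly what makes the pattern easy to farm out to MPI ranks or threads in a framework like HOPSPACK.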
Optimal screening for genetic diseases.
Nævdal, Eric
2014-12-01
Screening for genetic diseases is performed in many regions and/or ethnic groups where there is a high prevalence of possibly malign genes. The propagation of such genes can be considered a dynamic externality. Given that many of these diseases are untreatable and give rise to truly tragic outcomes, they are a source of societal concern, and the screening process should perhaps be regulated. This paper incorporates a standard model of genetic propagation into an economic model of dynamic management to derive cost benefit rules for optimal screening. The highly non-linear nature of genetic dynamics gives rise to perhaps surprising results that include discontinuous controls and threshold effects. One insight is that any screening program that is in place for any amount of time should screen all individuals in a target population. The incorporation of genetic models may prove to be useful to several emerging fields in economics such as genoeconomics, neuroeconomics and paleoeconomics.
THE OPTIMAL GRAVITATIONAL LENS TELESCOPE
Surdej, J.; Hanot, C.; Sadibekova, T.; Delacroix, C.; Habraken, S.; Coleman, P.; Dominik, M.; Le Coroller, H.; Mawet, D.; Quintana, H.; Sluse, D.
2010-05-15
Given an observed gravitational lens mirage produced by a foreground deflector (cf. galaxy, quasar, cluster, ...), it is possible via numerical lens inversion to retrieve the real source image, taking full advantage of the magnifying power of the cosmic lens. This has been achieved in the past for several remarkable gravitational lens systems. Instead, we propose here to invert an observed multiply imaged source directly at the telescope using an ad hoc optical instrument which is described in the present paper. Compared to the previous method, this should allow one to detect fainter source features as well as to use such an optimal gravitational lens telescope to explore even fainter objects located behind and near the lens. Laboratory and numerical experiments illustrate this new approach.
HCCI Engine Optimization and Control
Rolf D. Reitz
2005-09-30
The goal of this project was to develop methods to optimize and control Homogeneous-Charge Compression Ignition (HCCI) engines, with emphasis on diesel-fueled engines. HCCI offers the potential of nearly eliminating IC engine NOx and particulate emissions at reduced cost over Compression Ignition Direct Injection engines (CIDI) by controlling pollutant emissions in-cylinder. The project was initiated in January, 2002, and the present report is the final report for work conducted on the project through December 31, 2004. Periodic progress has also been reported at bi-annual working group meetings held at USCAR, Detroit, MI, and at the Sandia National Laboratories. Copies of these presentation materials are available on CD-ROM, as distributed by the Sandia National Labs. In addition, progress has been documented in DOE Advanced Combustion Engine R&D Annual Progress Reports for FY 2002, 2003 and 2004. These reports are included as the Appendices in this Final report.
Mission analysis flow sequencing optimization
NASA Technical Reports Server (NTRS)
Scott, M.
1986-01-01
This investigation is an extension of a project dealing with the problem of optimal use of ground resources for future space missions. This problem was formulated as a linear programming problem using an indirect approach. Instead of minimizing the inventory level of needed ground resources, the overlapping periods during which the same types of resources are used by various flights are minimized. The model was built upon the assumption that during the time interval under consideration, the costs of various needed resources remain constant. Under other assumptions concerning costs of resources, the objective function, in general, assumes a non-linear form. In this study, one case where the form of the objective function turns out to be quadratic is considered. Also, disadvantages and limitations of the approach used are briefly discussed.
Optimal swimming of model ciliates
NASA Astrophysics Data System (ADS)
Michelin, Sebastien; Lauga, Eric
2010-11-01
In order to swim at low Reynolds numbers, microorganisms must undergo non-time-reversible shape changes. In ciliary locomotion, this symmetry breaking is achieved through the actuation of many flexible cilia distributed on the surface of the organism. Experimental studies have demonstrated the collective synchronization of neighboring cilia (metachronal waves), whose exact origin is still debated. Here we consider the hydrodynamic energetic cost of ciliary locomotion and consider an axisymmetric envelope model with prescribed tangential surface displacements. We show that the periodic strokes of this model ciliated swimmer that minimize the energy dissipation in the surrounding fluid achieve symmetry-breaking at the organism level through the propagation of wave patterns similar to metachronal waves. We analyze the properties of the optimal strokes, in particular the impact on the swimming performance introduced by a restriction on maximum cilia tip displacement due to the finite cilia length.
Optimal monitoring of computer networks
Fedorov, V.V.; Flanagan, D.
1997-08-01
The authors apply the ideas from optimal design theory to the very specific area of monitoring large computer networks. The behavior of these networks is so complex and uncertain that it is quite natural to use the statistical methods of experimental design which were originated in such areas as biology, behavioral sciences and agriculture, where the random character of phenomena is a crucial component and systems are too complicated to be described by some sophisticated deterministic models. They want to emphasize that only the first steps have been completed, and relatively simple underlying concepts about network functions have been used. Their immediate goal is to initiate studies focused on developing efficient experimental design techniques which can be used by practitioners working with large networks operating and evolving in a random environment.
Low thrust optimal orbital transfers
NASA Technical Reports Server (NTRS)
Cobb, Shannon S.
1994-01-01
For many optimal transfer problems it is reasonable to expect that the minimum time solution is also the minimum fuel solution. However, if one allows the propulsion system to be turned off and back on, it is clear that these two solutions may differ. In general, high thrust transfers resemble the well known impulsive transfers where the burn arcs are of very short duration. The low and medium thrust transfers differ in that their thrust acceleration levels yield longer burn arcs and thus will require more revolutions. In this research, we considered two approaches for solving this problem: a powered flight guidance algorithm previously developed for higher thrust transfers was modified and an 'averaging technique' was investigated.
Optimal strategies for familial searching.
Kruijver, Maarten; Meester, Ronald; Slooten, Klaas
2014-11-01
Familial searching is the process of finding potential relatives of the donor of a crime scene profile in a DNA database. Several authors have proposed strategies for generating candidate lists of potential relatives. This paper reviews four strategies and investigates theoretical properties as well as empirical behavior, using a comprehensive simulation study on mock databases. The effectiveness of a familial search is shown to highly depend on the case profile as well as on the tuning parameters. We give recommendations for proceeding in an optimal way and on how to choose tuning parameters both in general and on a case-by-case basis. Additionally we treat searching heterogeneous databases (not all profiles comprise the same loci) and composite searching for multiple types of kinship. An R-package for reproducing results in a particular case is released to help decision-making in familial searching.
Optimizing SNAP for Weak Lensing
NASA Astrophysics Data System (ADS)
High, F. W.; Ellis, R. S.; Massey, R. J.; Rhodes, J. D.; Lamoureux, J. I.; SNAP Collaboration
2004-12-01
The Supernova/Acceleration Probe (SNAP) satellite proposes to measure weak gravitational lensing in addition to type Ia supernovae. Its pixel scale has been set to 0.10 arcsec per pixel as established by the needs of supernova observations. To find the optimal pixel scale for accurate weak lensing measurements we conduct a tradeoff study in which, via simulations, we fix the survey size in total pixels and vary the pixel scale. Our preliminary results show that with a smaller scale of about 0.08 arcsec per pixel we can minimize the contribution of intrinsic shear variance to the error on the power spectrum of mass density distortion. Currently we are testing the robustness of this figure as well as determining whether dithering yields analogous results.
Mode tracking issues in optimization
NASA Astrophysics Data System (ADS)
Eldred, M. S.; Venkayya, V. B.; Anderson, W. J.
1993-04-01
Methodology for the tracking of eigenpairs during perturbations in the eigenvalue problem, for both self-adjoint and non-self-adjoint cases, is presented. This methodology, based on mode tracking techniques, is considered to be an important bookkeeping tool which enables the analyst to maintain proper identification of modal data, thus avoiding confusion caused by mode switching. It is shown that, in optimization with frequency constraints, higher order eigenpair perturbations (HOEP) and the cross-orthogonality check (CORC) are effective in eliminating convergence problems caused by mode switching. In V-g flutter analysis, C-HOEP is found to be more robust than C-CORC. C-HOEP is capable of successfully tracking modes near flutter despite mode shape similarity.
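Cross-orthogonality checks of the kind discussed above are commonly implemented with the modal assurance criterion (MAC). The following toy sketch (our own construction, real-valued modes only; complex modes would need a conjugate) shows the bookkeeping idea: after a perturbation, each old mode is matched to the new eigenvector whose shape it correlates with most strongly, even if the solver returns them in a different order:

```python
import numpy as np

def mac(u, v):
    """Modal assurance criterion between two real mode shapes."""
    return abs(u @ v) ** 2 / ((u @ u) * (v @ v))

def track_modes(old_vecs, new_vecs):
    """For each old mode (column), return the index of its best new match."""
    pairing = []
    for u in old_vecs.T:
        pairing.append(max(range(new_vecs.shape[1]),
                           key=lambda j: mac(u, new_vecs[:, j])))
    return pairing

# two nearly identical modes arrive in swapped column order after a perturbation
old = np.array([[1.0, 0.0],
                [0.0, 1.0]])
new = np.array([[0.1, 1.0],
                [1.0, 0.1]])
pairing = track_modes(old, new)   # -> [1, 0]: labels are restored
```

A production tracker would also guard against two old modes claiming the same new vector; the greedy matching here is the minimal version of the idea.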
Actuator-valve interface optimization
Burchett, O.L.; Jones, R.L.
1986-01-01
A computer code, Actuator Valve Response (AVR), has been developed to optimize the explosive actuator-valve interface parameters so that the valve plunger velocity is at a maximum when the plunger reaches the valve tubes. The code considers three forces to act on the valve plunger before the plunger reaches the valve tubes. These are the pressure force produced by the actuator, the shear force necessary to shear the seal disks on the actuator and the valve plunger, and the friction force caused by friction between the plunger and the plunger bore. The three forces are modeled by expressions that are explicitly functions of the plunger displacement. A particular actuator-valve combination was analyzed with the computer code AVR with four different combinations of valve plunger seal disk shear strength and initial friction force. (LEW)
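Since the three forces are explicit functions of plunger displacement, the equation of motion m*x'' = F_pressure(x) - F_shear(x) - F_friction(x) can be integrated directly. The sketch below is purely illustrative (AVR itself is not reproduced here, and every parameter value is invented), using a simple semi-implicit Euler scheme:

```python
def plunger_velocity(m, forces, x_end, dt=1e-7):
    """Integrate m*x'' = sum of forces(x); return velocity at x_end."""
    x, v = 0.0, 0.0
    while x < x_end:
        a = sum(f(x) for f in forces) / m
        v += a * dt
        x += v * dt
    return v

# invented force models, each an explicit function of displacement x (m)
F_pressure = lambda x: 500.0                          # N, constant-pressure gas load
F_shear    = lambda x: -200.0 if x < 1e-3 else 0.0    # seal disks shear early on
F_friction = lambda x: -50.0                          # N, constant bore friction

v = plunger_velocity(m=0.01, forces=[F_pressure, F_shear, F_friction],
                     x_end=0.02)
```

Maximizing the velocity at x_end over the interface parameters (here, the shapes of the three force curves) is then an ordinary optimization over repeated runs of this integration.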
Demonstration of integrated optimization software
2008-01-01
NeuCO has designed and demonstrated the integration of five system control modules using its proprietary ProcessLink® technology of neural networks, advanced algorithms and fuzzy logic to maximize performance of coal-fired plants. The separate modules control cyclone combustion, sootblowing, SCR operations, performance and equipment maintenance. ProcessLink® provides overall plant-level integration of controls responsive to plant operator and corporate criteria. Benefits of an integrated approach include NOx reduction; improvements in heat rate, availability, efficiency and reliability; extension of SCR catalyst life; and reduced consumption of ammonia. All translate into cost savings. As plant complexity increases through retrofit, repowering or other plant modifications, this integrated process optimization approach will be an important tool for plant operators. 1 fig., 1 photo.
Optimality theory in phonological acquisition.
Barlow, J A; Gierut, J A
1999-12-01
This tutorial presents an introduction to the contemporary linguistic framework known as optimality theory (OT). The basic assumptions of this constraint-based theory as a general model of grammar are first outlined, with formal notation being defined and illustrated. Concepts unique to the theory, including "emergence of the unmarked," are also described. OT is then examined more specifically within the context of phonological acquisition. The theory is applied in descriptions of children's common error patterns, observed inter- and intrachild variation, and productive change over time. The particular error patterns of fronting, stopping, final-consonant deletion, and cluster simplification are considered from an OT perspective. The discussion concludes with potential clinical applications and extensions of the theory to the diagnosis and treatment of children with functional phonological disorders.
Optimize production with online measurements
Mehdizadeh, P.
1999-11-01
Multiphase (MP) meters measure the flow of mixed oil, water and gas streams without separating the phases. In the past five years many operators have installed MP meters in selected fields to gain operating experience. As operational experience accumulated, the technology used in MP meters has matured, resulting in increasing acceptance of this technology to replace conventional test separators. But something more than acceptance has occurred. These pilot installations have led operators to another and potentially major benefit of MP measurement: using the measurements of the flow conditions by the MP meter to optimize production. This has the potential to make major changes to the way operators control and regulate well streams. The paper describes development, installation, and application trends.
Eigenvectors of optimal color spectra.
Flinkman, Mika; Laamanen, Hannu; Tuomela, Jukka; Vahimaa, Pasi; Hauta-Kasari, Markku
2013-09-01
Principal component analysis (PCA) and weighted PCA were applied to spectra of optimal colors belonging to the outer surface of the object-color solid or to so-called MacAdam limits. The correlation matrix formed from this data is a circulant matrix whose biggest eigenvalue is simple and the corresponding eigenvector is constant. All other eigenvalues are double, and the eigenvectors can be expressed with trigonometric functions. The trigonometric functions found can be used as a general basis to reconstruct all possible smooth reflectance spectra. When the spectral data are weighted with an appropriate weight function, the essential part of the color information is compressed to the first three components and the shapes of the first three eigenvectors correspond to one achromatic response function and to two chromatic response functions, the latter corresponding approximately to Munsell opponent-hue directions 9YR-9B and 2BG-2R.
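The eigenstructure described above can be checked numerically on any symmetric circulant matrix (the data below are synthetic, not the paper's spectra): the eigenvector of the largest eigenvalue is constant, and cosines of integer frequency are eigenvectors:

```python
import numpy as np

n = 8
# first row of a symmetric circulant "correlation-like" matrix (toy values)
first_row = np.array([3.0, 1.0, 0.5, 0.2, 0.1, 0.2, 0.5, 1.0])
C = np.array([[first_row[(j - i) % n] for j in range(n)] for i in range(n)])

w, V = np.linalg.eigh(C)
v_top = V[:, np.argmax(w)]      # eigenvector of the largest (simple) eigenvalue

# a cosine of frequency k is an eigenvector of any symmetric circulant matrix
k = 1
c = np.cos(2 * np.pi * k * np.arange(n) / n)
lam = (C @ c)[0] / c[0]         # its eigenvalue, read off from one component
```

All entries of v_top have the same magnitude (a constant vector up to sign and normalization), and C @ c reproduces lam * c to machine precision.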
Optimal census by quorum sensing
NASA Astrophysics Data System (ADS)
Taillefumier, Thibaud
Bacteria regulate their gene expression in response to changes in local cell density in a process called quorum sensing. To synchronize their gene-expression programs, these bacteria need to glean as much information as possible about local density. Our study is the first to physically model the flow of information in a quorum-sensing microbial community, wherein the internal regulator of the individual's response tracks the external cell density via an endogenously generated shared signal. Combining information theory and Lagrangian optimization, we find that quorum-sensing systems can improve their information capabilities by tuning circuit feedbacks. At the population level, external feedback adjusts the dynamic range of the shared input to individuals' detection channels. At the individual level, internal feedback adjusts the regulator's response time to dynamically balance output noise reduction and signal tracking ability. Our analysis suggests that achieving information benefit via feedback requires dedicated systems to control gene expression noise, such as sRNA-based regulation.
Environmental Statistics and Optimal Regulation
2014-01-01
Any organism is embedded in an environment that changes over time. The timescale for and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels all will affect the suitability of different strategies, such as constitutive expression or graded response, for regulating protein levels in response to environmental inputs. We propose a general framework, here specifically applied to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient, to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, respectively, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
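The naive-thresholding-versus-Bayesian comparison can be sketched in a toy Gaussian setting. Everything below (the prior, the noise model, and the cost values) is invented for illustration and is not the paper's model: a cell reads a noisy measurement m of nutrient level L and must decide whether to express an enzyme at fixed cost; the Bayesian rule thresholds the posterior mean of L rather than m itself:

```python
import random

MU_L, SIG_L = 1.0, 1.0   # prior: L ~ N(MU_L, SIG_L^2)
SIG_M = 1.0              # measurement: m = L + N(0, SIG_M^2)
COST = 1.2               # cost of expressing; payoff is L - COST if expressed

def posterior_mean(m):
    """Standard Gaussian-Gaussian update of the expected nutrient level."""
    w = SIG_L ** 2 / (SIG_L ** 2 + SIG_M ** 2)
    return (1 - w) * MU_L + w * m

bayes = lambda m: posterior_mean(m) > COST   # express iff expected benefit > cost
naive = lambda m: m > COST                   # express iff raw read-out > cost

def payoff(rule, samples):
    """Average realized payoff of a decision rule over (L, m) draws."""
    return sum(L - COST for L, m in samples if rule(m)) / len(samples)

# common random numbers so the two rules face identical environments
random.seed(2)
samples = []
for _ in range(200000):
    L = random.gauss(MU_L, SIG_L)
    samples.append((L, L + random.gauss(0, SIG_M)))
```

Because the Bayesian rule maximizes expected payoff conditional on m, no rule that depends only on m, including the naive threshold, can beat it on average; the gap grows with measurement noise, matching finding (ii) above.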
Optimization program for offshore network
Oster, J.P.
1983-08-01
The Upper Zakum field, 50 km (31 miles) long and 30 km (19 miles) wide, is located offshore the United Arab Emirates, at approximately latitude 25° North and longitude 53°30' East. The field facilities include one central production platform (central complex), three satellite platforms for primary separation, about 60 drilling or injection platforms, and one exporting terminal located on Zirku Island. Drilling and injection platforms are connected to the satellite platforms through 134 flowlines ranging from 4 to 12 in. in diameter. Connection between each satellite platform and the central complex is ensured through three special-purpose trunklines designed to convey crude oil and gas from satellites to a central complex, and injection water from the central complex to satellites. The trunklines are either 18 or 24 in. in diameter. Crude oil is dispatched from the central complex to Zirku Island through a 42-in. main oil line. This paper describes a program designed to optimize the pipelaying procedures. Since May 1980, the program has been run in the field office on a Hewlett Packard 9845 T computer. Once the yearly overall pipelaying schedule was determined, the program was run weekly to optimize the next 10 lines representing about 2 months of lay barge work. Such optimum monitoring and record keeping of the entire construction effort from the pipe mill to the riser installation would not have been possible without this management tool which certainly saved considerable barge transfer and stand-by times, resulting in an important reduction of the overall investment.
Foraging optimally for home ranges
Mitchell, Michael S.; Powell, Roger A.
2012-01-01
Economic models predict behavior of animals based on the presumption that natural selection has shaped behaviors important to an animal's fitness to maximize benefits over costs. Economic analyses have shown that territories of animals are structured by trade-offs between benefits gained from resources and costs of defending them. Intuitively, home ranges should be similarly structured, but trade-offs are difficult to assess because there are no costs of defense, thus economic models of home-range behavior are rare. We present economic models that predict how home ranges can be efficient with respect to spatially distributed resources, discounted for travel costs, under 2 strategies of optimization, resource maximization and area minimization. We show how constraints such as competitors can influence structure of home ranges through resource depression, ultimately structuring density of animals within a population and their distribution on a landscape. We present simulations based on these models to show how they can be generally predictive of home-range behavior and the mechanisms that structure the spatial distribution of animals. We also show how contiguous home ranges estimated statistically from location data can be misleading for animals that optimize home ranges on landscapes with patchily distributed resources. We conclude with a summary of how we applied our models to nonterritorial black bears (Ursus americanus) living in the mountains of North Carolina, where we found their home ranges were best predicted by an area-minimization strategy constrained by intraspecific competition within a social hierarchy. Economic models can provide strong inference about home-range behavior and the resources that structure home ranges by offering falsifiable, a priori hypotheses that can be tested with field observations.
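The area-minimization strategy can be sketched as a greedy selection on a grid: resource values are discounted by travel cost from a central place, and cells are added in order of discounted value until a target resource level is met. The grid, discount function, and target below are our toy construction, not the authors' model:

```python
import math

def area_minimizing_range(resources, center, target):
    """Greedily pick the fewest cells whose discounted value meets the target."""
    cells = []
    for (r, c), value in resources.items():
        dist = math.hypot(r - center[0], c - center[1])
        cells.append((value / (1.0 + dist), (r, c)))  # discount by travel distance
    cells.sort(reverse=True)                          # best discounted value first
    chosen, total = [], 0.0
    for discounted, cell in cells:
        if total >= target:
            break
        chosen.append(cell)
        total += discounted
    return chosen

# uniform landscape with one rich patch far from the central place
resources = {(r, c): 1.0 for r in range(5) for c in range(5)}
resources[(0, 0)] = 5.0
home = area_minimizing_range(resources, center=(2, 2), target=3.0)
```

Note the chosen cells need not be contiguous: the rich distant patch is worth including despite the travel discount, which is exactly why contiguous statistical home-range estimates can mislead on patchy landscapes.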
Optimal Conservation of Migratory Species
Martin, Tara G.; Chadès, Iadine; Arcese, Peter; Marra, Peter P.; Possingham, Hugh P.; Norris, D. Ryan
2007-01-01
Background Migratory animals comprise a significant portion of biodiversity worldwide with annual investment for their conservation exceeding several billion dollars. Designing effective conservation plans presents enormous challenges. Migratory species are influenced by multiple events across land and sea, regions that are often separated by thousands of kilometres and span international borders. To date, conservation strategies for migratory species fail to take into account how migratory animals are spatially connected between different periods of the annual cycle (i.e. migratory connectivity), bringing into question the utility and efficiency of current conservation efforts. Methodology/Principal Findings Here, we report the first framework for determining an optimal conservation strategy for a migratory species. Employing a decision theoretic approach using dynamic optimization, we address the problem of how to allocate resources for habitat conservation for a Neotropical-Nearctic migratory bird, the American redstart Setophaga ruticilla, whose winter habitat is under threat. Our first conservation strategy used the acquisition of winter habitat based on land cost, relative bird density, and the rate of habitat loss to maximize the abundance of birds on the wintering grounds. Our second strategy maximized bird abundance across the entire range of the species by adding the constraint of maintaining a minimum percentage of birds within each breeding region in North America using information on migratory connectivity as estimated from stable-hydrogen isotopes in feathers. We show that failure to take into account migratory connectivity may doom some regional populations to extinction, whereas including information on migratory connectivity results in the protection of the species across its entire range. Conclusions/Significance We demonstrate that conservation strategies for migratory animals depend critically upon two factors: knowledge of migratory
Perry, M.E.
1995-01-01
An Environmental Assessment and associated documentation are reported for the construction of an office building and parking lot in support of environmental management personnel activities. As part of the documentation process, the DOE determined that the proposed project constituted an undertaking as defined in Section 106 of the National Historic Preservation Act. In accordance with the regulations implementing Section 106 of the National Historic Preservation Act, a records and literature search and a historic resource identification effort were carried out on behalf of the Stanford Linear Accelerator Center (SLAC). This report summarizes the cultural resource literature and record searches and the historic resource identification effort.
Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard
2002-01-01
The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization are the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations as to the utility of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that exhibits severe numerical noise and has a combination of continuous and truly discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented here. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization as well as the numerical noise and truly discrete variables present in the current example problem.
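The basic particle swarm update that makes the method gradient-free (and hence tolerant of numerical noise and discrete variables) can be sketched as follows. This is a generic textbook PSO, not the authors' implementation; the inertia and acceleration coefficients and the sphere-function demo are illustrative assumptions.

```python
import random


def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer minimizing f over a box domain."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Initialize positions uniformly in the box; velocities start at zero.
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                # each particle's best-seen point
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity = inertia + cognitive pull + social pull;
                # no gradients of f are ever required.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val


# Usage: minimize the 3-D sphere function, whose optimum is at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
```

Because only function values are compared, the same loop works when `f` is noisy or when some coordinates are rounded to a discrete catalogue, which is the property the paper exploits.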
Gschwind, Michael K
2013-07-23
Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
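The execution-time half of this scheme, running the aggressive version and falling back to the conservative one on failure, can be sketched at a high level. This is an illustrative Python analogy, not the patent's compiler machinery; the reciprocal-sum functions are hypothetical stand-ins for an aggressively and a conservatively compiled version of the same code region.

```python
def with_fallback(aggressive, conservative):
    """Return a callable that tries the aggressively optimized version
    and re-executes the conservative version if it raises."""
    def run(*args, **kwargs):
        try:
            return aggressive(*args, **kwargs)
        except Exception:
            # The unsafe optimization introduced a new exception source
            # for this input: switch to the conservatively compiled code.
            return conservative(*args, **kwargs)
    return run


# Hypothetical pair: the "optimized" version assumes no zeros in the data,
# the conservative version keeps the guard the optimizer removed.
def fast_recip_sum(xs):
    return sum(1.0 / x for x in xs)            # may raise ZeroDivisionError


def safe_recip_sum(xs):
    return sum(1.0 / x for x in xs if x != 0)


recip_sum = with_fallback(fast_recip_sum, safe_recip_sum)
```

A real compiler would additionally record failures and predict, per code region, whether dispatching straight to the conservative version is cheaper than risking the fallback.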
Translator for Optimizing Fluid-Handling Components
NASA Technical Reports Server (NTRS)
Landon, Mark; Perry, Ernest
2007-01-01
A software interface has been devised to facilitate optimization of the shapes of valves, elbows, fittings, and other components used to handle fluids under extreme conditions. This software interface translates data files generated by PLOT3D (a NASA grid-based plotting-and-data-display program) and by computational fluid dynamics (CFD) software into a format in which the files can be read by Sculptor, which is a shape-deformation-and-optimization program. Sculptor enables the user to interactively, smoothly, and arbitrarily deform the surfaces and volumes in two- and three-dimensional CFD models. Sculptor also includes design-optimization algorithms that can be used in conjunction with the arbitrary-shape-deformation components to perform automatic shape optimization. In the optimization process, the output of the CFD software is used as feedback while the optimizer strives to satisfy design criteria that could include, for example, improved values of pressure loss, velocity, flow quality, mass flow, etc.
An overview of the optimization modelling applications
NASA Astrophysics Data System (ADS)
Singh, Ajay
2012-10-01
The optimal use of available resources is of paramount importance against the backdrop of the increasing food, fiber, and other demands of the burgeoning global population and of shrinking resources. The optimal use of these resources can be determined by employing an optimization technique. This paper provides comprehensive reviews of the use of various programming techniques for the solution of different optimization problems. The past reviews are grouped into nine sections based on the theme-based real-world problems they address. The sections include the use of optimization modelling for: conjunctive use planning, groundwater management, seawater intrusion management, irrigation management, achieving optimal cropping patterns, management of reservoir system operation, management of resources in arid and semi-arid regions, solid waste management, and miscellaneous uses, which comprise the management of problems in hydropower generation and the sugar industry. Conclusions are drawn about where gaps exist and where more research needs to be focused.
Tractable Pareto Optimization of Temporal Preferences
NASA Technical Reports Server (NTRS)
Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent
2003-01-01
This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.
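The contrast between plain WLO and its iterated refinement can be sketched over a finite set of candidate solutions. Plain WLO compares solutions only by their minimum preference value; re-applying it level by level amounts to a leximin comparison (fix the worst value, then compare the next-worst, and so on). This sketch is a simplification of the paper's algorithm, which operates on temporal constraint problems rather than an enumerated candidate set; the preference vectors below are hypothetical.

```python
def weakest_link_value(solution):
    """Plain WLO: solutions are ranked only by their worst preference value."""
    return min(solution)


def iterated_wlo(solutions):
    """Iterated WLO over a finite candidate set: compare sorted (ascending)
    value vectors lexicographically, i.e. leximin. The winner is also
    Pareto optimal among the candidates."""
    return max(solutions, key=lambda s: sorted(s))


# Hypothetical preference vectors for three candidate schedules.
candidates = [(3, 9, 2), (2, 2, 8), (5, 3, 2)]
best = iterated_wlo(candidates)
```

All three candidates share the worst value 2, so plain WLO cannot distinguish them; the iterated criterion breaks the tie in favour of `(3, 9, 2)`, whose remaining values dominate at every level, illustrating why the iterative strategy improves on comparing worst values alone.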
Aerospace Applications of Optimization under Uncertainty
NASA Technical Reports Server (NTRS)
Padula, Sharon; Gumbert, Clyde; Li, Wu
2006-01-01
The Multidisciplinary Optimization (MDO) Branch at NASA Langley Research Center develops new methods and investigates opportunities for applying optimization to aerospace vehicle design. This paper describes MDO Branch experiences with three applications of optimization under uncertainty: (1) improved impact dynamics for airframes, (2) transonic airfoil optimization for low drag, and (3) coupled aerodynamic/structures optimization of a 3-D wing. For each case, a brief overview of the problem and references to previous publications are provided. The three cases are aerospace examples of the challenges and opportunities presented by optimization under uncertainty. The present paper will illustrate a variety of needs for this technology, summarize promising methods, and uncover fruitful areas for new research.
Topology Optimization for Architected Materials Design
NASA Astrophysics Data System (ADS)
Osanov, Mikhail; Guest, James K.
2016-07-01
Advanced manufacturing processes provide a tremendous opportunity to fabricate materials with precisely defined architectures. To fully leverage these capabilities, however, materials architectures must be optimally designed according to the target application, base material used, and specifics of the fabrication process. Computational topology optimization offers a systematic, mathematically driven framework for navigating this new design challenge. The design problem is posed and solved formally as an optimization problem with unit cell and upscaling mechanics embedded within this formulation. This article briefly reviews the key requirements to apply topology optimization to materials architecture design and discusses several fundamental findings related to optimization of elastic, thermal, and fluidic properties in periodic materials. Emerging areas related to topology optimization for manufacturability and manufacturing variations, nonlinear mechanics, and multiscale design are also discussed.
Aerodynamic optimization studies on advanced architecture computers
NASA Technical Reports Server (NTRS)
Chawla, Kalpana
1995-01-01
The approach to carrying out multi-discipline aerospace design studies in the future, especially in massively parallel computing environments, comprises choosing (1) suitable solvers to compute solutions to equations characterizing a discipline, and (2) efficient optimization methods. In addition, for aerodynamic optimization problems, (3) smart methodologies must be selected to modify the surface shape. In this research effort, a 'direct' optimization method is implemented on the Cray C-90 to improve aerodynamic design. It is coupled with an existing implicit Navier-Stokes solver, OVERFLOW, to compute flow solutions. The optimization method is chosen such that it can accommodate multi-discipline optimization in future computations. In this work, however, only single-discipline aerodynamic optimization is included.