Science.gov

Sample records for driveway mean-variance optimization

  1. Portfolio optimization with mean-variance model

    NASA Astrophysics Data System (ADS)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve a target rate of return at the minimum level of risk in their investments. Portfolio optimization is an investment strategy that minimizes portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that minimizes the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results show that the weights assigned to the component stocks in the optimal portfolio differ from one stock to another. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
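
    The constrained minimization described above can be written down directly. The sketch below is a minimal illustration of the Markowitz minimum-variance problem with a budget constraint and a target-return constraint; it uses synthetic weekly returns rather than the FBMKLCI data analysed in the study, and the target level and long-only bounds are assumptions made here for the example.

```python
# Minimal Markowitz minimum-variance sketch (synthetic data, not the FBMKLCI set).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
weekly_returns = rng.normal(0.002, 0.03, size=(260, 20))   # 5 years x 20 hypothetical stocks

mu = weekly_returns.mean(axis=0)             # expected weekly returns
cov = np.cov(weekly_returns, rowvar=False)   # sample covariance matrix
target = 0.002                               # assumed target weekly return

def portfolio_variance(w):
    return w @ cov @ w

n = len(mu)
constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},      # budget constraint
    {"type": "ineq", "fun": lambda w: w @ mu - target},  # achieve the target return
]
bounds = [(0.0, 1.0)] * n                                # long-only weights (assumption)

result = minimize(portfolio_variance, np.full(n, 1.0 / n),
                  bounds=bounds, constraints=constraints, method="SLSQP")
weights = result.x
print("optimal weights:", np.round(weights, 4))
print("portfolio variance:", portfolio_variance(weights))
```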

  2. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    SciTech Connect

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-08-15

    The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean-variance problem.

  3. Replica approach to mean-variance portfolio optimization

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix, subject to the budget constraint and the constraint on the expected return, applying the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.
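
    For reference, the underlying optimization problem and the reported divergence can be summarized as follows; the notation and the normalization of the budget and return constraints are chosen here for illustration and may differ from the paper's conventions.

```latex
% Setup described in the abstract, in illustrative notation.
\begin{align}
  \min_{\mathbf{w}}\; & \tfrac{1}{2}\,\mathbf{w}^{\top}\mathbf{C}\,\mathbf{w}
  && \text{portfolio variance, covariance matrix } \mathbf{C} \\
  \text{s.t.}\; & \sum_{i=1}^{N} w_i = N, \qquad \sum_{i=1}^{N} w_i \mu_i = N\mu
  && \text{budget and expected-return constraints}
\end{align}
% With r = N/T (N assets, T observations), the abstract reports that the out-of-sample
% estimation error diverges as 1/(1-r) while the in-sample variance vanishes as r -> 1.
```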

  4. PET image reconstruction: mean, variance, and optimal minimax criterion

    NASA Astrophysics Data System (ADS)

    Liu, Huafeng; Gao, Fei; Guo, Min; Xue, Liying; Nie, Jing; Shi, Pengcheng

    2015-04-01

    Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal minimax criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under maximally adverse system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by H∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms, which rely on statistical modeling of the measurement data or noise, the proposed joint estimation takes the point of view of signal energies and can handle anything from imperfect statistical assumptions to no a priori statistical assumptions at all. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small-animal PET scanner and real patient scans are also conducted to assess clinical potential.
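
    In generic H∞ filtering, the estimator is chosen so that the worst-case ratio of estimation-error energy to disturbance energy stays below a prescribed bound. The statement below is the standard textbook form of that criterion, not the paper's exact formulation.

```latex
% Generic H-infinity filtering criterion (standard form; the paper's formulation may differ):
% the estimate \hat{x}_k is designed so that, for any nonzero disturbance sequence w_k,
% the energy of the estimation error is bounded by gamma^2 times the disturbance energy.
\[
  \sup_{\mathbf{w} \neq 0}\;
  \frac{\sum_{k} \lVert \hat{\mathbf{x}}_k - \mathbf{x}_k \rVert^{2}}
       {\sum_{k} \lVert \mathbf{w}_k \rVert^{2}}
  \;<\; \gamma^{2}.
\]
```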

  5. Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch

    NASA Astrophysics Data System (ADS)

    Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.

    2014-10-01

    The economic dispatch (ED) problem is an essential optimization task in power generation systems. It is defined as the process of allocating the real power output of generating units to meet the required load demand so that their total operating cost is minimized while all physical and operational constraints are satisfied. This paper introduces a novel optimization technique named swarm-based mean-variance mapping optimization (MVMOS). The technique is an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, comprising 3, 13 and 20 thermal generating units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be implemented efficiently for solving the economic dispatch problem.
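
    The economic dispatch problem with quadratic cost functions referred to above has the following standard form; the symbols are chosen here for illustration, and the paper's test systems add their own operational constraints.

```latex
% Standard economic dispatch formulation with quadratic fuel-cost functions.
\begin{align}
  \min_{P_1,\dots,P_N}\; & \sum_{i=1}^{N} \left( a_i + b_i P_i + c_i P_i^{2} \right)
  && \text{total operating cost} \\
  \text{s.t.}\; & \sum_{i=1}^{N} P_i = P_D
  && \text{power balance with load demand } P_D \\
  & P_i^{\min} \le P_i \le P_i^{\max}, \quad i = 1,\dots,N
  && \text{generator output limits}
\end{align}
```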

  6. Mean-variance portfolio optimization by using time series approaches based on logarithmic utility function

    NASA Astrophysics Data System (ADS)

    Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.

    2017-01-01

    Investors in stocks also face risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. A portfolio consisting of several stocks is established in order to obtain the optimal composition of the investment. This paper discusses mean-variance optimization of a stock portfolio in which the mean and the volatility are not constant, based on a logarithmic utility function. The non-constant mean is analysed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analysed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization process is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is applied to several Islamic stocks in Indonesia. The expected result is the proportion of investment in each Islamic stock analysed.
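
    As a rough sketch of this pipeline, the code below fits an ARMA(1,1) model for each asset's conditional mean and a GARCH(1,1) model for its conditional variance, combines the one-step-ahead forecasts with the sample correlation matrix, and then computes the Lagrangian closed-form minimum-variance weights for a target return. It uses synthetic returns rather than the Indonesian Islamic stocks studied in the paper; the model orders and target return are assumptions made for the example, and the statsmodels and arch packages are required.

```python
# Sketch: non-constant mean via ARMA, non-constant volatility via GARCH,
# then a mean-variance step on the one-step-ahead forecasts (synthetic data).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.02, size=(500, 4))   # 4 hypothetical stocks

means, variances = [], []
for j in range(returns.shape[1]):
    r = returns[:, j]
    arma = ARIMA(r, order=(1, 0, 1)).fit()                 # ARMA(1,1) conditional mean
    means.append(float(arma.forecast(steps=1)[0]))
    garch = arch_model(100 * r, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
    fvar = garch.forecast(horizon=1).variance.values[-1, 0]
    variances.append(fvar / 100**2)                        # undo the percent scaling

mu = np.array(means)
sigma = np.sqrt(np.array(variances))
corr = np.corrcoef(returns, rowvar=False)                  # sample correlations
cov = corr * np.outer(sigma, sigma)                        # forecast covariance matrix

# Minimum-variance weights for a target return via the Lagrangian closed form (no bounds).
ones = np.ones(len(mu))
inv = np.linalg.inv(cov)
A, B, C = ones @ inv @ ones, ones @ inv @ mu, mu @ inv @ mu
target = mu.mean()                                         # assumed target return
lam, gam = np.linalg.solve([[A, B], [B, C]], [1.0, target])
weights = inv @ (lam * ones + gam * mu)
print("weights:", np.round(weights, 4))
```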

  7. Firefly algorithm for cardinality constrained mean-variance portfolio optimization problem with entropy diversity constraint.

    PubMed

    Bacanin, Nebojsa; Tuba, Milan

    2014-01-01

    Portfolio optimization (selection) is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent a newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint (see the formulation sketched below). The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results.
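
    A common way to write the CCMV model with an entropy diversity constraint is given below. This is an illustrative textbook-style statement with symbols chosen here; the paper's exact formulation may differ.

```latex
% Cardinality-constrained mean-variance model with an entropy diversity constraint
% (illustrative form; lambda trades off risk against return, K is the cardinality).
\begin{align}
  \min_{\mathbf{w},\mathbf{z}}\;
  & \lambda\,\mathbf{w}^{\top}\boldsymbol{\Sigma}\,\mathbf{w}
    - (1-\lambda)\,\boldsymbol{\mu}^{\top}\mathbf{w} \\
  \text{s.t.}\;
  & \sum_i w_i = 1, \qquad \sum_i z_i = K, \qquad
    \varepsilon_i z_i \le w_i \le \delta_i z_i, \qquad z_i \in \{0,1\} \\
  & -\sum_{i:\,z_i = 1} w_i \ln w_i \;\ge\; E_{\min}
    \qquad \text{(entropy diversity constraint)}
\end{align}
```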

  8. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment.

    PubMed

    Shakouri, Mahmoud; Lee, Hyun Woo

    2016-03-01

    The amount of electricity generated by photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be formed that takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to the physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The developed Matlab code to construct optimized portfolios is also provided in Supplementary materials. The application of these files can be generalized to a variety of communities interested in investing in PV systems.

  9. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment

    PubMed Central

    Shakouri, Mahmoud; Lee, Hyun Woo

    2016-01-01

    The amount of electricity generated by photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be formed that takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to the physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The developed Matlab code to construct optimized portfolios is also provided in Supplementary materials. The application of these files can be generalized to a variety of communities interested in investing in PV systems. PMID:26937458

  10. Conversations across Meaning Variance

    ERIC Educational Resources Information Center

    Cordero, Alberto

    2013-01-01

    Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…

  11. Formulation and demonstration of a robust mean variance optimization approach for concurrent airline network and aircraft design

    NASA Astrophysics Data System (ADS)

    Davendralingam, Navindran

    Conceptual design of aircraft and of the airline network (routes) on which aircraft fly is inextricably linked to passenger-driven demand. Many factors influence passenger demand for various Origin-Destination (O-D) city pairs, including demographics, geographic location, seasonality, socio-economic factors and, naturally, the operations of directly competing airlines. The expansion of airline operations involves the identification of appropriate aircraft to meet projected future demand. The decisions made in incorporating and subsequently allocating these new aircraft to serve air travel demand affect the inherent risk and profit potential as predicted through the airline revenue management systems. Competition between airlines then translates to latent passenger observations of the routes served between O-D pairs and ticket pricing; this in effect reflexively drives future states of demand. This thesis addresses the integrated nature of aircraft design, airline operations and passenger demand, in order to maximize future expected profits as new aircraft are brought into service. The goal of this research is to develop an approach that treats aircraft design, airline network design and passenger demand as a unified framework, providing better integrated design solutions that maximize the expected profits of an airline. This is investigated through two approaches. The first is a static model that poses the concurrent engineering paradigm above as an investment portfolio problem. Modern financial portfolio optimization techniques are used to weigh the risk of serving future projected demand with a 'yet to be introduced' aircraft against the potential future profits. Robust optimization methodologies are incorporated to mitigate model sensitivity and address estimation risks associated with such optimization techniques. The second extends the portfolio approach to include dynamic effects of an airline's operations. A dynamic programming approach is

  12. The cost of geothermal energy in the western US region: a portfolio-based approach; a mean-variance portfolio optimization of the region's generating mix to 2013.

    SciTech Connect

    Beurskens, Luuk (ECN-Energy Research Centre of the Netherlands); Jansen, Jaap C. (ECN-Energy Research Centre of the Netherlands); Awerbuch, Shimon Ph.D. (University of Sussex, Brighton, UK); Drennen, Thomas E.

    2005-09-01

    Energy planning represents an investment-decision problem. Investors commonly evaluate such problems using portfolio theory to manage risk and maximize portfolio performance under a variety of unpredictable economic outcomes. Energy planners need to similarly abandon their reliance on traditional, ''least-cost'' stand-alone technology cost estimates and instead evaluate conventional and renewable energy sources on the basis of their portfolio cost--their cost contribution relative to their risk contribution to a mix of generating assets. This report describes essential portfolio-theory ideas and discusses their application in the Western US region. The memo illustrates how electricity-generating mixes can benefit from additional shares of geothermal and other renewables. Compared to fossil-dominated mixes, efficient portfolios reduce generating cost while including greater renewables shares in the mix. This enhances energy security. Though counter-intuitive, the idea that adding more costly geothermal can actually reduce portfolio-generating cost is consistent with basic finance theory. An important implication is that in dynamic and uncertain environments, the relative value of generating technologies must be determined not by evaluating alternative resources, but by evaluating alternative resource portfolios. The optimal results for the Western US Region indicate that compared to the EIA target mixes, there exist generating mixes with larger geothermal shares at equal-or-lower expected cost and risk.

  13. Mean-variance portfolio selection for defined-contribution pension funds with stochastic salary.

    PubMed

    Zhang, Chubing

    2014-01-01

    This paper focuses on a continuous-time dynamic mean-variance portfolio selection problem of defined-contribution pension funds with stochastic salary, whose risk comes from both financial market and nonfinancial market. By constructing a special Riccati equation as a continuous (actually a viscosity) solution to the HJB equation, we obtain an explicit closed form solution for the optimal investment portfolio as well as the efficient frontier.

  14. Mean-Variance Portfolio Selection for Defined-Contribution Pension Funds with Stochastic Salary

    PubMed Central

    Zhang, Chubing

    2014-01-01

    This paper focuses on a continuous-time dynamic mean-variance portfolio selection problem of defined-contribution pension funds with stochastic salary, whose risk comes from both financial market and nonfinancial market. By constructing a special Riccati equation as a continuous (actually a viscosity) solution to the HJB equation, we obtain an explicit closed form solution for the optimal investment portfolio as well as the efficient frontier. PMID:24782667

  15. 9 CFR 313.1 - Livestock pens, driveways and ramps.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Livestock pens, driveways and ramps. 313.1 Section 313.1 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF... INSPECTION AND CERTIFICATION HUMANE SLAUGHTER OF LIVESTOCK § 313.1 Livestock pens, driveways and ramps....

  16. 9 CFR 313.1 - Livestock pens, driveways and ramps.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Livestock pens, driveways and ramps. 313.1 Section 313.1 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF... INSPECTION AND CERTIFICATION HUMANE SLAUGHTER OF LIVESTOCK § 313.1 Livestock pens, driveways and ramps....

  17. 9 CFR 313.1 - Livestock pens, driveways and ramps.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Livestock pens, driveways and ramps. 313.1 Section 313.1 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF... INSPECTION AND CERTIFICATION HUMANE SLAUGHTER OF LIVESTOCK § 313.1 Livestock pens, driveways and ramps....

  18. 43 CFR 3815.7 - Mining claims subject to stock driveway withdrawals.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false Mining claims subject to stock driveway... SUBJECT TO LOCATION Mineral Locations in Stock Driveway Withdrawals § 3815.7 Mining claims subject to stock driveway withdrawals. Mining claims on lands within stock driveway withdrawals, located prior...

  19. 43 CFR 3815.7 - Mining claims subject to stock driveway withdrawals.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 43 Public Lands: Interior 2 2011-10-01 2011-10-01 false Mining claims subject to stock driveway... SUBJECT TO LOCATION Mineral Locations in Stock Driveway Withdrawals § 3815.7 Mining claims subject to stock driveway withdrawals. Mining claims on lands within stock driveway withdrawals, located prior...

  20. 7. View of south court and driveway toward main entrance; ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. View of south court and driveway toward main entrance; and parts of north and south wings of main building; facing east. - Mission Motel, South Court, 9235 MacArthur Boulevard, Oakland, Alameda County, CA

  1. 3. View west from Benjamin Carr Farm driveway toward barn, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. View west from Benjamin Carr Farm driveway toward barn, Benjamin Carr house to the south (left), Eldred Avenue to the north (right). - Benjamin Carr Farm, Route 138 (Eldred Avenue) & Helm Street, Jamestown, Newport County, RI

  2. Photocopy of original black-and-white silver gelatin print, TWELFTH STREET DRIVEWAY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of original black-and-white silver gelatin print, TWELFTH STREET DRIVEWAY ENTRANCE, August 31, 1929, photographer Commercial Photo Company - Internal Revenue Service Headquarters Building, 1111 Constitution Avenue Northwest, Washington, District of Columbia, DC

  3. 7. ELEVATION OF STREET (NORTH) FACADE FROM DRIVEWAY OF LOWELL'S ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. ELEVATION OF STREET (NORTH) FACADE FROM DRIVEWAY OF LOWELL'S FORMER RESIDENCE. NOTE BUILDERS VERTICALLY ALIGNED STEM OF BOATS WITH CORNER OF HOUSE BEHIND CAMERA POSITION. - Lowell's Boat Shop, 459 Main Street, Amesbury, Essex County, MA

  4. 2. View from the mansion formal entrance driveway toward the ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. View from the mansion formal entrance driveway toward the big meadow at the Billings Farm & Museum. The driveway is flanked by granite gateposts surmounted by wrought iron urn lamps. The view includes a manicured hemlock hedge (Tsuga canadensis) retained by a stone wall at left, and white birch (Betula species) under-planted with ferns at center. - Marsh-Billings-Rockefeller National Historical Park, 54 Elm Street, Woodstock, Windsor County, VT

  5. Robust Programming Problems Based on the Mean-Variance Model Including Uncertainty Factors

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Ishii, Hiroaki

    2009-01-01

    This paper considers robust programming problems based on the mean-variance model including uncertainty sets and fuzzy factors. Since these problems are not well defined due to the fuzzy factors, it is hard to solve them directly. Therefore, by introducing chance constraints, fuzzy goals and possibility measures, the proposed models are transformed into deterministic equivalent problems. Furthermore, in order to solve these equivalent problems efficiently, a solution method is constructed by introducing the mean-absolute deviation and performing equivalent transformations.

  6. 9 CFR 313.1 - Livestock pens, driveways and ramps.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... animal may be injured shall be repaired. (b) Floors of livestock pens, ramps, and driveways shall be constructed and maintained so as to provide good footing for livestock. Slip resistant or waffled floor... the opinion of the inspector, to protect them from the adverse climatic conditions of the locale...

  7. Full-Depth Asphalt Pavements for Parking Lots and Driveways.

    ERIC Educational Resources Information Center

    Asphalt Inst., College Park, MD.

    The latest information for designing full-depth asphalt pavements for parking lots and driveways is covered in relationship to the continued increase in vehicle registration. It is based on The Asphalt Institute's Thickness Design Manual, Series No. 1 (MS-1), Seventh Edition, which covers all aspects of asphalt pavement thickness design in detail,…

  8. FRONT ELEVATION, WITH DRIVEWAY ON LEFT HAND SIDE, AND STREET ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    FRONT ELEVATION, WITH DRIVEWAY ON LEFT HAND SIDE, AND STREET IN FOREGROUND. VIEW FACING NORTHEAST - Camp H.M. Smith and Navy Public Works Center Manana Title VII (Capehart) Housing, Four-Bedroom, Single-Family Type 10, Birch Circle, Elm Drive, Elm Circle, and Date Drive, Pearl City, Honolulu County, HI

  9. 2. GENERAL VIEW: MAIN DRIVEWAY: CORD CABIN IS TO THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. GENERAL VIEW: MAIN DRIVEWAY: CORD CABIN IS TO THE RIGHT OF KIOSK THE FAGEOL CABIN IS IN THE BACKGROUND. - Camp Richardson Resort, Cord Cabin, U.S. Highway 89, 3 miles west of State Highway 50 & 89, South Lake Tahoe, El Dorado County, CA

  10. Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model

    PubMed Central

    Morgenstern, Ingo

    2016-01-01

    The world is still recovering from the financial crisis that peaked in September 2008. The triggering event was the bankruptcy of Lehman Brothers. To detect such turmoils, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to the systemic risks within markets by several studies in the aftermath of this crisis. We study 37 different US indices which cover almost all aspects of the US economy and show that monitoring an average investor's behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground states of the mean-variance model along the efficient frontier, bound to real-world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign. PMID:27351482

  11. The Development and Preliminary Evaluation of an Education Intervention to Prevent Driveway Run-Over Incidents

    ERIC Educational Resources Information Center

    Armstrong, Kerry A.; Watling, Hanna; Davey, Jeremy

    2016-01-01

    Objective: While driveway run-over incidents continue to be a cause of serious injury and deaths among young children in Australia, few empirically evaluated educational interventions have been developed which address these incidents. Addressing this gap, this study describes the development and evaluation of a paper-based driveway safety…

  12. Research on regularized mean-variance portfolio selection strategy with modified Roy safety-first principle.

    PubMed

    Atta Mills, Ebenezer Fiifi Emire; Yan, Dawen; Yu, Bo; Wei, Xinyuan

    2016-01-01

    We propose a consolidated risk measure based on variance and the safety-first principle in a mean-risk portfolio optimization framework. The safety-first principle as applied to financial portfolio selection is modified and improved. Our proposed models are subjected to norm regularization to seek near-optimal, stable and sparse portfolios. We compare the cumulative wealth of our preferred proposed model to a benchmark, the S&P 500 index, over the same period. Our proposed portfolio strategies have better out-of-sample performance than the selected alternative portfolio rules in the literature and control the downside risk of the portfolio returns.
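
    For context, Roy's classical safety-first principle, which the paper modifies, can be stated as follows; R_p denotes the portfolio return and R_L a disaster (shortfall) level, and the implication follows from bounding the shortfall probability with Chebyshev's inequality.

```latex
% Roy's classical safety-first principle (textbook form; the paper modifies this criterion).
\[
  \min_{\mathbf{w}} \; \Pr\!\left(R_p < R_L\right)
  \quad\Longrightarrow\quad
  \max_{\mathbf{w}} \; \frac{\mathbb{E}[R_p] - R_L}{\sigma_p},
\]
% where sigma_p is the portfolio standard deviation.
```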

  13. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.

    PubMed

    Shinzato, Takashi

    2015-01-01

    In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.

  14. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model

    PubMed Central

    Shinzato, Takashi

    2015-01-01

    In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach. PMID:26225761

  15. A nonparametric mean-variance smoothing method to assess Arabidopsis cold stress transcriptional regulator CBF2 overexpression microarray data.

    PubMed

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request.

  16. Impact of apolipoprotein E genotype variation on means, variances, and correlations of plasma lipid, lipoprotein, and apolipoprotein traits in octogenarians

    SciTech Connect

    Haviland, M.B.; Sing, C.F.; Lussier-Cacan, S.; Davignon, J.

    1995-09-25

    The impact of apolipoprotein (apo) E genotype variation on means, variances and correlations between plasma lipid traits was studied in male and female octogenarians. Females had significantly higher mean levels of all 10 of the measured plasma lipid traits than males. The subset of concomitants (i.e., age, height, weight, body mass index, glucose and uric acid) that made a statistically significant contribution to interindividual variability was different in males and females for every trait considered. Gender-specific associations between variation in apo E genotype and variation in particular measures of lipid metabolism, adjusted for concomitant variation, were observed: in females there were no statistically significant associations while in males the means of the three common apo E genotypes were significantly different for adjusted measures of total cholesterol, low density lipoprotein cholesterol and low density lipoprotein-apo B. The common apo E genotypes were heterogeneous with respect to intragenotypic variance for adjusted log-transformed triglyceride levels in females only. Finally, the three common apo E genotypes were heterogeneous with respect to the correlation between traits, adjusted for concomitant variation, and gender influenced the manner in which the genotypes differed for specific correlations. This study documents that variation in the apo E gene has a significant impact on means, variances and correlations of plasma lipid traits in octogenarians, but the effects are context-, that is, gender- and age-, dependent. 65 refs., 4 figs., 3 tabs.

  17. DRAWING R100132, FIELD OFFICERS' AREA, BUILDING LOCATIONS, DRIVEWAYS, AND SIDEWALKS, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DRAWING R-1001-32, FIELD OFFICERS' AREA, BUILDING LOCATIONS, DRIVEWAYS, AND SIDEWALKS, SOUTH CIRCLE, CASA GRANDE REAL, AND SEQUOIA DRIVES. Ink on linen, signed by H.B. Nurse. Date has been erased, but probably June 15, 1933. Also marked "PWC 104289." - Hamilton Field, East of Nave Drive, Novato, Marin County, CA

  18. DRAWING R100131, COMPANY OFFICERS' AREA, BUILDING LOCATIONS, DRIVEWAYS, AND SIDEWALKS, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DRAWING R-1001-31, COMPANY OFFICERS' AREA, BUILDING LOCATIONS, DRIVEWAYS, AND SIDEWALKS, LAS LOMAS AND BUENA VISTA DRIVES. Ink on linen, signed by H.B. Nurse. Date has been erased, but probably June 15, 1933. Also marked "PWC 104288." - Hamilton Field, East of Nave Drive, Novato, Marin County, CA

  19. Risk modelling in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-09-01

    Risk management is very important in portfolio optimization. The mean-variance model has been used in portfolio optimization to minimize the investment risk. The objective of the mean-variance model is to minimize the portfolio risk and achieve the target rate of return. Variance is used as the risk measure in the mean-variance model. The purpose of this study is to compare the composition as well as the performance of the optimal portfolio of the mean-variance model and the equally weighted portfolio. In an equally weighted portfolio, the proportion invested in each asset is the same. The results show that the compositions of the mean-variance optimal portfolio and the equally weighted portfolio are different. Besides that, the mean-variance optimal portfolio gives better performance because it gives a higher performance ratio than the equally weighted portfolio.

  20. 9 CFR 309.7 - Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Livestock affected with anthrax... INSPECTION § 309.7 Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways. (a) Any livestock found on ante-mortem inspection to be affected with anthrax shall be...

  1. 9 CFR 309.7 - Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Livestock affected with anthrax... INSPECTION § 309.7 Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways. (a) Any livestock found on ante-mortem inspection to be affected with anthrax shall be...

  2. 9 CFR 309.7 - Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Livestock affected with anthrax... INSPECTION § 309.7 Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways. (a) Any livestock found on ante-mortem inspection to be affected with anthrax shall be...

  3. 9 CFR 309.7 - Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Livestock affected with anthrax... INSPECTION § 309.7 Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways. (a) Any livestock found on ante-mortem inspection to be affected with anthrax shall be...

  4. 9 CFR 309.7 - Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Livestock affected with anthrax... INSPECTION § 309.7 Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways. (a) Any livestock found on ante-mortem inspection to be affected with anthrax shall be...

  5. 9 CFR 355.15 - Inedible material operating and storage rooms; outer premises, docks, driveways, etc.; fly...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... storage rooms; outer premises, docks, driveways, etc.; fly-breeding material; nuisances. 355.15 Section...-breeding material; nuisances. All operating and storage rooms and departments of inspected plants used for... any material in which flies may breed, or the maintenance of any nuisance on the premises shall not...

  6. 9 CFR 355.15 - Inedible material operating and storage rooms; outer premises, docks, driveways, etc.; fly...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... storage rooms; outer premises, docks, driveways, etc.; fly-breeding material; nuisances. 355.15 Section...-breeding material; nuisances. All operating and storage rooms and departments of inspected plants used for... any material in which flies may breed, or the maintenance of any nuisance on the premises shall not...

  7. 9 CFR 355.15 - Inedible material operating and storage rooms; outer premises, docks, driveways, etc.; fly...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... storage rooms; outer premises, docks, driveways, etc.; fly-breeding material; nuisances. 355.15 Section...-breeding material; nuisances. All operating and storage rooms and departments of inspected plants used for... any material in which flies may breed, or the maintenance of any nuisance on the premises shall not...

  8. 9 CFR 355.15 - Inedible material operating and storage rooms; outer premises, docks, driveways, etc.; fly...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... storage rooms; outer premises, docks, driveways, etc.; fly-breeding material; nuisances. 355.15 Section...-breeding material; nuisances. All operating and storage rooms and departments of inspected plants used for... any material in which flies may breed, or the maintenance of any nuisance on the premises shall not...

  9. 9 CFR 355.15 - Inedible material operating and storage rooms; outer premises, docks, driveways, etc.; fly...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... storage rooms; outer premises, docks, driveways, etc.; fly-breeding material; nuisances. 355.15 Section...-breeding material; nuisances. All operating and storage rooms and departments of inspected plants used for... any material in which flies may breed, or the maintenance of any nuisance on the premises shall not...

  10. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the distribution of the data is normal, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve the portfolio optimization. This approach caters for both normal and non-normal data distributions. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each level of return compared with the mean-variance approach.

  11. Algorithms for optimizing CT fluence control

    NASA Astrophysics Data System (ADS)

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-03-01

    The ability to customize the incident x-ray fluence in CT via beam-shaping filters or mA modulation is known to improve image quality and/or reduce radiation dose. Previous work has shown that complete control of x-ray fluence (ray-by-ray fluence modulation) would further improve dose efficiency. While complete control of fluence is not currently possible, emerging concepts such as dynamic attenuators and inverse-geometry CT allow nearly complete control to be realized. Optimally using ray-by-ray fluence modulation requires solving a very high-dimensional optimization problem. Most optimization techniques fail or only provide approximate solutions. We present efficient algorithms for minimizing mean or peak variance given a fixed dose limit. The reductions in variance can easily be translated to reduction in dose, if the original variance met image quality requirements. For mean variance, a closed form solution is derived. The peak variance problem is recast as iterated, weighted mean variance minimization, and at each iteration it is possible to bound the distance to the optimal solution. We apply our algorithms in simulations of scans of the thorax and abdomen. Peak variance reductions of 45% and 65% are demonstrated in the abdomen and thorax, respectively, compared to a bowtie filter alone. Mean variance shows smaller gains (about 15%).
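
    A simplified version of the closed-form argument for mean-variance minimization under a dose limit is sketched below. It assumes that ray i contributes variance c_i / f_i at fluence f_i and dose a_i f_i; this is an illustrative model and may differ from the detector and dose models used in the paper.

```latex
% Simplified Lagrangian argument for minimizing mean variance under a dose limit
% (illustrative assumption: variance of ray i scales as c_i / f_i, dose as a_i f_i).
\[
  \min_{f_i > 0} \sum_i \frac{c_i}{f_i}
  \quad \text{s.t.} \quad \sum_i a_i f_i \le D
  \;\;\Longrightarrow\;\;
  \frac{\partial}{\partial f_i}\Big[\sum_j \frac{c_j}{f_j} + \lambda \sum_j a_j f_j\Big] = 0
  \;\;\Longrightarrow\;\;
  f_i \propto \sqrt{c_i / a_i}.
\]
```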

  12. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  13. Large deviations and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Sornette, Didier

    Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends by a general functional integral formulation. A major item is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return and the role of large deviations in multiplicative processes, and the different optimal strategies for the investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.

  14. Robust Portfolio Optimization Using Pseudodistances.

    PubMed

    Toma, Aida; Leoni-Aubin, Samuela

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature.

  15. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    SciTech Connect

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita

    2014-06-19

    Traditional portfolio optimization methods, such as Markowitz's mean-variance model and the semi-variance model, utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality, because the maximum and minimum values in the data may largely influence the expected return and volatility risk values. This paper considers the distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.

  16. Belief Propagation Algorithm for Portfolio Optimization Problems.

    PubMed

    Shinzato, Takashi; Yasuda, Muneki

    2015-01-01

    The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models using replica analysis was pioneeringly estimated by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, they have not yet developed an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm.
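
    The mean-absolute deviation model referred to in the Konno-Yamazaki conjecture has the following standard form; the return threshold ρ and the notation are chosen here for reference.

```latex
% Konno-Yamazaki mean-absolute deviation portfolio model (standard form).
\begin{align}
  \min_{\mathbf{w}}\; & \mathbb{E}\Big[\,\Big|\sum_{i=1}^{N} w_i\,(R_i - \mu_i)\Big|\,\Big]
  && \text{mean absolute deviation of the portfolio return} \\
  \text{s.t.}\; & \sum_{i=1}^{N} w_i = 1, \qquad \sum_{i=1}^{N} w_i \mu_i \ge \rho
  && \text{budget constraint and minimum expected return}
\end{align}
```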

  17. Inverse Optimization: A New Perspective on the Black-Litterman Model

    PubMed Central

    Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch.

    2014-01-01

    The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct “BL”-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new “BL”-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views. PMID:25382873

  18. Turnover, account value and diversification of real traders: evidence of collective portfolio optimizing behavior

    NASA Astrophysics Data System (ADS)

    Morton de Lachapelle, David; Challet, Damien

    2010-07-01

    Despite the availability of very detailed data on financial markets, agent-based modeling is hindered by the lack of information about real trader behavior. This makes it impossible to validate agent-based models, which are thus reverse-engineering attempts. This work is a contribution towards building a set of stylized facts about the traders themselves. Using the client database of Swissquote Bank SA, the largest online Swiss broker, we find empirical relationships between turnover, account values and the number of assets in which a trader is invested. A theory based on simple mean-variance portfolio optimization that crucially includes variable transaction costs is able to reproduce faithfully the observed behaviors. We finally argue that our results bring to light the collective ability of a population to construct a mean-variance portfolio that takes into account the structure of transaction costs.

  19. Robust Portfolio Optimization Using Pseudodistances

    PubMed Central

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948

  20. Replica analysis for the duality of the portfolio optimization problem.

    PubMed

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  1. Replica analysis for the duality of the portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  2. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  3. Optimal Solar PV Arrays Integration for Distributed Generation

    SciTech Connect

    Omitaomu, Olufemi A; Li, Xueping

    2012-01-01

    Solar photovoltaic (PV) systems hold great potential for distributed energy generation by installing PV panels on rooftops of residential and commercial buildings. Yet challenges arise along with the variability and non-dispatchability of the PV systems that affect the stability of the grid and the economics of the PV system. This paper investigates the integration of PV arrays for distributed generation applications by identifying a combination of buildings that will maximize solar energy output and minimize system variability. Particularly, we propose mean-variance optimization models to choose suitable rooftops for PV integration based on Markowitz mean-variance portfolio selection model. We further introduce quantity and cardinality constraints to result in a mixed integer quadratic programming problem. Case studies based on real data are presented. An efficient frontier is obtained for sample data that allows decision makers to choose a desired solar energy generation level with a comfortable variability tolerance level. Sensitivity analysis is conducted to show the tradeoffs between solar PV energy generation potential and variability.

  4. Optimal trading strategies—a time series approach

    NASA Astrophysics Data System (ADS)

    Bebbington, Peter A.; Kühn, Reimer

    2016-05-01

    Motivated by recent advances in the spectral theory of auto-covariance matrices, we are led to revisit a reformulation of Markowitz’ mean-variance portfolio optimization approach in the time domain. In its simplest incarnation it applies to a single traded asset and allows an optimal trading strategy to be found which—for a given return—is minimally exposed to market price fluctuations. The model is initially investigated for a range of synthetic price processes, taken to be either second order stationary, or to exhibit second order stationary increments. Attention is paid to consequences of estimating auto-covariance matrices from small finite samples, and auto-covariance matrix cleaning strategies to mitigate these effects are investigated. Finally, we apply our framework to real world data.
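
    A stripped-down version of the single-asset, time-domain idea: with the auto-covariance matrix of price increments playing the role of the Markowitz covariance matrix, the minimum-variance strategy for a given expected return has a closed form. The AR(1) increments, the constant expected-increment forecast m, and the target R below are assumptions, and the matrix-cleaning strategies discussed in the paper are not shown.

        import numpy as np

        rng = np.random.default_rng(2)
        T, phi = 250, 0.6
        # Synthetic second-order-stationary price increments: an AR(1) process.
        x = np.zeros(5 * T)
        eps = rng.normal(size=5 * T)
        for t in range(1, 5 * T):
            x[t] = phi * x[t - 1] + eps[t]

        # Biased sample auto-covariance, arranged as a Toeplitz matrix over the horizon.
        xd = x - x.mean()
        acov = np.array([np.sum(xd[:len(xd) - l] * xd[l:]) for l in range(T)]) / len(xd)
        idx = np.abs(np.arange(T)[:, None] - np.arange(T)[None, :])
        C = acov[idx]

        m = np.full(T, 0.05)    # assumed expected increment per step (not estimated here)
        R = 1.0                 # required expected return over the horizon (assumed units)

        # Minimize w'Cw subject to m'w = R:  w* = R C^{-1} m / (m' C^{-1} m).
        Cinv_m = np.linalg.solve(C, m)
        w = R * Cinv_m / (m @ Cinv_m)
        print("expected return:", round(m @ w, 3), "  strategy variance:", round(w @ C @ w, 5))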

  5. 9 CFR 313.1 - Livestock pens, driveways and ramps.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... protruding objects which may, in the opinion of the inspector, cause injury or pain to the animals. Loose boards, splintered or broken planking, and unnecessary openings where the head, feet, or legs of...

  6. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel A.

    2016-11-01

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N =1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
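
    The expected-cost figure of merit can be illustrated with a small Monte Carlo estimate: charge an assumed cost per solver call, keep the best objective over R calls, and pick the R that minimizes the expected total. This is a simplified stand-in for the authors' optimal-stopping formulation; the toy randomized solver and the cost scale are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        def noisy_solver():
            """Stand-in for one call to a randomized heuristic: returns the
            objective value it happened to reach (lower is better)."""
            return rng.gamma(shape=2.0, scale=1.0)

        cost_per_call = 0.25      # assumed cost charged for each solver call
        n_trials = 2000           # Monte Carlo repetitions per restart count

        def expected_total_cost(R):
            """Estimate E[ best-of-R objective + R * cost_per_call ] by sampling."""
            best = np.array([min(noisy_solver() for _ in range(R))
                             for _ in range(n_trials)])
            return best.mean() + R * cost_per_call

        costs = {R: expected_total_cost(R) for R in range(1, 21)}
        R_star = min(costs, key=costs.get)
        print("optimal number of calls:", R_star,
              "  expected total cost:", round(costs[R_star], 3))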

  7. Optimal portfolio strategy with cross-correlation matrix composed by DCCA coefficients: Evidence from the Chinese stock market

    NASA Astrophysics Data System (ADS)

    Sun, Xuelian; Liu, Zixian

    2016-02-01

    In this paper, a new estimator of the correlation matrix is proposed, composed of detrended cross-correlation coefficients (DCCA coefficients), to improve portfolio optimization. In contrast to Pearson's correlation coefficients (PCC), DCCA coefficients, acquired by the detrended cross-correlation analysis (DCCA) method, can describe the nonlinear correlation between assets and can be decomposed over different time scales. These properties of DCCA make it possible to improve investment performance and make it more worthwhile to investigate the scale behaviors of portfolios. The minimum variance portfolio (MVP) model and the Mean-Variance (MV) model are used to evaluate the effectiveness of this improvement. Stability analysis shows the effect of the two kinds of correlation matrices on the estimation error of portfolio weights. The observed scale behaviors are significant to risk management and could be used to optimize the portfolio selection.
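
    A minimal sketch of the DCCA coefficient at a single box size, fed into classical minimum-variance weights. The synthetic return series and the box size are assumptions, and the paper's multiscale analysis and comparison against Pearson correlations are not reproduced.

        import numpy as np

        def dcca_coefficient(x, y, s):
            """Detrended cross-correlation coefficient of series x, y at box size s."""
            X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
            t = np.arange(s)
            f_xy = f_xx = f_yy = 0.0
            for b in range(len(x) // s):
                xs, ys = X[b * s:(b + 1) * s], Y[b * s:(b + 1) * s]
                # Remove a linear trend in each box before measuring covariance.
                xd = xs - np.polyval(np.polyfit(t, xs, 1), t)
                yd = ys - np.polyval(np.polyfit(t, ys, 1), t)
                f_xy += np.mean(xd * yd)
                f_xx += np.mean(xd * xd)
                f_yy += np.mean(yd * yd)
            return f_xy / np.sqrt(f_xx * f_yy)

        rng = np.random.default_rng(4)
        returns = rng.normal(size=(500, 4)) * np.array([1.0, 1.2, 0.8, 1.5])  # synthetic
        n = returns.shape[1]

        rho = np.eye(n)
        for i in range(n):
            for j in range(i + 1, n):
                rho[i, j] = rho[j, i] = dcca_coefficient(returns[:, i], returns[:, j], s=20)

        # Rebuild a covariance-like matrix from DCCA correlations and sample volatilities,
        # then form minimum variance portfolio (MVP) weights.
        vol = returns.std(axis=0)
        Sigma = rho * np.outer(vol, vol)
        w = np.linalg.solve(Sigma, np.ones(n))
        w /= w.sum()
        print("MVP weights:", np.round(w, 3))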

  8. Optimizing Locomotion

    NASA Astrophysics Data System (ADS)

    Hosoi, Anette

    2006-11-01

    In this talk we will discuss two optimization topics related to low Reynolds number locomotion: optimal stroke patterns in linked swimmers and optimal fluid material properties in adhesive locomotion. In contrast to many optimization problems, we do not consider geometry; rather, we optimize the swimming kinematics or fluid material properties for a given geometrical configuration. In the first case, we begin by optimizing stroke patterns for Purcell's 3-link swimmer. We model the swimmer as a jointed chain of three slender links moving in an inertialess flow. The swimmer is optimized for both efficiency and speed. In the second case, we analyze the adhesive locomotion used by common gastropods such as snails and slugs. Such organisms crawl on a solid substrate by propagating muscular waves of shear stress on a viscoelastic mucus. Using a simple mechanical model, we derive criteria for favorable fluid material properties to lower the energetic cost of locomotion.

  9. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    PubMed Central

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. On the basis of WorldView-2 high-resolution data and an optimal-segmentation-parameters method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of the control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  10. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    PubMed

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. On the basis of WorldView-2 high-resolution data and an optimal-segmentation-parameters method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of the control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.

  11. Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2005-01-01

    We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.

  12. Prospective Optimization.

    PubMed

    Sejnowski, Terrence J; Poizner, Howard; Lynch, Gary; Gepshtein, Sergei; Greenspan, Ralph J

    2014-05-01

    Human performance approaches that of an ideal observer and optimal actor in some perceptual and motor tasks. These optimal abilities depend on the capacity of the cerebral cortex to store an immense amount of information and to flexibly make rapid decisions. However, behavior only approaches these limits after a long period of learning while the cerebral cortex interacts with the basal ganglia, an ancient part of the vertebrate brain that is responsible for learning sequences of actions directed toward achieving goals. Progress has been made in understanding the algorithms used by the brain during reinforcement learning, which is an online approximation of dynamic programming. Humans also make plans that depend on past experience by simulating different scenarios, which is called prospective optimization. The same brain structures in the cortex and basal ganglia that are active online during optimal behavior are also active offline during prospective optimization. The emergence of general principles and algorithms for goal-directed behavior has consequences for the development of autonomous devices in engineering applications.

  13. Prospective Optimization

    PubMed Central

    Sejnowski, Terrence J.; Poizner, Howard; Lynch, Gary; Gepshtein, Sergei; Greenspan, Ralph J.

    2014-01-01

    Human performance approaches that of an ideal observer and optimal actor in some perceptual and motor tasks. These optimal abilities depend on the capacity of the cerebral cortex to store an immense amount of information and to flexibly make rapid decisions. However, behavior only approaches these limits after a long period of learning while the cerebral cortex interacts with the basal ganglia, an ancient part of the vertebrate brain that is responsible for learning sequences of actions directed toward achieving goals. Progress has been made in understanding the algorithms used by the brain during reinforcement learning, which is an online approximation of dynamic programming. Humans also make plans that depend on past experience by simulating different scenarios, which is called prospective optimization. The same brain structures in the cortex and basal ganglia that are active online during optimal behavior are also active offline during prospective optimization. The emergence of general principles and algorithms for goal-directed behavior has consequences for the development of autonomous devices in engineering applications. PMID:25328167

  14. Optimal Fluoridation

    PubMed Central

    Lee, John R.

    1975-01-01

    Optimal fluoridation has been defined as that fluoride exposure which confers maximal cariostasis with minimal toxicity and its values have been previously determined to be 0.5 to 1 mg per day for infants and 1 to 1.5 mg per day for an average child. Total fluoride ingestion and urine excretion were studied in Marin County, California, children in 1973 before municipal water fluoridation. Results showed fluoride exposure to be higher than anticipated and fulfilled previously accepted criteria for optimal fluoridation. Present and future water fluoridation plans need to be reevaluated in light of total environmental fluoride exposure. PMID:1130041

  15. Gear optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Chen, Xiang; Zhang, Ning-Tian

    1988-01-01

    The use of formal numerical optimization methods for the design of gears is investigated. To achieve this, computer codes were developed for the analysis of spur gears and spiral bevel gears. These codes calculate the life, dynamic load, bending strength, surface durability, gear weight and size, and various geometric parameters. It is necessary to calculate all such important responses because they all represent competing requirements in the design process. The codes developed here were written in subroutine form and coupled to the COPES/ADS general purpose optimization program. This code allows the user to define the optimization problem at the time of program execution. Typical design variables include face width, number of teeth and diametral pitch. The user is free to choose any calculated response as the design objective to minimize or maximize and may impose lower and upper bounds on any calculated responses. Typical examples include life maximization with limits on dynamic load, stress, weight, etc. or minimization of weight subject to limits on life, dynamic load, etc. The research codes were written in modular form for easy expansion and so that they could be combined to create a multiple reduction optimization capability in future.

  16. Multidisciplinary optimization

    SciTech Connect

    Dennis, J.; Lewis, R.M.; Cramer, E.J.; Frank, P.M.; Shubin, G.R.

    1994-12-31

    This talk will use aeroelastic design and reservoir characterization as examples to introduce some approaches to MDO, or Multidisciplinary Optimization. This problem arises especially in engineering design, where it is considered of paramount importance in today's competitive global business climate. It is interesting to an optimizer because the constraints involve coupled dissimilar systems of parameterized partial differential equations each arising from a different discipline, like structural analysis, computational fluid dynamics, etc. Usually, these constraints are accessible only through pde solvers rather than through algebraic residual calculations as we are used to having. Thus, just finding a multidisciplinary feasible point is a daunting task. Many such problems have discrete variable disciplines, multiple objectives, and other challenging features. After discussing some interesting practical features of the design problem, we will give some standard ways to formulate the problem as well as some novel ways that lend themselves to divide-and-conquer parallelism.

  17. [SIAM conference on optimization

    SciTech Connect

    Not Available

    1992-05-10

    Abstracts are presented of 63 papers on the following topics: large-scale optimization, interior-point methods, algorithms for optimization, problems in control, network optimization methods, and parallel algorithms for optimization problems.

  18. Optimal refrigerator

    NASA Astrophysics Data System (ADS)

    Allahverdyan, Armen E.; Hovhannisyan, Karen; Mahler, Guenter

    2010-05-01

    We study a refrigerator model which consists of two n-level systems interacting via a pulsed external field. Each system couples to its own thermal bath at temperatures Th and Tc, respectively (θ ≡ Tc/Th < 1). The refrigerator functions in two steps: thermally isolated interaction between the systems driven by the external field and isothermal relaxation back to equilibrium. There is a complementarity between the power of heat transfer from the cold bath and the efficiency: the latter vanishes when the former is maximized and vice versa. A reasonable compromise is achieved by optimizing the product of the heat-power and efficiency over the Hamiltonian of the two systems. The efficiency is then found to be bounded from below by ζCA = 1/√(1-θ) - 1 (an analog of the Curzon-Ahlborn efficiency), besides being bounded from above by the Carnot efficiency ζC = 1/(1-θ) - 1. The lower bound is reached in the equilibrium limit θ → 1. The Carnot bound is reached (for a finite power and a finite amount of heat transferred per cycle) for ln n ≫ 1. If the above maximization is constrained by assuming homogeneous energy spectra for both systems, the efficiency is bounded from above by ζCA and converges to it for n ≫ 1.

  19. RECOVERY ACT - Robust Optimization for Connectivity and Flows in Dynamic Complex Networks

    SciTech Connect

    Balasundaram, Balabhaskar; Butenko, Sergiy; Boginski, Vladimir; Uryasev, Stan

    2013-12-25

    ... to capture uncertainty and risk using appropriate probabilistic, statistical and optimization concepts. The main difficulty arising in addressing these issues is the dramatic increase in the computational complexity of the resulting optimization problems. This project studied novel models and methodologies for risk-averse network optimization, specifically network design, network flow, and cluster detection problems under uncertainty. The approach taken was to incorporate a quantitative risk measure known as conditional value-at-risk (CVaR) that is widely used in financial applications. This approach presents a viable alternative modeling and optimization framework to chance-constrained optimization and mean-variance optimization, one that also facilitates the detection of risk-averse solutions.
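
    Conditional value-at-risk is commonly minimized through the Rockafellar-Uryasev linear program; the sketch below applies that standard formulation to a toy long-only allocation over synthetic loss scenarios, not to the project's network design or flow models. The scenario count, the 95% confidence level, and the constraints are assumptions.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(5)
        n_assets, n_scen = 4, 300
        losses = rng.normal(0.0, 1.0, size=(n_scen, n_assets))   # synthetic scenario losses
        alpha = 0.95

        # Variables: w (n_assets), t (VaR level, 1), u (scenario excess losses, n_scen).
        # Minimize  t + 1/((1-alpha)*n_scen) * sum(u)
        # s.t.      u_k >= losses_k . w - t,   u_k >= 0,   sum(w) = 1,   w >= 0.
        c = np.concatenate([np.zeros(n_assets), [1.0],
                            np.full(n_scen, 1.0 / ((1 - alpha) * n_scen))])
        A_ub = np.hstack([losses, -np.ones((n_scen, 1)), -np.eye(n_scen)])
        b_ub = np.zeros(n_scen)                      # losses.w - t - u <= 0
        A_eq = np.concatenate([np.ones(n_assets), [0.0], np.zeros(n_scen)])[None, :]
        b_eq = [1.0]
        bounds = [(0, None)] * n_assets + [(None, None)] + [(0, None)] * n_scen

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        w = res.x[:n_assets]
        print("CVaR-optimal weights:", np.round(w, 3), "  CVaR:", round(res.fun, 3))

    The linear program grows only linearly with the number of scenarios, which is one reason this formulation scales to large uncertainty sets.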

  20. Space-borne hyperspectral remote sensing imagery noise eliminating based on CFFT self-adapted by optimal SNR

    NASA Astrophysics Data System (ADS)

    Liu, Qingjie; Lin, Qizhong; Wang, Liming; Wang, Qinjun; Miao, Fengxian

    2010-09-01

    Space-borne hyperspectral remote sensing imagery, supplying both spatial and spectral information for quantitative remote sensing monitoring, is easily polluted by noise from the atmosphere, terrain, etc. Based on spectral continuum removal and recovery, the traditional fast Fourier transform (FFT) was extended to the Continuum Fast Fourier Transform (CFFT) to separate noise from target information in the frequency domain (FD). A low-pass filter that preserves useful information was then designed for eliminating noise, with its cut-off frequency selected self-adaptively by optimal signal-to-noise ratio (SNR). Hyperion hyperspectral imagery of Beijing and Xinjiang, China, was selected for noise removal to validate the filtering ability of the Continuum Fast Fourier Transform self-adapted by Optimal Signal-to-Noise Ratio (CFFTOSNR) method, using both qualitative description and quantitative indices including mean, variance, entropy, definition and SNR. Experimental results show that CFFTOSNR effectively reduces Gaussian white noise in the spectral domain and stripe and band-subtracting noise in the spatial domain, while the quantitative indices of the filtered imagery are all improved, with the entropy of the post-processed image increased by about 5 dB.
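
    A one-dimensional analogue of the frequency-domain step: low-pass filter a noisy signal in FFT space and choose the cut-off bin self-adaptively by maximizing a crude SNR estimate derived from a high-frequency noise-floor estimate. The continuum removal and recovery steps and the Hyperion data handling are not reproduced; the test signal and this particular SNR definition are assumptions.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 512
        t = np.linspace(0, 1, n, endpoint=False)
        clean = np.sin(2 * np.pi * 3 * t) + 0.4 * np.sin(2 * np.pi * 7 * t)
        signal = clean + rng.normal(0, 0.3, n)       # noisy 1-D "spectrum" (synthetic)

        F = np.fft.rfft(signal)
        power = np.abs(F) ** 2
        noise_bin = np.median(power[len(F) // 2:])   # noise floor from high frequencies

        def snr_db(cutoff):
            """Crude SNR of a low-pass cut: retained signal power over retained noise."""
            sig = np.clip(power[:cutoff] - noise_bin, 0.0, None).sum() + 1e-12
            return 10 * np.log10(sig / (cutoff * noise_bin))

        # Select the cut-off frequency self-adaptively by maximizing the SNR estimate,
        # then apply the corresponding low-pass filter and transform back.
        best = max(range(2, len(F)), key=snr_db)
        F_filt = F.copy()
        F_filt[best:] = 0.0
        denoised = np.fft.irfft(F_filt, n=n)
        print("selected cutoff bin:", best, "  SNR (dB):", round(snr_db(best), 2))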

  1. Multiple Satellite Trajectory Optimization

    DTIC Science & Technology

    2004-12-01

    Solving optimal control problems ... Optimization. A. Solving optimal control problems. The driving principle used to solve optimal control problems was first formalized by the Soviet ... methods and processes of solving optimal control problems; this section will demonstrate how the formulations work as expected. Once coded, the ...

  2. Optimization of composite structures

    NASA Technical Reports Server (NTRS)

    Stroud, W. J.

    1982-01-01

    Structural optimization is introduced and examples which illustrate potential problems associated with optimized structures are presented. Optimized structures may have very low load carrying ability for an off design condition. They tend to have multiple modes of failure occurring simultaneously and can, therefore, be sensitive to imperfections. Because composite materials provide more design variables than do metals, they allow for more refined tailoring and more extensive optimization. As a result, optimized composite structures can be especially susceptible to these problems.

  3. Particle Swarm Optimization Toolbox

    NASA Technical Reports Server (NTRS)

    Grant, Michael J.

    2010-01-01

    The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm is based on this combination of traits from parents to, ideally, provide a better solution than either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black-box" to the optimizers in which the only purpose of this function is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be numerical simulations, analytical functions, etc., since the specific detail of this function is of no concern to the optimizer. These algorithms were originally developed to support entry

  4. Aristos Optimization Package

    SciTech Connect

    Ridzal, Danis

    2007-03-01

    Aristos is a Trilinos package for nonlinear continuous optimization, based on full-space sequential quadratic programming (SQP) methods. Aristos is specifically designed for the solution of large-scale constrained optimization problems in which the linearized constraint equations require iterative (i.e. inexact) linear solver techniques. Aristos' unique feature is an efficient handling of inexactness in linear system solves. Aristos currently supports the solution of equality-constrained convex and nonconvex optimization problems. It has been used successfully in the area of PDE-constrained optimization, for the solution of nonlinear optimal control, optimal design, and inverse problems.

  5. Multidisciplinary Optimization for Aerospace Using Genetic Optimization

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Hahn, Edward E.; Herrera, Claudia Y.

    2007-01-01

    In support of the ARMD guidelines, NASA's Dryden Flight Research Center is developing a multidisciplinary design and optimization tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Optimization has made its way into many mainstream applications. For example, NASTRAN(TradeMark) has its solution sequence 200 for Design Optimization, and MATLAB(TradeMark) has an Optimization Toolbox. Other packages, such as the ZAERO(TradeMark) aeroelastic panel code and the CFL3D(TradeMark) Navier-Stokes solver, have no built-in optimizer. The goal of the tool development is to generate a central executive capable of using disparate software packages in a cross-platform network environment so as to quickly perform optimization and design tasks in a cohesive, streamlined manner. A provided figure (Figure 1) shows a typical set of tools and their relation to the central executive. Optimization can take place within each individual tool, or in a loop between the executive and the tool, or both.

  6. Optimal probabilistic search

    SciTech Connect

    Lokutsievskiy, Lev V

    2011-05-31

    This paper is concerned with the optimal search of an object at rest with unknown exact position in the n-dimensional space. A necessary condition for optimality of a trajectory is obtained. An explicit form of a differential equation for an optimal trajectory is found while searching over R-strongly convex sets. An existence theorem is also established. Bibliography: 8 titles.

  7. Aircraft configuration optimization including optimized flight profiles

    NASA Technical Reports Server (NTRS)

    Mccullers, L. A.

    1984-01-01

    The Flight Optimization System (FLOPS) is an aircraft configuration optimization program developed for use in conceptual design of new aircraft and in the assessment of the impact of advanced technology. The modular makeup of the program is illustrated. It contains modules for preliminary weights estimation, preliminary aerodynamics, detailed mission performance, takeoff and landing, and execution control. An optimization module is used to drive the overall design and in defining optimized profiles in the mission performance. Propulsion data, usually received from engine manufacturers, are used in both the mission performance and the takeoff and landing analyses. Although executed as a single in-core program, the modules are stored separately so that the user may select the appropriate modules (e.g., fighter weights versus transport weights) or leave out modules that are not needed.

  8. Optimization of computations

    SciTech Connect

    Mikhalevich, V.S.; Sergienko, I.V.; Zadiraka, V.K.; Babich, M.D.

    1994-11-01

    This article examines some topics of optimization of computations, which have been discussed at 25 seminar-schools and symposia organized by the V.M. Glushkov Institute of Cybernetics of the Ukrainian Academy of Sciences since 1969. We describe the main directions in the development of computational mathematics and present some of our own results that reflect a certain design conception of speed-optimal and accuracy-optimal (or nearly optimal) algorithms for various classes of problems, as well as a certain approach to optimization of computer computations.

  9. Hope, optimism and delusion

    PubMed Central

    McGuire-Snieckus, Rebecca

    2014-01-01

    Optimism is generally accepted by psychiatrists, psychologists and other caring professionals as a feature of mental health. Interventions typically rely on cognitive-behavioural tools to encourage individuals to ‘stop negative thought cycles’ and to ‘challenge unhelpful thoughts’. However, evidence suggests that most individuals have persistent biases of optimism and that excessive optimism is not conducive to mental health. How helpful is it to facilitate optimism in individuals who are likely to exhibit biases of optimism already? By locating the cause of distress at the individual level and ‘unhelpful’ cognitions, does this minimise wider systemic social and economic influences on mental health? PMID:25237497

  10. Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski Jaroslaw

    2002-01-01

    The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.
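
    For reference, a minimal single-objective particle swarm optimizer showing the inertia, personal-best and global-best velocity update these records describe. The coefficients are common textbook defaults rather than the values used in the toolbox or paper above, and the sphere objective is just a stand-in test function.

        import numpy as np

        def sphere(x):
            """Test objective to minimize (global optimum at the origin)."""
            return np.sum(x ** 2, axis=-1)

        def pso(f, dim=5, n_particles=30, iters=200, lo=-5.0, hi=5.0,
                w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, (n_particles, dim))     # positions
            v = np.zeros_like(x)                            # velocities
            pbest, pbest_val = x.copy(), f(x)               # personal bests
            g = pbest[np.argmin(pbest_val)].copy()          # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                # Velocity update: inertia plus pulls toward personal and global bests.
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                val = f(x)
                improved = val < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], val[improved]
                g = pbest[np.argmin(pbest_val)].copy()
            return g, pbest_val.min()

        best_x, best_f = pso(sphere)
        print("best objective found:", round(best_f, 6))

    In a toolbox setting, the same loop would simply call the user-supplied "black-box" objective in place of the sphere function.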

  11. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant; and some others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.

  12. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.

  13. Search-based optimization.

    PubMed

    Wheeler, Ward C

    2003-08-01

    The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis.

  14. Search-based optimization

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis.

  15. Energy optimization system

    DOEpatents

    Zhou, Zhi; de Bedout, Juan Manuel; Kern, John Michael; Biyik, Emrah; Chandra, Ramu Sharat

    2013-01-22

    A system for optimizing customer utility usage in a utility network of customer sites, each having one or more utility devices, where customer site information is communicated between each of the customer sites and an optimization server having software for optimizing customer utility usage over one or more networks, including private and public networks. A customer site model for each of the customer sites is generated based upon the customer site information, and the customer utility usage is optimized based upon the customer site information and the customer site model. The optimization server can be hosted by an external source or within the customer site. In addition, the optimization processing can be partitioned between the customer site and an external source.

  16. Optimization and optimal statistics in neuroscience

    NASA Astrophysics Data System (ADS)

    Brookings, Ted

    Complex systems have certain common properties, with power-law statistics being nearly ubiquitous. Despite this commonality, we show that a variety of mechanisms can be responsible for complexity, illustrated by the example of a lattice on a Cayley tree. Because of this, analysis must probe more deeply than merely looking for power laws; instead, details of the dynamics must be examined. We show how optimality, a frequently overlooked source of complexity, can produce typical features such as power laws, and describe inherent trade-offs in optimal systems, such as performance vs. robustness to rare disturbances. When applied to biological systems such as the nervous system, optimality is particularly appropriate because so many systems have an identifiable purpose. We show that the "grid cells" in rats are extremely efficient in storing position information. Assuming the system to be optimal allows us to describe the number and organization of grid cells. Analyzing systems from an optimality perspective provides insights that permit the description of features that would otherwise be difficult to observe. Likewise, careful analysis of complex systems requires diligent avoidance of assumptions that are unnecessary or unsupported. Attributing unwarranted meaning to ambiguous features, or assuming the existence of a priori constraints, may quickly lead to faulty results. By eschewing unwarranted and unnecessary assumptions about the distribution of neural activity and instead carefully integrating information from EEG and fMRI, we are able to dramatically improve the quality of source localization. Thus, maintaining a watchful eye toward principles of optimality while avoiding unnecessary statistical assumptions is an effective theoretical approach to neuroscience.

  17. Homotopy optimization methods for global optimization.

    SciTech Connect

    Dunlavy, Daniel M.; O'Leary, Dianne P. (University of Maryland, College Park, MD)

    2005-12-01

    We define a new method for global optimization, the Homotopy Optimization Method (HOM). This method differs from previous homotopy and continuation methods in that its aim is to find a minimizer for each of a set of values of the homotopy parameter, rather than to follow a path of minimizers. We define a second method, called HOPE, by allowing HOM to follow an ensemble of points obtained by perturbation of previous ones. We relate this new method to standard methods such as simulated annealing and show under what circumstances it is superior. We present results of extensive numerical experiments demonstrating performance of HOM and HOPE.
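
    A bare-bones rendering of the HOM idea: minimize the homotopy h(x, lambda) = (1 - lambda) * g(x) + lambda * f(x) for an increasing sequence of lambda values, warm-starting each local minimization from the previous minimizer. The easy companion function g, the multimodal target f, and the use of a BFGS local minimizer are assumptions; the ensemble (HOPE) variant is not shown.

        import numpy as np
        from scipy.optimize import minimize

        def f(x):
            """Target objective: a multimodal Rastrigin-style function."""
            return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

        def g(x):
            """Easy companion objective with a single, known minimizer."""
            return np.sum((x - 0.5) ** 2)

        def hom(x0, steps=20):
            x = np.asarray(x0, dtype=float)
            for lam in np.linspace(0.0, 1.0, steps):
                h = lambda z, lam=lam: (1 - lam) * g(z) + lam * f(z)
                # Warm-start each local minimization from the previous minimizer.
                x = minimize(h, x, method="BFGS").x
            return x, f(x)

        x_star, f_star = hom(x0=np.full(4, 0.5))
        print("HOM solution:", np.round(x_star, 3), "  f:", round(f_star, 4))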

  18. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  19. Control and optimization system

    DOEpatents

    Xinsheng, Lou

    2013-02-12

    A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  20. Elastic swimming I: Optimization

    NASA Astrophysics Data System (ADS)

    Lauga, Eric; Yu, Tony; Hosoi, Anette

    2006-03-01

    We consider the problem of swimming at low Reynolds number by oscillating an elastic filament in a viscous liquid, as investigated by Wiggins and Goldstein (1998, Phys Rev Lett). In this first part of the study, we characterize the optimal forcing conditions of the swimming strategy and its optimal geometrical characteristics.

  1. Optimal synchronization in space.

    PubMed

    Brede, Markus

    2010-02-01

    In this Rapid Communication we investigate spatially constrained networks that realize optimal synchronization properties. After arguing that spatial constraints can be imposed by limiting the amount of "wire" available to connect nodes distributed in space, we use numerical optimization methods to construct networks that realize different trade-offs between optimal synchronization and spatial constraints. Over a large range of parameters such optimal networks are found to have a link length distribution characterized by power-law tails P(l) proportional to l^(-alpha), with exponents alpha increasing as the networks become more constrained in space. It is also shown that the optimal networks, which constitute a particular type of small world network, are characterized by the presence of nodes of distinctly larger than average degree around which long-distance links are centered.

  2. Optimal Limited Contingency Planning

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Smith, David E.

    2003-01-01

    For a given problem, the optimal Markov policy over a finite horizon is a conditional plan containing a potentially large number of branches. However, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning. It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a partially observable Markov decision process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases.

  3. Optimal synchronization in space

    NASA Astrophysics Data System (ADS)

    Brede, Markus

    2010-02-01

    In this Rapid Communication we investigate spatially constrained networks that realize optimal synchronization properties. After arguing that spatial constraints can be imposed by limiting the amount of “wire” available to connect nodes distributed in space, we use numerical optimization methods to construct networks that realize different trade-offs between optimal synchronization and spatial constraints. Over a large range of parameters such optimal networks are found to have a link length distribution characterized by power-law tails P(l) ∝ l^(-α), with exponents α increasing as the networks become more constrained in space. It is also shown that the optimal networks, which constitute a particular type of small world network, are characterized by the presence of nodes of distinctly larger than average degree around which long-distance links are centered.

  4. Algorithms for bilevel optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.

  5. Optimal quantum pumps.

    PubMed

    Avron, J E; Elgart, A; Graf, G M; Sadun, L

    2001-12-03

    We study adiabatic quantum pumps on time scales that are short relative to the cycle of the pump. In this regime the pump is characterized by the matrix of energy shift which we introduce as the dual to Wigner's time delay. The energy shift determines the charge transport, the dissipation, the noise, and the entropy production. We prove a general lower bound on dissipation in a quantum channel and define optimal pumps as those that saturate the bound. We give a geometric characterization of optimal pumps and show that they are noiseless and transport integral charge in a cycle. Finally we discuss an example of an optimal pump related to the Hall effect.

  6. Optimal control computer programs

    NASA Technical Reports Server (NTRS)

    Kuo, F.

    1992-01-01

    The solution of the optimal control problem, even with low order dynamical systems, can usually strain the analytical ability of most engineers. The understanding of this subject matter, therefore, would be greatly enhanced if a software package existed that could simulate simple generic problems. Surprisingly, despite a great abundance of commercially available control software, few, if any, address the part of optimal control in its most generic form. The purpose of this paper is, therefore, to present a simple computer program that will perform simulations of optimal control problems that arise from the first necessary condition and the Pontryagin's maximum principle.

  7. Optimal domain decomposition strategies

    NASA Technical Reports Server (NTRS)

    Yoon, Yonghyun; Soni, Bharat K.

    1995-01-01

    The primary interest of the authors is in the area of grid generation, in particular, optimal domain decomposition about realistic configurations. A grid generation procedure with optimal blocking strategies has been developed to generate multi-block grids for a circular-to-rectangular transition duct. The focus of this study is the domain decomposition which optimizes solution algorithm/block compatibility based on geometrical complexities as well as the physical characteristics of flow field. The progress realized in this study is summarized in this paper.

  8. Optimal Linear Control.

    DTIC Science & Technology

    1979-12-01

    Optimal Linear Control. C.A. Harvey, M.G. Safonov, G. Stein, J.C. Doyle, Honeywell Systems & Research Center, 2600 Ridgway Parkway, Minneapolis ... Characterizations of optimal linear controls have been derived, from which guides for selecting the structure of the control system and the weights in

  9. Contingency contractor optimization.

    SciTech Connect

    Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Detry, Richard Joseph; Durfee, Justin David.; Jones, Dean A.; Martin, Nathaniel; Nanco, Alan Stewart; Nozick, Linda Karen

    2013-06-01

    The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during the Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military, Department of Defense (DoD) civilians, and contractors that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower use rules. An additional feature allows the model to understand the variability of the Total Force Mix when there is uncertainty in mission requirements.

  10. Contingency contractor optimization.

    SciTech Connect

    Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Durfee, Justin David.; Jones, Dean A.; Martin, Nathaniel; Detry, Richard Joseph; Nanco, Alan Stewart; Nozick, Linda Karen

    2013-10-01

    The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during the Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military, Department of Defense (DoD) civilians, and contractors that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower use rules. An additional feature allows the model to understand the variability of the Total Force Mix when there is uncertainty in mission requirements.

  11. Rapid Optimization Library

    SciTech Connect

    Denis Rldzal, Drew Kouri

    2014-05-13

    ROL provides interfaces to and implementations of algorithms for gradient-based unconstrained and constrained optimization. ROL can be used to optimize the response of any client simulation code that evaluates scalar-valued response functions. If the client code can provide gradient information for the response function, ROL will take advantage of it, resulting in faster runtimes. ROL's interfaces are matrix-free; in other words, ROL uses only evaluations of scalar-valued and vector-valued functions. ROL can be used to solve optimal design problems and inverse problems based on a variety of simulation software.

  12. RF Gun Optimization Study

    SciTech Connect

    Alicia Hofler; Pavel Evtushenko

    2007-07-03

    Injector gun design is an iterative process where the designer optimizes a few nonlinearly interdependent beam parameters to achieve the required beam quality for a particle accelerator. Few tools exist to automate the optimization process and thoroughly explore the parameter space. The challenging beam requirements of new accelerator applications such as light sources and electron cooling devices drive the development of RF and SRF photo injectors. A genetic algorithm (GA) has been successfully used to optimize DC photo injector designs at Cornell University [1] and Jefferson Lab [2]. We propose to apply GA techniques to the design of RF and SRF gun injectors. In this paper, we report on the initial phase of the study where we model and optimize a system that has been benchmarked with beam measurements and simulation.

  13. Flyby Geometry Optimization Tool

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2007-01-01

    The Flyby Geometry Optimization Tool is a computer program for computing trajectories and trajectory-altering impulsive maneuvers for spacecraft used in radio relay of scientific data to Earth from an exploratory airplane flying in the atmosphere of Mars.

  14. Optimizing influenza vaccine distribution.

    PubMed

    Medlock, Jan; Galvani, Alison P

    2009-09-25

    The criteria to assess public health policies are fundamental to policy optimization. Using a model parametrized with survey-based contact data and mortality data from influenza pandemics, we determined optimal vaccine allocation for five outcome measures: deaths, infections, years of life lost, contingent valuation, and economic costs. We find that optimal vaccination is achieved by prioritization of schoolchildren and adults aged 30 to 39 years. Schoolchildren are most responsible for transmission, and their parents serve as bridges to the rest of the population. Our results indicate that consideration of age-specific transmission dynamics is paramount to the optimal allocation of influenza vaccines. We also found that previous and new recommendations from the U.S. Centers for Disease Control and Prevention both for the novel swine-origin influenza and, particularly, for seasonal influenza, are suboptimal for all outcome measures.

  15. General shape optimization capability

    NASA Technical Reports Server (NTRS)

    Chargin, Mladen K.; Raasch, Ingo; Bruns, Rudolf; Deuermeyer, Dawson

    1991-01-01

    A method is described for calculating shape sensitivities, within MSC/NASTRAN, in a simple manner without resort to external programs. The method uses natural design variables to define the shape changes in a given structure. Once the shape sensitivities are obtained, the shape optimization process is carried out in a manner similar to property optimization processes. The capability of this method is illustrated by two examples: the shape optimization of a cantilever beam with holes, loaded by a point load at the free end (with the shape of the holes and the thickness of the beam selected as the design variables), and the shape optimization of a connecting rod subjected to several different loading and boundary conditions.

  16. Depth Optimization Study

    DOE Data Explorer

    Kawase, Mitsuhiro

    2009-11-22

    The zipped file contains a directory of data and routines used in the NNMREC turbine depth optimization study (Kawase et al., 2011), and calculation results thereof. For further info, please contact Mitsuhiro Kawase at kawase@uw.edu. Reference: Mitsuhiro Kawase, Patricia Beba, and Brian Fabien (2011), Finding an Optimal Placement Depth for a Tidal In-Stream Conversion Device in an Energetic, Baroclinic Tidal Channel, NNMREC Technical Report.

  17. Optimized Bolted Joint

    NASA Technical Reports Server (NTRS)

    Hart-Smith, L. J.; Bunin, B. L.; Watts, D. J.

    1986-01-01

    Computer technique aids joint optimization. Load-sharing between fasteners in multirow bolted composite joints computed by nonlinear-analysis computer program. Input to analysis was load-deflection data from 180 specimens tested as part of program to develop technology of structural joints for advanced transport aircraft. Bolt design optimization technique applicable to major joints in composite materials for primary and secondary structures and generally applicable for metal joints as well.

  18. Optimization Of Simulated Trajectories

    NASA Technical Reports Server (NTRS)

    Brauer, Garry L.; Olson, David W.; Stevenson, Robert

    1989-01-01

    Program To Optimize Simulated Trajectories (POST) provides ability to target and optimize trajectories of point-mass powered or unpowered vehicle operating at or near rotating planet. Used successfully to solve wide variety of problems in mechanics of atmospheric flight and transfer between orbits. Generality of program demonstrated by its capability to simulate up to 900 distinct trajectory phases, including generalized models of planets and vehicles. VAX version written in FORTRAN 77 and CDC version in FORTRAN V.

  19. Modeling using optimization routines

    NASA Technical Reports Server (NTRS)

    Thomas, Theodore

    1995-01-01

    Modeling using mathematical optimization dynamics is a design tool used in magnetic suspension system development. MATLAB (software) is used to calculate minimum cost and other desired constraints. The parameters to be measured are programmed into mathematical equations. MATLAB will calculate answers for each set of inputs; inputs cover the boundary limits of the design. A Magnetic Suspension System using Electromagnets Mounted in a Planar Array is a design system that makes use of optimization modeling.

  20. Introduction: optimization in networks.

    PubMed

    Motter, Adilson E; Toroczkai, Zoltan

    2007-06-01

    The recent surge in the network modeling of complex systems has set the stage for a new era in the study of fundamental and applied aspects of optimization in collective behavior. This Focus Issue presents an extended view of the state of the art in this field and includes articles from a large variety of domains in which optimization manifests itself, including physical, biological, social, and technological networked systems.

  1. Cyclone performance and optimization

    SciTech Connect

    Leith, D.

    1990-09-15

    The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using revised performance theory. This work is important because its successful completion will aid in the technology for combustion of coal in pressurized, fluidized beds. This quarter, an empirical model for predicting pressure drop across a cyclone was developed through a statistical analysis of pressure drop data for 98 cyclone designs. The model is shown to perform better than the pressure drop models of First (1950), Alexander (1949), Barth (1956), Stairmand (1949), and Shepherd-Lapple (1940). This model is used with the efficiency model of Iozia and Leith (1990) to develop an optimization curve which predicts the minimum pressure drop and the dimension ratios of the optimized cyclone for a given aerodynamic cut diameter, d50. The effect of variation in cyclone height, cyclone diameter, and flow on the optimization curve is determined. The optimization results are used to develop a design procedure for optimized cyclones. 37 refs., 10 figs., 4 tabs.

  2. Regularizing portfolio optimization

    NASA Astrophysics Data System (ADS)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
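
    The stabilizing role of the regularizer can be seen even in a simpler setting than the paper's expected-shortfall and support-vector formulation: ridge-regularized minimum-variance weights, where the L2 penalty shrinks the solution toward equal weights. The asset count, the deliberately short sample, and the penalty values below are assumptions, not the paper's setup.

        import numpy as np

        rng = np.random.default_rng(7)
        n_assets, n_obs = 40, 60        # short history, so the covariance estimate is noisy
        returns = rng.normal(0, 0.01, size=(n_obs, n_assets))
        C = np.cov(returns, rowvar=False)

        def min_var_weights(C, eta=0.0):
            """Minimize w'Cw + eta*||w||^2 subject to the budget constraint 1'w = 1."""
            ones = np.ones(C.shape[0])
            M = C + eta * np.eye(C.shape[0])    # L2 regularizer: shrinks toward equal weights
            w = np.linalg.solve(M, ones)
            return w / (ones @ w)

        for eta in (0.0, 1e-4, 1e-2):
            w = min_var_weights(C, eta)
            print(f"eta={eta:7.0e}  max|w|={np.abs(w).max():6.3f}  "
                  f"effective diversification 1/sum(w^2)={1 / np.sum(w ** 2):6.1f}")

    Larger penalties trade a little in-sample optimality for weights that vary far less from sample to sample, which is the diversification pressure described above.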

  3. Training a quantum optimizer

    NASA Astrophysics Data System (ADS)

    Wecker, Dave; Hastings, Matthew B.; Troyer, Matthias

    2016-08-01

    We study a variant of the quantum approximate optimization algorithm [E. Farhi, J. Goldstone, and S. Gutmann, arXiv:1411.4028] with a slightly different parametrization and a different objective: rather than looking for a state which approximately solves an optimization problem, our goal is to find a quantum algorithm that, given an instance of the maximum 2-satisfiability problem (MAX-2-SAT), will produce a state with high overlap with the optimal state. Using a machine learning approach, we chose a "training set" of instances and optimized the parameters to produce a large overlap for the training set. We then tested these optimized parameters on a larger instance set. As a training set, we used a subset of the hard instances studied by Crosson, Farhi, C. Y.-Y. Lin, H.-H. Lin, and P. Shor (CFLLS) (arXiv:1401.7320). When tested on the full set, the parameters that we find produce a significantly larger overlap than the optimized annealing times of CFLLS. Testing on other random instances from 20 to 28 bits continues to show improvement over annealing, with the improvement being most notable on the hardest instances. Further tests on instances of MAX-3-SAT also showed improvement on the hardest instances. This algorithm may be a possible application for near-term quantum computers with limited coherence times.

  4. φq-field theory for portfolio optimization: “fat tails” and nonlinear correlations

    NASA Astrophysics Data System (ADS)

    Sornette, D.; Simonetti, P.; Andersen, J. V.

    2000-08-01

    Physics and finance are both fundamentally based on the theory of random walks (and their generalizations to higher dimensions) and on the collective behavior of large numbers of correlated variables. The archetype exemplifying this situation in finance is the portfolio optimization problem, in which one desires to diversify over a set of possibly dependent assets to optimize the return and minimize the risks. The standard mean-variance solution introduced by Markowitz and its subsequent developments is basically a mean-field Gaussian solution. It has severe limitations for practical applications due to the strongly non-Gaussian structure of distributions and the nonlinear dependence between assets. Here, we present in detail a general analytical characterization of the distribution of returns for a portfolio constituted of assets whose returns are described by an arbitrary joint multivariate distribution. To this end, we introduce a nonlinear transformation that maps the returns onto Gaussian variables whose covariance matrix provides a new measure of dependence between the non-normal returns, generalizing the covariance matrix into a nonlinear covariance matrix. This nonlinear covariance matrix is chiseled to the specific fat-tail structure of the underlying marginal distributions, thus ensuring stability and good conditioning. The portfolio distribution is then obtained as the solution of a mapping to a so-called φq field theory in particle physics, of which we offer an extensive treatment using Feynman diagrammatic techniques and large deviation theory, and which we illustrate in detail for multivariate Weibull distributions. The interaction (non-mean-field) structure in this field theory is a direct consequence of the non-Gaussian nature of the distribution of asset price returns. We find that minimizing the portfolio variance (i.e. the relatively "small" risks) may often increase the large risks, as measured by higher normalized cumulants. Extensive
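
    A minimal sketch of the Gaussian-mapping step described above, using an empirical rank (copula) transform rather than the paper's analytic change of variable for specific marginals such as the Weibull; the covariance of the mapped variables plays the role of the nonlinear covariance matrix, and the fat-tailed toy data below are purely illustrative.

        import numpy as np
        from scipy.stats import norm, rankdata

        def gaussianize(returns):
            """Map each column of returns to a standard normal via its empirical ranks."""
            T, N = returns.shape
            u = np.column_stack([rankdata(returns[:, i]) / (T + 1.0) for i in range(N)])
            return norm.ppf(u)                     # Gaussian-mapped variables

        def nonlinear_covariance(returns):
            """Covariance of the Gaussianized variables: a dependence measure that is
            insensitive to the fat tails of the original marginals."""
            return np.cov(gaussianize(returns), rowvar=False)

        # toy fat-tailed returns: Student-t marginals driven by a common factor
        rng = np.random.default_rng(1)
        factor = rng.standard_t(df=3, size=(500, 1))
        returns = 0.7 * factor + 0.3 * rng.standard_t(df=3, size=(500, 4))
        print(np.round(nonlinear_covariance(returns), 2))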

  5. Shape optimization and CAD

    NASA Technical Reports Server (NTRS)

    Rasmussen, John

    1990-01-01

    Structural optimization has attracted attention since the days of Galileo. Olhoff and Taylor have produced an excellent overview of the classical research within this field. However, interest in structural optimization has increased greatly during the last decade due to the advent of reliable general numerical analysis methods and the computer power necessary to use them efficiently. This has created the possibility of developing general numerical systems for shape optimization. Several authors, e.g., Esping; Braibant & Fleury; Bennet & Botkin; Botkin, Yang, and Bennet; and Stanton, have published practical and successful applications of general optimization systems. Ding and Homlein have produced extensive overviews of available systems. Furthermore, a number of commercial optimization systems based on well-established finite element codes have been introduced. Systems like ANSYS, IDEAS, OASIS, and NISAOPT are widely known examples. In parallel to this development, the technology of computer aided design (CAD) has gained a large influence on the design process of mechanical engineering. CAD technology has already lived through a rapid development driven by the drastically growing capabilities of digital computers. However, the systems of today are still considered to be only the first generation of a long row of computer integrated manufacturing (CIM) systems. These systems to come will offer an integrated environment for design, analysis, and fabrication of products of almost any character. Thus, the CAD system could be regarded as simply a database for geometrical information equipped with a number of tools to help the user in the design process. Among these tools are facilities for structural analysis and optimization as well as present standard CAD features like drawing, modeling, and visualization tools. The state of the art of structural optimization is that a large amount of mathematical and mechanical techniques are

  6. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2016-02-25

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
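
    A minimal sketch of the entropy-ranked kernel decomposition that KECA and OKECA build on: kernel eigen-pairs are sorted by their contribution to a Renyi entropy estimate rather than by variance. The OKECA rotation found by gradient ascent and the maximum-likelihood kernel-width selection are not shown, and the RBF width used below is an arbitrary choice.

        import numpy as np

        def rbf_kernel(X, sigma):
            sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            return np.exp(-sq / (2.0 * sigma ** 2))

        def keca_features(X, sigma, n_components):
            """Rank kernel eigen-pairs by their entropy contribution
            lambda_i * (e_i' 1)^2 and project the data onto the top ones."""
            K = rbf_kernel(X, sigma)
            lam, E = np.linalg.eigh(K)                    # ascending eigenvalues
            ones = np.ones(K.shape[0])
            entropy = lam * (E.T @ ones) ** 2             # per-component entropy terms
            order = np.argsort(entropy)[::-1][:n_components]
            return E[:, order] * np.sqrt(np.abs(lam[order]))   # training projections

        rng = np.random.default_rng(2)
        X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(4, 1, (50, 3))])
        Z = keca_features(X, sigma=2.0, n_components=2)
        print(Z.shape)    # (100, 2)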

  7. Optimization of Heat Exchangers

    SciTech Connect

    Ivan Catton

    2010-10-01

    The objective of this research is to develop tools to design and optimize heat exchangers (HE) and compact heat exchangers (CHE) for intermediate loop heat transport systems found in the very high temperature reactor (VHTR) and other Generation IV designs by addressing heat transfer surface augmentation and conjugate modeling. To optimize a heat exchanger, a fast-running model must be created that allows multiple designs to be compared quickly. To model a heat exchanger, volume averaging theory (VAT) is used. VAT allows the conservation of mass, momentum, and energy to be solved point by point in a three-dimensional computer model of a heat exchanger. The end product of this project is a computer code that can predict an optimal configuration for a heat exchanger given only a few constraints (input fluids, size, cost, etc.). Because the VAT computer code can model heat exchanger characteristics (pumping power, temperatures, and cost) more quickly than traditional CFD or experiment, every geometric parameter can be optimized simultaneously. Using design of experiments (DOE) and genetic algorithms (GA) to optimize the results of the computer code will improve heat exchanger design.
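
    A minimal sketch of driving a fast surrogate model with a genetic algorithm, in the spirit of the DOE/GA step described above. The two geometric parameters and the performance_model function are hypothetical placeholders standing in for the fast VAT code; they are not part of the project.

        import numpy as np

        rng = np.random.default_rng(3)
        BOUNDS = np.array([[1.0, 10.0],     # hypothetical fin pitch [mm]
                           [2.0, 20.0]])    # hypothetical channel height [mm]

        def performance_model(x):
            """Placeholder surrogate: trades pumping power against core size.
            A real study would call the fast VAT model here."""
            pitch, height = x
            pumping = 50.0 / (pitch * height)          # falls with open flow area
            area_penalty = 0.02 * pitch * height       # larger core costs more
            return pumping + area_penalty

        def ga_minimize(f, bounds, pop=40, gens=60, mut=0.1):
            dim = len(bounds)
            X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop, dim))
            for _ in range(gens):
                fit = np.apply_along_axis(f, 1, X)
                i, j = rng.integers(0, pop, (2, pop))             # tournament selection
                parents = np.where((fit[i] < fit[j])[:, None], X[i], X[j])
                mask = rng.random((pop, dim)) < 0.5               # uniform crossover
                children = np.where(mask, parents, parents[rng.permutation(pop)])
                children += mut * rng.normal(size=(pop, dim)) * (bounds[:, 1] - bounds[:, 0]) * 0.05
                X = np.clip(children, bounds[:, 0], bounds[:, 1])
            fit = np.apply_along_axis(f, 1, X)
            return X[np.argmin(fit)], fit.min()

        best_x, best_f = ga_minimize(performance_model, BOUNDS)
        print("best geometry:", np.round(best_x, 2), "objective:", round(best_f, 3))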

  8. On optimal Bayes detection

    SciTech Connect

    Nielsen, P.

    1991-08-12

    The following is intended to be a short introduction to the design and analysis of a Bayes-optimal detector, and Middleton's Locally Optimum Bayes Detector (LOBD). The relationship between these two detectors is clarified. There are three examples of varying complexity included to illustrate the design of these detectors. The final example illustrates the difficulty involved in choosing the bias function for the LOBD. For the examples, the corrupting noise is Gaussian. This allows for a relatively easy solution to the optimal and the LOBD structures. As will be shown, for Bayes detection, the threshold is determined by the costs associated with making a decision and the a priori probabilities of each hypothesis. The threshold of the test cannot be set by simulation. One will notice that the optimal Bayes detector and the LOBD look very much like the Neyman-Pearson optimal and locally optimal detectors respectively. In the latter cases though, the threshold is set by a constraint on the false alarm probability. Note that this allows the threshold to be set by simulation.

  9. On optimal Bayes detection

    SciTech Connect

    Nielsen, P. (Arizona Univ., Tucson, AZ. Dept. of Electrical and Computer Engineering)

    1991-08-12

    The following is intended to be a short introduction to the design and analysis of a Bayes-optimal detector, and Middleton's Locally Optimum Bayes Detector (LOBD). The relationship between these two detectors is clarified. There are three examples of varying complexity included to illustrate the design of these detectors. The final example illustrates the difficulty involved in choosing the bias function for the LOBD. For the examples, the corrupting noise is Gaussian. This allows for a relatively easy solution to the optimal and the LOBD structures. As will be shown, for Bayes detection, the threshold is determined by the costs associated with making a decision and the a priori probabilities of each hypothesis. The threshold of the test cannot be set by simulation. One will notice that the optimal Bayes detector and the LOBD look very much like the Neyman-Pearson optimal and locally optimal detectors respectively. In the latter cases though, the threshold is set by a constraint on the false alarm probability. Note that this allows the threshold to be set by simulation.
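
    A minimal sketch of the Gaussian-noise case discussed above: the Bayes-optimal detector compares the (log) likelihood ratio against a threshold fixed entirely by the decision costs and prior probabilities, so no simulation is needed to set it. The signal, costs, and priors below are hypothetical example values.

        import numpy as np
        from scipy.stats import norm

        def bayes_detector(x, s, sigma, p1, c00=0.0, c11=0.0, c01=1.0, c10=1.0):
            """Decide between H0: x = n and H1: x = s + n, with n ~ N(0, sigma^2 I).

            Decide H1 when the likelihood ratio exceeds the Bayes threshold
            eta = P0*(c10 - c00) / (P1*(c01 - c11)); with uniform costs this
            reduces to the maximum a posteriori rule.
            """
            p0 = 1.0 - p1
            eta = p0 * (c10 - c00) / (p1 * (c01 - c11))
            llr = (norm.logpdf(x, loc=s, scale=sigma)
                   - norm.logpdf(x, loc=0.0, scale=sigma)).sum()
            return llr > np.log(eta)

        # toy usage: weak known signal in Gaussian noise, data generated under H1
        rng = np.random.default_rng(4)
        s = 0.5 * np.ones(20)
        x = s + rng.normal(0, 1, 20)
        print(bayes_detector(x, s, sigma=1.0, p1=0.5))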

  10. Optimal three finger grasps

    NASA Technical Reports Server (NTRS)

    Demmel, J.; Lafferriere, G.

    1989-01-01

    Consideration is given to the problem of optimal force distribution among three point fingers holding a planar object. A scheme that reduces the nonlinear optimization problem to an easily solved generalized eigenvalue problem is proposed. This scheme generalizes and simplifies results of Ji and Roth (1988). The generalizations include all possible geometric arrangements and extensions to three dimensions and to the case of variable coefficients of friction. For the two-dimensional case with constant coefficients of friction, it is proved that, except for some special cases, the optimal grasping forces (in the sense of minimizing the dependence on friction) are those for which the angles with the corresponding normals are all equal (in absolute value).

  11. Fuzzy logic controller optimization

    DOEpatents

    Sepe, Jr., Raymond B; Miller, John Michael

    2004-03-23

    A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.

  12. Optimally combined confidence limits

    NASA Astrophysics Data System (ADS)

    Janot, P.; Le Diberder, F.

    1998-02-01

    An analytical and optimal procedure to combine statistically independent sets of confidence levels on a quantity is presented. This procedure does not impose any constraint on the methods followed by each analysis to derive its own limit. It incorporates the a priori statistical power of each of the analyses to be combined, in order to optimize the overall sensitivity. It can, in particular, be used to combine the mass limits obtained by several analyses searching for the Higgs boson in different decay channels, with different selection efficiencies, mass resolution and expected background. It can also be used to combine the mass limits obtained by several experiments (e.g. ALEPH, DELPHI, L3 and OPAL, at LEP 2) independently of the method followed by each of these experiments to derive their own limit. A method to derive the limit set by one analysis is also presented, along with an unbiased prescription to optimize the expected mass limit in the no-signal-hypothesis.

  13. Discrete Variational Optimal Control

    NASA Astrophysics Data System (ADS)

    Jiménez, Fernando; Kobilarov, Marin; Martín de Diego, David

    2013-06-01

    This paper develops numerical methods for optimal control of mechanical systems in the Lagrangian setting. It extends the theory of discrete mechanics to enable the solutions of optimal control problems through the discretization of variational principles. The key point is to solve the optimal control problem as a variational integrator of a specially constructed higher dimensional system. The developed framework applies to systems on tangent bundles, Lie groups, and underactuated and nonholonomic systems with symmetries, and can approximate either smooth or discontinuous control inputs. The resulting methods inherit the preservation properties of variational integrators and result in numerically robust and easily implementable algorithms. Several theoretical examples and a practical one, the control of an underwater vehicle, illustrate the application of the proposed approach.

  14. Dose optimization tool

    NASA Astrophysics Data System (ADS)

    Amir, Ornit; Braunstein, David; Altman, Ami

    2003-05-01

    A dose optimization tool for CT scanners is presented that uses patient raw data to calculate noise. The tool uses a single patient image which is modified to simulate various lower doses. Dose optimization is carried out without extra measurements by interactively visualizing the dose-induced changes in this image. The tool can be used either off-line on existing image(s) or, as a prerequisite for dose optimization for the specific patient, during the patient's clinical study. The low-dose simulation algorithm consists of reconstructing two images from a single measurement and using those images to create the various lower-dose images. This algorithm enables fast simulation of various low-dose (mAs) images on a real patient image.

  15. Reverse Osmosis Optimization

    SciTech Connect

    McMordie Stoughton, Kate; Duan, Xiaoli; Wendel, Emily M.

    2013-08-26

    This technology evaluation was prepared by Pacific Northwest National Laboratory on behalf of the U.S. Department of Energy’s Federal Energy Management Program (FEMP). The technology evaluation assesses techniques for optimizing reverse osmosis (RO) systems to increase RO system performance and water efficiency. This evaluation provides a general description of RO systems, the influence of RO systems on water use, and key areas where RO systems can be optimized to reduce water and energy consumption. The evaluation is intended to help facility managers at Federal sites understand the basic concepts of the RO process and system optimization options, enabling them to make informed decisions during the system design process for either new projects or recommissioning of existing equipment. This evaluation is focused on commercial-sized RO systems generally treating more than 80 gallons per hour.

  16. Reverse Osmosis Optimization

    SciTech Connect

    2013-08-01

    This technology evaluation was prepared by Pacific Northwest National Laboratory on behalf of the U.S. Department of Energy’s Federal Energy Management Program (FEMP). The technology evaluation assesses techniques for optimizing reverse osmosis (RO) systems to increase RO system performance and water efficiency. This evaluation provides a general description of RO systems, the influence of RO systems on water use, and key areas where RO systems can be optimized to reduce water and energy consumption. The evaluation is intended to help facility managers at Federal sites understand the basic concepts of the RO process and system optimization options, enabling them to make informed decisions during the system design process for either new projects or recommissioning of existing equipment. This evaluation is focused on commercial-sized RO systems generally treating more than 80 gallons per hour.

  17. Optimal Composite Curing System

    NASA Astrophysics Data System (ADS)

    Handel, Paul; Guerin, Daniel

    The Optimal Composite Curing System (OCCS) is an intelligent control system which incorporates heat transfer and resin kinetic models coupled with expert knowledge. It controls the curing of epoxy impregnated composites, preventing part overheating while maintaining maximum cure heatup rate. This results in a significant reduction in total cure time over standard methods. The system uses a cure process model, operating in real-time, to determine optimal cure profiles for tool/part configurations of varying thermal characteristics. These profiles indicate the heating and cooling necessary to ensure a complete cure of each part in the autoclave in the minimum amount of time. The system coordinates these profiles to determine an optimal cure profile for a batch of thermally variant parts. Using process specified rules for proper autoclave operation, OCCS automatically controls the cure process, implementing the prescribed cure while monitoring the operation of the autoclave equipment.

  18. Optimal symmetric flight studies

    NASA Technical Reports Server (NTRS)

    Weston, A. R.; Menon, P. K. A.; Bilimoria, K. D.; Cliff, E. M.; Kelley, H. J.

    1985-01-01

    Several topics in optimal symmetric flight of airbreathing vehicles are examined. In one study, an approximation scheme designed for onboard real-time energy management of climb-dash is developed and calculations for a high-performance aircraft presented. In another, a vehicle model intermediate in complexity between energy and point-mass models is explored and some quirks in optimal flight characteristics peculiar to the model uncovered. In yet another study, energy-modelling procedures are re-examined with a view to stretching the range of validity of zeroth-order approximation by special choice of state variables. In a final study, time-fuel tradeoffs in cruise-dash are examined for the consequences of nonconvexities appearing in the classical steady cruise-dash model. Two appendices provide retrospective looks at two early publications on energy modelling and related optimal control theory.

  19. Optimality in neuromuscular systems.

    PubMed

    Theodorou, Evangelos; Valero-Cuevas, Francisco J

    2010-01-01

    We provide an overview of optimal control methods applied to nonlinear neuromuscular systems and discuss their limitations. Moreover, we extend current optimal control methods to neuromuscular models with realistically numerous musculotendons, as most prior work is limited to torque-driven systems. Recent work on computational motor control has explored the use of control theory and estimation as a conceptual tool to understand the underlying computational principles of neuromuscular systems. After all, successful biological systems regularly meet conditions for stability, robustness, and performance for multiple classes of complex tasks. Among a variety of control theory frameworks proposed to explain this, stochastic optimal control has become a dominant framework, to the point of being a standard computational technique for reproducing kinematic trajectories of reaching movements (see [12]). In particular, we demonstrate the application of optimal control to a neuromuscular model of the index finger with all seven musculotendons producing a tapping task. Our simulations include 1) a muscle model that includes force-length and force-velocity characteristics; 2) an anatomically plausible biomechanical model of the index finger that includes a tendinous network for the extensor mechanism; and 3) a contact model based on a nonlinear spring-damper attached at the end effector of the index finger. We demonstrate that it is feasible to apply optimal control to systems with realistically large state vectors and conclude that, while optimal control is an adequate formalism for creating computational models of neuromusculoskeletal systems, important challenges and limitations remain to be considered and overcome, such as contact transitions, the curse of dimensionality, and constraints on states and controls.

  20. Optimized solar module design

    NASA Technical Reports Server (NTRS)

    Santala, T.; Sabol, R.; Carbajal, B. G.

    1978-01-01

    The minimum cost per unit of power output from flat plate solar modules can most likely be achieved through efficient packaging of higher efficiency solar cells. This paper outlines a module optimization method which is broadly applicable, and illustrates the potential results achievable from a specific high efficiency tandem junction (TJ) cell. A mathematical model is used to assess the impact of various factors influencing the encapsulated cell and packing efficiency. The optimization of the packing efficiency is demonstrated. The effect of encapsulated cell and packing efficiency on the module add-on cost is shown in a nomograph form.

  1. Optimal Time Transfer

    DTIC Science & Technology

    2009-11-01

    McGraw-Hill, New York). [16] J. S. Meditch, 1967, "Orthogonal Projection and Discrete Optimal Linear Smoothing," SIAM Journal on Control and Optimization, 5, 74-89. [17] J. S. Meditch, 1973, "A Survey of Data Smoothing for Linear and Nonlinear Dynamic Systems," Automatica, 9, 151-162. ... smoothing window forward of each fixed epoch. The length of the smoothing window is bounded above by 5 hours, the maximum time-length of a ground

  2. Multidisciplinary design and optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1992-01-01

    Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. This paper outlines techniques for computing these influences as system design derivatives useful to both judgmental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering optimizations and incorporate their design tools.

  3. Terascale Optimal PDE Simulations

    SciTech Connect

    David Keyes

    2009-07-28

    The Terascale Optimal PDE Solvers (TOPS) Integrated Software Infrastructure Center (ISIC) was created to develop and implement algorithms and support scientific investigations performed by DOE-sponsored researchers. These simulations often involve the solution of partial differential equations (PDEs) on terascale computers. The TOPS Center researched, developed and deployed an integrated toolkit of open-source, optimal complexity solvers for the nonlinear partial differential equations that arise in many DOE application areas, including fusion, accelerator design, global climate change and reactive chemistry. The algorithms created as part of this project were also designed to reduce current computational bottlenecks by orders of magnitude on terascale computers, enabling scientific simulation on a scale heretofore impossible.

  4. Neural wiring optimization.

    PubMed

    Cherniak, Christopher

    2012-01-01

    Combinatorial network optimization theory concerns minimization of connection costs among interconnected components in systems such as electronic circuits. As an organization principle, similar wiring minimization can be observed at various levels of nervous systems, invertebrate and vertebrate, including primate, from placement of the entire brain in the body down to the subcellular level of neuron arbor geometry. In some cases, the minimization appears either perfect, or as good as can be detected with current methods. One question such best-of-all-possible-brains results raise is, what is the map of such optimization, does it have a distinct neural domain?

  5. Optimal Quantum Phase Estimation

    SciTech Connect

    Dorner, U.; Smith, B. J.; Lundeen, J. S.; Walmsley, I. A.; Demkowicz-Dobrzanski, R.; Banaszek, K.; Wasilewski, W.

    2009-01-30

    By using a systematic optimization approach, we determine quantum states of light with definite photon number leading to the best possible precision in optical two-mode interferometry. Our treatment takes into account the experimentally relevant situation of photon losses. Our results thus reveal the benchmark for precision in optical interferometry. Although this boundary is generally worse than the Heisenberg limit, we show that the obtained precision beats the standard quantum limit, thus leading to a significant improvement compared to classical interferometers. We furthermore discuss alternative states and strategies to the optimized states which are easier to generate at the cost of only slightly lower precision.

  6. Optimal exploration systems

    NASA Astrophysics Data System (ADS)

    Klesh, Andrew T.

    This dissertation studies optimal exploration, defined as the collection of information about given objects of interest by a mobile agent (the explorer) using imperfect sensors. The key aspects of exploration are kinematics (which determine how the explorer moves in response to steering commands), energetics (which determine how much energy is consumed by motion and maneuvers), informatics (which determine the rate at which information is collected) and estimation (which determines the states of the objects). These aspects are coupled by the steering decisions of the explorer. We seek to improve exploration by finding trade-offs amongst these couplings and the components of exploration: the Mission, the Path and the Agent. A comprehensive model of exploration is presented that, on one hand, accounts for these couplings and on the other hand is simple enough to allow analysis. This model is utilized to pose and solve several exploration problems where an objective function is to be minimized. Specific functions to be considered are the mission duration and the total energy. These exploration problems are formulated as optimal control problems and necessary conditions for optimality are obtained in the form of two-point boundary value problems. An analysis of these problems reveals characteristics of optimal exploration paths. Several regimes are identified for the optimal paths including the Watchtower, Solar and Drag regime, and several non-dimensional parameters are derived that determine the appropriate regime of travel. The so-called Power Ratio is shown to predict the qualitative features of the optimal paths, provide a metric to evaluate an aircraft's design and determine an aircraft's capability for flying perpetually. Optimal exploration system drivers are identified that provide perspective as to the importance of these various regimes of flight. A bank-to-turn solar-powered aircraft flying at constant altitude on Mars is used as a specific platform for

  7. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  8. Numerical Optimization of Synergetic Maneuvers

    DTIC Science & Technology

    1994-06-01

    optimality conditions is termed the Karush-Kuhn-Tucker (KKT) conditions. These necessary ... CONVERGENCE ... 1. Summary of Convexity and Optimality Conditions ... point x. A point x* is called the optimal solution to the problem. B. ALGORITHMS AND CONVERGENCE 1. Summary of Convexity and Optimality Conditions

  9. Orbital-Maneuver-Sequence Optimization

    DTIC Science & Technology

    1985-12-01

    optimization computer program and applied it to the generation of optimal co-orbital attack-maneuver sequences and to the generation of optimal evasions... maneuver-sequence-optimization computer programs can be improved by a general restructuring and streamlining and the addition of various features. It is... believed that with further development and systematic testing the programs have potential for real-time generation of optimal maneuver sequences in an

  10. Numerical-Optimization Program

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garret N.

    1991-01-01

    Automated Design Synthesis (ADS) computer program is general-purpose numerical-optimization program for design engineering. Provides wide range of options for solution of constrained and unconstrained function minimization problems. Suitable for such applications as minimum-weight design. Written in FORTRAN 77.

  11. Optimizing Computer Technology Integration

    ERIC Educational Resources Information Center

    Dillon-Marable, Elizabeth; Valentine, Thomas

    2006-01-01

    The purpose of this study was to better understand what optimal computer technology integration looks like in adult basic skills education (ABSE). One question guided the research: How is computer technology integration best conceptualized and measured? The study used the Delphi method to map the construct of computer technology integration and…

  12. Optimal Facility-Location.

    PubMed

    Goldman, A J

    2006-01-01

    Dr. Christoph Witzgall, the honoree of this Symposium, can count among his many contributions to applied mathematics and mathematical operations research a body of widely-recognized work on the optimal location of facilities. The present paper offers to non-specialists a sketch of that field and its evolution, with emphasis on areas most closely related to Witzgall's research at NBS/NIST.

  13. Fourier Series Optimization Opportunity

    ERIC Educational Resources Information Center

    Winkel, Brian

    2008-01-01

    This note discusses the introduction of Fourier series as an immediate application of optimization of a function of more than one variable. Specifically, it is shown how the study of Fourier series can be motivated to enrich a multivariable calculus class. This is done through discovery learning and use of technology wherein students build the…

  14. Optimization in Cardiovascular Modeling

    NASA Astrophysics Data System (ADS)

    Marsden, Alison L.

    2014-01-01

    Fluid mechanics plays a key role in the development, progression, and treatment of cardiovascular disease. Advances in imaging methods and patient-specific modeling now reveal increasingly detailed information about blood flow patterns in health and disease. Building on these tools, there is now an opportunity to couple blood flow simulation with optimization algorithms to improve the design of surgeries and devices, incorporating more information about the flow physics in the design process to augment current medical knowledge. In doing so, a major challenge is the need for efficient optimization tools that are appropriate for unsteady fluid mechanics problems, particularly for the optimization of complex patient-specific models in the presence of uncertainty. This article reviews the state of the art in optimization tools for virtual surgery, device design, and model parameter identification in cardiovascular flow and mechanobiology applications. In particular, it reviews trade-offs between traditional gradient-based methods and derivative-free approaches, as well as the need to incorporate uncertainties. Key future challenges are outlined, which extend to the incorporation of biological response and the customization of surgeries and devices for individual patients.

  15. Optimization in Ecology

    ERIC Educational Resources Information Center

    Cody, Martin L.

    1974-01-01

    Discusses the optimality of natural selection, ways of testing for optimum solutions to problems of time - or energy-allocation in nature, optimum patterns in spatial distribution and diet breadth, and how best to travel over a feeding area so that food intake is maximized. (JR)

  16. Optimal Ski Jump

    ERIC Educational Resources Information Center

    Rebilas, Krzysztof

    2013-01-01

    Consider a skier who goes down a takeoff ramp, attains a speed "V", and jumps, attempting to land as far as possible down the hill below (Fig. 1). At the moment of takeoff the angle between the skier's velocity and the horizontal is [alpha]. What is the optimal angle [alpha] that makes the jump the longest possible for the fixed magnitude of the…

  17. Optimizing Conferencing Freeware

    ERIC Educational Resources Information Center

    Baggaley, Jon; Klaas, Jim; Wark, Norine; Depow, Jim

    2005-01-01

    The increasing range of options provided by two popular conferencing freeware products, "Yahoo Messenger" and "MSN Messenger," are discussed. Each tool contains features designed primarily for entertainment purposes, which can be customized for use in online education. This report provides suggestions for optimizing the educational potential of…

  18. Optimal Periodic Control Theory.

    DTIC Science & Technology

    1980-08-01

    are control variables. For many aircraft, this energy state space produces a hodograph which is not convex. The physical explanation for this is that... convexity in the hodograph and preserve an "optimal" steady-state cruise, Schultz and Zagalsky [6] revised the energy state model so that altitude becomes a

  19. Toward Optimal Transport Networks

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Kincaid, Rex K.; Vargo, Erik P.

    2008-01-01

    Strictly evolutionary approaches to improving the air transport system, a highly complex network of interacting systems, no longer suffice in the face of demand that is projected to double or triple in the near future. Thus evolutionary approaches should be augmented with active design methods. The ability to actively design, optimize and control a system presupposes the existence of predictive modeling and reasonably well-defined functional dependences between the controllable variables of the system and objective and constraint functions for optimization. Following recent advances in the studies of the effects of network topology structure on dynamics, we investigate the performance of dynamic processes on transport networks as a function of the first nontrivial eigenvalue of the network's Laplacian, which, in turn, is a function of the network's connectivity and modularity. The last two characteristics can be controlled and tuned via optimization. We consider design optimization problem formulations. We have developed a flexible simulation of network topology coupled with flows on the network for use as a platform for computational experiments.
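
    A minimal sketch of the quantity the study ties performance to, the first nontrivial (second-smallest) eigenvalue of the network Laplacian, computed here with numpy for a small hypothetical undirected network.

        import numpy as np

        def algebraic_connectivity(adjacency):
            """Second-smallest eigenvalue of the graph Laplacian L = D - A
            (the first nontrivial eigenvalue for a connected network)."""
            A = np.asarray(adjacency, dtype=float)
            L = np.diag(A.sum(axis=1)) - A
            eigvals = np.sort(np.linalg.eigvalsh(L))
            return eigvals[1]

        # toy 5-node ring plus one chord (undirected, unweighted)
        A = np.zeros((5, 5))
        edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
        for i, j in edges:
            A[i, j] = A[j, i] = 1
        print(round(algebraic_connectivity(A), 3))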

  20. Is Optimism Real?

    ERIC Educational Resources Information Center

    Simmons, Joseph P.; Massey, Cade

    2012-01-01

    Is optimism real, or are optimistic forecasts just cheap talk? To help answer this question, we investigated whether optimistic predictions persist in the face of large incentives to be accurate. We asked National Football League football fans to predict the winner of a single game. Roughly half (the partisans) predicted a game involving their…

  1. Optimization of digital designs

    NASA Technical Reports Server (NTRS)

    Whitaker, Sterling R. (Inventor); Miles, Lowell H. (Inventor)

    2009-01-01

    An application specific integrated circuit is optimized by translating a first representation of its digital design to a second representation. The second representation includes multiple syntactic expressions that admit a representation of a higher-order function of base Boolean values. The syntactic expressions are manipulated to form a third representation of the digital design.

  2. Optimal GENCO bidding strategy

    NASA Astrophysics Data System (ADS)

    Gao, Feng

    Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example: Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, as well as Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation. The stochastic property makes evolutionary algorithms robust and adaptive enough to solve a non-convex optimization problem. This research implements GA, EP, and PS algorithms for economic dispatch with Combined Cycle units, and makes a comparison with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants the ability to build optimal bidding strategies based on Microeconomics analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model, which can be applied to a multiple period situation. The equilibrium condition using discrete time optimal control is then developed for fuel resource constraints. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators to understand market performance and make better decisions. A traditional optimization model may not be enough to consider the distributed
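
    A minimal sketch of the linear economic-dispatch subproblem that sits inside the approaches listed above, solved with scipy. The unit data are hypothetical, and the combined-cycle mode transitions and commitment binaries that make the real problem non-convex are omitted.

        import numpy as np
        from scipy.optimize import linprog

        # hypothetical units: marginal cost [$/MWh], minimum and maximum output [MW]
        cost = np.array([18.0, 25.0, 40.0])
        pmin = np.array([50.0, 30.0, 0.0])
        pmax = np.array([300.0, 200.0, 150.0])
        demand = 420.0

        # minimize cost' p  subject to  sum(p) = demand,  pmin <= p <= pmax
        res = linprog(c=cost,
                      A_eq=np.ones((1, 3)), b_eq=[demand],
                      bounds=list(zip(pmin, pmax)),
                      method="highs")
        print("dispatch [MW]:", np.round(res.x, 1), "cost [$/h]:", round(res.fun, 1))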

  3. An optimal structural design algorithm using optimality criteria

    NASA Technical Reports Server (NTRS)

    Taylor, J. E.; Rossow, M. P.

    1976-01-01

    An algorithm for optimal design is given which incorporates several of the desirable features of both mathematical programming and optimality criteria, while avoiding some of the undesirable features. The algorithm proceeds by approaching the optimal solution through the solutions of an associated set of constrained optimal design problems. The solutions of the constrained problems are recognized at each stage through the application of optimality criteria based on energy concepts. Two examples are described in which the optimal member size and layout of a truss is predicted, given the joint locations and loads.

  4. Optimization of Anguilliform Swimming

    NASA Astrophysics Data System (ADS)

    Kern, Stefan; Koumoutsakos, Petros

    2006-03-01

    Anguilliform swimming is investigated by 3D computer simulations coupling the dynamics of an undulating eel-like body with the surrounding viscous fluid flow. The body is self-propelled and, in contrast to previous computational studies of swimming, the motion pattern is not prescribed a priori but obtained by an evolutionary optimization procedure. Two different objective functions are used to characterize swimming efficiency and maximum swimming velocity with limited input power. The optimal motion patterns found represent two distinct swimming modes, corresponding to migration and burst swimming, respectively. The results support the hypothesis from observations of real animals that eels can modify their motion pattern to generate wakes that reflect their propulsive mode. Unsteady drag and thrust production of the swimming body are thoroughly analyzed by recording the instantaneous fluid forces acting on partitions of the body surface.

  5. Optimal Behavioral Hierarchy

    PubMed Central

    Córdova, Natalia; Yee, Debbie; Barto, Andrew G.; Niv, Yael; Botvinick, Matthew M.

    2014-01-01

    Human behavior has long been recognized to display hierarchical structure: actions fit together into subtasks, which cohere into extended goal-directed activities. Arranging actions hierarchically has well established benefits, allowing behaviors to be represented efficiently by the brain, and allowing solutions to new tasks to be discovered easily. However, these payoffs depend on the particular way in which actions are organized into a hierarchy, the specific way in which tasks are carved up into subtasks. We provide a mathematical account for what makes some hierarchies better than others, an account that allows an optimal hierarchy to be identified for any set of tasks. We then present results from four behavioral experiments, suggesting that human learners spontaneously discover optimal action hierarchies. PMID:25122479

  6. Cyclone performance and optimization

    SciTech Connect

    Leith, D.

    1990-06-15

    The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using revised performance theory. This work is important because its successful completion will aid in the technology for combustion of coal in pressurized, fluidized beds. During the past quarter, we have nearly completed modeling work that employs the flow field measurements made during the past six months. In addition, we have begun final work using the results of this project to develop improved design methods for cyclones. This work involves optimization using the Iozia-Leith efficiency model and the Dirgo pressure drop model. This work will be completed this summer. 9 figs.

  7. Optimal Electric Utility Expansion

    SciTech Connect

    1989-10-10

    SAGE-WASP is designed to find the optimal generation expansion policy for an electrical utility system. New units can be automatically selected from a user-supplied list of expansion candidates which can include hydroelectric and pumped storage projects. The existing system is modeled. The calculational procedure takes into account user restrictions to limit generation configurations to an area of economic interest. The optimization program reports whether the restrictions acted as a constraint on the solution. All expansion configurations considered are required to pass a user supplied reliability criterion. The discount rate and escalation rate are treated separately for each expansion candidate and for each fuel type. All expenditures are separated into local and foreign accounts, and a weighting factor can be applied to foreign expenditures.

  8. Heliostat cost optimization study

    NASA Astrophysics Data System (ADS)

    von Reeken, Finn; Weinrebe, Gerhard; Keck, Thomas; Balz, Markus

    2016-05-01

    This paper presents a methodology for a heliostat cost optimization study. First different variants of small, medium sized and large heliostats are designed. Then the respective costs, tracking and optical quality are determined. For the calculation of optical quality a structural model of the heliostat is programmed and analyzed using finite element software. The costs are determined based on inquiries and from experience with similar structures. Eventually the levelised electricity costs for a reference power tower plant are calculated. Before each annual simulation run the heliostat field is optimized. Calculated LCOEs are then used to identify the most suitable option(s). Finally, the conclusions and findings of this extensive cost study are used to define the concept of a new cost-efficient heliostat called `Stellio'.
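
    A minimal sketch of the levelised-cost comparison used to rank heliostat variants, assuming a simple capital-recovery-factor formulation of LCOE; all input numbers below are hypothetical placeholders, not values from the study.

        def lcoe(capex, annual_opex, annual_energy_mwh, discount_rate, lifetime_years):
            """Levelised cost of electricity using a capital recovery factor."""
            crf = (discount_rate * (1 + discount_rate) ** lifetime_years /
                   ((1 + discount_rate) ** lifetime_years - 1))
            return (capex * crf + annual_opex) / annual_energy_mwh   # $/MWh

        # compare two hypothetical heliostat field variants for the same tower plant
        variants = {
            "small heliostats": dict(capex=180e6, annual_opex=3.5e6, annual_energy_mwh=480e3),
            "large heliostats": dict(capex=165e6, annual_opex=4.0e6, annual_energy_mwh=470e3),
        }
        for name, v in variants.items():
            print(name, round(lcoe(discount_rate=0.07, lifetime_years=25, **v), 1), "$/MWh")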

  9. EAF Management Optimization

    NASA Astrophysics Data System (ADS)

    Costoiu, M.; Ioana, A.; Semenescu, A.; Marcu, D.

    2016-11-01

    The article presents the main advantages of the electric arc furnace (EAF): it makes a great contribution to reintroducing significant quantities of reusable metallic materials into the economic circuit, it constitutes an important part of Primary Materials and Energy Recovery (PMER), and it offers good productivity, a good quality/price ratio, and the possibility of developing a wide variety of classes and types of steels, including special and high-alloy steels. The paper also presents some important developments of the electric arc furnace: the vacuum electric arc furnace and artificial intelligence expert systems for pollution control in steelworks. Another important aspect presented in the article is an original block diagram for optimizing the EAF management system. This scheme is based on an original objective (criterion) function represented by the price/quality ratio. The article also presents an original block diagram for optimizing the control system of the EAF. Many principles were used in designing this concept of the EAF management system.

  10. Combinatorial optimization games

    SciTech Connect

    Deng, X.; Ibaraki, Toshihide; Nagamochi, Hiroshi

    1997-06-01

    We introduce a general integer programming formulation for a class of combinatorial optimization games, which immediately allows us to improve the algorithmic result for finding imputations in the core (an important solution concept in cooperative game theory) of the network flow game on simple networks by Kalai and Zemel. An interesting result is a general theorem that the core for this class of games is nonempty if and only if a related linear program has an integer optimal solution. We study the properties for this mathematical condition to hold for several interesting problems, and apply them to resolve algorithmic and complexity issues for their cores along the following lines: decide whether the core is empty; if the core is nonempty, find an imputation in the core; given an imputation x, test whether x is in the core. We also explore the properties of totally balanced games in this succinct formulation of cooperative games.

  11. Trajectory Optimization: OTIS 4

    NASA Technical Reports Server (NTRS)

    Riehl, John P.; Sjauw, Waldy K.; Falck, Robert D.; Paris, Stephen W.

    2010-01-01

    The latest release of the Optimal Trajectories by Implicit Simulation (OTIS4) allows users to simulate and optimize aerospace vehicle trajectories. With OTIS4, one can seamlessly generate optimal trajectories and parametric vehicle designs simultaneously. New features also allow OTIS4 to solve non-aerospace continuous time optimal control problems. The inputs and outputs of OTIS4 have been updated extensively from previous versions. Inputs now make use of object-oriented constructs, including one called a metastring. Metastrings use a greatly improved calculator and common nomenclature to reduce the user's workload. They allow for more flexibility in specifying vehicle physical models, boundary conditions, and path constraints. The OTIS4 calculator supports common mathematical functions, Boolean operations, and conditional statements. This allows users to define their own variables for use as outputs, constraints, or objective functions. The user-defined outputs can directly interface with other programs, such as spreadsheets, plotting packages, and visualization programs. Internally, OTIS4 has more explicit and implicit integration procedures, including high-order collocation methods, the pseudo-spectral method, and several variations of multiple shooting. Users may switch easily between the various methods. Several unique numerical techniques, such as automated variable scaling and implicit integration grid refinement, support the integration methods. OTIS4 is also significantly more user friendly than previous versions. The installation process is nearly identical on various platforms, including Microsoft Windows, Apple OS X, and Linux operating systems. Cross-platform scripts also help make the execution of OTIS and post-processing of data easier. OTIS4 is supplied free by NASA and is subject to ITAR (International Traffic in Arms Regulations) restrictions. Users must have a Fortran compiler, and a Python interpreter is highly recommended.

  12. Singularity in structural optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Guptill, J. D.; Berke, L.

    1993-01-01

    The conditions under which global and local singularities may arise in structural optimization are examined. Examples of these singularities are presented, and a framework is given within which the singularities can be recognized. It is shown, in particular, that singularities can be identified through the analysis of stress-displacement relations together with compatibility conditions or the displacement-stress relations derived by the integrated force method of structural analysis. Methods of eliminating the effects of singularities are suggested and illustrated numerically.

  13. HOMER® Micropower Optimization Model

    SciTech Connect

    Lilienthal, P.

    2005-01-01

    NREL has developed the HOMER micropower optimization model. The model can analyze all of the available small power technologies individually and in hybrid configurations to identify least-cost solutions to energy requirements. This capability is valuable to a diverse set of energy professionals and applications. NREL has actively supported its growing user base and developed training programs around the model. These activities are helping to grow the global market for solar technologies.

  14. Center for Parallel Optimization

    DTIC Science & Technology

    1993-09-30

    34, University of Wisconsin Computer Sciences Technical Report #998, 1991, to appear, Linear Algebra and Its Applications. 29. K. P. Bennett & O. L... Robust linear programming discrimination of two linearly inseparable sets, Optimization Methods and Software 1, 1992, 23-34. 4. M. C. Ferris and O. L... variational inequality problems. Linear Algebra and Its Applications 174, 1992, 153-164. 9. O. L. Mangasarian and R. R. Meyer, Proceedings of the

  15. Optimal Centroid Position Estimation

    SciTech Connect

    Candy, J V; McClay, W A; Awwal, A S; Ferguson, S W

    2004-07-23

    The alignment of high energy laser beams for potential fusion experiments demands high precision and accuracy from the underlying positioning algorithms. This paper discusses the feasibility of employing online optimal position estimators in the form of model-based processors to achieve the desired results. Here we discuss the modeling, development, implementation, and processing of model-based processors applied to both simulated and actual beam line data.

  16. Optimized lithium oxyhalide cells

    NASA Astrophysics Data System (ADS)

    Kilroy, W. P.; Schlaikjer, C.; Polsonetti, P.; Jones, M.

    1993-04-01

    Lithium thionyl chloride cells were optimized with respect to electrolyte and carbon cathode composition. Wound 'C-size' cells with various mixtures of Chevron acetylene black with Ketjenblack EC-300J and containing various concentrations of LiAlCl4 and derivatives, LiGaCl4, and mixtures of SOCl2 and SO2Cl2 were evaluated as a function of discharge rate, temperature, and storage condition.

  17. Fault Tolerant Optimal Control.

    DTIC Science & Technology

    1982-08-01

    ... since the cost to be minimized in (D.2.3) increases with x_k (for fixed x_sk). ... 69. D. D. Sworder and L. L. Choi (1976), "Stationary Cost Densities for Optimally Controlled Stochastic Systems," IEEE Trans. Automatic Control, 22, pp. 236-239.

  18. Optimize acid gas removal

    SciTech Connect

    Nicholas, D.M.; Wilkins, J.T.

    1983-09-01

    Innovative design of physical solvent plants for acid gas removal can materially reduce both installation and operating costs. A review of the design considerations for one physical solvent process (Selexol) points to numerous arrangements for potential improvement. These are evaluated for a specific case in four combinations that identify an optimum for the case in question but, more importantly, illustrate the mechanism for applying such optimization elsewhere.

  19. Optimal shutdown management

    NASA Astrophysics Data System (ADS)

    Bottasso, C. L.; Croce, A.; Riboldi, C. E. D.

    2014-06-01

    The paper presents a novel approach for the synthesis of the open-loop pitch profile during emergency shutdowns. The problem is of interest in the design of wind turbines, as such maneuvers often generate design-driving loads on some of the machine components. The pitch profile synthesis is formulated as a constrained optimal control problem, solved numerically using a direct single shooting approach. A cost function expressing a compromise between load reduction and rotor overspeed is minimized with respect to the unknown blade pitch profile. Constraints may include a load reduction that must not exceed the next dominating loads, a maximum rotor speed that must not be exceeded, and a maximum achievable blade pitch rate. The cost function and constraints are computed over a possibly large number of operating conditions, defined so as to cover as well as possible the operating situations encountered in the lifetime of the machine. All such conditions are simulated using a high-fidelity aeroservoelastic model of the wind turbine, ensuring the accuracy of the evaluation of all relevant parameters. The paper demonstrates the capabilities of the novel proposed formulation by optimizing the pitch profile of a multi-MW wind turbine. Results show that the procedure can reliably identify optimal pitch profiles that reduce design-driving loads, in a fully automated way.

  20. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
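
    A minimal sketch of the classical exterior penalty function idea that BIGDOT modernizes, shown on a toy two-variable problem with scipy; this only illustrates the mechanics of an increasing penalty on constraint violations, not the production algorithm.

        import numpy as np
        from scipy.optimize import minimize

        def exterior_penalty(f, constraints, x0, r0=1.0, growth=10.0, outer_iters=6):
            """Solve min f(x) s.t. g_i(x) <= 0 by minimizing
            f(x) + r * sum(max(0, g_i(x))^2) for an increasing penalty r."""
            x, r = np.asarray(x0, float), r0
            for _ in range(outer_iters):
                pseudo = lambda x: f(x) + r * sum(max(0.0, g(x)) ** 2 for g in constraints)
                x = minimize(pseudo, x, method="BFGS").x
                r *= growth
            return x

        # toy problem: minimize (x-3)^2 + (y-2)^2  subject to  x + y <= 4
        f = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2
        g = [lambda x: x[0] + x[1] - 4.0]
        print(np.round(exterior_penalty(f, g, x0=[0.0, 0.0]), 3))   # near (2.5, 1.5)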

  1. Optimizing Research Payoff.

    PubMed

    Miller, Jeff; Ulrich, Rolf

    2016-09-01

    In this article, we present a model for determining how total research payoff depends on researchers' choices of sample sizes, α levels, and other parameters of the research process. The model can be used to quantify various trade-offs inherent in the research process and thus to balance competing goals, such as (a) maximizing both the number of studies carried out and also the statistical power of each study, (b) minimizing the rates of both false positive and false negative findings, and (c) maximizing both replicability and research efficiency. Given certain necessary information about a research area, the model can be used to determine the optimal values of sample size, statistical power, rate of false positives, rate of false negatives, and replicability, such that overall research payoff is maximized. More specifically, the model shows how the optimal values of these quantities depend upon the size and frequency of true effects within the area, as well as the individual payoffs associated with particular study outcomes. The model is particularly relevant within current discussions of how to optimize the productivity of scientific research, because it shows which aspects of a research area must be considered and how these aspects combine to determine total research payoff.
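
    As a rough numerical illustration of the kind of trade-off the model formalizes (the participant budget, base rate of true effects, effect size, and outcome payoffs below are invented, not taken from the article), one can sweep the per-study sample size under a fixed total budget and pick the value that maximizes expected payoff:

```python
"""Toy research-payoff calculation in the spirit of the model described above.

Assumed (illustrative) quantities: a fixed participant budget, a base rate of
true effects, a common standardized effect size, and payoffs per outcome.
These numbers are not from the article.
"""
import numpy as np
from scipy.stats import norm

budget = 10_000          # total participants available across all studies
base_rate = 0.3          # proportion of tested hypotheses that are true
effect_d = 0.4           # standardized effect size of true effects
alpha = 0.05
payoff = {"true_pos": 10.0, "false_pos": -20.0,
          "true_neg": 1.0, "false_neg": -2.0}

def power(n_per_group, d, alpha):
    # two-sample, two-sided z-approximation to statistical power
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    ncp = d * np.sqrt(n_per_group / 2.0)
    return 1.0 - norm.cdf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

best = None
for n in range(10, 501, 10):                 # per-group sample size
    studies = budget // (2 * n)              # number of two-group studies run
    pw = power(n, effect_d, alpha)
    per_study = (base_rate * (pw * payoff["true_pos"] + (1 - pw) * payoff["false_neg"])
                 + (1 - base_rate) * (alpha * payoff["false_pos"]
                                      + (1 - alpha) * payoff["true_neg"]))
    total = studies * per_study
    if best is None or total > best[0]:
        best = (total, n, pw)

print(f"best n per group = {best[1]}, power = {best[2]:.2f}, total payoff = {best[0]:.1f}")
```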

  2. Optimal Gaussian entanglement swapping

    SciTech Connect

    Hoelscher-Obermaier, Jason; Loock, Peter van

    2011-01-15

    We consider entanglement swapping with general mixed two-mode Gaussian states and calculate the optimal gains for a broad class of such states including those states most relevant in communication scenarios. We show that, for this class of states, entanglement swapping adds no additional mixedness; that is, the ensemble-average output state has the same purity as the input states. This implies that, by using intermediate entanglement swapping steps, it is, in principle, possible to distribute entangled two-mode Gaussian states of higher purity as compared to direct transmission. We then apply the general results on optimal Gaussian swapping to the problem of quantum communication over a lossy fiber and demonstrate that, in contrast to the negative conclusions in the literature, swapping-based schemes in fact often perform better than direct transmission for high input squeezing. However, an effective transmission analysis reveals that the hope for improved performance based on optimal Gaussian entanglement swapping is spurious since the swapping does not lead to an enhancement of the effective transmission. This implies that the same or better results can always be obtained using direct transmission in combination with, in general, less squeezing.

  3. Optimization by record dynamics

    NASA Astrophysics Data System (ADS)

    Barettin, Daniele; Sibani, Paolo

    2014-03-01

    Large dynamical changes in thermalizing glassy systems are triggered by trajectories crossing record sized barriers, a behavior revealing the presence of a hierarchical structure in configuration space. The observation is here turned into a novel local search optimization algorithm dubbed record dynamics optimization, or RDO. RDO uses the Metropolis rule to accept or reject candidate solutions depending on the value of a parameter akin to the temperature and minimizes the cost function of the problem at hand through cycles where its ‘temperature’ is raised and subsequently decreased in order to expediently generate record high (and low) values of the cost function. Below, RDO is introduced and then tested by searching for the ground state of the Edwards-Anderson spin-glass model, in two and three spatial dimensions. A popular and highly efficient optimization algorithm, parallel tempering (PT), is applied to the same problem as a benchmark. RDO and PT turn out to produce solutions of similar quality for similar numerical effort, but RDO is simpler to program and additionally yields geometrical information on the system’s configuration space which is of interest in many applications. In particular, the effectiveness of RDO strongly indicates the presence of the above mentioned hierarchically organized configuration space, with metastable regions indexed by the cost (or energy) of the transition states connecting them.
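
    A bare-bones sketch of the temperature-cycling idea on a small two-dimensional Edwards-Anderson model is given below; the reheat/cool schedule and acceptance bookkeeping are simplified relative to RDO as described in the abstract, and the lattice size and couplings are illustrative.

```python
"""Simplified record-dynamics-style search on a 2D Edwards-Anderson spin glass.

The reheat-then-cool schedule below is a crude stand-in for the RDO cycles
described in the abstract; lattice size, couplings, and schedule are illustrative.
"""
import numpy as np

rng = np.random.default_rng(0)
L = 16
Jx = rng.choice([-1.0, 1.0], size=(L, L))   # couplings to the right neighbour
Jy = rng.choice([-1.0, 1.0], size=(L, L))   # couplings to the lower neighbour
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    return -(np.sum(Jx * s * np.roll(s, -1, axis=1))
             + np.sum(Jy * s * np.roll(s, -1, axis=0)))

def sweep(s, T):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # local field from the four neighbours (periodic boundaries)
        h = (Jx[i, j] * s[i, (j + 1) % L] + Jx[i, j - 1] * s[i, j - 1]
             + Jy[i, j] * s[(i + 1) % L, j] + Jy[i - 1, j] * s[i - 1, j])
        dE = 2.0 * s[i, j] * h
        if dE <= 0 or rng.random() < np.exp(-dE / T):   # Metropolis rule
            s[i, j] = -s[i, j]

best_E = energy(spins)
for cycle in range(50):
    # each cycle reheats to the highest 'temperature', then cools again
    for T in [1.5, 1.0, 0.6, 0.3, 0.1]:
        sweep(spins, T)
    best_E = min(best_E, energy(spins))

print("best energy per spin:", best_E / (L * L))
```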

  4. AC Optimal Power Flow

    SciTech Connect

    2016-10-04

    In this work, we have developed simulation software that implements the mathematical model of an AC Optimal Power Flow (OPF) problem. The objective function is to minimize the total cost of generation subject to constraints of node power balance (both real and reactive) and line power flow limits (MW, MVAr, and MVA). We have currently implemented the polar coordinate version of the problem. In the present work, we have used the optimization solver Knitro (proprietary and not included in this software) to solve the problem, and we have kept options for both native numerical derivative evaluation (working satisfactorily now) and analytical derivative formulas provided to Knitro (currently in the debugging stage). Since the AC OPF is a highly non-convex optimization problem, we have also kept the option for a multistart solution. All of these can be decided by the user during run-time in an interactive manner. The software has been developed in the C++ programming language, running with the GCC compiler on a Linux machine. We have tested for satisfactory results against Matpower for the IEEE 14 bus system.
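
    The software itself solves the full polar-coordinate AC OPF with Knitro; as a far simpler self-contained illustration of the same family of problems, the sketch below solves a linearized DC approximation for a 3-bus case (losses and reactive power neglected, all network data and costs invented) with SciPy's linear-programming routine.

```python
"""DC optimal power flow on a toy 3-bus network, a linearized stand-in for
the AC OPF described above; all line data, limits, and costs are invented.

Variables: generator outputs Pg1, Pg2 and bus angles th2, th3 (bus 1 is the
slack bus, th1 = 0).  Flows are b_ij * (th_i - th_j); the load sits at bus 3.
"""
import numpy as np
from scipy.optimize import linprog

# susceptances of lines 1-2, 1-3, 2-3 and their flow limits (per unit)
b12, b13, b23 = 10.0, 8.0, 5.0
fmax = np.array([0.6, 0.6, 0.4])
load3 = 0.9                                # demand at bus 3 (per unit)
cost = np.array([20.0, 30.0, 0.0, 0.0])    # $/pu for Pg1, Pg2; angles cost nothing

# x = [Pg1, Pg2, th2, th3]; nodal balance equations:
A_eq = np.array([
    [1.0, 0.0,  b12,          b13      ],  # bus 1: Pg1 + b12*th2 + b13*th3 = 0
    [0.0, 1.0, -(b12 + b23),  b23      ],  # bus 2: Pg2 - (b12+b23)*th2 + b23*th3 = 0
    [0.0, 0.0, -b23,          b13 + b23],  # bus 3: b13*th3 + b23*(th3-th2) = -load3
])
b_eq = np.array([0.0, 0.0, -load3])

# line-flow limits |b_ij*(th_i - th_j)| <= fmax, two rows per line
A_ub = np.array([
    [0.0, 0.0, -b12,  0.0], [0.0, 0.0,  b12,  0.0],   # flow 1->2 = -b12*th2
    [0.0, 0.0,  0.0, -b13], [0.0, 0.0,  0.0,  b13],   # flow 1->3 = -b13*th3
    [0.0, 0.0,  b23, -b23], [0.0, 0.0, -b23,  b23],   # flow 2->3 = b23*(th2-th3)
])
b_ub = np.repeat(fmax, 2)

bounds = [(0.0, 1.0), (0.0, 1.0), (None, None), (None, None)]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("Pg1, Pg2 =", res.x[:2].round(3), "  total cost =", round(res.fun, 2))
```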

  5. Optimal Temporal Risk Assessment

    PubMed Central

    Balci, Fuat; Freestone, David; Simen, Patrick; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip

    2011-01-01

    Time is an essential feature of most decisions, because the reward earned from decisions frequently depends on the temporal statistics of the environment (e.g., on whether decisions must be made under deadlines). Accordingly, evolution appears to have favored a mechanism that predicts intervals in the seconds to minutes range with high accuracy on average, but significant variability from trial to trial. Importantly, the subjective sense of time that results is sufficiently imprecise that maximizing rewards in decision-making can require substantial behavioral adjustments (e.g., accumulating less evidence for a decision in order to beat a deadline). Reward maximization in many daily decisions therefore requires optimal temporal risk assessment. Here, we review the temporal decision-making literature, conduct secondary analyses of relevant published datasets, and analyze the results of a new experiment. The paper is organized in three parts. In the first part, we review literature and analyze existing data suggesting that animals take account of their inherent behavioral variability (their “endogenous timing uncertainty”) in temporal decision-making. In the second part, we review literature that quantitatively demonstrates nearly optimal temporal risk assessment with sub-second and supra-second intervals using perceptual tasks (with humans and mice) and motor timing tasks (with humans). We supplement this section with original research that tested human and rat performance on a task that requires finding the optimal balance between two time-dependent quantities for reward maximization. This optimal balance in turn depends on the level of timing uncertainty. Corroborating the reviewed literature, humans and rats exhibited nearly optimal temporal risk assessment in this task. In the third section, we discuss the role of timing uncertainty in reward maximization in two-choice perceptual decision-making tasks and review literature that implicates timing uncertainty

  6. Optimal Power Flow Pursuit

    SciTech Connect

    Dall'Anese, Emiliano

    2016-08-01

    Past works that focused on addressing power-quality and reliability concerns related to renewable energy resources (RESs) operating with business-as-usual practices have looked at the design of Volt/VAr and Volt/Watt strategies to regulate real or reactive powers based on local voltage measurements, so that terminal voltages are within acceptable levels. These control strategies have the potential of operating at the same time scale of distribution-system dynamics, and can therefore mitigate disturbances precipitated by fast time-varying loads and ambient conditions; however, they do not necessarily guarantee system-level optimality, and stability claims are mainly based on empirical evidence. On a different time scale, centralized and distributed optimal power flow (OPF) algorithms have been proposed to compute optimal steady-state inverter setpoints, so that power losses and voltage deviations are minimized and economic benefits to end-users providing ancillary services are maximized. However, traditional OPF schemes may offer decision-making capabilities that do not match the dynamics of distribution systems. Particularly, during the time required to collect data from all the nodes of the network (e.g., loads), solve the OPF, and subsequently dispatch setpoints, the underlying load, ambient, and network conditions may have already changed; in this case, the DER output powers would be consistently regulated around outdated setpoints, leading to suboptimal system operation and violation of relevant electrical limits. The present work focuses on the synthesis of distributed RES-inverter controllers that leverage the opportunities for fast feedback offered by power-electronics interfaced RESs. The overarching objective is to bridge the temporal gap between long-term system optimization and real-time control, to enable seamless RES integration in large scale with stability and efficiency guarantees, while congruently pursuing system-level optimization objectives. The

  7. Ames Optimized TCA Configuration

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Reuther, James J.; Hicks, Raymond M.

    1999-01-01

    Configuration design at Ames was carried out with the SYN87-SB (single block) Euler code using a 193 x 49 x 65 C-H grid. The Euler solver is coupled to the constrained (NPSOL) and the unconstrained (QNMDIF) optimization packages. Since the single block grid is able to model only wing-body configurations, the nacelle/diverter effects were included in the optimization process by SYN87's option to superimpose the nacelle/diverter interference pressures on the wing. These interference pressures were calculated using the AIRPLANE code. AIRPLANE is an Euler solver that uses an unstructured tetrahedral mesh and is capable of computations about arbitrary complete configurations. In addition, the buoyancy effects of the nacelle/diverters were also included in the design process by imposing the pressure field obtained during the design process onto the triangulated surfaces of the nacelle/diverter mesh generated by AIRPLANE. The interference pressures and nacelle buoyancy effects are added to the final forces after each flow field calculation. Full details of the (recently enhanced) ghost nacelle capability are given in a related talk. The pseudo nacelle corrections were greatly improved during this design cycle. During the Ref H and Cycle 1 design activities, the nacelles were only translated and pitched. In the cycle 2 design effort the nacelles can translate vertically and pitch to accommodate the changes in the lower surface geometry. The diverter heights (between their leading and trailing edges) were modified during design as the shape of the lower wing changed, with the drag of the diverter changing accordingly. Both adjoint and finite difference gradients were used during optimization. The adjoint-based gradients were found to give good direction in the design space for configurations near the starting point, but as the design approached a minimum, the finite difference gradients were found to be more accurate. Use of finite difference gradients was limited by the

  8. Taking Stock of Unrealistic Optimism.

    PubMed

    Shepperd, James A; Klein, William M P; Waters, Erika A; Weinstein, Neil D

    2013-07-01

    Researchers have used terms such as unrealistic optimism and optimistic bias to refer to concepts that are similar but not synonymous. Drawing from three decades of research, we critically discuss how researchers define unrealistic optimism and we identify four types that reflect different measurement approaches: unrealistic absolute optimism at the individual and group level and unrealistic comparative optimism at the individual and group level. In addition, we discuss methodological criticisms leveled against research on unrealistic optimism and note that the criticisms are primarily relevant to only one type-the group form of unrealistic comparative optimism. We further clarify how the criticisms are not nearly as problematic even for unrealistic comparative optimism as they might seem. Finally, we note boundary conditions on the different types of unrealistic optimism and reflect on five broad questions that deserve further attention.

  9. Taking Stock of Unrealistic Optimism

    PubMed Central

    Shepperd, James A.; Klein, William M. P.; Waters, Erika A.; Weinstein, Neil D.

    2015-01-01

    Researchers have used terms such as unrealistic optimism and optimistic bias to refer to concepts that are similar but not synonymous. Drawing from three decades of research, we critically discuss how researchers define unrealistic optimism and we identify four types that reflect different measurement approaches: unrealistic absolute optimism at the individual and group level and unrealistic comparative optimism at the individual and group level. In addition, we discuss methodological criticisms leveled against research on unrealistic optimism and note that the criticisms are primarily relevant to only one type—the group form of unrealistic comparative optimism. We further clarify how the criticisms are not nearly as problematic even for unrealistic comparative optimism as they might seem. Finally, we note boundary conditions on the different types of unrealistic optimism and reflect on five broad questions that deserve further attention. PMID:26045714

  10. Computer program for parameter optimization

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.

    1968-01-01

    Flexible, large scale digital computer program was designed for the solution of a wide range of multivariable parameter optimization problems. The program has the ability to solve constrained optimization problems involving up to one hundred parameters.

  11. Combinatorial optimization in foundry practice

    NASA Astrophysics Data System (ADS)

    Antamoshkin, A. N.; Masich, I. S.

    2016-04-01

    A multicriteria mathematical model of foundry production capacity planning is suggested in the paper. The model is formulated in terms of pseudo-Boolean optimization theory. Different search optimization methods were used to solve the resulting problem.

  12. A Primer on Unrealistic Optimism.

    PubMed

    Shepperd, James A; Waters, Erika; Weinstein, Neil D; Klein, William M P

    2015-06-01

    People display unrealistic optimism in their predictions for countless events, believing that their personal future outcomes will be more desirable than can possibly be true. We summarize the vast literature on unrealistic optimism by focusing on four broad questions: What is unrealistic optimism; when does it occur; why does it occur; and what are its consequences.

  13. Optimism and Well-Being

    ERIC Educational Resources Information Center

    Reivich, Karen

    2010-01-01

    Dictionary definitions of optimism encompass two related concepts. The first of these is a hopeful disposition or a conviction that good will ultimately prevail. The second, broader conception of optimism refers to the belief, or the inclination to believe, that the world is the best of all possible worlds. In psychological research, optimism has…

  14. Multicriteria VMAT optimization

    SciTech Connect

    Craft, David; McQuaid, Dualta; Wala, Jeremiah; Chen, Wei; Salari, Ehsan; Bortfeld, Thomas

    2012-02-15

    Purpose: To make the planning of volumetric modulated arc therapy (VMAT) faster and to explore the tradeoffs between planning objectives and delivery efficiency. Methods: A convex multicriteria dose optimization problem is solved for an angular grid of 180 equi-spaced beams. This allows the planner to navigate the ideal dose distribution Pareto surface and select a plan of desired target coverage versus organ at risk sparing. The selected plan is then made VMAT deliverable by a fluence map merging and sequencing algorithm, which combines neighboring fluence maps based on a similarity score and then delivers the merged maps together, simplifying delivery. Successive merges are made as long as the dose distribution quality is maintained. The complete algorithm is called VMERGE. Results: VMERGE is applied to three cases: a prostate, a pancreas, and a brain. In each case, the selected Pareto-optimal plan is matched almost exactly with the VMAT merging routine, resulting in a high quality plan delivered with a single arc in less than 5 min on average. Conclusions: VMERGE offers significant improvements over existing VMAT algorithms. The first is the multicriteria planning aspect, which greatly speeds up planning time and allows the user to select the plan, which represents the most desirable compromise between target coverage and organ at risk sparing. The second is the user-chosen epsilon-optimality guarantee of the final VMAT plan. Finally, the user can explore the tradeoff between delivery time and plan quality, which is a fundamental aspect of VMAT that cannot be easily investigated with current commercial planning systems.
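
    A toy version of the merging loop can be sketched as follows; the cosine similarity score, stopping threshold, and synthetic fluence maps are invented stand-ins for the actual VMERGE similarity measure and dose-quality check.

```python
"""Toy fluence-map merging loop in the spirit of the VMERGE step described
above.  The similarity score, stopping threshold, and synthetic fluence maps
are invented stand-ins for the real similarity measure and dose-quality check.
"""
import numpy as np

n_beams, n_bixels = 18, 64
bixel = np.linspace(0.0, 4.0 * np.pi, n_bixels)
# synthetic fluence maps that vary smoothly with beam angle
maps = [np.abs(np.sin(bixel + phase)) + 0.1
        for phase in np.linspace(0.0, np.pi, n_beams)]

def similarity(a, b):
    # cosine similarity between neighbouring fluence maps (assumed score)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

threshold = 0.95   # assumed proxy for "dose distribution quality is maintained"
while len(maps) > 1:
    scores = [similarity(maps[i], maps[i + 1]) for i in range(len(maps) - 1)]
    i = int(np.argmax(scores))                 # most similar neighbouring pair
    if scores[i] < threshold:
        break                                  # further merging would degrade the plan
    maps = maps[:i] + [maps[i] + maps[i + 1]] + maps[i + 2:]   # deliver them together

print("fluence maps remaining after merging:", len(maps))
```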

  15. Experience in grid optimization

    NASA Technical Reports Server (NTRS)

    Mastin, C. W.; Soni, B. K.; Mcclure, M. D.

    1987-01-01

    Two optimization methods for solving a variational problem in grid generation are described and evaluated. Variational integrals measuring smoothness, cell volume, and orthogonality are examined. The Jacobi-Newton iterative method is compared to the Fletcher-Reeves conjugate gradient method. It is observed that a combination of the Jacobi-Newton iteration and the direct solution of the variational problem produces an algorithm which is easy to program and requires less storage and computer time per iteration than the conjugate gradient method.

  16. Optimizing Methods in Simulation

    DTIC Science & Technology

    1981-08-01

    exploited by Kiefer and Wolfowitz (1959). Wald (1943) used the criterion of D-optimality in some other context, and it was so named by Kiefer and ... of discrepancy between the observed and expected value is obtained in terms of mean squared errors (MSE). Consider the model E(Y|x) = α + βx with V(Y|x) = σ², and let L < x < U be the interval of possible x values. MSE(x) is the mean squared error of x as obtained from y. Let w(x) be a weight

  17. Astrocytes optimize synaptic fidelity

    NASA Astrophysics Data System (ADS)

    Nadkarni, Suhita; Jung, Peter; Levine, Herbert

    2007-03-01

    Most neuronal synapses in the central nervous system are enwrapped by an astrocytic process. This relation allows the astrocyte to listen to and feed back to the synapse and to regulate synaptic transmission. We combine a tested mathematical model for the Ca^2+ response of the synaptic astrocyte and presynaptic feedback with a detailed model for vesicle release of neurotransmitter at active zones. The predicted Ca^2+ dependence of the presynaptic synaptic vesicle release compares favorably for several types of synapses, including the Calyx of Held. We hypothesize that the feedback regulation of the astrocyte onto the presynaptic terminal optimizes the fidelity of the synapse in terms of information transmission.

  18. Universal optimal quantum correlator

    NASA Astrophysics Data System (ADS)

    Buscemi, Francesco; Dall'Arno, Michele; Ozawa, Masanao; Vedral, Vlatko

    2014-10-01

    Recently, a novel operational strategy to access quantum correlation functions of the form Tr[AρB] was provided in [F. Buscemi, M. Dall'Arno, M. Ozawa and V. Vedral, arXiv:1312.4240]. Here we propose a realization scheme, which we call partial expectation values, implementing this strategy in terms of a unitary interaction with an ancillary system followed by the measurement of an observable on the ancilla. Our scheme is universal, being independent of ρ, A, and B, and it is optimal in a statistical sense. It is suitable for implementation with present quantum optical technology, and provides a new way to test uncertainty relations.

  19. Optimizing passive quantum clocks

    NASA Astrophysics Data System (ADS)

    Mullan, Michael; Knill, Emanuel

    2014-10-01

    We describe protocols for passive atomic clocks based on quantum interrogation of the atoms. Unlike previous techniques, our protocols are adaptive and take advantage of prior information about the clock's state. To reduce deviations from an ideal clock, each interrogation is optimized by means of a semidefinite program for atomic state preparation and measurement whose objective function depends on the prior information. Our knowledge of the clock's state is maintained according to a Bayesian model that accounts for noise and measurement results. We implement a full simulation of a running clock with power-law noise models and find significant improvements by applying our techniques.

  20. Nonconvex optimization and jamming

    NASA Astrophysics Data System (ADS)

    Kallus, Yoav

    Recent work on the jamming transition of particles with short-range interactions has drawn connections with models based on minimization problems with linear inequality constraints and a concave objective. These properties reduce the continuous optimization problem to a discrete search among the corners of the feasible polytope. I will discuss results from simulations of models with and without quenched disorder, exhibiting critical power laws, scaling collapse, and protocol dependence. These models are also well-suited for study using tools of algebraic topology, which I will discuss briefly. Supported by an Omidyar Fellowship at the Santa Fe Institute.

  1. Constructing optimal entanglement witnesses

    SciTech Connect

    Chruscinski, Dariusz; Pytel, Justyna; Sarbicki, Gniewomir

    2009-12-15

    We provide a class of indecomposable entanglement witnesses. In the 4x4 case, it reproduces the well-known Breuer-Hall witness. We prove that these witnesses are optimal and atomic, i.e., they are able to detect the 'weakest' quantum entanglement encoded into states with positive partial transposition. Equivalently, we provide a construction of indecomposable atomic maps in the algebra of 2k x 2k complex matrices. It is shown that their structural physical approximations give rise to entanglement breaking channels. This result supports a recent conjecture by Korbicz et al. [Phys. Rev. A 78, 062105 (2008)].

  2. Optimized joystick controller.

    PubMed

    Ding, D; Cooper, R A; Spaeth, D

    2004-01-01

    The purpose of the study was to develop an optimized joystick control interface for electric powered wheelchairs and thus provide safe and effective control of electric powered wheelchairs to people with severe physical disabilities. The interface enables clinicians to tune joystick parameters for each individual subject by selecting templates, dead zones, and bias axes. To address the hand tremor often associated with traumatic brain injury, cerebral palsy, and multiple sclerosis, fuzzy logic rules were applied to suppress erratic hand movements and extract the intended motion from the joystick. Simulation results were presented to show the graphical tuning interface as well as the performance of the fuzzy logic controller.

  3. An Improved Cockroach Swarm Optimization

    PubMed Central

    Obagbuwa, I. C.; Adewumi, A. O.

    2014-01-01

    A hunger component is introduced to the existing cockroach swarm optimization (CSO) algorithm to improve its searching ability and population diversity. The original CSO was modelled with three components: chase-swarming, dispersion, and ruthless behaviour; an additional hunger component, modelled using a partial differential equation (PDE) method, is included in this paper. An improved cockroach swarm optimization (ICSO) algorithm is thus proposed. The performance of the proposed algorithm is tested on well-known benchmarks and compared with the existing CSO, modified cockroach swarm optimization (MCSO), roach infestation optimization (RIO), and hungry roach infestation optimization (HRIO). The comparison results show clearly that the proposed algorithm outperforms the existing algorithms. PMID:24959611

  4. Design Optimization Toolkit: Users' Manual

    SciTech Connect

    Aguilo Valentin, Miguel Alejandro

    2014-07-01

    The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods suited for gradient- and nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed with a flexible user interface to allow easy access to its solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.

  5. MOMMOP: multiobjective optimization for locating multiple optimal solutions of multimodal optimization problems.

    PubMed

    Wang, Yong; Li, Han-Xiong; Yen, Gary G; Song, Wu

    2015-04-01

    In the field of evolutionary computation, there has been a growing interest in applying evolutionary algorithms to solve multimodal optimization problems (MMOPs). Due to the fact that an MMOP involves multiple optimal solutions, many niching methods have been suggested and incorporated into evolutionary algorithms for locating such optimal solutions in a single run. In this paper, we propose a novel transformation technique based on multiobjective optimization for MMOPs, called MOMMOP. MOMMOP transforms an MMOP into a multiobjective optimization problem with two conflicting objectives. After the above transformation, all the optimal solutions of an MMOP become the Pareto optimal solutions of the transformed problem. Thus, multiobjective evolutionary algorithms can be readily applied to find a set of representative Pareto optimal solutions of the transformed problem, and as a result, multiple optimal solutions of the original MMOP could also be simultaneously located in a single run. In principle, MOMMOP is an implicit niching method. In this paper, we also discuss two issues in MOMMOP and introduce two new comparison criteria. MOMMOP has been used to solve 20 multimodal benchmark test functions, after combining with nondominated sorting and differential evolution. Systematic experiments have indicated that MOMMOP outperforms a number of methods for multimodal optimization, including four recent methods at the 2013 IEEE Congress on Evolutionary Computation, four state-of-the-art single-objective optimization based methods, and two well-known multiobjective optimization based approaches.

  6. E85 Optimized Engine

    SciTech Connect

    Bower, Stanley

    2011-12-31

    A 5.0L V8 twin-turbocharged direct injection engine was designed, built, and tested for the purpose of assessing the fuel economy and performance in the F-Series pickup of the Dual Fuel engine concept and of an E85 optimized FFV engine. Additionally, production 3.5L gasoline turbocharged direct injection (GTDI) EcoBoost engines were converted to Dual Fuel capability and used to evaluate the cold start emissions and fuel system robustness of the Dual Fuel engine concept. Project objectives were: to develop a roadmap to demonstrate a minimized fuel economy penalty for an F-Series FFV truck with a highly boosted, high compression ratio spark ignition engine optimized to run with ethanol fuel blends up to E85; to reduce FTP 75 energy consumption by 15% - 20% compared to an equally powered vehicle with a current production gasoline engine; and to meet ULEV emissions, with a stretch target of ULEV II / Tier II Bin 4. All project objectives were met or exceeded.

  7. Optimization Methods in Sherpa

    NASA Astrophysics Data System (ADS)

    Siemiginowska, Aneta; Nguyen, Dan T.; Doe, Stephen M.; Refsdal, Brian L.

    2009-09-01

    Forward fitting is a standard technique used to model X-ray data. A statistic, usually a weighted chi^2 or a Poisson likelihood (e.g. Cash), is minimized in the fitting process to obtain a set of best model parameters. Astronomical models often have complex forms with many parameters that can be correlated (e.g. an absorbed power law). Minimization is not trivial in such settings, as the statistical parameter space becomes multimodal and finding the global minimum is hard. Standard minimization algorithms can be found in many libraries of scientific functions, but they are usually focused on specific functions. Sherpa, however, designed as a general fitting and modeling application, requires very robust optimization methods that can be applied to a variety of astronomical data (X-ray spectra, images, timing, optical data, etc.). We developed several optimization algorithms in Sherpa targeting a wide range of minimization problems. Two local minimization methods were built: a Levenberg-Marquardt algorithm obtained from the MINPACK subroutine LMDIF and modified to achieve the required robustness, and a Nelder-Mead simplex method implemented in-house based on variations of the algorithm described in the literature. A global-search Monte Carlo method has been implemented following the differential evolution algorithm presented by Storn and Price (1997). We will present the methods in Sherpa and discuss their use cases. We will focus on the application to Chandra data, showing both 1D and 2D examples. This work is supported by NASA contract NAS8-03060 (CXC).
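
    As a small illustration of the global-search option mentioned above, the snippet below minimizes a chi-square statistic with SciPy's implementation of the Storn and Price differential evolution algorithm (not Sherpa's), using synthetic data and a plain power-law model:

```python
"""Fitting a power-law model by minimizing chi^2 with differential evolution.

This uses SciPy's implementation of the Storn & Price (1997) algorithm rather
than Sherpa's; the data are synthetic and the model is a plain power law.
"""
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)
energy = np.linspace(0.5, 8.0, 60)                  # pseudo-energy grid
true_amp, true_index = 3.0, 1.7
flux = true_amp * energy ** (-true_index)
err = 0.05 * flux + 0.01
data = flux + rng.normal(scale=err)                 # noisy synthetic spectrum

def chi2(params):
    amp, index = params
    model = amp * energy ** (-index)
    return float(np.sum(((data - model) / err) ** 2))

bounds = [(0.1, 10.0), (0.0, 4.0)]                  # amplitude, index
result = differential_evolution(chi2, bounds, seed=3, tol=1e-8)
print("best-fit amplitude, index:", result.x.round(3), " chi2 =", round(result.fun, 1))
```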

  8. Optimal Synchronizability of Bearings

    NASA Astrophysics Data System (ADS)

    Araújo, N. A. M.; Seybold, H.; Baram, R. M.; Herrmann, H. J.; Andrade, J. S., Jr.

    2013-02-01

    Bearings are mechanical dissipative systems that, when perturbed, relax toward a synchronized (bearing) state. Here we find that bearings can be perceived as physical realizations of complex networks of oscillators with asymmetrically weighted couplings. Accordingly, these networks can exhibit optimal synchronization properties through fine-tuning of the local interaction strength as a function of node degree [Motter, Zhou, and Kurths, Phys. Rev. E 71, 016116 (2005)]. We show that, in analogy, the synchronizability of bearings can be maximized by counterbalancing the number of contacts and the inertia of their constituting rotor disks through the mass-radius relation, m ~ r^α, with an optimal exponent α = α^× which converges to unity for a large number of rotors. Under this condition, and regardless of the presence of a long-tailed distribution of disk radii composing the mechanical system, the average participation per disk is maximized and the energy dissipation rate is homogeneously distributed among elementary rotors.

  9. [Optimizing surgical hand disinfection].

    PubMed

    Kampf, G; Kramer, A; Rotter, M; Widmer, A

    2006-08-01

    For more than 110 years, the hands of surgeons have been treated before a surgical procedure in order to reduce the bacterial density. The kind and duration of treatment, however, have changed significantly over time. Recent scientific evidence suggests a few changes aimed at optimizing both efficacy and dermal tolerance. The aim of this article is to present and discuss new insights in surgical hand disinfection. A hand wash should be performed before the first disinfection of a day, ideally at least 10 min before the beginning of the disinfection, as it has been shown that a 1 min hand wash significantly increases skin hydration for up to 10 min. The application time may be as short as 1.5 min depending on the type of hand rub. Hands and forearms should be kept wet with the hand rub for the recommended application time in any case. A specific rub-in procedure according to EN 12791 has been found to be suitable in order to avoid untreated skin areas. The alcohol-based hand rub should have a proven excellent dermal tolerance in order to ensure appropriate compliance. Considering these elements in clinical practice can contribute significantly to optimizing the quality of surgical hand disinfection for the prevention of surgical site infections.

  10. Optimal synchronizability of networks

    NASA Astrophysics Data System (ADS)

    Wang, B.; Zhou, T.; Xiu, Z. L.; Kim, B. J.

    2007-11-01

    We numerically investigate how to enhance the synchronizability of coupled identical oscillators in complex networks, focusing on the role of the level of clustering for a given heterogeneity in the degree distribution. By using the edge-exchange method with a fixed degree sequence, we first directly maximize synchronizability, measured by the eigenratio of the coupling matrix, through the use of the so-called memory tabu search algorithm developed in applied mathematics. The resulting optimal network, which turns out to be weakly disassortative, is observed to exhibit a small modularity. More importantly, it is clearly revealed that the optimally synchronizable network for a given degree sequence shows a very low level of clustering, containing far fewer small-size loops than the original network. We then use the clustering coefficient as an objective function to be reduced during the edge exchanges, and find it a very efficient way to enhance synchronizability. We thus conclude that, for a given degree heterogeneity, clustering plays a very important role in network synchronization.
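
    A greedy variant of the edge-exchange search, accepting only swaps that reduce the eigenratio instead of running the memory tabu search used in the paper, can be sketched with NetworkX and NumPy; the graph and the number of attempted swaps are illustrative.

```python
"""Greedy degree-preserving edge exchange to enhance synchronizability,
measured by the Laplacian eigenratio lambda_N / lambda_2.  The paper uses a
memory tabu search; this accept-if-improved loop is a simplified stand-in,
and the graph and number of attempted swaps are illustrative.
"""
import networkx as nx
import numpy as np

rng = np.random.default_rng(4)
G = nx.barabasi_albert_graph(60, 3, seed=4)       # heterogeneous degree sequence

def eigenratio(graph):
    lap = nx.laplacian_matrix(graph).toarray().astype(float)
    lam = np.sort(np.linalg.eigvalsh(lap))
    return lam[-1] / lam[1]                       # lambda_N / lambda_2

best = eigenratio(G)
for _ in range(2000):
    edges = np.array(list(G.edges()))
    (a, b), (c, d) = edges[rng.choice(len(edges), size=2, replace=False)]
    # propose the degree-preserving rewiring (a,b),(c,d) -> (a,d),(c,b)
    if len({a, b, c, d}) < 4 or G.has_edge(a, d) or G.has_edge(c, b):
        continue
    G.remove_edges_from([(a, b), (c, d)])
    G.add_edges_from([(a, d), (c, b)])
    ratio = eigenratio(G) if nx.is_connected(G) else np.inf
    if ratio < best:
        best = ratio                              # keep the improving swap
    else:
        G.remove_edges_from([(a, d), (c, b)])     # revert the swap
        G.add_edges_from([(a, b), (c, d)])

print("final eigenratio:", round(best, 3),
      "  clustering:", round(nx.average_clustering(G), 3))
```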

  11. Optimality and sub-optimality in a bacterial growth law.

    PubMed

    Towbin, Benjamin D; Korem, Yael; Bren, Anat; Doron, Shany; Sorek, Rotem; Alon, Uri

    2017-01-19

    Organisms adjust their gene expression to improve fitness in diverse environments. But finding the optimal expression in each environment presents a challenge. We ask how good cells are at finding such optima by studying the control of carbon catabolism genes in Escherichia coli. Bacteria show a growth law: growth rate on different carbon sources declines linearly with the steady-state expression of carbon catabolic genes. We experimentally modulate gene expression to ask if this growth law always maximizes growth rate, as has been suggested by theory. We find that the growth law is optimal in many conditions, including a range of perturbations to lactose uptake, but provides sub-optimal growth on several other carbon sources. Combining theory and experiment, we genetically re-engineer E. coli to make sub-optimal conditions into optimal ones and vice versa. We conclude that the carbon growth law is not always optimal, but represents a practical heuristic that often works but sometimes fails.

  12. Strong Combination of Ant Colony Optimization with Constraint Programming Optimization

    NASA Astrophysics Data System (ADS)

    Khichane, Madjid; Albert, Patrick; Solnon, Christine

    We introduce an approach which combines ACO (Ant Colony Optimization) and IBM ILOG CP Optimizer for solving COPs (Combinatorial Optimization Problems). The problem is modeled using the CP Optimizer modeling API. Then, it is solved in a generic way by a two-phase algorithm. The first phase aims at creating a hot start for the second: it samples the solution space and applies reinforcement learning techniques as implemented in ACO to create pheromone trails. During the second phase, CP Optimizer performs a complete tree search guided by the pheromone trails previously accumulated. The first experimental results on knapsack, quadratic assignment and maximum independent set problems show that this new algorithm enhances the performance of CP Optimizer alone.

  13. Particle swarm optimization for complex nonlinear optimization problems

    NASA Astrophysics Data System (ADS)

    Alexandridis, Alex; Famelis, Ioannis Th.; Tsitouras, Charalambos

    2016-06-01

    This work presents the application of a technique belonging to evolutionary computation, namely particle swarm optimization (PSO), to complex nonlinear optimization problems. To be more specific, a PSO optimizer is set up and applied to the derivation of Runge-Kutta pairs for the numerical solution of initial value problems. The effect of critical PSO operational parameters on the performance of the proposed scheme is thoroughly investigated.
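
    A minimal inertia-weight particle swarm optimizer is sketched below on a generic test function; the coefficients are common textbook defaults rather than the tuned values studied in the paper, and the Runge-Kutta-pair design criterion itself is not reproduced.

```python
"""Minimal inertia-weight particle swarm optimizer on a generic test function.

Coefficients are common textbook defaults, not the tuned values studied in
the paper, and the objective is the Rosenbrock function rather than a
Runge-Kutta-pair design criterion.
"""
import numpy as np

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

rng = np.random.default_rng(5)
dim, n_particles, iters = 5, 40, 500
w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social weights

pos = rng.uniform(-2.0, 2.0, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([rosenbrock(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([rosenbrock(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best value found:", round(float(np.min(pbest_val)), 6), " at", gbest.round(3))
```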

  14. Optimal Ski Jump

    NASA Astrophysics Data System (ADS)

    Rebilas, Krzysztof

    2013-02-01

    Consider a skier who goes down a takeoff ramp, attains a speed V, and jumps, attempting to land as far as possible down the hill below (Fig. 1). At the moment of takeoff the angle between the skier's velocity and the horizontal is α. What is the optimal angle α that makes the jump the longest possible for the fixed magnitude of the velocity V? Of course, in practice, this is a very sophisticated problem; the skier's range depends on a variety of complex factors in addition to V and α. However, if we ignore these and assume the jumper is in free fall between the takeoff ramp and the landing point below, the problem becomes an exercise in kinematics that is suitable for introductory-level students. The solution is presented here.
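
    For the common textbook idealization in which the hill falls at a constant angle φ below the horizontal (an assumed geometry; the article treats the problem in its own way), the optimal takeoff angle follows from elementary projectile motion:

```latex
% Free-fall trajectory:  x = Vt\cos\alpha,  y = Vt\sin\alpha - \tfrac{1}{2}gt^{2};
% the skier lands on the hill where  y = -x\tan\varphi.
R(\alpha) = \frac{2V^{2}\cos\alpha\,\sin(\alpha+\varphi)}{g\cos^{2}\varphi},
\qquad
\frac{dR}{d\alpha} \propto \cos(2\alpha+\varphi) = 0
\;\Longrightarrow\;
\alpha_{\mathrm{opt}} = 45^{\circ} - \frac{\varphi}{2}.
```

    Here R(α) is the landing distance measured along the slope; for a horizontal landing surface (φ = 0) the result reduces to the familiar 45° optimum.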

  15. The optimal target hemoglobin.

    PubMed

    Ritz, E; Schwenger, V

    2000-07-01

    There is still controversy concerning the optimal target hemoglobin during treatment with recombinant human erythropoietin (rHuEPO). Some evidence suggests that hemoglobin concentrations higher than currently recommended lead to improvements in cognitive function, physical performance, and rehabilitation. At least in patients with advanced cardiac disease, however, one controlled trial failed to show a benefit from normalizing predialysis hemoglobin concentrations. In contrast, preliminary observations in three additional studies (albeit with limited statistical power) showed no adverse cardiovascular effects from normalization of hemoglobin, but did show definite benefit with respect to quality of life, physical performance, and cardiac geometry. These observations are consistent with the notion that hemoglobin concentrations higher than those recommended by the National Kidney Foundation Dialysis Outcomes Quality Initiative Anemia Work Group are beneficial, at least in patients without advanced cardiac disease.

  16. DENSE MEDIA CYCLONE OPTIMIZATION

    SciTech Connect

    Gerald H. Luttrell

    2002-01-14

    During the past quarter, float-sink analyses were completed for four of the seven circuits evaluated in this project. According to the commercial laboratory, the analyses for the remaining three sites will be finished by mid-February 2002. In addition, it was necessary to repeat several of the float-sink tests to resolve problems identified during the analysis of the experimental data. In terms of accomplishments, a website is being prepared to distribute project findings and software to the public. This site will include (i) an operators manual for HMC operation and maintenance (already available in hard copy), (ii) an expert system software package for evaluating and optimizing HMC performance (in development), and (iii) a spreadsheet-based process model for plant designers (in development). Several technology transfer activities were also carried out, including the publication of project results in proceedings and the training of plant operators via workshops.

  17. Cyclone performance and optimization

    SciTech Connect

    Leith, D.

    1989-06-15

    The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using the revised performance theory. This work is important because its successful completion will aid the technology for combustion of coal in pressurized, fluidized beds. We have now received all the equipment necessary for the flow visualization studies described in the last two progress reports, and have begun more detailed studies of the gas flow pattern within cyclones, as detailed below. We have also begun studies of the effect of particle concentration on cyclone performance. This work is critical to the application of our results to commercial operations.

  18. Optimal optoacoustic detector design

    NASA Technical Reports Server (NTRS)

    Rosengren, L.-G.

    1975-01-01

    Optoacoustic detectors are used to measure pressure changes occurring in enclosed gases, liquids, or solids being excited by intensity or frequency modulated electromagnetic radiation. Radiation absorption spectra, collisional relaxation rates, substance compositions, and reactions can be determined from the time behavior of these pressure changes. Very successful measurements of gaseous air pollutants have, for instance, been performed by using detectors of this type together with different lasers. The measuring instrument consisting of radiation source, modulator, optoacoustic detector, etc. is often called spectrophone. In the present paper, a thorough optoacoustic detector optimization analysis based upon a review of its theory of operation is introduced. New quantitative rules and suggestions explaining how to design detectors with maximal pressure responsivity and over-all sensitivity and minimal background signal are presented.

  19. Optimal Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Mantri, Atul; Pérez-Delgado, Carlos A.; Fitzsimons, Joseph F.

    2013-12-01

    Blind quantum computation allows a client with limited quantum capabilities to interact with a remote quantum computer to perform an arbitrary quantum computation, while keeping the description of that computation hidden from the remote quantum computer. While a number of protocols have been proposed in recent years, little is currently understood about the resources necessary to accomplish the task. Here, we present general techniques for upper and lower bounding the quantum communication necessary to perform blind quantum computation, and use these techniques to establish concrete bounds for common choices of the client's quantum capabilities. Our results show that the universal blind quantum computation protocol of Broadbent, Fitzsimons, and Kashefi comes within a factor of 8/3 of optimal when the client is restricted to preparing single qubits. However, we describe a generalization of this protocol which requires exponentially less quantum communication when the client has a more sophisticated device.

  20. RLV Turbine Performance Optimization

    NASA Technical Reports Server (NTRS)

    Griffin, Lisa W.; Dorney, Daniel J.

    2001-01-01

    A task was developed at NASA/Marshall Space Flight Center (MSFC) to improve turbine aerodynamic performance through the application of advanced design and analysis tools. There are four major objectives of this task: 1) to develop, enhance, and integrate advanced turbine aerodynamic design and analysis tools; 2) to develop the methodology for application of the analytical techniques; 3) to demonstrate the benefits of the advanced turbine design procedure through its application to a relevant turbine design point; and 4) to verify the optimized design and analysis with testing. Final results of the preliminary design and the results of the two-dimensional (2D) detailed design of the first-stage vane of a supersonic turbine suitable for a reusable launch vehicle (R-LV) are presented. Analytical techniques for obtaining the results are also discussed.

  1. Optimality in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Nearing, Grey; Yatheendradas, Soni

    2016-04-01

    It costs a lot more to develop and launch an earth-observing satellite than it does to build a data assimilation system. As such, we propose that it is important to understand the efficiency of our assimilation algorithms at extracting information from remote sensing retrievals. To address this, we propose that it is necessary to adopt a completely general definition of "optimality" that explicitly acknowledges all differences between the parametric constraints of our assimilation algorithm (e.g., Gaussianity, partial linearity, Markovian updates) and the true nature of the environmental system and observing system. In fact, it is not only possible, but incredibly straightforward, to measure the optimality (in this more general sense) of any data assimilation algorithm as applied to any intended model or natural system. We measure the information content of remote sensing data conditional on the fact that we are already running a model, and then measure the actual information extracted by data assimilation. The ratio of the two is an efficiency metric, and optimality is defined as occurring when the data assimilation algorithm is perfectly efficient at extracting information from the retrievals. We measure the information content of the remote sensing data in a way that, unlike triple collocation, does not rely on any a priori presumed relationship (e.g., linear) between the retrieval and the ground truth, but that, like triple collocation, is insensitive to the spatial mismatch between point-based measurements and grid-scale retrievals. This theory and method are therefore suitable for use with both dense and sparse validation networks. Additionally, the method we propose is *constructive* in the sense that it provides guidance on how to improve data assimilation systems. All data assimilation strategies can be reduced to approximations of Bayes' law, and we measure the fractions of total information loss that are due to individual assumptions or approximations in the

  2. TFCX shielding optimization

    SciTech Connect

    Yang, S.; Gohar, Y.

    1985-01-01

    Design analyses and tradeoff studies for the bulk shield of the Tokamak Fusion Core Experiment (TFCX) were performed. Several shielding options were considered to lower the capital cost of the shielding system. Optimization analyses were carried out to reduce the nuclear responses in the TF coils and the dose equivalent in the reactor hall one day after shutdown. Two TFCX designs with different toroidal field (TF) coil configurations were considered during this work. The materials for the shield were selected based upon tradeoff studies and the results from the previous design studies. The main shielding materials are water, concrete, and steel balls (Fe1422 or Nitronic 33). Small amounts of boron carbide and lead are employed to reduce activation, nuclear heating in the TF coils, and dose equivalent after shutdown.

  3. Optimized nanoporous materials.

    SciTech Connect

    Braun, Paul V.; Langham, Mary Elizabeth; Jacobs, Benjamin W.; Ong, Markus D.; Narayan, Roger J.; Pierson, Bonnie E.; Gittard, Shaun D.; Robinson, David B.; Ham, Sung-Kyoung; Chae, Weon-Sik; Gough, Dara V.; Wu, Chung-An Max; Ha, Cindy M.; Tran, Kim L.

    2009-09-01

    Nanoporous materials have maximum practical surface areas for electrical charge storage; every point in an electrode is within a few atoms of an interface at which charge can be stored. Metal-electrolyte interfaces make the best use of surface area in porous materials. However, ion transport through long, narrow pores is slow. We seek to understand and optimize the tradeoff between capacity and transport. Modeling and measurements of nanoporous gold electrodes have allowed us to determine design principles, including the fact that these materials can deplete salt from the electrolyte, increasing resistance. We have developed fabrication techniques to demonstrate architectures inspired by these principles that may overcome identified obstacles. A key concept is that electrodes should be as close together as possible; this is likely to involve an interpenetrating pore structure. However, this may prove extremely challenging to fabricate at the finest scales; a hierarchically porous structure can be a worthy compromise.

  4. Optimal inverse functions created via population-based optimization.

    PubMed

    Jennings, Alan L; Ordóñez, Raúl

    2014-06-01

    Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
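
    A compact illustration of the end product described above, with the population of gradient-following agents replaced by a constrained solver called once per desired output, is to minimize the cost subject to f(x) = y over a grid of outputs and spline-interpolate the result; the system, cost function, and output grid below are invented.

```python
"""Building an 'optimal inverse function' for a 2-input, 1-output system.

The abstract's population of gradient-following agents is replaced here by a
constrained solver called once per desired output; the system, cost, and
output grid are invented for illustration.
"""
import numpy as np
from scipy.optimize import minimize
from scipy.interpolate import CubicSpline

def output(x):              # toy 2-input, 1-output system
    return x[0] + 0.5 * np.sin(x[1])

def cost(x):                # effort to be minimized at a given output
    return x[0] ** 2 + 0.3 * x[1] ** 2

y_grid = np.linspace(-1.0, 1.0, 21)
optimal_inputs = []
x0 = np.zeros(2)
for y in y_grid:
    res = minimize(cost, x0, method="SLSQP",
                   constraints=[{"type": "eq", "fun": lambda x, y=y: output(x) - y}])
    optimal_inputs.append(res.x)
    x0 = res.x                               # warm-start the next output level

optimal_inputs = np.array(optimal_inputs)
# spline from desired output to locally optimal input, one spline per input channel
inverse = [CubicSpline(y_grid, optimal_inputs[:, k]) for k in range(2)]

y_desired = 0.37
x_star = np.array([s(y_desired) for s in inverse])
print("desired output:", y_desired, " optimal input:", x_star.round(3),
      " achieved output:", round(float(output(x_star)), 3))
```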

  5. Public optimism towards nanomedicine

    PubMed Central

    Bottini, Massimo; Rosato, Nicola; Gloria, Fulvia; Adanti, Sara; Corradino, Nunziella; Bergamaschi, Antonio; Magrini, Andrea

    2011-01-01

    Background Previous benefit–risk perception studies and social experiences have clearly demonstrated that any emerging technology platform that ignores benefit–risk perception by citizens might jeopardize its public acceptability and further development. The aim of this survey was to investigate the Italian judgment on nanotechnology and which demographic and heuristic variables were most influential in shaping public perceptions of the benefits and risks of nanotechnology. Methods In this regard, we investigated the role of four demographic (age, gender, education, and religion) and one heuristic (knowledge) predisposing factors. Results The present study shows that gender, education, and knowledge (but not age and religion) influenced the Italian perception of how nanotechnology will (positively or negatively) affect some areas of everyday life in the next twenty years. Furthermore, the picture that emerged from our study is that Italian citizens, despite minimal familiarity with nanotechnology, showed optimism towards nanotechnology applications, especially those related to health and medicine (nanomedicine). The high regard for nanomedicine was tied to the perception of risks associated with environmental and societal implications (division among social classes and increased public expenses) rather than health issues. However, more highly educated people showed greater concern for health issues but this did not decrease their strong belief about the benefits that nanotechnology would bring to medical fields. Conclusion The results reported here suggest that public optimism towards nanomedicine appears to justify increased scientific effort and funding for medical applications of nanotechnology. It also obligates toxicologists, politicians, journalists, entrepreneurs, and policymakers to establish a more responsible dialog with citizens regarding the nature and implications of this emerging technology platform. PMID:22267931

  6. OPTIMAL NETWORK TOPOLOGY DESIGN

    NASA Technical Reports Server (NTRS)

    Yuen, J. H.

    1994-01-01

    This program was developed as part of a research study on the topology design and performance analysis for the Space Station Information System (SSIS) network. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. It is intended that this new design technique consider all important performance measures explicitly and take into account the constraints due to various technical feasibilities. In the current program, technical constraints are taken care of by the user properly forming the starting set of candidate components (e.g. nonfeasible links are not included). As subsets are generated, they are tested to see if they form an acceptable network by checking that all requirements are satisfied. Thus the first acceptable subset encountered gives the cost-optimal topology satisfying all given constraints. The user must sort the set of "feasible" link elements in increasing order of their costs. The program prompts the user for the following information for each link: 1) cost, 2) connectivity (number of stations connected by the link), and 3) the stations connected by that link. Unless instructed to stop, the program generates all possible acceptable networks in increasing order of their total costs. The program is written only to generate topologies that are simply connected. Tests on reliability, delay, and other performance measures are discussed in the documentation, but have not been incorporated into the program. This program is written in PASCAL for interactive execution and has been implemented on an IBM PC series computer operating under PC DOS. The disk contains source code only. This program was developed in 1985.
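
    The enumeration-and-check idea is simple enough to sketch directly (in Python rather than the original PASCAL, and with an invented set of candidate links): subsets are examined in increasing order of total cost, and the first one that connects all stations is reported as the cost-optimal topology.

```python
"""Cost-ordered enumeration of candidate link subsets, reporting the first
subset that connects all stations.  Written in Python for illustration (the
original program is PASCAL); the link data are invented.
"""
from itertools import combinations

# (cost, station_a, station_b) for each feasible link
links = [(3, "A", "B"), (4, "B", "C"), (5, "A", "C"),
         (6, "C", "D"), (7, "B", "D"), (9, "A", "D")]
stations = {"A", "B", "C", "D"}

def connected(subset):
    """Check whether the chosen links connect every station (simple graph search)."""
    if not subset:
        return False
    adj = {s: set() for s in stations}
    for _, a, b in subset:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(stations))]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(adj[node] - seen)
    return seen == stations

candidates = []
for k in range(1, len(links) + 1):
    candidates.extend(combinations(links, k))
candidates.sort(key=lambda sub: sum(c for c, _, _ in sub))   # increasing total cost

for sub in candidates:                     # first acceptable subset is cost-optimal
    if connected(sub):
        print("optimal topology:", [(a, b) for _, a, b in sub],
              " total cost =", sum(c for c, _, _ in sub))
        break
```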

  7. Topology optimization under stochastic stiffness

    NASA Astrophysics Data System (ADS)

    Asadpoure, Alireza

    Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations

  8. Metacognitive control and optimal learning.

    PubMed

    Son, Lisa K; Sethi, Rajiv

    2006-07-08

    The notion of optimality is often invoked informally in the literature on metacognitive control. We provide a precise formulation of the optimization problem and show that optimal time allocation strategies depend critically on certain characteristics of the learning environment, such as the extent of time pressure, and the nature of the uptake function. When the learning curve is concave, optimality requires that items at lower levels of initial competence be allocated greater time. On the other hand, with logistic learning curves, optimal allocations vary with time availability in complex and surprising ways. Hence there are conditions under which optimal strategies will be relatively easy to uncover, and others in which suboptimal time allocation might be expected. The model can therefore be used to address the question of whether and when learners should be able to exercise good metacognitive control in practice.
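
    For the concave case described above, a simple numerical illustration is a greedy marginal-allocation rule: each small slice of study time goes to the item whose concave, saturating uptake function currently has the steepest slope, which is always the item at the lowest competence. The exponential uptake curve, parameter names, and step size below are illustrative assumptions, not the authors' model.

```python
import heapq
import math

def allocate_study_time(initial_competence, total_time, rate=1.0, step=0.01):
    """Greedy marginal allocation under a concave uptake curve
    c(t) = 1 - (1 - c0) * exp(-rate * t). Each time slice goes to the item
    with the largest marginal gain, i.e. the currently weakest item."""
    n = len(initial_competence)
    time_alloc = [0.0] * n
    competence = list(initial_competence)
    # max-heap keyed on marginal gain dc/dt = rate * (1 - c)
    heap = [(-rate * (1 - c), i) for i, c in enumerate(competence)]
    heapq.heapify(heap)
    for _ in range(int(total_time / step)):
        _, i = heapq.heappop(heap)
        time_alloc[i] += step
        competence[i] = 1 - (1 - initial_competence[i]) * math.exp(-rate * time_alloc[i])
        heapq.heappush(heap, (-rate * (1 - competence[i]), i))
    return time_alloc, competence

# Items starting at 20%, 50% and 80% competence and one hour of study:
# the weakest item ends up with the largest share of the time.
# allocate_study_time([0.2, 0.5, 0.8], total_time=1.0)
```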

  9. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large-scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single design methodology rather than on trade-offs, and the incompatibility of large-scale optimization with single-program, single-computer methods. The software problem can be approached by placing the full analysis outside the optimization loop, so that the full analysis is performed only periodically. Problem-dependent software can be separated from the generic code using a systems programming technique; it then embodies the definitions of design variables, objective function, and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problem by an organization of people and machines.

  10. Optimal management strategies in variable environments: Stochastic optimal control methods

    USGS Publications Warehouse

    Williams, B.K.

    1985-01-01

    Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both
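
    The optimization model described above is a finite-state, finite-action, infinite-horizon Markov decision process; the discounted-return case can be solved with standard value iteration, sketched below. The array shapes and the reading of rewards as simulated biomass yields are assumptions for illustration, not the authors' exact encoding.

```python
import numpy as np

def value_iteration(P, R, discount=0.95, tol=1e-8):
    """Finite-state, finite-action, infinite-horizon discounted MDP solver.

    P: array of shape (A, S, S), P[a, s, s'] = transition probability.
    R: array of shape (A, S), expected immediate reward (e.g. biomass yield)
       for taking defoliation action a in vegetation/climate state s.
    Returns the optimal value function and a greedy policy.
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        # Q[a, s] = R[a, s] + discount * sum_s' P[a, s, s'] * V[s']
        Q = R + discount * (P @ V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    policy = Q.argmax(axis=0)
    return V, policy
```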

  11. Recent developments in multilevel optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garret N.; Kim, D.-S.

    1989-01-01

    Recent developments in multilevel optimization are briefly reviewed. The general nature of the multilevel design task, the use of approximations to develop and solve the analysis design task, the structure of the formal multidiscipline optimization problem, a simple cantilevered beam which demonstrates the concepts of multilevel design and the basic mathematical details of the optimization task and the system level are among the topics discussed.

  12. Optimal Control of Electrodynamic Tethers

    DTIC Science & Technology

    2008-06-01

    … method. Even though the derivation that produced Eq. (11) required integration over a hypothetical integer number of revolutions, the optimizer … approach to multi-revolution, long-time-scale optimal control of an electrodynamic tether is investigated for a tethered satellite system in Low Earth … time scale approach is used to capture the effects of the Earth's rotating tilted magnetic field. Optimal control solutions are achieved using a …

  13. GAPS IN SUPPORT VECTOR OPTIMIZATION

    SciTech Connect

    STEINWART, INGO; HUSH, DON; SCOVEL, CLINT; LIST, NICOLAS

    2007-01-29

    We show that the stopping criteria used in many support vector machine (SVM) algorithms working on the dual can be interpreted as primal optimality bounds which in turn are known to be important for the statistical analysis of SVMs. To this end we revisit the duality theory underlying the derivation of the dual and show that in many interesting cases primal optimality bounds are the same as known dual optimality bounds.

  14. Optimality Functions in Stochastic Programming

    DTIC Science & Technology

    2009-12-02

    … nonconvex. Nonconvex stochastic optimization problems arise in such diverse applications as estimation of mixed logit models [2], engineering design … first-order necessary optimality conditions; see for example Propositions 3.3.1 and 3.3.5 in [7] or Theorem 2.2.4 in [25]. If the evaluation of f_j … procedures for validation analysis of a candidate point x ∈ R^n. Since P may be nonconvex, we focus on first-order necessary optimality conditions as …

  15. Stochastic Optimization of Complex Systems

    SciTech Connect

    Birge, John R.

    2014-03-20

    This project focused on methodologies for the solution of stochastic optimization problems based on relaxation and penalty methods, Monte Carlo simulation, parallel processing, and inverse optimization. The main results of the project were the development of a convergent method for the solution of models that include expectation constraints as in equilibrium models, improvement of Monte Carlo convergence through the use of a new method of sample batch optimization, the development of new parallel processing methods for stochastic unit commitment models, and the development of improved methods in combination with parallel processing for incorporating automatic differentiation methods into optimization.

  16. Structural Optimization in automotive design

    NASA Technical Reports Server (NTRS)

    Bennett, J. A.; Botkin, M. E.

    1984-01-01

    Although mathematical structural optimization has been an active research area for twenty years, there has been relatively little penetration into the design process. Experience indicates that often this is due to the traditional layout-analysis design process. In many cases, optimization efforts have been outgrowths of analysis groups which are themselves appendages to the traditional design process. As a result, optimization is often introduced into the design process too late to have a significant effect because many potential design variables have already been fixed. A series of examples are given to indicate how structural optimization has been effectively integrated into the design process.

  17. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers to work concurrently on the same large problem.

  18. Optimal Reconfiguration of Tetrahedral Formations

    NASA Technical Reports Server (NTRS)

    Huntington, Geoffrey; Rao, Anil V.; Hughes, Steven P.

    2004-01-01

    The problem of minimum-fuel formation reconfiguration for the Magnetospheric Multi-Scale (MMS) mission is studied. This reconfiguration trajectory optimization problem can be posed as a nonlinear optimal control problem. In this research, this optimal control problem is solved using a spectral collocation method called the Gauss pseudospectral method. The objective of this research is to provide highly accurate minimum-fuel solutions to the MMS formation reconfiguration problem and to gain insight into the underlying structure of fuel-optimal trajectories.

  19. Optimal Jet Finder

    NASA Astrophysics Data System (ADS)

    Grigoriev, D. Yu.; Jankowski, E.; Tkachov, F. V.

    2003-09-01

    We describe a FORTRAN 77 implementation of the optimal jet definition for identification of jets in hadronic final states of particle collisions. We discuss details of the implementation, explain interface subroutines and provide a usage example. The source code is available from http://www.inr.ac.ru/~ftkachov/projects/jets/. Program summary. Title of program: Optimal Jet Finder (OJF_014). Catalogue identifier: ADSB. Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSB. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: any computer with a FORTRAN 77 compiler. Tested with: g77/Linux on Intel, Alpha and Sparc; Sun f77/Solaris (thwgs.cern.ch); xlf/AIX (rsplus.cern.ch); MS Fortran PowerStation 4.0/Win98. Programming language used: FORTRAN 77. Memory required: ~1 MB (or more, depending on the settings). Number of bytes in distributed program, including examples and test data: 251 463. Distribution format: tar gzip file. Keywords: hadronic jets, jet finding algorithms. Nature of physical problem: Analysis of hadronic final states in high energy particle collision experiments often involves identification of hadronic jets. A large number of hadrons detected in the calorimeter is reduced to a few jets by means of a jet finding algorithm. The jets are used in further analysis which would be difficult or impossible when applied directly to the hadrons. Grigoriev et al. [hep-ph/0301185] provide a brief introduction to the subject of jet finding algorithms, and a general review of the physics of jets can be found in [Rep. Prog. Phys. 36 (1993) 1067]. Method of solution: The software we provide is an implementation of the so-called optimal jet definition (OJD). The theory of OJD was developed by Tkachov [Phys. Rev. Lett. 73 (1994) 2405; 74 (1995) 2618; Int. J. Mod. Phys. A 12 (1997) 5411; 17 (2002) 2783]. The desired jet configuration is obtained as the one that minimizes Ω_R, a certain function of the input particles and jet

  20. Optimal control, optimization and asymptotic analysis of Purcell's microswimmer model

    NASA Astrophysics Data System (ADS)

    Wiezel, Oren; Or, Yizhar

    2016-11-01

    Purcell's swimmer (1977) is a classic model of a three-link microswimmer that moves by performing periodic shape changes. Becker et al. (2003) showed that the swimmer's direction of net motion is reversed upon increasing the stroke amplitude of joint angles. Tam and Hosoi (2007) used numerical optimization in order to find optimal gaits for maximizing either net displacement or Lighthill's energetic efficiency. In our work, we analytically derive leading-order expressions as well as next-order corrections for both net displacement and energetic efficiency of Purcell's microswimmer. Using these expressions enables us to explicitly show the reversal in direction of motion, as well as obtaining an estimate for the optimal stroke amplitude. We also find the optimal swimmer's geometry for maximizing either displacement or energetic efficiency. Additionally, the gait optimization problem is revisited and analytically formulated as an optimal control system with only two state variables, which can be solved using Pontryagin's maximum principle. It can be shown that the optimal solution must follow a "singular arc". Numerical solution of the boundary value problem is obtained, which exactly reproduces Tam and Hosoi's optimal gait.

  1. Stability versus Optimality

    NASA Astrophysics Data System (ADS)

    Inanloo, B.

    2011-12-01

    The Caspian Sea is considered to be the largest inland body of water in the world; it is located between the Caucasus Mountains and Central Asia. The Caspian Sea has been the source of some of the most contentious international conflicts among the five littoral states that now border it: Azerbaijan, Iran, Kazakhstan, Russia, and Turkmenistan. The conflict over the legal status of this international body of water arose in the aftermath of the breakup of the Soviet Union in 1991. Since then the parties have been negotiating without reaching any agreement on the ownership of the waters or of the oil and natural gas beneath them. The number of stakeholders involved, the unusual characteristics of the Caspian Sea when classifying it as a lake or a sea, and the large number of external parties interested in the Sea's valuable resources have made this conflict complex and unique. This paper applies methods for finding allocation schemes that share the Caspian Sea and its resources fairly and efficiently, while considering the acceptability and stability of the selected solution. Although several allocation methods exist for solving such problems, most of them seek a socially optimal solution that satisfies the majority of criteria or decision makers; in practice, especially in multi-nation problems, such a solution is not necessarily stable or acceptable to all parties. Hence, a method is needed that considers the stability and acceptability of solutions, so as to find a solution with a high chance of being agreed upon. Application of distance-based methods to the Caspian Sea conflict provides policy insights useful for finding solutions that can resolve the dispute. In this study, we use methods such as Goal Programming and Compromise Programming, and, to account for the stability of the solution, the logic of the Power Index is used to find a division rule that is stable among the negotiators. The results of this study show that the
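
    As a toy illustration of the distance-based methods mentioned above (compromise programming in particular), the sketch below finds an allocation of shares among five parties that minimizes a weighted L_p distance to the parties' ideal (claimed) shares. All numbers, the weights, and the share-based formulation are hypothetical and are not taken from the study.

```python
import numpy as np
from scipy.optimize import minimize

def compromise_allocation(ideal_shares, weights, p=2):
    """Compromise-programming sketch: find nonnegative shares summing to one
    that minimize the weighted L_p distance to each party's ideal share."""
    n = len(ideal_shares)
    ideal = np.asarray(ideal_shares, dtype=float)
    w = np.asarray(weights, dtype=float)

    def distance(x):
        return np.sum(w * np.abs(x - ideal) ** p) ** (1.0 / p)

    cons = {"type": "eq", "fun": lambda x: np.sum(x) - 1.0}
    bounds = [(0.0, 1.0)] * n
    x0 = np.full(n, 1.0 / n)
    res = minimize(distance, x0, method="SLSQP", bounds=bounds, constraints=cons)
    return res.x

# Hypothetical claims totalling more than 100%, so a compromise is required:
# compromise_allocation([0.30, 0.25, 0.20, 0.15, 0.30], weights=[1, 1, 1, 1, 1])
```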

  2. RNA based evolutionary optimization

    NASA Astrophysics Data System (ADS)

    Schuster, Peter

    1993-12-01

    … Evolutionary optimization of two-letter sequences is thus more difficult than optimization in the world of natural RNA sequences with four bases. This fact might explain the usage of four bases in the genetic language of nature. Finally we study the mapping from RNA sequences into secondary structures and explore the topology of RNA shape space. We find that ‘neutral paths’ connecting neighbouring sequences with identical structures go very frequently through entire sequence space. Sequences folding into common structures are found everywhere in sequence space. Hence, evolution can migrate to almost every part of sequence space without ‘hill climbing’ and only small fractions of the entire number of sequences have to be searched in order to find suitable structures.

  3. Industrial cogeneration optimization program

    SciTech Connect

    Not Available

    1980-01-01

    The purpose of this program was to identify up to 10 good near-term opportunities for cogeneration in 5 major energy-consuming industries which produce food, textiles, paper, chemicals, and refined petroleum; select, characterize, and optimize cogeneration systems for these identified opportunities to achieve maximum energy savings for minimum investment using currently available components of cogenerating systems; and to identify technical, institutional, and regulatory obstacles hindering the use of industrial cogeneration systems. The analysis methods used and results obtained are described. Plants with fuel demands from 100,000 Btu/h to 3 x 10^6 Btu/h were considered. It was concluded that the major impediments to industrial cogeneration are financial, e.g., high capital investment and high charges by electric utilities during short-term cogeneration facility outages. In the plants considered, an average energy savings from cogeneration of 15 to 18% compared to separate generation of process steam and electric power was calculated. On a national basis for the 5 industries considered, this extrapolates to a saving of 1.3 to 1.6 quads per yr, or between 630,000 and 750,000 bbl/d of oil. Properly applied, federal activity can do much to realize a substantial fraction of this potential by lowering the barriers to cogeneration and by stimulating wider implementation of this technology. (LCL)

  4. Genetically optimizing weather predictions

    NASA Astrophysics Data System (ADS)

    Potter, S. B.; Staats, Kai; Romero-Colmenero, Encarni

    2016-07-01

    … humidity, air pressure, wind speed and wind direction) into a database. Built upon this database, we have developed a remarkably simple approach to derive a functional weather predictor. The aim is to provide up-to-the-minute local weather predictions in order to, e.g., prepare dome environment conditions ready for night-time operations or to plan, prioritize and update weather-dependent observing queues. In order to predict the weather for the next 24 hours, we take the current live weather readings and search the entire archive for similar conditions. Predictions are made against an averaged, subsequent 24 hours of the closest matches for the current readings. We use an Evolutionary Algorithm to optimize our formula through weighted parameters. The accuracy of the predictor is routinely tested and tuned against the full, updated archive to account for seasonal trends and total climate shifts. The live (updated every 5 minutes) SALT weather predictor can be viewed here: http://www.saao.ac.za/ sbp/suthweather_predict.html
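
    The prediction step described above amounts to a weighted nearest-neighbour ("analog") search over the archive; a minimal sketch follows. The array layout, the Euclidean distance, and the fixed weight vector are assumptions; in the system described above the weights are tuned by an evolutionary algorithm against the archive.

```python
import numpy as np

def analog_forecast(archive, future, current, weights, k=20):
    """Weighted nearest-neighbour forecast.

    archive: (N, F) historical readings (temperature, humidity, pressure, ...).
    future:  (N, F) readings observed 24 h after each archive row.
    current: (F,) live reading.
    weights: (F,) per-feature weights (here simply supplied).
    Returns the average of the 'future' rows of the k closest archive matches.
    """
    d = np.sqrt(((archive - current) ** 2 * weights).sum(axis=1))
    nearest = np.argsort(d)[:k]
    return future[nearest].mean(axis=0)
```

    An evolutionary algorithm would then search over `weights` so as to minimize the historical forecast error of this function over the archive.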

  5. Cyclone performance and optimization

    SciTech Connect

    Leith, D.

    1989-03-15

    The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using the revised performance theory. This work is important because its successful completion will aid the technology for combustion of coal in pressurized, fluidized beds. This quarter, we have been hampered somewhat by slow delivery of the bubble generation system and arc lighting system placed on order last fall. This equipment is necessary to map the flow field within cyclones using the techniques described in last quarter's report. Using the bubble generator, we completed this quarter a study of the "natural length" of cyclones of 18 different configurations, each configuration operated at five different gas flows. Results suggest that the equation by Alexander for natural length is incorrect; natural length as measured with the bubble generation system is always below the bottom of the cyclones regardless of the cyclone configuration or gas flow, within the limits of the experimental cyclones tested. This finding is important because natural length is a term in equations used to predict cyclone efficiency. 1 tab.

  6. Optimal Phase Oscillatory Network

    NASA Astrophysics Data System (ADS)

    Follmann, Rosangela

    2013-03-01

    Important topics such as preventive detection of epidemics, collective self-organization, information flow and systemic robustness in clusters are typical examples of processes that can be studied in the context of the theory of complex networks. This emerging theory, which has recently attracted much interest, involves the synchronization of dynamical systems associated with the nodes, or vertices, of a network. Studies have shown that synchronization in oscillatory networks depends not only on the individual dynamics of each element, but also on the combination of the topology of the connections as well as on the properties of the interactions of these elements. Moreover, the response of the network to small damages, caused at strategic points, can enhance the global performance of the whole network. In this presentation we explore an optimal phase oscillatory network altered by an additional term in the coupling function. The application to an associative-memory network shows improvement in correct information retrieval as well as an increase in storage capacity. The inclusion of small deviations on the nodes, when solutions are attracted to a false state, results in additional enhancement of the performance of the associative-memory network. Supported by FAPESP - Sao Paulo Research Foundation, grant number 2012/12555-4

  7. Optimized System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Longman, Richard W.

    1999-01-01

    In system identification, one usually cares most about finding a model whose outputs are as close as possible to the true system outputs when the same input is applied to both. However, most system identification algorithms do not minimize this output error. Often they minimize model equation error instead, as in typical least-squares fits using a finite-difference model, and it is seen here that this distinction is significant. Here, we develop a set of system identification algorithms that minimize output error for multi-input/multi-output and multi-input/single-output systems. This is done with sequential quadratic programming iterations on the nonlinear least-squares problems, with an eigendecomposition to handle indefinite second partials. This optimization minimizes a nonlinear function of many variables, and hence can converge to local minima. To handle this problem, we start the iterations from the OKID (Observer/Kalman Identification) algorithm result. Not only has OKID proved very effective in practice, it minimizes an output error of an observer which has the property that as the data set gets large, it converges to minimizing the criterion of interest here. Hence, it is a particularly good starting point for the nonlinear iterations here. Examples show that the methods developed here eliminate the bias that is often observed using any system identification methods of either over-estimating or under-estimating the damping of vibration modes in lightly damped structures.
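
    The distinction between equation error and output error can be made concrete with a scalar first-order example: equation error fits one-step predictions built from measured outputs, whereas output error simulates the model forward and compares the simulated trajectory with the data. The model structure and the use of a generic optimizer below, rather than the sequential quadratic programming iterations started from OKID described above, are simplifications for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def fit_output_error(u, y, theta0=(0.5, 0.5)):
    """Output-error fit of the toy model y[k+1] = a*y[k] + b*u[k]:
    simulate from the measured initial output and minimize the sum of squared
    differences between the simulated and measured output sequences."""
    u = np.asarray(u, dtype=float)
    y = np.asarray(y, dtype=float)

    def output_error(theta):
        a, b = theta
        y_sim = np.empty_like(y)
        y_sim[0] = y[0]
        for k in range(len(y) - 1):
            y_sim[k + 1] = a * y_sim[k] + b * u[k]
        return np.sum((y_sim - y) ** 2)

    return minimize(output_error, theta0).x

# An equation-error fit of the same model would instead solve the linear
# least-squares problem y[1:] ~ a*y[:-1] + b*u[:-1] using the measured y.
```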

  8. Boiler modeling optimizes sootblowing

    SciTech Connect

    Piboontum, S.J.; Swift, S.M.; Conrad, R.S.

    2005-10-01

    Controlling the cleanliness and limiting the fouling and slagging of heat transfer surfaces are absolutely necessary to optimize boiler performance. The traditional way to clean heat-transfer surfaces is by sootblowing using air, steam, or water at regular intervals. But with the advent of fuel-switching strategies, such as switching to PRB coal to reduce a plant's emissions, the control of heating surface cleanliness has become more problematic for many owners of steam generators. Boiler modeling can help solve that problem. The article describes Babcock & Wilcox's Powerclean modeling system which consists of heating surface models that produce real-time cleanliness indexes. The Heat Transfer Manager (HTM) program is the core of the system, which can be used on any make or model of boiler. A case study is described to show how the system was successfully used at the 1,350 MW Unit 2 of the American Electric Power's Rockport Power Plant in Indiana. The unit fires a blend of eastern bituminous and Powder River Basin coal. 5 figs.

  9. Optimization of SRF Linacs

    SciTech Connect

    Powers, Tom

    2013-09-01

    This work describes preliminary results of a new software tool that allows one to vary parameters and understand the effects on the optimized costs of construction plus 10 years of operations of an SRF linac, the associated cryogenic facility, and controls, where operations include the cost of the electrical utilities but not the labor or other costs. It derives from collaborative work done with staff from the Accelerator Science and Technology Centre, Daresbury, UK, several years ago while they were in the process of developing a conceptual design for the New Light Source project.[1] The initial goal was to convert a spreadsheet format to a graphical interface to allow the ability to sweep different parameter sets. The tool also allows one to compare the cost of the different facets of the machine design and operations so as to better understand the tradeoffs. The work was first published in an ICFA Beam Dynamics Newsletter.[2] More recent additions to the software include the ability to save and restore input parameters as well as to adjust the Qo versus E parameters in order to explore the potential cost savings associated with doing so. Additionally, program changes now allow one to model the costs associated with a linac that makes use of an energy recovery mode of operation.

  10. Optimizing haemodialysate composition

    PubMed Central

    Locatelli, Francesco; La Milia, Vincenzo; Violo, Leano; Del Vecchio, Lucia; Di Filippo, Salvatore

    2015-01-01

    Survival and quality of life of dialysis patients are strictly dependent on the quality of the haemodialysis (HD) treatment. In this respect, dialysate composition, including water purity, plays a crucial role. A major aim of HD is to normalize predialysis plasma electrolyte and mineral concentrations, while minimizing wide swings in the patient's intradialytic plasma concentrations. Adequate sodium (Na) and water removal is critical for preventing intra- and interdialytic hypotension and pulmonary edema. Avoiding both hyper- and hypokalaemia prevents life-threatening cardiac arrhythmias. Optimal calcium (Ca) and magnesium (Mg) dialysate concentrations may protect the cardiovascular system and the bones, preventing extraskeletal calcifications, severe secondary hyperparathyroidism and adynamic bone disease. Adequate bicarbonate concentration [HCO3−] maintains a stable pH in the body fluids for appropriate protein and membrane functioning and also protects the bones. An adequate dialysate glucose concentration prevents severe hyperglycaemia and life-threating hypoglycaemia, which can lead to severe cardiovascular complications and a worsening of diabetic comorbidities. PMID:26413285

  11. Optimally Squeezed Spin States

    NASA Astrophysics Data System (ADS)

    Rojo, Alberto

    2004-03-01

    We consider optimally spin-squeezed states that maximize the sensitivity of the Ramsey spectroscopy, and for which the signal to noise ratio scales as the number of particles N. Using the variational principle we prove that these states are eigensolutions of the Hamiltonian H(λ)=λ S_z^2-S_x, and that, for large N, the states become equivalent to the quadrature squeezed states of the harmonic oscillator. We present numerical results that illustrate the validity of the equivalence. We also present results of spin squeezing via atom-field interactions within the context of the Tavis-Cummings model. An ensemble of N two-level atoms interacts with a quantized cavity field. For all the atoms initially in their ground states, it is shown that spin squeezing of both the atoms and the field can be achieved provided the initial state of the cavity field has coherence between number states differing by 2. Most of the discussion is restricted to the case of a cavity field initially in a coherent state, but initial squeezed states for the field are also discussed. An analytic solution is found that is valid in the limit that the number of atoms is much greater than unity. References: A. G. Rojo, Phys. Rev A, 68, 013807 (2003); Claudiu Genes, P. R. Berman, and A. G. Rojo Phys. Rev. A 68, 043809 (2003).

  12. Sweeping Jet Optimization Studies

    NASA Technical Reports Server (NTRS)

    Melton, LaTunia Pack; Koklu, Mehti; Andino, Marlyn; Lin, John C.; Edelman, Louis

    2016-01-01

    Progress on experimental efforts to optimize sweeping jet actuators for active flow control (AFC) applications with large adverse pressure gradients is reported. Three sweeping jet actuator configurations, with the same orifice size but different internal geometries, were installed on the flap shoulder of an unswept, NACA 0015 semi-span wing to investigate how the output produced by a sweeping jet interacts with the separated flow and the mechanisms by which the flow separation is controlled. For this experiment, the flow separation was generated by deflecting the wing's 30% chord trailing edge flap to produce an adverse pressure gradient. Steady and unsteady pressure data, Particle Image Velocimetry data, and force and moment data were acquired to assess the performance of the three actuator configurations. The actuator with the largest jet deflection angle, at the pressure ratios investigated, was the most efficient at controlling flow separation on the flap of the model. Oil flow visualization studies revealed that the flow field controlled by the sweeping jets was more three-dimensional than expected. The results presented also show that the actuator spacing was appropriate for the pressure ratios examined.

  13. Query Evaluation: Strategies and Optimizations.

    ERIC Educational Resources Information Center

    Turtle, Howard; Flood, James

    1995-01-01

    Discusses two query evaluation strategies used in large text retrieval systems: (1) term-at-a-time; and (2) document-at-a-time. Describes optimization techniques that can reduce query evaluation costs. Presents simulation results that compare the performance of these optimization techniques when applied to natural language query evaluation. (JMV)
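
    The two strategies can be contrasted with a minimal in-memory sketch (the postings format, scoring by simple weight sums, and top-k selection are illustrative assumptions): term-at-a-time accumulates partial document scores one query term at a time, while document-at-a-time finishes scoring each document before moving on and keeps only a top-k heap.

```python
import heapq
from collections import defaultdict

# postings: dict mapping term -> list of (doc_id, weight) pairs

def term_at_a_time(postings, query_terms, k=10):
    """Accumulate partial scores one query term at a time."""
    scores = defaultdict(float)
    for term in query_terms:
        for doc_id, weight in postings.get(term, []):
            scores[doc_id] += weight
    return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])

def document_at_a_time(postings, query_terms, k=10):
    """Score each document completely before moving to the next,
    maintaining only a bounded top-k heap of results."""
    lists = [dict(postings.get(term, [])) for term in query_terms]
    doc_ids = sorted(set().union(*[l.keys() for l in lists])) if lists else []
    top = []                                   # min-heap of (score, doc_id)
    for doc_id in doc_ids:
        score = sum(l.get(doc_id, 0.0) for l in lists)
        heapq.heappush(top, (score, doc_id))
        if len(top) > k:
            heapq.heappop(top)
    return [(doc_id, score) for score, doc_id in sorted(top, reverse=True)]
```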

  14. Optimal Inputs for System Identification.

    DTIC Science & Technology

    1995-09-01

    The derivation of the power spectral density of the optimal input for system identification is addressed in this research. Optimality is defined in … identification potential of general System Identification algorithms, a new and efficient System Identification algorithm that employs Iterated Weighted Least …

  15. A Problem on Optimal Transportation

    ERIC Educational Resources Information Center

    Cechlarova, Katarina

    2005-01-01

    Mathematical optimization problems are not typical in the classical curriculum of mathematics. In this paper we show how several generalizations of an easy problem on optimal transportation were solved by gifted secondary school pupils in a correspondence mathematical seminar, how they can be used in university courses of linear programming and…

  16. Optimizing Medical Kits for Spaceflight

    NASA Technical Reports Server (NTRS)

    Keenan, A. B.; Foy, Millennia; Myers, G.

    2014-01-01

    The Integrated Medical Model (IMM) is a probabilistic model that estimates medical event occurrences and mission outcomes for different mission profiles. IMM simulation outcomes describing the impact of medical events on the mission may be used to optimize the allocation of resources in medical kits. Efficient allocation of medical resources, subject to certain mass and volume constraints, is crucial to ensuring the best outcomes of in-flight medical events. We implement a new approach to this medical kit optimization problem. METHODS: We frame medical kit optimization as a modified knapsack problem and implement an algorithm utilizing a dynamic programming technique. Using this algorithm, optimized medical kits were generated for 3 different mission scenarios with the goal of minimizing the probability of evacuation and maximizing the Crew Health Index (CHI) for each mission, subject to mass and volume constraints. Simulation outcomes using these kits were also compared to outcomes using kits optimized with the previously used approach. RESULTS: The optimized medical kits generated by the algorithm described here resulted in predicted mission outcomes that more closely approached the unlimited-resource scenario for Crew Health Index (CHI) than the previous implementation under all optimization priorities. Furthermore, the approach described here improves on the previous one in reducing evacuation when the optimization priority is minimizing the probability of evacuation. CONCLUSIONS: This algorithm provides an efficient, effective means to objectively allocate medical resources for spaceflight missions using the Integrated Medical Model.
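
    A minimal sketch of the "modified knapsack" formulation is given below: a 0/1 knapsack with two capacity dimensions (mass and volume) solved by dynamic programming. The item tuples, the integer capacity units, and the scalar "benefit" standing in for IMM-derived utility are assumptions for illustration, not the IMM's actual objective.

```python
def optimize_medical_kit(items, mass_cap, vol_cap):
    """items: list of (name, mass_units, volume_units, benefit) with integer
    mass/volume units. Returns (best total benefit, names of chosen items)."""
    # dp[m][v] = (best benefit, chosen item names) within mass m and volume v
    dp = [[(0.0, ())] * (vol_cap + 1) for _ in range(mass_cap + 1)]
    for name, mass, vol, benefit in items:
        # sweep capacities downwards so each item is packed at most once
        for m in range(mass_cap, mass - 1, -1):
            for v in range(vol_cap, vol - 1, -1):
                prev_benefit, prev_names = dp[m - mass][v - vol]
                if prev_benefit + benefit > dp[m][v][0]:
                    dp[m][v] = (prev_benefit + benefit, prev_names + (name,))
    return dp[mass_cap][vol_cap]

# Hypothetical items: (name, mass units, volume units, benefit)
# optimize_medical_kit(
#     [("analgesic", 2, 1, 0.8), ("splint", 5, 6, 0.5), ("IV fluids", 8, 7, 1.2)],
#     mass_cap=10, vol_cap=8)
```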

  17. Trajectory optimization using regularized variables

    NASA Technical Reports Server (NTRS)

    Lewallen, J. M.; Szebehely, V.; Tapley, B. D.

    1969-01-01

    Regularized equations for a particular optimal trajectory are compared with unregularized equations with respect to computational characteristics, using perturbation type numerical optimization. In the case of the three dimensional, low thrust, Earth-Jupiter rendezvous, the regularized equations yield a significant reduction in computer time.

  18. Supply-Chain Optimization Template

    NASA Technical Reports Server (NTRS)

    Quiett, William F.; Sealing, Scott L.

    2009-01-01

    The Supply-Chain Optimization Template (SCOT) is an instructional guide for identifying, evaluating, and optimizing (including re-engineering) aerospace-oriented supply chains. The SCOT was derived from the Supply Chain Council's Supply-Chain Operations Reference (SCC SCOR) Model, which is more generic and more oriented toward achieving a competitive advantage in business.

  19. C-21 Fleet: Base Optimization

    DTIC Science & Technology

    … assigned to the operational support airlift mission, located at Andrews Air Force Base, Maryland, and Scott Air Force Base, Illinois. The missions flown … Scott and Andrews AFB is the optimal assignment. If nine total assets were optimized, five would be assigned to Scott AFB and four to Andrews AFB.

  20. Optimized dynamic rotation with wedges.

    PubMed

    Rosen, I I; Morrill, S M; Lane, R G

    1992-01-01

    Dynamic rotation is a computer-controlled therapy technique utilizing an automated multileaf collimator in which the radiation beam shape changes dynamically as the treatment machine rotates about the patient so that at each instant the beam shape matches the projected shape of the target volume. In simple dynamic rotation, the dose rate remains constant during rotation. For optimized dynamic rotation, the dose rate is varied as a function of gantry angle. Optimum dose rate at each gantry angle is computed by linear programming. Wedges can be included in the optimized dynamic rotation therapy by using additional rotations. Simple and optimized dynamic rotation treatment plans, with and without wedges, for a pancreatic tumor have been compared using optimization cost function values, normal tissue complication probabilities, and positive difference statistic values. For planning purposes, a continuous rotation is approximated by static beams at a number of gantry angles equally spaced about the patient. In theory, the quality of optimized treatment planning solutions should improve as the number of static beams increases. The addition of wedges should further improve dose distributions. For the case studied, no significant improvements were seen for more than 36 beam angles. Open and wedged optimized dynamic rotations were better than simple dynamic rotation, but wedged optimized dynamic rotation showed no definitive improvement over open beam optimized dynamic rotation.
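
    A toy version of the per-angle dose-rate optimization can be written as a linear program: choose a nonnegative weight (dose rate times time) for each static gantry angle so that every target point reaches its prescribed dose while the total dose to normal-tissue points is minimized. The dose-matrix formulation and the single-objective cost below are illustrative assumptions, not the authors' cost function.

```python
import numpy as np
from scipy.optimize import linprog

def optimize_gantry_weights(D_target, D_normal, prescription):
    """D_target: (n_target_points, n_angles) dose per unit weight to target points.
    D_normal:  (n_normal_points, n_angles) dose per unit weight to normal tissue.
    prescription: (n_target_points,) minimum dose required at each target point.
    Returns the nonnegative per-angle weights minimizing total normal-tissue dose."""
    n_angles = D_target.shape[1]
    c = D_normal.sum(axis=0)            # total normal-tissue dose per unit weight
    A_ub = -np.asarray(D_target, float)  # -D w <= -prescription  <=>  D w >= prescription
    b_ub = -np.asarray(prescription, float)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n_angles)
    return res.x
```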

  1. Optimized layout generator for microgyroscope

    NASA Astrophysics Data System (ADS)

    Tay, Francis E.; Li, Shifeng; Logeeswaran, V. J.; Ng, David C.

    2000-10-01

    This paper presents an optimized out-of-plane microgyroscope layout generator using AutoCAD R14 and MS Excel as a first attempt at automating the design of resonant micro-inertial sensors. The out-of-plane microgyroscope with a two-degree-of-freedom lumped parameter model was chosen as the synthesis topology. An analytical model for open-loop operation has been derived for the gyroscope performance characteristics. Functional performance parameters such as sensitivity are ensured to be satisfied while simultaneously optimizing a design objective such as minimum area. A single algorithm optimizes the microgyroscope dimensions while simultaneously maximizing or minimizing the objective functions: maximum sensitivity and minimum area. The multi-criteria objective function and optimization methodology were implemented using the Generalized Reduced Gradient algorithm. For data conversion a DXF to GDS converter was used. The optimized theoretical design performance parameters show good agreement with finite element analysis.

  2. Optimal dynamic detection of explosives

    SciTech Connect

    Moore, David Steven; Mcgrane, Shawn D; Greenfield, Margo T; Scharff, R J; Rabitz, Herschel A; Roslund, J

    2009-01-01

    The detection of explosives is a notoriously difficult problem, especially at stand-off distances, due to their (generally) low vapor pressure, environmental and matrix interferences, and packaging. We are exploring optimal dynamic detection to exploit the best capabilities of recent advances in laser technology and recent discoveries in optimal shaping of laser pulses for control of molecular processes to significantly enhance the standoff detection of explosives. The core of the ODD-Ex technique is the introduction of optimally shaped laser pulses to simultaneously enhance sensitivity of explosives signatures while reducing the influence of noise and the signals from background interferents in the field (increase selectivity). These goals are being addressed by operating in an optimal nonlinear fashion, typically with a single shaped laser pulse inherently containing within it coherently locked control and probe sub-pulses. With sufficient bandwidth, the technique is capable of intrinsically providing orthogonal broad spectral information for data fusion, all from a single optimal pulse.

  3. Optimal Distinctiveness Signals Membership Trust.

    PubMed

    Leonardelli, Geoffrey J; Loyd, Denise Lewin

    2016-07-01

    According to optimal distinctiveness theory, sufficiently small minority groups are associated with greater membership trust, even among members otherwise unknown, because the groups are seen as optimally distinctive. This article elaborates on the prediction's motivational and cognitive processes and tests whether sufficiently small minorities (defined by relative size; for example, 20%) are associated with greater membership trust relative to mere minorities (45%), and whether such trust is a function of optimal distinctiveness. Two experiments, examining observers' perceptions of minority and majority groups and using minimal groups and (in Experiment 2) a trust game, revealed greater membership trust in minorities than majorities. In Experiment 2, participants also preferred joining minorities over more powerful majorities. Both effects occurred only when minorities were 20% rather than 45%. In both studies, perceptions of optimal distinctiveness mediated effects. Discussion focuses on the value of relative size and optimal distinctiveness, and when membership trust manifests.

  4. Optimized design for PIGMI

    SciTech Connect

    Hansborough, L.; Hamm, R.; Stovall, J.; Swenson, D.

    1980-01-01

    PIGMI (Pion Generator for Medical Irradiations) is a compact linear proton accelerator design, optimized for pion production and cancer treatment use in a hospital environment. Technology developed during a four-year PIGMI Prototype experimental program allows the design of smaller, less expensive, and more reliable proton linacs. A new type of low-energy accelerating structure, the radio-frequency quadrupole (RFQ), has been tested; it produces an exceptionally good-quality beam and allows the use of a simple 30-kV injector. Average axial electric-field gradients of over 9 MV/m have been demonstrated in a drift-tube linac (DTL) structure. Experimental work is underway to test the disk-and-washer (DAW) structure, another new type of accelerating structure for use in the high-energy coupled-cavity linac (CCL). Sufficient experimental and developmental progress has been made to closely define an actual PIGMI. It will consist of a 30-kV injector, an RFQ linac to a proton energy of 2.5 MeV, a DTL linac to 125 MeV, and a CCL linac to the final energy of 650 MeV. The total length of the accelerator is 133 meters. The RFQ and DTL will be driven by a single 440-MHz klystron; the CCL will be driven by six 1320-MHz klystrons. The peak beam current is 28 mA. The beam pulse length is 60 μs at a 60-Hz repetition rate, resulting in a 100-μA average beam current. The total cost of the accelerator is estimated to be approximately $10 million.

  5. Optimization of the Structures at Shakedown and Rosen's Optimality Criterion

    NASA Astrophysics Data System (ADS)

    Alawdin, Piotr; Atkociunas, Juozas; Liepa, Liudas

    2016-09-01

    This paper focuses on problems in the application of extreme energy principles and nonlinear mathematical programming to the theory of structural shakedown. By means of energy principles, which describe the true stress-strain state conditions of the structure, dual mathematical models of the analysis problems are formed (static and kinematic formulations). It is shown how a common mathematical model for the optimization of structures at shakedown, with safety and serviceability constraints (according to the ultimate limit state (ULS) and serviceability limit state (SLS) requirements), is formed on the basis of these mathematical models. The possibilities of solving the optimization problem in the context of a physical interpretation of the optimality criterion of Rosen's algorithm are analyzed.

  6. Optimal multiobjective design of digital filters using spiral optimization technique.

    PubMed

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2013-01-01

    The multiobjective design of digital filters using spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the spiral optimization technique produced filters which fulfill the desired characteristics and are of practical use.
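
    A minimal two-dimensional sketch of the spiral dynamics described above is shown below: every search point is rotated by a fixed angle and contracted toward the current best point, so the population spirals in on promising regions. The contraction rate, rotation angle, population size, and the generic objective interface are illustrative assumptions; a filter-design cost function would take the place of `f`.

```python
import numpy as np

def spiral_optimize(f, bounds, n_points=30, r=0.95, theta=np.pi / 4,
                    n_iter=200, seed=0):
    """Spiral optimization in 2-D: rotate each point by `theta` and contract it
    by factor `r` about the current best point, keeping the best seen so far."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    X = rng.uniform(lo, hi, size=(n_points, 2))
    R = r * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
    best = min(X, key=f)
    for _ in range(n_iter):
        X = (X - best) @ R.T + best          # spiral each point toward the best
        X = np.clip(X, lo, hi)
        candidate = min(X, key=f)
        if f(candidate) < f(best):
            best = candidate.copy()
    return best, f(best)

# Example with a simple quadratic stand-in for a filter-design objective:
# spiral_optimize(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, ([-5, -5], [5, 5]))
```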

  7. Optimal Multiobjective Design of Digital Filters Using Taguchi Optimization Technique

    NASA Astrophysics Data System (ADS)

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2014-01-01

    The multiobjective design of digital filters using the powerful Taguchi optimization technique is considered in this paper. This relatively new optimization tool has been recently introduced to the field of engineering and is based on orthogonal arrays. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the Taguchi optimization technique produced filters that fulfill the desired characteristics and are of practical use.

  8. Biocapacity optimization in regional planning

    NASA Astrophysics Data System (ADS)

    Guo, Jianjun; Yue, Dongxia; Li, Kai; Hui, Cang

    2017-01-01

    Ecological overshoot has been accelerating across the globe. Optimizing biocapacity has become a key to resolve the overshoot of ecological demand in regional sustainable development. However, most literature has focused on reducing ecological footprint but ignores the potential of spatial optimization of biocapacity through regional planning of land use. Here we develop a spatial probability model and present four scenarios for optimizing biocapacity of a river basin in Northwest China. The potential of enhanced biocapacity and its effects on ecological overshoot and water consumption in the region were explored. Two scenarios with no restrictions on croplands and water use reduced the overshoot by 29 to 53%, and another two scenarios which do not allow croplands and water use to increase worsened the overshoot by 11 to 15%. More spatially flexible transition rules of land use led to higher magnitude of change after optimization. However, biocapacity optimization required a large amount of additional water resources, casting considerable pressure on the already water-scarce socio-ecological system. Our results highlight the potential for policy makers to manage/optimize regional land use which addresses ecological overshoot. Investigation on the feasibility of such spatial optimization complies with the forward-looking policies for sustainable development and deserves further attention.

  9. Biocapacity optimization in regional planning

    PubMed Central

    Guo, Jianjun; Yue, Dongxia; Li, Kai; Hui, Cang

    2017-01-01

    Ecological overshoot has been accelerating across the globe. Optimizing biocapacity has become a key to resolve the overshoot of ecological demand in regional sustainable development. However, most literature has focused on reducing ecological footprint but ignores the potential of spatial optimization of biocapacity through regional planning of land use. Here we develop a spatial probability model and present four scenarios for optimizing biocapacity of a river basin in Northwest China. The potential of enhanced biocapacity and its effects on ecological overshoot and water consumption in the region were explored. Two scenarios with no restrictions on croplands and water use reduced the overshoot by 29 to 53%, and another two scenarios which do not allow croplands and water use to increase worsened the overshoot by 11 to 15%. More spatially flexible transition rules of land use led to higher magnitude of change after optimization. However, biocapacity optimization required a large amount of additional water resources, casting considerable pressure on the already water-scarce socio-ecological system. Our results highlight the potential for policy makers to manage/optimize regional land use which addresses ecological overshoot. Investigation on the feasibility of such spatial optimization complies with the forward-looking policies for sustainable development and deserves further attention. PMID:28112224

  10. Optimizing WFIRST Coronagraph Science

    NASA Astrophysics Data System (ADS)

    Macintosh, Bruce

    We propose an in-depth scientific investigation that will define how the WFIRST coronagraphic instrument will discover and characterize nearby planetary systems and how it will use observations of planets and disks to probe the diversity of their compositions, dynamics, and formation. Given the enormous diversity of known planetary systems, it is not enough to optimize a coronagraph mission plan for the characterization of solar system analogs. Instead, we must design a mission to characterize a wide variety of planets, from gas and ice giant planets at a range of separations to mid-sized planets with no analogs in our solar system. We must consider updated planet distributions based on the results of the Kepler mission, long-term radial velocity (RV) surveys and updated luminosity distributions of exo-zodiacal dust from interferometric thermal infrared surveys of nearby stars. The properties of all these objects must be informed by our best models of planets and disks, and the process of using WFIRST observations to measure fundamental planetary properties such as composition must derive from rigorous methods. Our team brings a great depth of expertise to inform and accomplish these and all of the other tasks enumerated in the SIT proposal call. We will perform end-to-end modeling that starts with model spectra of planets and images of disks, simulates WFIRST data using these models, accounts for geometries of specific star/planet/disk systems, and incorporates detailed instrument performance models. We will develop and implement data analysis techniques to extract well-calibrated astrophysical signals from complex data, and propose observing plans that maximize the mission's scientific yield. We will work with the community to build observing programs and target lists, inform them of WFIRST's capabilities, and supply simulated scientific observations for data challenges. Our work will be informed by the experience we have gained from building and observing with

  11. Optimal Flow Control Design

    NASA Technical Reports Server (NTRS)

    Allan, Brian; Owens, Lewis

    2010-01-01

    In support of the Blended-Wing-Body aircraft concept, a new flow control hybrid vane/jet design has been developed for use in a boundary-layer-ingesting (BLI) offset inlet in transonic flows. This inlet flow control is designed to minimize the engine fan-face distortion levels and the first five Fourier harmonic half amplitudes while maximizing the inlet pressure recovery. This concept represents a potentially enabling technology for quieter and more environmentally friendly transport aircraft. An optimum vane design was found by minimizing the engine fan-face distortion, DC60, and the first five Fourier harmonic half amplitudes, while maximizing the total pressure recovery. The optimal vane design was then used in a BLI inlet wind tunnel experiment at NASA Langley's 0.3-meter transonic cryogenic tunnel. The experimental results demonstrated an 80-percent decrease in DPCPavg, a measure of circumferential distortion, at an inlet mass flow rate corresponding to the middle of the operational range at the cruise condition. Even though the vanes were designed at a single inlet mass flow rate, they performed very well over the entire inlet mass flow range tested in the wind tunnel experiment with the addition of a small amount of jet flow control. While the circumferential distortion was decreased, the radial distortion on the outer rings at the aerodynamic interface plane (AIP) increased. This was a result of the large boundary layer being distributed from the bottom of the AIP in the baseline case to the outer edges of the AIP when using the vortex generator (VG) vane flow control. Experimental results, as already mentioned, showed an 80-percent reduction of DPCPavg, the circumferential distortion level at the engine fan-face. The hybrid approach leverages strengths of vane and jet flow control devices, increasing inlet performance over a broader operational range with significant reduction in mass flow requirements. Minimal distortion level requirements

  12. Optimal Protocols and Optimal Transport in Stochastic Thermodynamics

    NASA Astrophysics Data System (ADS)

    Aurell, Erik; Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo

    2011-06-01

    Thermodynamics of small systems has become an important field of statistical physics. Such systems are driven out of equilibrium by a control, and the question is naturally posed how such a control can be optimized. We show that optimization problems in small system thermodynamics are solved by (deterministic) optimal transport, for which very efficient numerical methods have been developed, and of which there are applications in cosmology, fluid mechanics, logistics, and many other fields. We show, in particular, that minimizing expected heat released or work done during a nonequilibrium transition in finite time is solved by the Burgers equation and mass transport by the Burgers velocity field. Our contribution hence considerably extends the range of solvable optimization problems in small system thermodynamics.
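
    For reference, the equations invoked above take their standard forms (written here from general knowledge, not copied from the paper): the zero-viscosity Burgers equation for the optimal velocity field, and the continuity (mass-transport) equation advecting the probability density along that field.

```latex
\partial_t v + (v \cdot \nabla)\, v = 0, \qquad
\partial_t \rho + \nabla \cdot (\rho\, v) = 0 .
```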

  13. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    Nonlinear programming is an important branch of operational research and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA), a swarm intelligence technique that simulates human behavior guided by emotion, is used to solve such problems. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.

  14. Optimal energy growth and optimal control in swept Hiemenz flow

    NASA Astrophysics Data System (ADS)

    Guégan, Alan; Schmid, Peter J.; Huerre, Patrick

    2006-11-01

    The objective of the study is first to examine the optimal transient growth of Görtler-Hämmerlin perturbations in swept Hiemenz flow. This configuration constitutes a model of the flow in the attachment-line boundary layer at the leading edge of swept wings. The optimal blowing and suction at the wall which minimizes the energy of the optimal perturbations is then determined. An adjoint-based optimization procedure applicable to both problems is devised, which relies on the maximization or minimization of a suitable objective functional. The variational analysis is carried out in the framework of the set of linear partial differential equations governing the chordwise and wall-normal velocity fluctuations. Energy amplifications of up to three orders of magnitude are achieved at low spanwise wavenumbers (k ≈ 0.1) and large sweep Reynolds number (Re ≈ 2000). Optimal perturbations consist of spanwise travelling chordwise vortices, with a vorticity distribution which is inclined against the sweep. Transient growth arises from the tilting of the vorticity distribution by the spanwise shear via a two-dimensional Orr mechanism acting in the basic flow dividing plane. Two distinct regimes have been identified: for k ≲ 0.25, vortex dipoles are formed which induce large spanwise perturbation velocities; for k ≳ 0.25, dipoles are not observed and only the Orr mechanism remains active. The optimal wall blowing control yields for instance an 80% decrease of the maximum perturbation kinetic energy reached by optimal disturbances at Re = 550 and k = 0.25. The optimal wall blowing pattern consists of spanwise travelling waves which follow the naturally occurring vortices and qualitatively act in the same manner as a more simple constant gain feedback control strategy.

  15. Method of constrained global optimization

    NASA Astrophysics Data System (ADS)

    Altschuler, Eric Lewin; Williams, Timothy J.; Ratner, Edward R.; Dowla, Farid; Wooten, Frederick

    1994-04-01

    We present a new method for optimization: constrained global optimization (CGO). CGO iteratively uses a Glauber spin flip probability and the Metropolis algorithm. The spin flip probability allows changing only the values of variables contributing excessively to the function to be minimized. We illustrate CGO with two problems: Thomson's problem of finding the minimum-energy configuration of unit charges on a spherical surface, and a problem of assigning offices. For both problems, CGO finds better minima than other methods. We think CGO will apply to a wide class of optimization problems.
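
    A loose sketch of the idea for Thomson's problem is given below: the charge contributing most to the energy is preferentially selected for a trial move (a stand-in for the Glauber spin-flip selection described above), and the move is accepted with the Metropolis rule. Parameter values, the move proposal, and the selection probabilities are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def thomson_energy(X):
    """Coulomb energy of unit charges at positions X (n x 3) on the unit sphere."""
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(X), k=1)
    return (1.0 / d[iu]).sum()

def per_charge_energy(X):
    """Energy contribution of each individual charge."""
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    return (1.0 / d).sum(axis=1)

def cgo_thomson(n, n_steps=20000, beta=50.0, step=0.05, seed=0):
    """Metropolis search that biases trial moves toward the charge with the
    largest energy contribution (the 'excessively contributing' variable)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, 3))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    E = thomson_energy(X)
    for _ in range(n_steps):
        p = per_charge_energy(X)
        p = p / p.sum()
        i = rng.choice(n, p=p)                # pick a high-energy charge
        trial = X.copy()
        trial[i] += step * rng.normal(size=3)
        trial[i] /= np.linalg.norm(trial[i])  # project back onto the sphere
        E_trial = thomson_energy(trial)
        if E_trial < E or rng.random() < np.exp(-beta * (E_trial - E)):
            X, E = trial, E_trial
    return X, E
```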

  16. Adaptive approximation models in optimization

    SciTech Connect

    Voronin, A.N.

    1995-05-01

    The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.
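
    A schematic of the contracting-domain idea in one dimension (illustrative only; the paper's approximation-model refinement rules are not reproduced): fit a local quadratic model of the objective over the current domain, recentre on the model minimiser, and shrink the domain.

        import numpy as np

        def contracting_quadratic_search(f, center, radius, shrink=0.6, iters=12, seed=0):
            """Refine a local quadratic model of f over a shrinking interval around the
            current estimate of the minimiser (one-dimensional for clarity)."""
            rng = np.random.default_rng(seed)
            for _ in range(iters):
                x = center + radius * (2.0 * rng.random(7) - 1.0)   # sample current domain
                a, b, _ = np.polyfit(x, [f(xi) for xi in x], 2)     # local quadratic model
                if a > 0:                                           # model has a minimum
                    center = np.clip(-b / (2.0 * a), center - radius, center + radius)
                radius *= shrink                                    # contract the domain
            return center

        f = lambda x: np.sin(x) + 0.1 * x**2
        print("approximate minimiser:", round(contracting_quadratic_search(f, 0.0, 4.0), 3))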

  17. Metabolism at Evolutionary Optimal States

    PubMed Central

    Rabbers, Iraes; van Heerden, Johan H.; Nordholt, Niclas; Bachmann, Herwig; Teusink, Bas; Bruggeman, Frank J.

    2015-01-01

    Metabolism is generally required for cellular maintenance and for the generation of offspring under conditions that support growth. The rates, yields (efficiencies), adaptation time and robustness of metabolism are therefore key determinants of cellular fitness. For biotechnological applications and our understanding of the evolution of metabolism, it is necessary to figure out how the functional system properties of metabolism can be optimized, via adjustments of the kinetics and expression of enzymes, and by rewiring metabolism. The trade-offs that can occur during such optimizations then indicate fundamental limits to evolutionary innovations and bioengineering. In this paper, we review several theoretical and experimental findings about mechanisms for metabolic optimization. PMID:26042723

  18. An Efficient Chemical Reaction Optimization Algorithm for Multiobjective Optimization.

    PubMed

    Bechikh, Slim; Chaabani, Abir; Ben Said, Lamjed

    2015-10-01

    Recently, a new metaheuristic called chemical reaction optimization was proposed. This search algorithm, inspired by the chemical reactions launched during collisions, inherits several features from other metaheuristics such as simulated annealing and particle swarm optimization. This has made it one of the most powerful search algorithms for solving mono-objective optimization problems. In this paper, we propose a multiobjective variant of chemical reaction optimization, called nondominated sorting chemical reaction optimization, in an attempt to exploit chemical reaction optimization features in tackling problems involving multiple conflicting criteria. Since our approach is based on nondominated sorting, one of the main contributions of this paper is the proposal of a new quick nondominated sorting algorithm with quasi-linear average time complexity, thereby making our multiobjective algorithm efficient from a computational cost viewpoint. The experimental comparisons against several other multiobjective algorithms on a variety of benchmark problems involving various difficulties show the effectiveness and the efficiency of this multiobjective version in providing a well-converged and well-diversified approximation of the Pareto front.
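
    Since the key building block here is nondominated sorting, a minimal sketch of extracting Pareto fronts from a set of objective vectors may help. This is the standard fast nondominated sort, not the paper's faster quick variant.

        import numpy as np

        def nondominated_sort(F):
            """Group the rows of the objective matrix F (minimization) into Pareto fronts.
            Standard O(M N^2) fast nondominated sorting; the paper's quick variant
            improves the average-case complexity."""
            n = len(F)
            dom = lambda a, b: np.all(F[a] <= F[b]) and np.any(F[a] < F[b])
            dominated_by_me = [[] for _ in range(n)]
            counts = np.zeros(n, dtype=int)                 # how many solutions dominate i
            for i in range(n):
                for j in range(n):
                    if i != j and dom(i, j):
                        dominated_by_me[i].append(j)
                    elif i != j and dom(j, i):
                        counts[i] += 1
            fronts, current = [], [i for i in range(n) if counts[i] == 0]
            while current:
                fronts.append(current)
                nxt = []
                for i in current:
                    for j in dominated_by_me[i]:
                        counts[j] -= 1
                        if counts[j] == 0:
                            nxt.append(j)
                current = nxt
            return fronts

        F = np.array([[1, 5], [2, 3], [3, 4], [4, 1], [5, 5]])
        print(nondominated_sort(F))                          # [[0, 1, 3], [2], [4]]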

  19. Optimality and sub-optimality in a bacterial growth law

    PubMed Central

    Towbin, Benjamin D.; Korem, Yael; Bren, Anat; Doron, Shany; Sorek, Rotem; Alon, Uri

    2017-01-01

    Organisms adjust their gene expression to improve fitness in diverse environments. But finding the optimal expression in each environment presents a challenge. We ask how good cells are at finding such optima by studying the control of carbon catabolism genes in Escherichia coli. Bacteria show a growth law: growth rate on different carbon sources declines linearly with the steady-state expression of carbon catabolic genes. We experimentally modulate gene expression to ask if this growth law always maximizes growth rate, as has been suggested by theory. We find that the growth law is optimal in many conditions, including a range of perturbations to lactose uptake, but provides sub-optimal growth on several other carbon sources. Combining theory and experiment, we genetically re-engineer E. coli to make sub-optimal conditions into optimal ones and vice versa. We conclude that the carbon growth law is not always optimal, but represents a practical heuristic that often works but sometimes fails. PMID:28102224

  20. Energy Criteria for Resource Optimization

    ERIC Educational Resources Information Center

    Griffith, J. W.

    1973-01-01

    Resource optimization in building design is based on the total system over its expected useful life. Alternative environmental systems can be evaluated in terms of resource costs and goal effectiveness. (Author/MF)

  1. Putting combustion optimization to work

    SciTech Connect

    Spring, N.

    2009-05-15

    New plants and plants that are retrofitting can benefit from combustion optimization. Boiler tuning and optimization can complement each other. The continuous emissions monitoring system (CEMS) and tunable diode laser absorption spectroscopy (TDLAS) can be used for optimization. NeuCO's CombustionOpt neural network software can determine optimal fuel and air set points. Babcock and Wilcox Power Generation Group Inc's Flame Doctor can be used in conjunction with other systems to diagnose and correct coal-fired burner performance. The four units of the Colstrip power plant in Colstrip, Montana were recently fitted with combustion optimization systems based on advanced model predictive multivariable controls (MPCs), ABB's Predict & Control tool. Unit 4 of Tampa Electric's Big Bend plant in Florida is fitted with Emerson's SmartProcess fuzzy neural model based combustion optimization system.

  2. Nonlinear optimization for stochastic simulations.

    SciTech Connect

    Johnson, Michael M.; Yoshimura, Ann S.; Hough, Patricia Diane; Ammerlahn, Heidi R.

    2003-12-01

    This report describes research targeting development of stochastic optimization algorithms and their application to mission-critical optimization problems in which uncertainty arises. The first section of this report covers the enhancement of the Trust Region Parallel Direct Search (TRPDS) algorithm to address stochastic responses and the incorporation of the algorithm into the OPT++ optimization library. The second section describes the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of systems analysis tools and motivates the use of stochastic optimization techniques in such non-deterministic simulations. The third section details a batch programming interface designed to facilitate criteria-based or algorithm-driven execution of system-of-system simulations. The fourth section outlines the use of the enhanced OPT++ library and batch execution mechanism to perform systems analysis and technology trade-off studies in the WMD detection and response problem domain.

  3. Habitat Design Optimization and Analysis

    NASA Technical Reports Server (NTRS)

    SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.

    2006-01-01

    Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.

  4. Dual approximations in optimal control

    NASA Technical Reports Server (NTRS)

    Hager, W. W.; Ianculescu, G. D.

    1984-01-01

    A dual approximation for the solution to an optimal control problem is analyzed. The differential equation is handled with a Lagrange multiplier while other constraints are treated explicitly. An algorithm for solving the dual problem is presented.

  5. Optimal solar sail planetocentric trajectories

    NASA Technical Reports Server (NTRS)

    Sackett, L. L.

    1977-01-01

    The analysis of the solar sail planetocentric optimal trajectory problem is described. A computer program was produced to calculate optimal trajectories for a limited performance analysis. A square sail model is included and some consideration is given to a heliogyro sail model. Orbit to a subescape point and orbit-to-orbit transfer are considered. Trajectories about the four inner planets can be calculated, and shadowing, oblateness, and solar motion may be included. Equinoctial orbital elements are used to avoid the classical singularities, and the method of averaging is applied to increase computational speed. Solution of the two-point boundary value problem which arises from the application of optimization theory is accomplished with a Newton procedure. Time-optimal trajectories are emphasized, but a penalty function has been considered to prevent trajectories which intersect a planet's surface.

  6. Data Understanding Applied to Optimization

    NASA Technical Reports Server (NTRS)

    Buntine, Wray; Shilman, Michael

    1998-01-01

    The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that such problems can only be resolved by increasingly smarter problem-specific knowledge, possibly for use in general-purpose algorithms. Visualization and data analysis offer an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.

  7. CENTRAL PLATEAU REMEDIATION OPTIMIZATION STUDY

    SciTech Connect

    BERGMAN, T. B.; STEFANSKI, L. D.; SEELEY, P. N.; ZINSLI, L. C.; CUSACK, L. J.

    2012-09-19

    The Central Plateau remediation optimization study was conducted to develop an optimal sequence of remediation activities implementing the CERCLA decision on the Central Plateau. The study defines a sequence of activities that result in an effective use of resources from a strategic perspective when considering equipment procurement and staging, workforce mobilization/demobilization, workforce leveling, workforce skill-mix, and other remediation/disposition project execution parameters.

  8. MISO - Mixed Integer Surrogate Optimization

    SciTech Connect

    Mueller, Juliane

    2016-01-20

    MISO is an optimization framework for solving computationally expensive mixed-integer, black-box, global optimization problems. MISO uses surrogate models to approximate the computationally expensive objective function. Hence, derivative information, which is generally unavailable for black-box simulation objective functions, is not needed. MISO allows the user to choose the initial experimental design strategy, the type of surrogate model, and the sampling strategy.

  9. Design optimization of space structures

    NASA Astrophysics Data System (ADS)

    Felippa, Carlos

    1991-11-01

    The topology-shape-size optimization of space structures is investigated through Kikuchi's homogenization method. The method starts from a 'design domain block,' which is a region of space into which the structure is to materialize. This domain is initially filled with a finite element mesh, typically regular. Force and displacement boundary conditions corresponding to applied loads and supports are applied at specific points in the domain. An optimal structure is to be 'carved out' of the design under two conditions: (1) a cost function is to be minimized, and (2) equality or inequality constraints are to be satisfied. The 'carving' process is accomplished by letting microstructure holes develop and grow in elements during the optimization process. These holes have a rectangular shape in two dimensions and a cubical shape in three dimensions, and may also rotate with respect to the reference axes. The properties of the perforated element are obtained through an homogenization procedure. Once a hole reaches the volume of the element, that element effectively disappears. The project has two phases. In the first phase the method was implemented as the combination of two computer programs: a finite element module, and an optimization driver. In the second part, focus is on the application of this technique to planetary structures. The finite element part of the method was programmed for the two-dimensional case using four-node quadrilateral elements to cover the design domain. An element homogenization technique different from that of Kikuchi and coworkers was implemented. The optimization driver is based on an augmented Lagrangian optimizer, with the volume constraint treated as a Courant penalty function. The optimizer has to be especially tuned to this type of optimization because the number of design variables can reach into the thousands. The driver is presently under development.

  10. Optimality Functions and Lopsided Convergence

    DTIC Science & Technology

    2015-03-16

    Problems involving functions defined in terms of integrals or optimization problems (as the maximization in Example 3), functions defined on infinite... optimization methods in finite time. The key technical challenge associated with the above scheme is to establish (weak) consistency. In the next... Theorem 4.3. In view of this result, it is clear that (weak) consistency will be ensured by epi-convergence of the approximating objective functions and

  11. Optimal encryption of quantum bits

    SciTech Connect

    Boykin, P. Oscar; Roychowdhury, Vwani

    2003-04-01

    We show that 2n random classical bits are both necessary and sufficient for encrypting any unknown state of n quantum bits in an informationally secure manner. We also characterize the complete set of optimal protocols in terms of a set of unitary operations that comprise an orthonormal basis in a canonical inner product space. Moreover, a connection is made between quantum encryption and quantum teleportation that allows for a different proof of optimality of teleportation.
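
    The 2n-bit scheme described here is the quantum one-time pad: two uniformly random classical bits per qubit select one of the Pauli operations I, X, Z, XZ. A small single-qubit sketch (illustrative, not the authors' code) showing that averaging over the unknown key leaves the maximally mixed state:

        import numpy as np

        I = np.eye(2)
        X = np.array([[0.0, 1.0], [1.0, 0.0]])
        Z = np.diag([1.0, -1.0])
        paulis = [I, X, Z, X @ Z]                 # one of four operations per 2-bit key

        def encrypt(state, key):
            """Apply the Pauli operation selected by the 2-bit key (n = 1 qubit)."""
            return paulis[key] @ state

        psi = np.array([0.8, 0.6])                # an arbitrary pure qubit state
        rho_avg = sum(np.outer(encrypt(psi, k), encrypt(psi, k).conj())
                      for k in range(4)) / 4.0
        print(np.round(rho_avg, 6))               # -> I/2: the ciphertext reveals nothing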

  12. Numerical Optimization Using Computer Experiments

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.; Torczon, Virginia

    1997-01-01

    Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
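
    A compact sketch of the kriging-surrogate idea in one dimension (kernel, hyperparameters and the toy objective are illustrative assumptions, not the paper's setup): fit a simple zero-mean Gaussian-process interpolant to the evaluated points and let a grid search on the surrogate propose the next expensive evaluation. Practical methods also add an exploration term; none is included here.

        import numpy as np

        def kriging_fit(X, y, length=0.3, noise=1e-6):
            """Fit a zero-mean simple-kriging (Gaussian process) interpolant with a
            squared-exponential kernel and return its predictive mean."""
            k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
            alpha = np.linalg.solve(k(X, X) + noise * np.eye(len(X)), y)
            return lambda q: k(q, X) @ alpha

        f = lambda x: np.sin(3.0 * x) + 0.5 * x**2            # stand-in for an expensive objective
        X = np.linspace(-2.0, 2.0, 6)                          # initial design
        y = f(X)
        for _ in range(10):
            surrogate = kriging_fit(X, y)
            grid = np.linspace(-2.0, 2.0, 401)
            x_new = grid[np.argmin(surrogate(grid))]           # grid search guided by the surrogate
            X, y = np.append(X, x_new), np.append(y, f(x_new))
        print("best point found:", round(X[np.argmin(y)], 3))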

  13. A systolic array optimizing compiler

    SciTech Connect

    Lam, M.S. )

    1988-01-01

    This book documents the research and results of the compiler technology developed for the Warp machine. A major challenge in the development of Warp was to build an optimizing compiler for the machine. This book describes a compiler that shields most of the difficulty from the user and generates very efficient code. Several new optimizations are described and evaluated. The research described confirms that compilers play a valuable role in the development, usage and effectiveness of novel high-performance architectures.

  14. Optimization of neutron imaging plate

    NASA Astrophysics Data System (ADS)

    Haga, Y. K.; Neriishi, K.; Takahashi, K.; Niimura, N.

    2002-07-01

    Considering the elementary processes of neutron detection occurring in the neutron imaging plate (NIP) has optimized the performance of NIP. For these processes, the color center creation efficiencies ( ɛcc values) have been experimentally determined with NIPs which have different mole fraction of photostimulated (PSL) material ( φPSL values) and different thickness ( t). The effectiveness of the optimization procedure has been demonstrated by the measurement of the neutron diffraction intensities from a hen egg-white lysozyme protein crystal.

  15. Unrealistic optimism: East and west?

    PubMed

    Joshi, Mary Sissons; Carter, Wakefield

    2013-01-01

    Following Weinstein's (1980) pioneering work many studies established that people have an optimistic bias concerning future life events. At first, the bulk of research was conducted using populations in North America and Northern Europe, the optimistic bias was thought of as universal, and little attention was paid to cultural context. However, construing unrealistic optimism as a form of self-enhancement, some researchers noted that it was far less common in East Asian cultures. The current study extends enquiry to a different non-Western culture. Two hundred and eighty seven middle aged and middle income participants (200 in India, 87 in England) rated 11 positive and 11 negative events in terms of the chances of each event occurring in "their own life," and the chances of each event occurring in the lives of "people like them." Comparative optimism was shown for bad events, with Indian participants showing higher levels of optimism than English participants. The position regarding comparative optimism for good events was more complex. In India those of higher socioeconomic status (SES) were optimistic, while those of lower SES were on average pessimistic. Overall, English participants showed neither optimism nor pessimism for good events. The results, whose clinical relevance is discussed, suggest that the expression of unrealistic optimism is shaped by an interplay of culture and socioeconomic circumstance.

  16. Optimal control of motorsport differentials

    NASA Astrophysics Data System (ADS)

    Tremlett, A. J.; Massaro, M.; Purdy, D. J.; Velenis, E.; Assadian, F.; Moore, A. P.; Halley, M.

    2015-12-01

    Modern motorsport limited slip differentials (LSD) have evolved to become highly adjustable, allowing the torque bias that they generate to be tuned in the corner entry, apex and corner exit phases of typical on-track manoeuvres. The task of finding the optimal torque bias profile under such varied vehicle conditions is complex. This paper presents a nonlinear optimal control method which is used to find the minimum time optimal torque bias profile through a lane change manoeuvre. The results are compared to traditional open and fully locked differential strategies, in addition to considering related vehicle stability and agility metrics. An investigation into how the optimal torque bias profile changes with reduced track-tyre friction is also included in the analysis. The optimal LSD profile was shown to give a performance gain over its locked differential counterpart in key areas of the manoeuvre where a quick direction change is required. The methodology proposed can be used to find both optimal passive LSD characteristics and as the basis of a semi-active LSD control algorithm.

  17. Efficient computation of optimal actions.

    PubMed

    Todorov, Emanuel

    2009-07-14

    Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress--as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant.
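
    The claim that "the problem becomes linear" refers to linearly solvable MDPs, in which the exponentiated negative cost-to-go (the desirability z) satisfies a linear fixed-point equation. A toy first-exit sketch with assumed state costs and passive random-walk dynamics (not the paper's code):

        import numpy as np

        # Toy 1-D chain: states 0..4, state 4 is the absorbing goal with zero cost.
        n = 5
        q = np.array([1.0, 1.0, 1.0, 1.0, 0.0])        # state costs (assumed)
        P = np.zeros((n, n))                            # passive random-walk dynamics
        for s in range(n - 1):
            P[s, max(s - 1, 0)] += 0.5
            P[s, s + 1] += 0.5
        P[n - 1, n - 1] = 1.0

        z = np.ones(n)
        for _ in range(200):                            # linear fixed point: z = exp(-q) * (P z)
            z = np.exp(-q) * (P @ z)
            z[-1] = 1.0                                 # boundary condition at the goal
        v = -np.log(z)                                  # optimal cost-to-go
        u = P * z[None, :]
        u /= u.sum(axis=1, keepdims=True)               # optimal controlled transition probabilities
        print("cost-to-go:", np.round(v, 3))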

  18. Optimal lattice-structured materials

    NASA Astrophysics Data System (ADS)

    Messner, Mark C.

    2016-11-01

    This work describes a method for optimizing the mesostructure of lattice-structured materials. These materials are periodic arrays of slender members resembling efficient, lightweight macroscale structures like bridges and frame buildings. Current additive manufacturing technologies can assemble lattice structures with length scales ranging from nanometers to millimeters. Previous work demonstrates that lattice materials have excellent stiffness- and strength-to-weight scaling, outperforming natural materials. However, there are currently no methods for producing optimal mesostructures that consider the full space of possible 3D lattice topologies. The inverse homogenization approach for optimizing the periodic structure of lattice materials requires a parameterized, homogenized material model describing the response of an arbitrary structure. This work develops such a model, starting with a method for describing the long-wavelength, macroscale deformation of an arbitrary lattice. The work combines the homogenized model with a parameterized description of the total design space to generate a parameterized model. Finally, the work describes an optimization method capable of producing optimal mesostructures. Several examples demonstrate the optimization method. One of these examples produces an elastically isotropic, maximally stiff structure, here called the isotruss, that arguably outperforms the anisotropic octet truss topology.

  19. Pyomo : Python Optimization Modeling Objects.

    SciTech Connect

    Siirola, John; Laird, Carl Damon; Hart, William Eugene; Watson, Jean-Paul

    2010-11-01

    The Python Optimization Modeling Objects (Pyomo) package [1] is an open source tool for modeling optimization applications within Python. Pyomo provides an object-oriented approach to optimization modeling, and it can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. While Pyomo provides a capability that is commonly associated with algebraic modeling languages such as AMPL, AIMMS, and GAMS, Pyomo's modeling objects are embedded within a full-featured high-level programming language with a rich set of supporting libraries. Pyomo leverages the capabilities of the Coopr software library [2], which integrates Python packages (including Pyomo) for defining optimizers, modeling optimization applications, and managing computational experiments. A central design principle within Pyomo is extensibility. Pyomo is built upon a flexible component architecture [3] that allows users and developers to readily extend the core Pyomo functionality. Through these interface points, extensions and applications can have direct access to an optimization model's expression objects. This facilitates the rapid development and implementation of new modeling constructs as well as high-level solution strategies (e.g. using decomposition- and reformulation-based techniques). In this presentation, we will give an overview of the Pyomo modeling environment and model syntax, and present several extensions to the core Pyomo environment, including support for Generalized Disjunctive Programming (Coopr GDP), Stochastic Programming (PySP), a generic Progressive Hedging solver [4], and a tailored implementation of Benders decomposition.
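
    A minimal concrete Pyomo model, to give a flavour of the object-oriented syntax described above (the GLPK solver named here is an assumption about the local installation, not part of Pyomo itself):

        from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                                   NonNegativeReals, minimize, SolverFactory)

        model = ConcreteModel()
        model.x = Var(within=NonNegativeReals)
        model.y = Var(within=NonNegativeReals)
        model.cost = Objective(expr=2 * model.x + 3 * model.y, sense=minimize)
        model.demand = Constraint(expr=3 * model.x + 4 * model.y >= 1)

        SolverFactory('glpk').solve(model)        # assumes a GLPK installation is available
        print(model.x(), model.y())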

  20. Optimal lattice-structured materials

    DOE PAGES

    Messner, Mark C.

    2016-07-09

    This paper describes a method for optimizing the mesostructure of lattice-structured materials. These materials are periodic arrays of slender members resembling efficient, lightweight macroscale structures like bridges and frame buildings. Current additive manufacturing technologies can assemble lattice structures with length scales ranging from nanometers to millimeters. Previous work demonstrates that lattice materials have excellent stiffness- and strength-to-weight scaling, outperforming natural materials. However, there are currently no methods for producing optimal mesostructures that consider the full space of possible 3D lattice topologies. The inverse homogenization approach for optimizing the periodic structure of lattice materials requires a parameterized, homogenized material model describing the response of an arbitrary structure. This work develops such a model, starting with a method for describing the long-wavelength, macroscale deformation of an arbitrary lattice. The work combines the homogenized model with a parameterized description of the total design space to generate a parameterized model. Finally, the work describes an optimization method capable of producing optimal mesostructures. Several examples demonstrate the optimization method. One of these examples produces an elastically isotropic, maximally stiff structure, here called the isotruss, that arguably outperforms the anisotropic octet truss topology.

  1. Optimal lattice-structured materials

    SciTech Connect

    Messner, Mark C.

    2016-07-09

    This paper describes a method for optimizing the mesostructure of lattice-structured materials. These materials are periodic arrays of slender members resembling efficient, lightweight macroscale structures like bridges and frame buildings. Current additive manufacturing technologies can assemble lattice structures with length scales ranging from nanometers to millimeters. Previous work demonstrates that lattice materials have excellent stiffness- and strength-to-weight scaling, outperforming natural materials. However, there are currently no methods for producing optimal mesostructures that consider the full space of possible 3D lattice topologies. The inverse homogenization approach for optimizing the periodic structure of lattice materials requires a parameterized, homogenized material model describing the response of an arbitrary structure. This work develops such a model, starting with a method for describing the long-wavelength, macroscale deformation of an arbitrary lattice. The work combines the homogenized model with a parameterized description of the total design space to generate a parameterized model. Finally, the work describes an optimization method capable of producing optimal mesostructures. Several examples demonstrate the optimization method. One of these examples produces an elastically isotropic, maximally stiff structure, here called the isotruss, that arguably outperforms the anisotropic octet truss topology.

  2. Unrealistic Optimism: East and West?

    PubMed Central

    Joshi, Mary Sissons; Carter, Wakefield

    2013-01-01

    Following Weinstein’s (1980) pioneering work many studies established that people have an optimistic bias concerning future life events. At first, the bulk of research was conducted using populations in North America and Northern Europe, the optimistic bias was thought of as universal, and little attention was paid to cultural context. However, construing unrealistic optimism as a form of self-enhancement, some researchers noted that it was far less common in East Asian cultures. The current study extends enquiry to a different non-Western culture. Two hundred and eighty seven middle aged and middle income participants (200 in India, 87 in England) rated 11 positive and 11 negative events in terms of the chances of each event occurring in “their own life,” and the chances of each event occurring in the lives of “people like them.” Comparative optimism was shown for bad events, with Indian participants showing higher levels of optimism than English participants. The position regarding comparative optimism for good events was more complex. In India those of higher socioeconomic status (SES) were optimistic, while those of lower SES were on average pessimistic. Overall, English participants showed neither optimism nor pessimism for good events. The results, whose clinical relevance is discussed, suggest that the expression of unrealistic optimism is shaped by an interplay of culture and socioeconomic circumstance. PMID:23407689

  3. Efficient computation of optimal actions

    PubMed Central

    Todorov, Emanuel

    2009-01-01

    Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress—as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant. PMID:19574462

  4. Optimal BLS: Optimizing transit-signal detection for Keplerian dynamics

    NASA Astrophysics Data System (ADS)

    Ofir, Aviv

    2015-08-01

    Transit surveys, both ground- and space-based, have already accumulated a large number of light curves that span several years. We optimize the search for transit signals for both detection and computational efficiency by assuming that the searched systems can be described by Keplerian dynamics, and propagating the effects of different system parameters to the detection parameters. Importantly, we mainly consider the information content of the transit signal and not any specific algorithm, and use BLS (Kovács, Zucker, & Mazeh 2002) just as a specific example. We show that the frequency information content of the light curve is primarily determined by the duty cycle of the transit signal, and thus the optimal frequency sampling is found to be cubic and not linear. Further optimization is achieved by considering duty-cycle dependent binning of the phased light curve. By using the (standard) BLS, one is either fairly insensitive to long-period planets or less sensitive to short-period planets and computationally slower by a significant factor of ~330 (for a 3 yr long dataset). We also show how the physical system parameters, such as the host star's size and mass, directly affect transit detection. This understanding can then be used to optimize the search for every star individually. By considering Keplerian dynamics explicitly rather than implicitly one can optimally search the transit signal parameter space. The presented Optimal BLS enhances the detectability of both very short and very long period planets, while allowing such searches to be done with much reduced resources and time. The Matlab/Octave source code for Optimal BLS is made available.
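
    One plausible reading of the cubic frequency sampling, sketched here as an assumption rather than the paper's derivation: step the grid uniformly in the cube root of the frequency, so the local spacing grows as f^(2/3), mirroring the duty-cycle scaling of Keplerian transits.

        import numpy as np

        def cubic_frequency_grid(f_min, f_max, df_at_fmin):
            """Frequency grid uniform in f**(1/3): the local spacing then grows as f**(2/3),
            tracking the duty-cycle scaling of Keplerian transits (illustrative assumption)."""
            du = df_at_fmin / (3.0 * f_min ** (2.0 / 3.0))    # constant step in u = f**(1/3)
            u = np.arange(f_min ** (1.0 / 3.0), f_max ** (1.0 / 3.0), du)
            return u ** 3

        freqs = cubic_frequency_grid(1.0 / 1000.0, 1.0 / 0.5, 1e-6)   # cycles/day (assumed range)
        print(len(freqs), "trial frequencies; last/first spacing ratio:",
              round(np.diff(freqs)[-1] / np.diff(freqs)[0], 1))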

  5. A novel metaheuristic for continuous optimization problems: Virus optimization algorithm

    NASA Astrophysics Data System (ADS)

    Liang, Yun-Chia; Rodolfo Cuevas Juarez, Josue

    2016-01-01

    A novel metaheuristic for continuous optimization problems, named the virus optimization algorithm (VOA), is introduced and investigated. VOA is an iterative, population-based method that imitates the behaviour of viruses attacking a living cell. The number of viruses grows at each replication and is controlled by an immune system (a so-called 'antivirus') to prevent the explosive growth of the virus population. The viruses are divided into two classes (strong and common) to balance the exploitation and exploration effects. The performance of the VOA is validated through a set of eight benchmark functions, which are also subject to rotation and shifting effects to test its robustness. Extensive comparisons were conducted with over 40 well-known metaheuristic algorithms and their variations, such as artificial bee colony, artificial immune system, differential evolution, evolutionary programming, evolutionary strategy, genetic algorithm, harmony search, invasive weed optimization, memetic algorithm, particle swarm optimization and simulated annealing. The results showed that the VOA is a viable solution for continuous optimization.

  6. Schedule path optimization for adiabatic quantum computing and optimization

    NASA Astrophysics Data System (ADS)

    Zeng, Lishan; Zhang, Jun; Sarovar, Mohan

    2016-04-01

    Adiabatic quantum computing and optimization have garnered much attention recently as possible models for achieving a quantum advantage over classical approaches to optimization and other special purpose computations. Both techniques are probabilistic in nature and the minimum gap between the ground state and first excited state of the system during evolution is a major factor in determining the success probability. In this work we investigate a strategy for increasing the minimum gap and success probability by introducing intermediate Hamiltonians that modify the evolution path between initial and final Hamiltonians. We focus on an optimization problem relevant to recent hardware implementations and present numerical evidence for the existence of a purely local intermediate Hamiltonian that achieves the optimum performance in terms of pushing the minimum gap to one of the end points of the evolution. As a part of this study we develop a convex optimization formulation of the search for optimal adiabatic schedules that makes this computation more tractable, and which may be of independent interest. We further study the effectiveness of random intermediate Hamiltonians on the minimum gap and success probability, and empirically find that random Hamiltonians have a significant probability of increasing the success probability, but only by a modest amount.

  7. Optimal Arrangement of Components Via Pairwise Rearrangements.

    DTIC Science & Technology

    1987-10-01

    reliability function under component pairwise rearrangement. They use this property to find the optimal component arrangement. Worked examples illustrate the methods proposed. Keywords: Optimization; Permutations; Nodes.

  8. Remediation Optimization: Definition, Scope and Approach

    EPA Pesticide Factsheets

    This document provides a general definition, scope and approach for conducting optimization reviews within the Superfund Program and includes the fundamental principles and themes common to optimization.

  9. Optimal singular control with applications to trajectory optimization

    NASA Technical Reports Server (NTRS)

    Vinh, N. X.

    1977-01-01

    A comprehensive discussion of the problem of singular control is presented. Singular control enters an optimal trajectory when the so-called switching function vanishes identically over a finite time interval. Using the concept of domain of maneuverability, the problem of optimal switching is analyzed. Criteria for the optimal direction of switching are presented. The switching, or junction, between nonsingular and singular subarcs is examined in detail. Several theorems concerning the necessary, and also sufficient, conditions for smooth junction are presented. The concepts of quasi-linear control and linearized control are introduced. They are designed for the purpose of obtaining approximate solutions for the difficult Euler-Lagrange type of optimal control in the case where the control is nonlinear.

  10. On optimal velocity during cycling.

    PubMed

    Maroński, R

    1994-02-01

    This paper focuses on the solution of two problems related to cycling. One is to determine the velocity as a function of distance which minimizes the cyclist's energy expenditure in covering a given distance in a set time. The other is to determine the velocity as a function of the distance which minimizes time for fixed energy expenditure. To solve these problems, an equation of motion for the cyclist riding over arbitrary terrain is written using Newton's second law. This equation is used to evaluate either energy expenditure or time, and the minimization problems are solved using an optimal control formulation in conjunction with the method of Miele [Optimization Techniques with Applications to Aerospace Systems, pp. 69-98 (1962) Academic Press, New York]. Solutions to both optimal control problems are the same. The solutions are illustrated through two examples. In one example where the relative wind velocity is zero, the optimal cruising velocity is constant regardless of terrain. In the second, where the relative wind velocity fluctuates, the optimal cruising velocity varies.
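
    A small numerical illustration of the zero-wind result, under an assumed toy model (quadratic drag, flat terrain, no energy recovery when braking; not the paper's formulation): for the same distance covered in the same time, a constant-velocity profile expends less energy than a fluctuating one.

        import numpy as np

        def pedalling_energy(v, dx, m=80.0, k=0.25):
            """Propulsive work for a velocity profile v(x) sampled every dx metres:
            inertial force m*v*dv/dx plus quadratic drag k*v**2, counting only
            positive (pedalling) force since braking energy is not recovered."""
            force = m * np.gradient(v**2, dx) / 2.0 + k * v**2
            return np.sum(np.maximum(force, 0.0) * dx)

        def travel_time(v, dx):
            return np.sum(dx / v)

        dx, n = 10.0, 1000                                        # a 10 km flat course
        x = np.arange(n) * dx
        v_const = np.full(n, 10.0)                                # steady 10 m/s
        v_fluct = 10.0 + 2.0 * np.sin(2.0 * np.pi * x / 500.0)    # fluctuating profile

        # Rescale the fluctuating profile so both cover the course in the same time.
        v_fluct *= travel_time(v_fluct, dx) / travel_time(v_const, dx)
        print(pedalling_energy(v_const, dx), "<", pedalling_energy(v_fluct, dx))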

  11. Recent Advances in Stellarator Optimization

    NASA Astrophysics Data System (ADS)

    Gates, David; Brown, T.; Breslau, J.; Landreman, M.; Lazerson, S. A.; Mynick, H.; Neilson, G. H.; Pomphrey, N.

    2016-10-01

    Computational optimization has revolutionized the field of stellarator design. To date, optimizations have focused primarily on optimization of neoclassical confinement and ideal MHD stability, although limited optimization of other parameters has also been performed. One criticism that has been levelled at this method of design is the complexity of the resultant field coils. Recently, a new coil optimization code, COILOPT + + , was written and included in the STELLOPT suite of codes. The advantage of this method is that it allows the addition of real space constraints on the locations of the coils. As an initial exercise, a constraint that the windings be vertical was placed on large major radius half of the non-planar coils. Further constraints were also imposed that guaranteed that sector blanket modules could be removed from between the coils, enabling a sector maintenance scheme. Results of this exercise will be presented. We have also explored possibilities for generating an experimental database that could check whether the reduction in turbulent transport that is predicted by GENE as a function of local shear would be consistent with experiments. To this end, a series of equilibria that can be made in the now latent QUASAR experiment have been identified. This work was supported by U.S. DoE Contract #DE-AC02-09CH11466.

  12. Optimization of the magnetic dynamo.

    PubMed

    Willis, Ashley P

    2012-12-21

    In stars and planets, magnetic fields are believed to originate from the motion of electrically conducting fluids in their interior, through a process known as the dynamo mechanism. In this Letter, an optimization procedure is used to simultaneously address two fundamental questions of dynamo theory: "Which velocity field leads to the most magnetic energy growth?" and "How large does the velocity need to be relative to magnetic diffusion?" In general, this requires optimization over the full space of continuous solenoidal velocity fields possible within the geometry. Here the case of a periodic box is considered. Measuring the strength of the flow with the root-mean-square amplitude, an optimal velocity field is shown to exist, but without limitation on the strain rate, optimization is prone to divergence. Measuring the flow in terms of its associated dissipation leads to the identification of a single optimal at the critical magnetic Reynolds number necessary for a dynamo. This magnetic Reynolds number is found to be only 15% higher than that necessary for transient growth of the magnetic field.

  13. Current Trends in Multidrug Optimization.

    PubMed

    Weiss, Andrea; Nowak-Sliwinska, Patrycja

    2016-12-01

    The identification of effective and long-lasting cancer therapies still remains elusive, partially due to patient and tumor heterogeneity, acquired drug resistance, and single-drug dose-limiting toxicities. The use of drug combinations may help to overcome some limitations of current cancer therapies by challenging the robustness and redundancy of biological processes. However, effective drug combination optimization requires the careful consideration of numerous parameters. The complexity of this optimization problem is clearly nontrivial and likely requires the assistance of advanced heuristic optimization techniques. In the current review, we discuss the application of optimization techniques for the identification of optimal drug combinations. More specifically, we focus on the application of phenotype-based screening approaches in the field of cancer therapy. These methods are divided into three categories: (1) modeling methods, (2) model-free approaches based on biological search algorithms, and (3) merged approaches, particularly phenotypically driven network biology methods and computational network models relying on phenotypic data. In addition to a brief description of each approach, we include a critical discussion of the advantages and disadvantages of each method, with a strong focus on the limitations and considerations needed to successfully apply such methods in biological research.

  14. Optimal design of solidification processes

    NASA Technical Reports Server (NTRS)

    Dantzig, Jonathan A.; Tortorelli, Daniel A.

    1991-01-01

    An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.

  15. Machine Translation Evaluation and Optimization

    NASA Astrophysics Data System (ADS)

    Dorr, Bonnie; Olive, Joseph; McCary, John; Christianson, Caitlin

    The evaluation of machine translation (MT) systems is a vital field of research, both for determining the effectiveness of existing MT systems and for optimizing the performance of MT systems. This part describes a range of different evaluation approaches used in the GALE community and introduces evaluation protocols and methodologies used in the program. We discuss the development and use of automatic, human, task-based and semi-automatic (human-in-the-loop) methods of evaluating machine translation, focusing on the use of human-mediated translation error rate (HTER) as the evaluation standard used in GALE. We discuss the workflow associated with the use of this measure, including post editing, quality control, and scoring. We document the evaluation tasks, data, protocols, and results of recent GALE MT Evaluations. In addition, we present a range of different approaches for optimizing MT systems on the basis of different measures. We outline the requirements and specific problems when using different optimization approaches and describe how the characteristics of different MT metrics affect the optimization. Finally, we describe novel recent and ongoing work on the development of fully automatic MT evaluation metrics that have the potential to substantially improve the effectiveness of evaluation and optimization of MT systems.

  16. Multivariate optimization of production systems

    SciTech Connect

    Carroll, J.A.; Horne, R.N. )

    1992-07-01

    This paper reports that mathematically, optimization involves finding the extreme values of a function. Given a function of several variables, Z = f(x_1, x_2, x_3, ..., x_n), an optimization scheme will find the combination of these variables that produces an extreme value in the function, whether it is a minimum or a maximum value. Many examples of optimization exist. For instance, if a function gives an investor's expected return on the basis of different investments, numerical optimization of the function will determine the mix of investments that will yield the maximum expected return. This is the basis of modern portfolio theory. If a function gives the difference between a set of data and a model of the data, numerical optimization of the function will produce the best fit of the model to the data. This is the basis for nonlinear parameter estimation. Similar examples can be given for network analysis, queuing theory, decision analysis, etc.

  17. Systematic Propulsion Optimization Tools (SPOT)

    NASA Technical Reports Server (NTRS)

    Bower, Mark; Celestian, John

    1992-01-01

    This paper describes a computer program written by senior-level Mechanical Engineering students at the University of Alabama in Huntsville which is capable of optimizing user-defined delivery systems for carrying payloads into orbit. The custom propulsion system is designed by the user through the input of configuration, payload, and orbital parameters. The primary advantages of the software, called Systematic Propulsion Optimization Tools (SPOT), are a user-friendly interface and a modular FORTRAN 77 code designed for ease of modification. The optimization of variables in an orbital delivery system is of critical concern in the propulsion environment. The mass of the overall system must be minimized within the maximum stress, force, and pressure constraints. SPOT utilizes the Design Optimization Tools (DOT) program for the optimization techniques. The SPOT program is divided into a main program and five modules: aerodynamic losses, orbital parameters, liquid engines, solid engines, and nozzles. The program is designed to be upgraded easily and expanded to meet specific user needs. A user's manual and a programmer's manual are currently being developed to facilitate implementation and modification.

  18. Event valence and unrealistic optimism.

    PubMed

    Gold, Ron S; Martyn, Kate

    2003-06-01

    The effect of event valence on unrealistic optimism was studied. 94 Deakin University students rated the comparative likelihood that they would experience either a controllable or an uncontrollable health-related event. Valence was manipulated to be positive (outcome was desirable) or negative (outcome was undesirable) by varying the way a given event was framed. Participants either were told the conditions which promote the event and rated the comparative likelihood they would experience it or were told the conditions which prevent the event and rated the comparative likelihood they would avoid it. For both the controllable and the uncontrollable events, unrealistic optimism was greater for negative than positive valence. It is suggested that a combination of the 'motivational account' of unrealistic optimism and prospect theory provides a good explanation of the results.

  19. Business process optimization for RHIOs.

    PubMed

    Soti, Praveen; Pandey, Seema

    2007-01-01

    Implementation of an electronic health record (EHR) network entails significant changes in the business processes of participating organizations. Business process management, increased automation, process optimization, user training and end-user adoption together form the keys to success with an EHR. Redesigned processes should be mapped to benefit lines and performance indicators, and monitored continuously to identify improvement opportunities. It is important that the new business workflows match, if not exceed, the existing performance benchmarks. Business process redesign is all the more challenging in the context of regional health information organizations (RHIOs), as the business processes of the EHR network have to be aligned with existing process flows of several organizations, each with its own preferences and specific requirements. Even so, most of the discrete individual processes have to be converged, streamlined, assimilated and optimized in the redesigned business processes. This paper proposes a methodology for business process redesign and optimization for RHIOs.

  20. Optimizing Stellarators for Turbulent Transport

    SciTech Connect

    H.E. Mynick, N.Pomphrey, and P. Xanthopoulos

    2010-05-27

    Up to now, the term "transport-optimized" stellarators has meant optimized to minimize neoclassical transport, while the task of also mitigating turbulent transport, usually the dominant transport channel in such designs, has not been addressed, due to the complexity of plasma turbulence in stellarators. Here, we demonstrate that stellarators can also be designed to mitigate their turbulent transport, by making use of two powerful numerical tools not available until recently, namely gyrokinetic codes valid for 3D nonlinear simulations, and stellarator optimization codes. A first proof-of-principle configuration is obtained, reducing the level of ion temperature gradient turbulent transport from the NCSX baseline design by a factor of about 2.5.

  1. Optimization of polarization lidar structure

    NASA Astrophysics Data System (ADS)

    Abramochkin, Alexander I.; Kaul, Bruno V.; Tikhomirov, Alexander A.

    1999-11-01

    The problems of polarization lidar transceiver optimization are considered. The basic features and optimization criteria of lidar polarization units are presented, and a comparative analysis of polarization units is carried out. We have analyzed optical arrangements of the transmitter to form the desired polarization state of the sounding radiation. We have also considered various types of lidar receiving systems: (1) one-channel, providing measurement of Stokes parameters at a successive change of position of polarization analyzers in the lidar receiver, and (2) multichannel, where each channel has a lens, an analyzer, and a photodetector. In the latter case measurements of Stokes parameters are carried out simultaneously. The optimization criteria of the polarization lidar that take the atmospheric state into account are determined, with the aim of decreasing the number of polarization devices needed.

  2. Excitation optimization for damage detection

    SciTech Connect

    Bement, Matthew T; Bewley, Thomas R

    2009-01-01

    A technique is developed to answer the important question: 'Given limited system response measurements and ever-present physical limits on the level of excitation, what excitation should be provided to a system to make damage most detectable?' Specifically, a method is presented for optimizing excitations that maximize the sensitivity of output measurements to perturbations in damage-related parameters estimated with an extended Kalman filter. This optimization is carried out in a computationally efficient manner using adjoint-based optimization and causes the innovations term in the extended Kalman filter to be larger in the presence of estimation errors, which leads to a better estimate of the damage-related parameters in question. The technique is demonstrated numerically on a nonlinear 2 DOF system, where a significant improvement in the damage-related parameter estimation is observed.

  3. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.

    1999-01-01

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.

  4. Optimal randomized scheduling by replacement

    SciTech Connect

    Saias, I.

    1996-05-01

    In the replacement scheduling problem, a system is composed of n processors drawn from a pool of p. The processors can become faulty while in operation and faulty processors never recover. A report is issued whenever a fault occurs. This report states only the existence of a fault but does not indicate its location. Based on this report, the scheduler can reconfigure the system and choose another set of n processors. The system operates satisfactorily as long as, upon report of a fault, the scheduler chooses n non-faulty processors. We provide a randomized protocol maximizing the expected number of faults the system can sustain before the occurrence of a crash. The optimality of the protocol is established by considering a closely related dual optimization problem. The game-theoretic technical difficulties that we solve in this paper are very general and encountered whenever proving the optimality of a randomized algorithm in parallel and distributed computation.

  5. Optimality, reduction and collective motion

    PubMed Central

    Justh, Eric W.; Krishnaprasad, P. S.

    2015-01-01

    The planar self-steering particle model of agents in a collective gives rise to dynamics on the N-fold direct product of SE(2), the rigid motion group in the plane. Assuming a connected, undirected graph of interaction between agents, we pose a family of symmetric optimal control problems with a coupling parameter capturing the strength of interactions. The Hamiltonian system associated with the necessary conditions for optimality is reducible to a Lie–Poisson dynamical system possessing interesting structure. In particular, the strong coupling limit reveals additional (hidden) symmetry, beyond the manifest one used in reduction: this enables explicit integration of the dynamics, and demonstrates the presence of a ‘master clock’ that governs all agents to steer identically. For finite coupling strength, we show that special solutions exist with steering controls proportional across the collective. These results suggest that optimality principles may provide a framework for understanding imitative behaviours observed in certain animal aggregations. PMID:27547087

  6. Integrated solar energy system optimization

    NASA Astrophysics Data System (ADS)

    Young, S. K.

    1982-11-01

    The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations like the TRNSYS program. Examples are provided for optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalinization plant, and a design analysis for a solar powered greenhouse.

  7. Accelerating optimization by tracing valley

    NASA Astrophysics Data System (ADS)

    Li, Qing-Xiao; He, Rong-Qiang; Lu, Zhong-Yi

    2016-06-01

    We propose an algorithm to accelerate optimization when an objective function locally resembles a long narrow valley. In such a case, a conventional optimization algorithm usually wanders with too many tiny steps in the valley. The new algorithm approximates the valley bottom locally by a parabola that is obtained by fitting a set of successive points generated recently by a conventional optimization method. Then large steps are taken along the parabola, accompanied by fine adjustment to trace the valley bottom. The effectiveness of the new algorithm has been demonstrated by accelerating the Newton trust-region minimization method and the Levenberg-Marquardt method on the nonlinear fitting problem in exact diagonalization dynamical mean-field theory and on the classic minimization problem of the Rosenbrock function. Speedups of many times have been achieved for both problems, showing the high efficiency of the new algorithm.
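
    A minimal sketch of the valley-tracing idea, assuming plain gradient descent as the "conventional optimizer" and the two-dimensional Rosenbrock function as the objective (the paper applies the idea to Newton trust-region and Levenberg-Marquardt solvers): recent iterates are fitted by a parabola and occasional large steps are attempted along it, kept only when they lower the objective. The step sizes, fit window, and learning rate are illustrative choices.

```python
import numpy as np

def f(p):                       # Rosenbrock function: a long, narrow curved valley
    x, y = p
    return (1 - x)**2 + 100 * (y - x**2)**2

def grad(p):
    x, y = p
    return np.array([-2 * (1 - x) - 400 * x * (y - x**2), 200 * (y - x**2)])

p = np.array([-1.5, 2.0])
history = [p.copy()]
lr = 2e-4
for it in range(4000):
    p = p - lr * grad(p)                       # conventional small steps
    history.append(p.copy())
    if it % 50 == 49:                          # periodically try a big move along the valley
        recent = np.array(history[-20:])
        if np.ptp(recent[:, 0]) > 1e-4:        # need some spread in x to fit a parabola
            coeffs = np.polyfit(recent[:, 0], recent[:, 1], 2)   # valley bottom: y ~ ax^2+bx+c
            direction = np.sign(recent[-1, 0] - recent[0, 0])
            for step in (0.05, 0.1, 0.2, 0.4): # probe progressively larger jumps
                trial = np.array([p[0] + direction * step,
                                  np.polyval(coeffs, p[0] + direction * step)])
                if f(trial) < f(p):            # keep the jump only if it improves f
                    p = trial
print("final point", p, "f =", f(p))
```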

  8. Numerical optimization using flow equations.

    PubMed

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.

  9. Numerical optimization using flow equations

    NASA Astrophysics Data System (ADS)

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.

  10. Two concepts of therapeutic optimism

    PubMed Central

    Jansen, Lynn A

    2011-01-01

    Researchers and ethicists have long been concerned about the expectations for direct medical benefit expressed by participants in early phase clinical trials. Early work on the issue considered the possibility that participants misunderstand the purpose of clinical research or that they are misinformed about the prospects for medical benefit from these trials. Recently, however, attention has turned to the possibility that research participants are simply expressing optimism or hope about their participation in these trials. The ethical significance of this therapeutic optimism remains unclear. This paper argues that there are two distinct phenomena that can be associated with the term ‘therapeutic optimism’—one is ethically benign and the other is potentially worrisome. Distinguishing these two phenomena is crucial for understanding the nature and ethical significance of therapeutic optimism. The failure to draw a distinction between these phenomena also helps to explain why different writers on the topic often speak past one another. PMID:21551464

  11. Optimal sensor placement in structural health monitoring using discrete optimization

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Büyüköztürk, Oral

    2015-12-01

    The objective of optimal sensor placement (OSP) is to obtain a sensor layout that gives as much information of the dynamic system as possible in structural health monitoring (SHM). The process of OSP can be formulated as a discrete minimization (or maximization) problem with the sensor locations as the design variables, conditional on the constraint of a given sensor number. In this paper, we propose a discrete optimization scheme based on the artificial bee colony algorithm to solve the OSP problem after first transforming it into an integer optimization problem. A modal assurance criterion-oriented objective function is investigated to measure the utility of a sensor configuration in the optimization process based on the modal characteristics of a reduced order model. The reduced order model is obtained using an iterated improved reduced system technique. The constraint is handled by a penalty term added to the objective function. Three examples, including a 27 bar truss bridge, a 21-storey building at the MIT campus and the 610 m high Canton Tower, are investigated to test the applicability of the proposed algorithm to OSP. In addition, the proposed OSP algorithm is experimentally validated on a physical laboratory structure which is a three-story two-bay steel frame instrumented with triaxial accelerometers. Results indicate that the proposed method is efficient and can be potentially used in OSP in practical SHM.
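
    A hedged sketch of the MAC-oriented objective described above, paired with a simple greedy baseline rather than the paper's artificial bee colony optimizer; the randomly generated orthonormal mode-shape matrix phi stands in for the reduced-order structural model, and all sizes are arbitrary.

```python
import numpy as np

def max_offdiag_mac(phi_sel):
    """Largest off-diagonal modal assurance criterion (MAC) value
    for mode shapes restricted to the selected sensor DOFs (rows)."""
    g = phi_sel.T @ phi_sel                       # cross products of restricted mode shapes
    d = np.sqrt(np.diag(g)) + 1e-12
    mac = (g / np.outer(d, d))**2                 # MAC_ij = |phi_i.phi_j|^2 / (|phi_i|^2 |phi_j|^2)
    np.fill_diagonal(mac, 0.0)
    return mac.max()

def greedy_sensor_placement(phi, n_sensors):
    """Greedy baseline: add, one DOF at a time, the sensor location that keeps
    the restricted mode shapes as distinguishable as possible."""
    n_dof = phi.shape[0]
    selected = [int(np.argmax(np.linalg.norm(phi, axis=1)))]   # seed with the most energetic DOF
    while len(selected) < n_sensors:
        candidates = [d for d in range(n_dof) if d not in selected]
        scores = [max_offdiag_mac(phi[selected + [d], :]) for d in candidates]
        selected.append(candidates[int(np.argmin(scores))])
    return sorted(selected), max_offdiag_mac(phi[sorted(selected), :])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phi = np.linalg.qr(rng.standard_normal((60, 6)))[0]   # stand-in for 6 mode shapes on 60 DOFs
    sensors, score = greedy_sensor_placement(phi, n_sensors=10)
    print("sensor DOFs:", sensors, "max off-diagonal MAC:", round(score, 3))
```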

  12. Interaction prediction optimization in multidisciplinary design optimization problems.

    PubMed

    Meng, Debiao; Zhang, Xiaoling; Huang, Hong-Zhong; Wang, Zhonglai; Xu, Huanwei

    2014-01-01

    The distributed strategy of Collaborative Optimization (CO) is suitable for large-scale engineering systems. However, CO converges poorly when the coupling between disciplines is high-dimensional. Furthermore, the discipline objectives cannot be considered in each discipline optimization problem. In this paper, a large-scale systems control strategy, the interaction prediction method (IPM), is introduced to enhance CO. IPM was originally used to control subsystems and coordinate the production process in large-scale systems. We combine the strategy of IPM with CO and propose the Interaction Prediction Optimization (IPO) method to solve MDO problems. As a hierarchical strategy, IPO has a system level and a subsystem level. The interaction design variables (including shared design variables and linking design variables) are handled at the system level and assigned to the subsystem level as design parameters. Each discipline objective is considered and optimized at the subsystem level simultaneously. The values of the design variables are exchanged between the system level and the subsystem level. The compatibility constraints are replaced with enhanced compatibility constraints to reduce the dimension of the design variables in the compatibility constraints. Two examples are presented to show the potential application of IPO to MDO.

  13. Optimal flow for brown trout: Habitat - prey optimization.

    PubMed

    Fornaroli, Riccardo; Cabrini, Riccardo; Sartori, Laura; Marazzi, Francesca; Canobbio, Sergio; Mezzanotte, Valeria

    2016-10-01

    The correct definition of ecosystem needs is essential in order to guide policy and management strategies to optimize the increasing use of freshwater by human activities. Commonly, the assessment of the optimal or minimum flow rates needed to preserve ecosystem functionality has been done by habitat-based models that define a relationship between in-stream flow and habitat availability for various species of fish. We propose a new approach for the identification of optimal flows using the limiting factor approach and the evaluation of basic ecological relationships, considering the appropriate spatial scale for different organisms. We developed density-environment relationships for three different life stages of brown trout that show the limiting effects of hydromorphological variables at habitat scale. In our analyses, we found that the factors limiting the densities of trout were water velocity, substrate characteristics and refugia availability. For all the life stages, the selected models considered simultaneously two variables and implied that higher velocities provided a less suitable habitat, regardless of other physical characteristics and with different patterns. We used these relationships within habitat based models in order to select a range of flows that preserve most of the physical habitat for all the life stages. We also estimated the effect of varying discharge flows on macroinvertebrate biomass and used the obtained results to identify an optimal flow maximizing habitat and prey availability.

  14. Fuzzy resource optimization for safeguards

    SciTech Connect

    Zardecki, A.; Markin, J.T.

    1991-01-01

    Authorization, enforcement, and verification -- three key functions of safeguards systems -- form the basis of a hierarchical description of the system risk. When formulated in terms of linguistic rather than numeric attributes, the risk can be computed through an algorithm based on the notion of fuzzy sets. Similarly, this formulation allows one to analyze the optimal resource allocation by maximizing the overall detection probability, regarded as a linguistic variable. After summarizing the necessary elements of the fuzzy sets theory, we outline the basic algorithm. This is followed by a sample computation of the fuzzy optimization. 10 refs., 1 tab.
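
    The record gives the algorithm only in outline. The sketch below is one plausible, simplified reading: triangular membership functions for linguistic detection-probability terms, max-min aggregation over the three safeguards functions, and centroid defuzzification to rank hypothetical resource allocations. The term shapes and the candidate allocations are assumptions made purely for illustration.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)          # universe of discourse: detection probability

def tri(a, b, c):
    """Triangular membership function on the detection-probability universe."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Linguistic terms for detection probability (shapes are illustrative assumptions)
terms = {"low": tri(0.0, 0.15, 0.4), "medium": tri(0.25, 0.5, 0.75), "high": tri(0.6, 0.85, 1.0)}

def overall_detection(levels):
    """Max-min aggregation: the system-level detection membership is the
    pointwise minimum over the three safeguards functions."""
    return np.minimum.reduce([terms[l] for l in levels])

def centroid(mu):
    """Defuzzify a membership function by its centroid (0.5 if mu is identically zero)."""
    return float(np.sum(x * mu) / np.sum(mu)) if mu.sum() > 0 else 0.5

# Hypothetical allocations: linguistic detection levels for
# (authorization, enforcement, verification) under a fixed budget.
allocations = {
    "A (balanced)":           ("medium", "medium", "medium"),
    "B (verification-heavy)": ("low", "medium", "high"),
    "C (enforcement-heavy)":  ("medium", "high", "low"),
}
best = max(allocations, key=lambda k: centroid(overall_detection(allocations[k])))
for name, levels in allocations.items():
    print(name, "->", round(centroid(overall_detection(levels)), 3))
print("chosen allocation:", best)
```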

  15. Thermodynamic Metrics and Optimal Paths

    SciTech Connect

    Sivak, David; Crooks, Gavin

    2012-05-08

    A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.

  16. An optimal repartitioning decision policy

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.; Reynolds, P. F., Jr.

    1986-01-01

    A central problem to parallel processing is the determination of an effective partitioning of workload to processors. The effectiveness of any given partition is dependent on the stochastic nature of the workload. The problem of determining when and if the stochastic behavior of the workload has changed enough to warrant the calculation of a new partition is treated. The problem is modeled as a Markov decision process, and an optimal decision policy is derived. Quantification of this policy is usually intractable. A heuristic policy which performs nearly optimally is investigated empirically. The results suggest that the detection of change is the predominant issue in this problem.

  17. Optimizing outcomes in bunion surgery.

    PubMed

    Haas, Zachary M

    2009-07-01

    The goal of fine-tuning bunion surgery is to optimize outcomes and prevent complications. This is accomplished through restoring anatomic alignment, imparting first ray stability, meticulous surgical technique, and accounting for other causes that may contribute to first ray instability. Despite various soft tissue and osseous surgical procedures along with anatomic variations of each patient, the principles of anatomic restoration and stability remain consistent. Maintenance of correction is predicated on the treatment of underlying pathology and the establishment of optimal stability and first ray alignment.

  18. Temperature optimization for superconducting cavities

    SciTech Connect

    Rode, Claus

    1999-06-01

    Since our previous analysis of optimized operating temperature of superconducting cavities in an accelerator a decade ago, significant additional information has been discovered about SRF cavities. The most significant is the Q0 (quality factor) shift across the Lambda line at higher gradients as a result of a slope in Q0 vs. Eacc above Lambda. This is a result of the changing heat conduction conditions. We discuss temperature optimizations as a function of gradient and frequency. The refrigeration hardware impacts and changes in cycle efficiency are presented.

  19. Computational optimization and biological evolution.

    PubMed

    Goryanin, Igor

    2010-10-01

    Modelling and optimization principles have become key concepts in many biological areas, especially in biochemistry. Definitions of objective function, fitness and co-evolution, although they differ between biology and mathematics, are similar in a general sense. Although successful in fitting models to experimental data and in making some biochemical predictions, optimization and evolutionary computation should be developed further to make more accurate real-life predictions, and to deal not only with one organism in isolation but also with communities of symbiotic and competing organisms. One of the future goals will be to explain and predict evolution not only for organisms in shake flasks or fermenters, but for real competitive multispecies environments.

  20. The Sequential Parameter Optimization Toolbox

    NASA Astrophysics Data System (ADS)

    Bartz-Beielstein, Thomas; Lasarczyk, Christian; Preuss, Mike

    The sequential parameter optimization toolbox (SPOT) is one possible implementation of the SPO framework introduced in Chap. 2. It has been successfully applied to numerous heuristics for practical and theoretical optimization problems. We describe the mechanics and interfaces employed by SPOT to enable users to plug in their own algorithms. Furthermore, two case studies are presented to demonstrate how SPOT can be applied in practice, followed by a discussion of alternative metamodels to be plugged into it. We conclude with some general guidelines.
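
    SPOT itself is an existing toolbox with its own interfaces; the sketch below is not SPOT but a minimal sequential-parameter-optimization-style loop showing the mechanics it automates: evaluate an initial design, fit a cheap metamodel (here a quadratic), evaluate the metamodel's most promising parameter setting, and repeat. The tuned "algorithm" is a hypothetical noisy stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_algorithm(param):
    """Stand-in for an expensive, noisy algorithm run whose performance depends
    on one tuning parameter (hypothetical; SPOT would call the user's own code)."""
    return (param - 0.63)**2 + 0.05 * rng.standard_normal()

# 1. Initial design: a handful of parameter settings spread over the search interval.
lo, hi = 0.0, 2.0
params = list(np.linspace(lo, hi, 5))
perf = [run_algorithm(p) for p in params]

# 2. Sequential step: fit a cheap metamodel, evaluate its most promising point, repeat.
for _ in range(10):
    coeffs = np.polyfit(params, perf, 2)            # quadratic metamodel of performance
    grid = np.linspace(lo, hi, 401)
    candidate = float(grid[np.argmin(np.polyval(coeffs, grid))])
    params.append(candidate)
    perf.append(run_algorithm(candidate))           # one expensive evaluation per iteration

best = params[int(np.argmin(perf))]
print("best parameter found:", round(best, 3))
```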

  1. Distributed optimization system and method

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
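
    The patent covers many agent types and objective sources; as a toy illustration of distributed sensing with cooperative control, the sketch below has a small group of simulated agents share scalar-field measurements and drift toward the best reading while exploring randomly. The Gaussian source, the gains, and the agent count are arbitrary assumptions, not the patented controller.

```python
import numpy as np

rng = np.random.default_rng(0)
SOURCE = np.array([3.0, -2.0])                     # hypothetical source location (unknown to agents)

def field(pos):
    """Scalar concentration measured at a position (peaks at the source)."""
    return np.exp(-np.sum((pos - SOURCE)**2) / 4.0)

agents = rng.uniform(-10, 10, size=(8, 2))         # random initial agent positions
for step in range(200):
    readings = np.array([field(a) for a in agents])            # distributed sensing
    best = agents[np.argmax(readings)].copy()                   # shared best measurement
    # cooperative control: drift toward the best-known location plus random exploration
    agents += 0.1 * (best - agents) + 0.2 * rng.standard_normal(agents.shape)

print("best agent position:", agents[np.argmax([field(a) for a in agents])].round(2))
print("true source:        ", SOURCE)
```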

  2. Design optimization of transonic airfoils

    NASA Technical Reports Server (NTRS)

    Joh, C.-Y.; Grossman, B.; Haftka, R. T.

    1991-01-01

    Numerical optimization procedures were considered for the design of airfoils in transonic flow based on the transonic small disturbance (TSD) and Euler equations. A sequential approximation optimization technique was implemented with an accurate approximation of the wave drag based on Nixon's coordinate straining approach. A modification of the Euler surface boundary conditions was implemented in order to efficiently compute design sensitivities without remeshing the grid. Two effective design procedures producing converged designs in approximately 10 global iterations were developed: one interchanging the roles of the objective function and the constraint, and one performing direct lift maximization with move limits given as fixed absolute values of the design variables.

  3. Optimal shapes for self-propelled swimmers

    NASA Astrophysics Data System (ADS)

    Koumoutsakos, Petros; van Rees, Wim; Gazzola, Mattia

    2011-11-01

    We optimize swimming shapes of three-dimensional self-propelled swimmers by combining the CMA Evolution Strategy with a remeshed vortex method. We analyze the robustness of optimal shapes and discuss the near wake vortex dynamics for optimal speed and efficiency at Re=550. We also report preliminary results of optimal shapes and arrangements for multiple coordinated swimmers.

  4. Research on optimization-based design

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Parkinson, A. R.; Free, J. C.

    1989-01-01

    Research on optimization-based design is discussed. Illustrative examples are given for cases involving continuous optimization with discrete variables and optimization with tolerances. Approximation of computationally expensive and noisy functions, electromechanical actuator/control system design using decomposition and application of knowledge-based systems and optimization for the design of a valve anti-cavitation device are among the topics covered.

  5. Enhancing Polyhedral Relaxations for Global Optimization

    ERIC Educational Resources Information Center

    Bao, Xiaowei

    2009-01-01

    During the last decade, global optimization has attracted a lot of attention due to the increased practical need for obtaining global solutions and the success in solving many global optimization problems that were previously considered intractable. In general, the central question of global optimization is to find an optimal solution to a given…

  6. Multiobjective Collaborative Optimization of Systems of Systems

    DTIC Science & Technology

    2005-06-01

    field of economics where the best decision simultaneously optimizes several criteria. An economist, Vilfredo Pareto, in 1906 described the best... represent the Pareto-optimal set, named after Vilfredo Pareto. The Pareto-optimal set also defines a curve, called the Pareto-Optimal Frontier (POF)...

  7. Modular optimization code package: MOZAIK

    NASA Astrophysics Data System (ADS)

    Bekar, Kursat B.

    This dissertation addresses the development of a modular optimization code package, MOZAIK, for geometric shape optimization problems in nuclear engineering applications. MOZAIK's first mission, determining the optimal shape of the D2O moderator tank for the current and new beam tube configurations for the Penn State Breazeale Reactor's (PSBR) beam port facility, is used to demonstrate its capabilities and test its performance. MOZAIK was designed as a modular optimization sequence including three primary independent modules: the initializer, the physics and the optimizer, each having a specific task. By using fixed interface blocks among the modules, the code attains its two most important characteristics: generic form and modularity. The benefit of this modular structure is that the contents of the modules can be switched depending on the requirements of accuracy, computational efficiency, or compatibility with the other modules. Oak Ridge National Laboratory's discrete ordinates transport code TORT was selected as the transport solver in the physics module of MOZAIK, and two different optimizers, Min-max and Genetic Algorithms (GA), were implemented in the optimizer module of the code package. A distributed memory parallelism was also applied to MOZAIK via MPI (Message Passing Interface) to execute the physics module concurrently on a number of processors for various states in the same search. Moreover, dynamic scheduling was enabled to enhance load balance among the processors while running MOZAIK's physics module thus improving the parallel speedup and efficiency. In this way, the total computation time consumed by the physics module is reduced by a factor close to M, where M is the number of processors. This capability also encourages the use of MOZAIK for shape optimization problems in nuclear applications because many traditional codes related to radiation transport do not have parallel execution capability. A set of computational models based on the

  8. Implementation of the Altair optimization processes

    NASA Astrophysics Data System (ADS)

    Smith, Malcolm J.; Véran, Jean-Pierre

    2003-02-01

    Altair is the adaptive optics system developed by NRC Canada for the Gemini North Telescope. Altair uses modal control and a quad-cell based Shack-Hartmann wavefront sensor. In order for Altair to adapt to changes in the observing conditions, two optimizers are activated when the AO loop is closed. These optimizers are the modal gain optimizer (MGO) and the centroid gain optimizer (CGO). This paper discusses the implementation and timing results of these optimizers.

  9. Critical Pedagogy for Transformative Optimism

    ERIC Educational Resources Information Center

    Mayo, Peter

    2006-01-01

    This essay critically highlights the main features of a study that attaches importance to the concepts of time and optimism and their effects on the achievement and goals of high and low achievers in a North American and a Brazilian context. The focus on the time factor that serves as a leitmotif throughout the study gives this work its…

  10. Understanding Optimal Decision-Making

    DTIC Science & Technology

    2015-06-01

    decision-making. Subject terms: optimal decision-making, regret, Iowa gambling task, exponentially weighted moving average, change point... Figures include: Figure 1. The Iowa Gambling Task screenshot (from Sacchi, 2014).

  11. Optimizing use of library technology.

    PubMed

    Wink, Diane M; Killingsworth, Elizabeth K

    2011-01-01

    In this bimonthly series, the author examines how nurse educators can use the Internet and Web-based computer technologies such as search, communication, collaborative writing tools; social networking and social bookmarking sites; virtual worlds; and Web-based teaching and learning programs. This article describes optimizing the use of library technology.

  12. Singularities in Optimal Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Guptill, J. D.; Berke, L.

    1992-01-01

    Singularity conditions that arise during structural optimization can seriously degrade the performance of the optimizer. The singularities are intrinsic to the formulation of the structural optimization problem and are not associated with the method of analysis. Certain conditions that give rise to singularities have been identified in earlier papers, encompassing the entire structure. Further examination revealed more complex sets of conditions in which singularities occur. Some of these singularities are local in nature, being associated with only a segment of the structure. Moreover, the likelihood that one of these local singularities may arise during an optimization procedure can be much greater than that of the global singularity identified earlier. Examples are provided of these additional forms of singularities. A framework is also given in which these singularities can be recognized. In particular, the singularities can be identified by examination of the stress displacement relations along with the compatibility conditions and/or the displacement stress relations derived in the integrated force method of structural analysis.

  13. Singularities in optimal structural design

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Guptill, J. D.; Berke, L.

    1992-01-01

    Singularity conditions that arise during structural optimization can seriously degrade the performance of the optimizer. The singularities are intrinsic to the formulation of the structural optimization problem and are not associated with the method of analysis. Certain conditions that give rise to singularities have been identified in earlier papers, encompassing the entire structure. Further examination revealed more complex sets of conditions in which singularities occur. Some of these singularities are local in nature, being associated with only a segment of the structure. Moreover, the likelihood that one of these local singularities may arise during an optimization procedure can be much greater than that of the global singularity identified earlier. Examples are provided of these additional forms of singularities. A framework is also given in which these singularities can be recognized. In particular, the singularities can be identified by examination of the stress displacement relations along with the compatibility conditions and/or the displacement stress relations derived in the integrated force method of structural analysis.

  14. Dual characterizations of optimal systems.

    NASA Technical Reports Server (NTRS)

    Chan, W. L.; Leininger, G. G.

    1972-01-01

    The complementary variational principle developed in a Hilbert space setting provides a duality principle in the calculus of variations with dynamic constraints. This concept is adopted in this paper to investigate dual characterizations of optimal control systems. Systems under consideration include those with dynamics governed by linear ordinary differential equations, linear partial differential equations and non-linear ordinary differential equations.

  15. Unifying process control and optimization

    SciTech Connect

    Makansi, J.

    2005-09-01

    About 40% of US generation is now subject to wholesale competition. To intelligently bid into these new markets, real-time prices must be aligned with real-time costs. It is time to integrate the many advanced applications, sensors, and analyzers used for control, automation, and optimization into a system that reflects process and financial objectives. The paper reports several demonstration projects in the USA revealing what is being done in the area of advanced process optimization (by Alliant Energy, American Electric Power, PacifiCorp, Detroit Edison and Tennessee Valley Authority). In addition to these projects, US DOE's NETL has funded the plant environment and cost optimization system, PECOS, which combines physical models, neural networks and fuzzy logic control to provide operators with least-cost setpoints for controllable variables. At Dynegy Inc's Baldwin station in Illinois, the DOE is subsidizing a project where real-time, closed-loop IT systems will optimize combustion, soot-blowing and SCR performance as well as unit thermal performance and plant economic performance. Commercial products such as Babcock and Wilcox's Flame Doctor, continuous emissions monitoring systems and various real-time predictive monitoring systems are also available. 4 figs.

  16. Pattern formations and optimal packing.

    PubMed

    Mityushev, Vladimir

    2016-04-01

    Patterns of different symmetries may arise after solution to reaction-diffusion equations. Hexagonal arrays, layers and their perturbations are observed in different models after numerical solution to the corresponding initial-boundary value problems. We demonstrate an intimate connection between pattern formations and optimal random packing on the plane. The main study is based on the following two points. First, the diffusive flux in reaction-diffusion systems is approximated by piecewise linear functions in the framework of structural approximations. This leads to a discrete network approximation of the considered continuous problem. Second, the discrete energy minimization yields optimal random packing of the domains (disks) in the representative cell. Therefore, the general problem of pattern formations based on the reaction-diffusion equations is reduced to the geometric problem of random packing. It is demonstrated that all random packings can be divided into classes associated with classes of isomorphic graphs obtained from the Delaunay triangulation. The unique optimal solution is constructed in each class of the random packings. If the number of disks per representative cell is finite, the number of classes of isomorphic graphs, and hence the number of optimal packings, is also finite.
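
    To make the reduction from packing to a graph concrete, the sketch below generates a random non-overlapping disk packing by sequential insertion and extracts its Delaunay neighbor graph with scipy.spatial.Delaunay; unlike the paper, it uses a plain (non-periodic) unit square and does not classify the resulting graphs up to isomorphism.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)

def random_packing(n, radius, max_tries=20000):
    """Random sequential addition of non-overlapping disks in the unit square."""
    centers = []
    for _ in range(max_tries):
        c = rng.uniform(radius, 1 - radius, size=2)
        if all(np.linalg.norm(c - p) >= 2 * radius for p in centers):
            centers.append(c)
            if len(centers) == n:
                break
    return np.array(centers)

centers = random_packing(n=40, radius=0.05)
triangulation = Delaunay(centers)             # discrete network approximation of the packing
edges = set()
for simplex in triangulation.simplices:       # each triangle contributes three neighbor edges
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

print(f"{len(centers)} disks, {len(edges)} neighbor edges in the Delaunay graph")
```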

  17. New Methods for Nonlinear Optimization.

    DTIC Science & Technology

    1988-05-11

    Gerald Shultz of Metropolitan State College in Denver in SIAM Journal on Numerical Analysis. The method has been implemented with the aid of Emmanuel ...appear in Handbooks in Operations Research and Management Science, Vol. 1, Optimization, G.L. Nemhauser, A.H.G. Rinnooy Kan, and M.J. Todd, eds.

  18. Optimization of space manufacturing systems

    NASA Technical Reports Server (NTRS)

    Akin, D. L.

    1979-01-01

    Four separate analyses are detailed: transportation to low earth orbit, orbit-to-orbit optimization, parametric analysis of SPS logistics based on earth and lunar source locations, and an overall program option optimization implemented with linear programming. It is found that smaller vehicles are favored for earth launch, with the current Space Shuttle being right at optimum payload size. Fully reusable launch vehicles represent a savings of 50% over the Space Shuttle; increased reliability with less maintenance could further double the savings. An optimization of orbit-to-orbit propulsion systems using lunar oxygen for propellants shows that ion propulsion is preferable by a 3:1 cost margin over a mass driver reaction engine at optimum values; however, ion engines cannot yet operate in the lower exhaust velocity range where the optimum lies, and total program costs between the two systems are ambiguous. Heavier payloads favor the use of a MDRE. A parametric model of a space manufacturing facility is proposed, and used to analyze recurring costs, total costs, and net present value discounted cash flows. Parameters studied include productivity, effects of discounting, materials source tradeoffs, economic viability of closed-cycle habitats, and effects of varying degrees of nonterrestrial SPS materials needed from earth. Finally, candidate optimal scenarios are chosen, and implemented in a linear program with external constraints in order to arrive at an optimum blend of SPS production strategies in order to maximize returns.
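
    The abstract does not give the linear program itself; the sketch below only shows the general form such a blending model can take with scipy.optimize.linprog, using made-up coefficients: maximize the relative net present value of a mix of SPS production strategies subject to launch-mass and budget limits. The strategy names and all numbers are hypothetical placeholders, not values from the study.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: units of SPS production capacity assigned to each strategy.
strategies = ["all-terrestrial", "lunar-materials", "mixed"]
npv_per_unit = np.array([1.0, 1.6, 1.3])        # relative net present value per unit of capacity
launch_mass = np.array([10.0, 3.0, 6.0])        # Earth-launch mass per unit of capacity
budget_cost = np.array([4.0, 7.0, 5.0])         # up-front cost per unit of capacity

res = linprog(
    c=-npv_per_unit,                            # linprog minimizes, so negate to maximize NPV
    A_ub=np.vstack([launch_mass, budget_cost]), # external constraints: launch mass and budget
    b_ub=np.array([60.0, 50.0]),
    bounds=[(0, 10)] * 3,                       # per-strategy capacity limits
    method="highs",
)
for name, units in zip(strategies, res.x):
    print(f"{name}: {units:.2f} units of capacity")
print("maximized relative NPV:", round(-res.fun, 2))
```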

  19. A Parallel Particle Swarm Optimizer

    DTIC Science & Technology

    2003-01-01

    by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population-based...concurrent computation. The parallelization of the Particle Swarm Optimization (PSO) algorithm is detailed and its performance and characteristics demonstrated for the biomechanical system identification problem as an example.
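
    For readers unfamiliar with the algorithm being parallelized, here is a standard serial particle swarm optimization sketch on a toy objective; the per-particle fitness evaluations (the loops marked below) are the computationally demanding part that the cited work distributes across processors. The coefficients follow common PSO defaults, and the objective is a stand-in, not the biomechanical model.

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    """Toy stand-in for the expensive biomechanical model evaluation."""
    return np.sum(x**2 + 10 * (1 - np.cos(x)))          # Rastrigin-like test function

dim, n_particles, iters = 5, 30, 200
w, c1, c2 = 0.72, 1.49, 1.49                             # standard PSO coefficients

pos = rng.uniform(-5, 5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])        # in the parallel version, this loop
gbest = pbest[np.argmin(pbest_val)].copy()               # of fitness evaluations is distributed

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])         # the expensive, parallelizable step
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    if pbest_val.min() < objective(gbest):
        gbest = pbest[np.argmin(pbest_val)].copy()

print("best value found:", round(float(objective(gbest)), 4))
```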

  20. Optimal Foraging in Semantic Memory

    ERIC Educational Resources Information Center

    Hills, Thomas T.; Jones, Michael N.; Todd, Peter M.

    2012-01-01

    Do humans search in memory using dynamic local-to-global search strategies similar to those that animals use to forage between patches in space? If so, do their dynamic memory search policies correspond to optimal foraging strategies seen for spatial foraging? Results from a number of fields suggest these possibilities, including the shared…

  1. Heat Sink Design and Optimization

    DTIC Science & Technology

    2015-12-01

    hot surfaces to cooler ambient air. Typically, the fins are oriented in a way to permit a natural convection air draft to flow upward through...main objective. Heat transfer from the heat sink consists of radiation and convection from both the intra-fin passages and the unshielded... Subject terms: natural convection, radiation, design, modeling, optimization.

  2. Optimal Preprocessing Of GPS Data

    NASA Technical Reports Server (NTRS)

    Wu, Sien-Chong; Melbourne, William G.

    1994-01-01

    Improved technique for preprocessing data from Global Positioning System (GPS) receiver reduces processing time and number of data to be stored. Technique optimal in sense it maintains strength of data. Also sometimes increases ability to resolve ambiguities in numbers of cycles of received GPS carrier signals.

  3. Optimal Preprocessing Of GPS Data

    NASA Technical Reports Server (NTRS)

    Wu, Sien-Chong; Melbourne, William G.

    1994-01-01

    Improved technique for preprocessing data from Global Positioning System receiver reduces processing time and number of data to be stored. Optimal in sense that it maintains strength of data. Also increases ability to resolve ambiguities in numbers of cycles of received GPS carrier signals.

  4. Optimal Admission to Higher Education

    ERIC Educational Resources Information Center

    Albaek, Karsten

    2017-01-01

    This paper analyses admission decisions when students from different high school tracks apply for admission to university programmes. I derive a criterion that is optimal in the sense that it maximizes the graduation rates of the university programmes. The paper contains an empirical analysis that documents the relevance of theory and illustrates…

  5. Optimization of Actuating Origami Networks

    NASA Astrophysics Data System (ADS)

    Buskohl, Philip; Fuchi, Kazuko; Bazzan, Giorgio; Joo, James; Reich, Gregory; Vaia, Richard

    2015-03-01

    Origami structures morph between 2D and 3D conformations along predetermined fold lines that efficiently program the form, function and mobility of the structure. By leveraging design concepts from action origami, a subset of origami art focused on kinematic mechanisms, reversible folding patterns for applications such as solar array packaging, tunable antennae, and deployable sensing platforms may be designed. However, the enormity of the design space and the need to identify the requisite actuation forces within the structure places a severe limitation on design strategies based on intuition and geometry alone. The present work proposes a topology optimization method, using truss and frame element analysis, to distribute foldline mechanical properties within a reference crease pattern. Known actuating patterns are placed within a reference grid and the optimizer adjusts the fold stiffness of the network to optimally connect them. Design objectives may include a target motion, stress level, or mechanical energy distribution. Results include the validation of known action origami structures and their optimal connectivity within a larger network. This design suite offers an important step toward systematic incorporation of origami design concepts into new, novel and reconfigurable engineering devices. This research is supported under the Air Force Office of Scientific Research (AFOSR) funding, LRIR 13RQ02COR.

  6. Soliton molecules: Experiments and optimization

    SciTech Connect

    Mitschke, Fedor

    2014-10-06

    Stable compound states of several fiber-optic solitons have recently been demonstrated. In the first experiment their shape was approximated, for want of a better description, by a sum of Gaussians. Here we discuss an optimization strategy which helps to find preferable shapes so that the generation of radiative background is reduced.

  7. Local optimization of energy systems

    SciTech Connect

    Lozano, M.A.; Valero, A.; Serra, L.

    1996-12-31

    Many thermal systems are very complex due to the number of components and/or their strong interdependence. This complexity makes the optimization of the system design and operation difficult. The theory of Exergetic Cost is based on concepts such as resources, structure, efficiency and purpose (belonging to any theory of production) and on the Second Law. This paper will show how it is possible to obtain from the theory of exergetic cost the marginal costs (Lagrange multipliers) of local resources being consumed by a component. This paper also shows the advantage of the proposed Theory of Perturbations in describing the complexity of structural interactions in a straightforward way. This theory allows one to formulate simple procedures for the local optimization of components in a plant. Finally, strategies for the optimization of complex systems are shown. They are based on the sequential optimization from component to component. This clear and efficient method comes from the fact that the authors now have an operative application of the Thermoeconomic Isolation Principle. This principle is applied here to thermal power plants.

  8. Optimal Energy Management for Microgrids

    NASA Astrophysics Data System (ADS)

    Zhao, Zheng

    The microgrid is a novel concept that has emerged as part of the development of the smart grid. A microgrid is a low-voltage, small-scale network containing both distributed energy resources (DERs) and load demands. Clean energy is encouraged to be used in a microgrid for economic and sustainability reasons. A microgrid can have two operational modes, the stand-alone mode and the grid-connected mode. In this research, a day-ahead optimal energy management for a microgrid under both operational modes is studied. The objective of the optimization model is to minimize fuel cost, improve energy utilization efficiency and reduce gas emissions by scheduling the generation of the DERs in each hour of the next day. Considering the dynamic performance of the battery as the Energy Storage System (ESS), the model takes the form of a multi-objective, multi-parametric programming problem with dynamic constraints, which is proposed to be solved using the Advanced Dynamic Programming (ADP) method. Then, factors influencing battery life are studied and included in the model in order to obtain an optimal usage pattern of the battery and reduce the correlated cost. Moreover, since wind and solar generation is a stochastic process affected by weather changes, the proposed optimization model is performed hourly to track the weather changes. Simulation results are compared with the day-ahead energy management model. Finally, conclusions are presented and future research in microgrid energy management is discussed.
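
    A minimal deterministic sketch of day-ahead scheduling by dynamic programming, assuming a single battery, hourly prices, and a net load profile (all hypothetical), a discretized state of charge, no efficiency losses, and no battery-ageing cost; the thesis's ADP formulation with multiple DERs, emissions objectives, and stochastic renewables is considerably richer.

```python
import numpy as np

hours = 24
price = 0.10 + 0.08 * np.sin(np.linspace(0, 2 * np.pi, hours))      # $/kWh, hypothetical
net_load = 3.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, hours) - 1) # kW load minus renewables

cap, step, rate = 10, 1, 3          # battery capacity (kWh), SOC grid step, max charge/discharge per hour
soc_levels = np.arange(0, cap + step, step)
n = len(soc_levels)

# value[t][s]: minimum cost of operating from hour t to the end, starting at SOC index s
value = np.full((hours + 1, n), 0.0)
action = np.zeros((hours, n), dtype=int)         # chosen change in SOC index

for t in range(hours - 1, -1, -1):
    for s, soc in enumerate(soc_levels):
        best, best_d = np.inf, 0
        for d in range(-rate, rate + 1):         # d > 0 charges, d < 0 discharges
            s2 = s + d
            if 0 <= s2 < n:
                grid = max(net_load[t] + d * step, 0.0)       # energy bought from the grid
                cost = price[t] * grid + value[t + 1, s2]
                if cost < best:
                    best, best_d = cost, d
        value[t, s], action[t, s] = best, best_d

# Forward pass: recover the optimal day-ahead schedule starting from an empty battery.
s = 0
for t in range(hours):
    d = action[t, s]
    print(f"hour {t:2d}: price {price[t]:.3f}  SOC {soc_levels[s]:2d} kWh  battery {d * step:+d} kWh")
    s += d
print("minimum day-ahead cost: $", round(value[0, 0], 2))
```

    The same backward-recursion structure extends to multiple DERs and stochastic inflows at the cost of a larger state space, which is what motivates approximate methods such as ADP.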

  9. Optimization in Bilingual Language Use

    ERIC Educational Resources Information Center

    Bhatt, Rakesh M.

    2013-01-01

    Pieter Muysken's keynote paper, "Language contact outcomes as a result of bilingual optimization strategies", undertakes an ambitious project to theoretically unify different empirical outcomes of language contact, for instance, SLA, pidgins and Creoles, and code-switching. Muysken has dedicated a life-time to researching, rather…

  10. Shape Optimization of Swimming Sheets

    SciTech Connect

    Wilkening, J.; Hosoi, A.E.

    2005-03-01

    The swimming behavior of a flexible sheet which moves by propagating deformation waves along its body was first studied by G. I. Taylor in 1951. In addition to being of theoretical interest, this problem serves as a useful model of the locomotion of gastropods and various micro-organisms. Although the mechanics of swimming via wave propagation has been studied extensively, relatively little work has been done to define or describe optimal swimming by this mechanism. We carry out this objective for a sheet that is separated from a rigid substrate by a thin film of viscous Newtonian fluid. Using a lubrication approximation to model the dynamics, we derive the relevant Euler-Lagrange equations to optimize swimming speed and efficiency. The optimization equations are solved numerically using two different schemes: a limited-memory BFGS method that uses cubic splines to represent the wave profile, and a multi-shooting Runge-Kutta approach that uses the Levenberg-Marquardt method to vary the parameters of the equations until the constraints are satisfied. The former approach is less efficient but generalizes nicely to the non-lubrication setting. For each optimization problem we obtain a one-parameter family of solutions that becomes singular in a self-similar fashion as the parameter approaches a critical value. We explore the validity of the lubrication approximation near this singular limit by monitoring higher-order corrections to the zeroth-order theory and by comparing the results with finite element solutions of the full Stokes equations.

  11. Global optimization of digital circuits

    NASA Astrophysics Data System (ADS)

    Flandera, Richard

    1991-12-01

    This thesis was divided into two tasks. The first task involved developing a parser which could translate a behavioral specification in Very High-Speed Integrated Circuits (VHSIC) Hardware Description Language (VHDL) into the format used by an existing digital circuit optimization tool, Boolean Reasoning In Scheme (BORIS). Since this tool is written in Scheme, a dialect of Lisp, the parser was also written in Scheme. The parser implemented is Artez's modification of Earley's algorithm. Additionally, a VHDL tokenizer was implemented in Scheme and a portion of the VHDL grammar was converted into the format which the parser uses. The second task was the incorporation of intermediate functions into BORIS. The existing BORIS contains a recursive optimization system that optimizes digital circuits by using circuit outputs as inputs into other circuits. Intermediate functions provide a greater selection of functions to be used as circuit inputs. Using both intermediate functions and output functions, the costs of the circuits in the test set were reduced by 43 percent. This is a 10 percent reduction when compared to the existing recursive optimization system. Incorporating intermediate functions into BORIS required the development of an intermediate-function generator and a set of control methods to keep the computation time from increasing exponentially.

  12. Optimal Experience of Web Activities.

    ERIC Educational Resources Information Center

    Chen, Hsiang; Wigand, R. T.; Nilan, M. S.

    1999-01-01

    Reports on Web users' optimal flow experiences to examine positive aspects of Web experiences that could be linked to theory applied to other media and then incorporated into Web design. Discusses the use of content-analytic procedures to analyze open-ended questionnaires that examined Web users' perceived flow experiences. (Author/LRW)

  13. Optimizing Requirements Decisions with KEYS

    NASA Technical Reports Server (NTRS)

    Jalali, Omid; Menzies, Tim; Feather, Martin

    2008-01-01

    Recent work with NASA's Jet Propulsion Laboratory has allowed for external access to five of JPL's real-world requirements models, anonymized to conceal proprietary information, but retaining their computational nature. Experimentation with these models, reported herein, demonstrates a dramatic speedup in the computations performed on them. These models have a well-defined goal: select mitigations that retire risks which, in turn, increases the number of attainable requirements. Such a non-linear optimization is a well-studied problem. However, identification of not only (a) the optimal solution(s) but also (b) the key factors leading to them is less well studied. Our technique, called KEYS, shows a rapid way of simultaneously identifying the solutions and their key factors. KEYS improves on prior work by several orders of magnitude. Prior experiments with simulated annealing or treatment learning took tens of minutes to hours to terminate. KEYS runs much faster than that; e.g., for one model, KEYS ran 13,000 times faster than treatment learning (40 minutes versus 0.18 seconds). Processing these JPL models is a non-linear optimization problem: the fewest mitigations must be selected while achieving the most requirements. With this paper, we challenge other members of the PROMISE community to improve on our results with other techniques.
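
    The sketch below is not KEYS; it is a greedy baseline on a made-up mitigation/risk/requirement structure, included only to show the shape of the underlying optimization: select few mitigations so that the risks they retire unlock as many requirements as possible. The matrix sizes, densities, and the budget of five mitigations are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_mit, n_risk, n_req = 12, 20, 30

# Hypothetical model structure: which risks each mitigation retires,
# and which risks block each requirement (random stand-ins for a real model).
retires = rng.random((n_mit, n_risk)) < 0.3          # retires[m, r]: mitigation m retires risk r
blocks = rng.random((n_risk, n_req)) < 0.08          # blocks[r, q]:  risk r blocks requirement q

def attained(selected):
    """Number of requirements whose blocking risks are all retired."""
    retired = retires[selected].any(axis=0)          # works for an empty selection too
    return int(np.all(~blocks | retired[:, None], axis=0).sum())

# Greedy baseline (not KEYS): repeatedly add the mitigation with the best marginal gain.
selected = []
while len(selected) < 5:                              # budget: at most 5 mitigations
    gains = [(attained(selected + [m]) - attained(selected), m)
             for m in range(n_mit) if m not in selected]
    best_gain, best_m = max(gains)
    if best_gain <= 0:
        break
    selected.append(best_m)
    print(f"add mitigation {best_m}: {attained(selected)} of {n_req} requirements attainable")
```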

  14. Wind Electrolysis: Hydrogen Cost Optimization

    SciTech Connect

    Saur, G.; Ramsden, T.

    2011-05-01

    This report describes a hydrogen production cost analysis of a collection of optimized central wind based water electrolysis production facilities. The basic modeled wind electrolysis facility includes a number of low temperature electrolyzers and a co-located wind farm encompassing a number of 3MW wind turbines that provide electricity for the electrolyzer units.

  15. Optimal control of native predators

    USGS Publications Warehouse

    Martin, Julien; O'Connell, Allan F.; Kendall, William L.; Runge, Michael C.; Simons, Theodore R.; Waldstein, Arielle H.; Schulte, Shiloh A.; Converse, Sarah J.; Smith, Graham W.; Pinion, Timothy; Rikard, Michael; Zipkin, Elise F.

    2010-01-01

    We apply decision theory in a structured decision-making framework to evaluate how control of raccoons (Procyon lotor), a native predator, can promote the conservation of a declining population of American Oystercatchers (Haematopus palliatus) on the Outer Banks of North Carolina. Our management objective was to maintain Oystercatcher productivity above a level deemed necessary for population recovery while minimizing raccoon removal. We evaluated several scenarios including no raccoon removal, and applied an adaptive optimization algorithm to account for parameter uncertainty. We show how adaptive optimization can be used to account for uncertainties about how raccoon control may affect Oystercatcher productivity. Adaptive management can reduce this type of uncertainty and is particularly well suited for addressing controversial management issues such as native predator control. The case study also offers several insights that may be relevant to the optimal control of other native predators. First, we found that stage-specific removal policies (e.g., yearling versus adult raccoon removals) were most efficient if the reproductive values among stage classes were very different. Second, we found that the optimal control of raccoons would result in higher Oystercatcher productivity than the minimum levels recommended for this species. Third, we found that removing more raccoons initially minimized the total number of removals necessary to meet long term management objectives. Finally, if for logistical reasons managers cannot sustain a removal program by removing a minimum number of raccoons annually, managers may run the risk of creating an ecological trap for Oystercatchers.

  16. Optimal deployment of missile interceptors

    SciTech Connect

    Bohachevsky, I.O.; Johnson, M.E.; Stein, M.L.

    1987-03-01

    Ballistic missile defenses composed of one and two layers of interceptors that protect multiple assets from attacks by several types of warheads are modeled mathematically. The most effective divisions of resources between midcourse and terminal defenses, and the optimal deployments of terminal interceptors, are investigated.

  17. Four-body trajectory optimization

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1973-01-01

    A collection of typical three-body trajectories from the L1 libration point on the sun-earth line to the earth is presented. These trajectories in the sun-earth system are grouped into four distinct families which differ in transfer time and delta V requirements. Curves showing the variations of delta V with respect to transfer time, and typical two- and three-impulse primer vector histories, are included. The development of a four-body trajectory optimization program to compute fuel-optimal trajectories between the earth and a point in the sun-earth-moon system is also discussed. Methods for generating fuel-optimal two-impulse trajectories which originate at the earth or a point in space, and fuel-optimal three-impulse trajectories between two points in space, are presented. A brief qualitative comparison of these methods is given. An example of a four-body two-impulse transfer from the L1 libration point to the earth is included.

  18. Cochlear implant optimized noise reduction.

    PubMed

    Mauger, Stefan J; Arora, Komal; Dawson, Pam W

    2012-12-01

    Noise-reduction methods have provided significant improvements in speech perception for cochlear implant recipients, where only quality improvements have been found in hearing aid recipients. Recent psychoacoustic studies have suggested changes to noise-reduction techniques specifically for cochlear implants, due to differences between hearing aid recipient and cochlear implant recipient hearing. An optimized noise-reduction method was developed with significantly increased temporal smoothing of the signal-to-noise ratio estimate and a more aggressive gain function compared to current noise-reduction methods. This optimized noise-reduction algorithm was tested with 12 cochlear implant recipients over four test sessions. Speech perception was assessed through speech in noise tests with three noise types; speech-weighted noise, 20-talker babble and 4-talker babble. A significant speech perception improvement using optimized noise reduction over standard processing was found in babble noise and speech-weighted noise and over a current noise-reduction method in speech-weighted noise. Speech perception in quiet was not degraded. Listening quality testing for noise annoyance and overall preference found significant improvements over the standard processing and over a current noise-reduction method in speech-weighted and babble noise types. This optimized method has shown significant speech perception and quality improvements compared to the standard processing and a current noise-reduction method.

  19. Robust, Optimal Subsonic Airfoil Shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2014-01-01

    A method has been developed to create an airfoil robust enough to operate satisfactorily in different environments. This method determines a robust, optimal, subsonic airfoil shape, beginning with an arbitrary initial airfoil shape, and imposes the necessary constraints on the design. Also, this method is flexible and extendible to a larger class of requirements and changes in constraints imposed.

  20. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  1. Multicriteria optimization informed VMAT planning

    SciTech Connect

    Chen, Huixiao; Craft, David L.; Gierga, David P.

    2014-04-01

    We developed a patient-specific volumetric-modulated arc therapy (VMAT) optimization procedure using dose-volume histogram (DVH) information from multicriteria optimization (MCO) of intensity-modulated radiotherapy (IMRT) plans. The study included 10 patients with prostate cancer undergoing standard fractionation treatment, 10 patients with prostate cancer undergoing hypofractionation treatment, and 5 patients with head/neck cancer. MCO-IMRT plans using 20 and 7 treatment fields were generated for each patient on the RayStation treatment planning system (clinical version 2.5, RaySearch Laboratories, Stockholm, Sweden). The resulting DVH of the 20-field MCO-IMRT plan for each patient was used as the reference DVH, and the extracted point values of the resulting DVH of the MCO-IMRT plan were used as objectives and constraints for VMAT optimization. Weights of the objectives, the constraints, or both were further tuned to generate the best match with the reference DVH of the MCO-IMRT plan. The final optimal VMAT plan quality was evaluated by comparison with the MCO-IMRT plans based on the homogeneity index, the conformity number of the planning target volume, and organ-at-risk sparing. The influence of gantry spacing, arc number, and delivery time on VMAT plan quality for different tumor sites was also evaluated. The resulting VMAT plan quality essentially matched that of the 20-field MCO-IMRT plan but with a shorter delivery time and fewer monitor units. VMAT plan quality for the head/neck cancer cases improved using dual arcs, whereas the prostate cases did not. VMAT plan quality was improved by fine gantry spacing of 2 for the head/neck cancer cases and the hypofractionation-treated prostate cancer cases but not for the standard fractionation-treated prostate cancer cases. MCO-informed VMAT optimization is a useful and valuable way to generate patient-specific optimal VMAT plans, though modification of the weights of objectives or constraints extracted from the resulting DVH of MCO

  2. Optimal control of hydroelectric facilities

    NASA Astrophysics Data System (ADS)

    Zhao, Guangzhi

    This thesis considers a simple yet realistic model of pump-assisted hydroelectric facilities operating in a market with time-varying but deterministic power prices. Both deterministic and stochastic water inflows are considered. The fluid mechanical and engineering details of the facility are described by a model containing several parameters. We present a dynamic programming algorithm for optimizing either the total energy produced or the total cash generated by these plants. The algorithm allows us to give the optimal control strategy as a function of time and to see how this strategy, and the associated plant value, varies with water inflow and electricity price. We investigate various cases. For a single pumped storage facility experiencing deterministic power prices and water inflows, we investigate the varying behaviour for an oversimplified constant turbine- and pump-efficiency model with simple reservoir geometries. We then generalize this simple model to include more realistic turbine efficiencies, situations with more complicated reservoir geometry, and the introduction of dissipative switching costs between various control states. We find many results which reinforce our physical intuition about this complicated system as well as results which initially challenge, though later deepen, this intuition. One major lesson of this work is that the optimal control strategy does not differ much between two differing objectives of maximizing energy production and maximizing its cash value. We then turn our attention to the case of stochastic water inflows. We present a stochastic dynamic programming algorithm which can find an on-average optimal control in the face of this randomness. As the operator of a facility must be more cautious when inflows are random, the randomness destroys facility value. Following this insight we quantify exactly how much a perfect hydrological inflow forecast would be worth to a dam operator. In our final chapter we discuss the

  3. Optimality for set-valued optimization in the sense of vector and set criteria.

    PubMed

    Kong, Xiangyu; Yu, GuoLin; Liu, Wei

    2017-01-01

    The vector criterion and set criterion are two defining approaches of solutions for the set-valued optimization problems. In this paper, the optimality conditions of both criteria of solutions are established for the set-valued optimization problems. By using Studniarski derivatives, the necessary and sufficient optimality conditions are derived in the sense of vector and set optimization.

  4. Dynamic optimization identifies optimal programmes for pathway regulation in prokaryotes.

    PubMed

    Bartl, Martin; Kötzing, Martin; Schuster, Stefan; Li, Pu; Kaleta, Christoph

    2013-01-01

    To survive in fluctuating environmental conditions, microorganisms must be able to quickly react to environmental challenges by upregulating the expression of genes encoding metabolic pathways. Here we show that protein abundance and protein synthesis capacity are key factors that determine the optimal strategy for the activation of a metabolic pathway. If protein abundance relative to protein synthesis capacity increases, the strategies shift from the simultaneous activation of all enzymes to the sequential activation of groups of enzymes and finally to a sequential activation of individual enzymes along the pathway. In the case of pathways with large differences in protein abundance, even more complex pathway activation strategies with a delayed activation of low abundance enzymes and an accelerated activation of high abundance enzymes are optimal. We confirm the existence of these pathway activation strategies as well as their dependence on our proposed constraints for a large number of metabolic pathways in several hundred prokaryotes.

  5. Optimal web investment in sub-optimal foraging conditions

    NASA Astrophysics Data System (ADS)

    Harmer, Aaron M. T.; Kokko, Hanna; Herberstein, Marie E.; Madin, Joshua S.

    2012-01-01

    Orb web spiders sit at the centre of their approximately circular webs when waiting for prey and so face many of the same challenges as central-place foragers. Prey value decreases with distance from the hub as a function of prey escape time. The further from the hub that prey are intercepted, the longer it takes a spider to reach them and the greater chance they have of escaping. Several species of orb web spiders build vertically elongated ladder-like orb webs against tree trunks, rather than circular orb webs in the open. As ladder web spiders invest disproportionately more web area further from the hub, it is expected they will experience reduced prey gain per unit area of web investment compared to spiders that build circular webs. We developed a model to investigate how building webs in the space-limited microhabitat on tree trunks influences the optimal size, shape and net prey gain of arboricolous ladder webs. The model suggests that as horizontal space becomes more limited, optimal web shape becomes more elongated, and optimal web area decreases. This change in web geometry results in decreased net prey gain compared to webs built without space constraints. However, when space is limited, spiders can achieve higher net prey gain compared to building typical circular webs in the same limited space. Our model shows how spiders optimise web investment in sub-optimal conditions and can be used to understand foraging investment trade-offs in other central-place foragers faced with constrained foraging arenas.

  6. Modal test optimization using VETO (Virtual Environment for Test Optimization)

    SciTech Connect

    Klenke, S.E.; Reese, G.M.; Schoof, L.A.; Shierling, C.L.

    1995-12-01

    We present a software environment integrating analysis- and test-based models to support optimal modal test design through a Virtual Environment for Test Optimization (VETO). The VETO assists analysis and test engineers in maximizing the value of each modal test. It is particularly advantageous for structural dynamics model reconciliation applications. The VETO enables an engineer to interact with a finite element model of a test object to optimally place sensors and exciters and to investigate the selection of data acquisition parameters needed to conduct a complete modal survey. Additionally, the user can evaluate the use of different types of instrumentation such as filters, amplifiers and transducers for which models are available in the VETO. The dynamic response of most of the virtual instruments (including the device under test) is modeled in the state space domain. Design of modal excitation levels and appropriate test instrumentation are facilitated by the VETO's ability to simulate such features as unmeasured external inputs, A/D quantization effects, and electronic noise. Measures of the quality of the experimental design, including the Modal Assurance Criterion and the Normal Mode Indicator Function, are available. The VETO also integrates tools such as Effective Independence and minamac to assist in selection of optimal sensor locations. The software is designed about three distinct modules: (1) a main controller and GUI written in C++, (2) a visualization model, taken from FEAVR, running under AVS, and (3) a state space model and time integration module, built in SIMULINK. These modules are designed to run as separate processes on interconnected machines. MATLAB's external interface library is used to provide transparent, bidirectional communication between the controlling program and the computational engine where all the time integration is performed.
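
    The abstract describes modeling the device under test and the virtual instruments in state space and then simulating effects such as A/D quantization and electronic noise. Below is a generic sketch of that idea (not the VETO/SIMULINK implementation); the single-mode system, noise level, and 12-bit ADC are made-up values.

```python
import numpy as np
from scipy.signal import StateSpace, lsim

# Single-mode structure: 20 Hz natural frequency, 2% damping (illustrative).
wn, zeta = 2 * np.pi * 20.0, 0.02
A = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
plant = StateSpace(A, B, C, D)

t = np.linspace(0, 2.0, 4000)
u = np.random.default_rng(1).normal(size=t.size)          # broadband excitation
_, y, _ = lsim(plant, U=u, T=t)

# Measurement chain: additive electronic noise plus A/D quantization.
noise = 1e-6 * np.random.default_rng(2).normal(size=y.size)
full_scale, bits = 1e-3, 12                                # hypothetical 12-bit ADC
lsb = 2 * full_scale / 2**bits
measured = np.round((y + noise) / lsb) * lsb

print("rms response:", y.std(), " rms quantization error:", lsb / np.sqrt(12))
```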

  7. Surface Navigation Using Optimized Waypoints and Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Birge, Brian

    2013-01-01

    The design priority for manned space exploration missions is almost always placed on human safety. Proposed manned surface exploration tasks (lunar, asteroid sample returns, Mars) have the possibility of astronauts traveling several kilometers away from a home base. Deviations from preplanned paths are expected while exploring. In a time-critical emergency situation, there is a need to develop an optimal home base return path. The return path may or may not be similar to the outbound path, and what defines optimal may change with, and even within, each mission. A novel path planning algorithm and prototype program was developed using biologically inspired particle swarm optimization (PSO) that generates an optimal path of traversal while avoiding obstacles. Applications include emergency path planning on lunar, Martian, and/or asteroid surfaces, generating multiple scenarios for outbound missions, Earth-based search and rescue, as well as human manual traversal and/or path integration into robotic control systems. The strategy allows for a changing environment, and can be re-tasked at will and run in real-time situations. Given a random extraterrestrial planetary or small body surface position, the goal was to find the fastest (or shortest) path to an arbitrary position such as a safe zone or geographic objective, subject to possibly varying constraints. The problem requires a workable solution 100% of the time, though it does not require the absolute theoretical optimum. Obstacles should be avoided, but if they cannot be, then the algorithm needs to be smart enough to recognize this and deal with it. With some modifications, it works with non-stationary error topologies as well.
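
    A compact sketch of particle swarm optimization applied to a toy 2-D waypoint path with a single circular obstacle; the swarm parameters, penalty weight, and obstacle are arbitrary assumptions and this is not the author's algorithm, only the generic PSO mechanism it builds on.

```python
import numpy as np

rng = np.random.default_rng(42)
start, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacle_c, obstacle_r = np.array([5.0, 5.0]), 2.0
n_way, n_particles, n_iters = 4, 40, 200

def path_cost(flat):
    """Total path length plus a penalty for waypoints inside the obstacle."""
    pts = np.vstack([start, flat.reshape(n_way, 2), goal])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    d = np.linalg.norm(pts - obstacle_c, axis=1)
    penalty = 100.0 * np.sum(np.maximum(0.0, obstacle_r - d))
    return length + penalty

dim = 2 * n_way
x = rng.uniform(0, 10, size=(n_particles, dim))            # particle positions
v = np.zeros_like(x)                                        # particle velocities
pbest, pbest_cost = x.copy(), np.array([path_cost(p) for p in x])
gbest = pbest[np.argmin(pbest_cost)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                                   # inertia, cognitive, social
for _ in range(n_iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    costs = np.array([path_cost(p) for p in x])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best path cost found:", pbest_cost.min())
```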

  8. A novel bee swarm optimization algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Akbari, Reza; Mohammadi, Alireza; Ziarati, Koorush

    2010-10-01

    The optimization algorithms inspired by the intelligent behavior of honey bees are among the most recently introduced population-based techniques. In this paper, a novel algorithm called bee swarm optimization (BSO) and two extensions for improving its performance are presented. BSO is a population-based optimization technique inspired by the foraging behavior of honey bees. The proposed approach provides different patterns which are used by the bees to adjust their flying trajectories. As the first extension, the BSO algorithm introduces approaches such as a repulsion factor and penalizing fitness (RP) to mitigate the stagnation problem. Second, to efficiently maintain the balance between exploration and exploitation, time-varying weights (TVW) are introduced into the BSO algorithm. The proposed algorithm (BSO) and its two extensions (BSO-RP and BSO-RPTVW) are compared with existing algorithms based on the intelligent behavior of honey bees on a set of well-known numerical test functions. The experimental results show that the BSO algorithms are effective and robust, produce excellent results, and outperform the other algorithms investigated in this comparison.

  9. Optimization Algorithms in Optimal Predictions of Atomistic Properties by Kriging.

    PubMed

    Di Pasquale, Nicodemo; Davie, Stuart J; Popelier, Paul L A

    2016-04-12

    The machine learning method kriging is an attractive tool to construct next-generation force fields. Kriging can accurately predict atomistic properties, which involves optimization of the so-called concentrated log-likelihood function (i.e., fitness function). The difficulty of this optimization problem quickly escalates in response to an increase in either the number of dimensions of the system considered or the size of the training set. In this article, we demonstrate and compare the use of two search algorithms, namely, particle swarm optimization (PSO) and differential evolution (DE), to rapidly obtain the maximum of this fitness function. The ability of these two algorithms to find a stationary point is assessed by using the first derivative of the fitness function. Finally, the converged position obtained by PSO and DE is refined through the limited-memory Broyden-Fletcher-Goldfarb-Shanno bounded (L-BFGS-B) algorithm, which belongs to the class of quasi-Newton algorithms. We show that both PSO and DE are able to come close to the stationary point, even in high-dimensional problems. They do so in a reasonable amount of time, compared to that with the Newton and quasi-Newton algorithms, regardless of the starting position in the search space of kriging hyperparameters. The refinement through L-BFGS-B is able to give the position of the maximum with whichever precision is desired.
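
    The two-stage search described here (a global DE or PSO pass followed by L-BFGS-B refinement) can be sketched with SciPy on a stand-in objective; the quadratic surrogate below merely stands in for the concentrated log-likelihood, whose actual form depends on the kriging covariance model and training data.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def neg_concentrated_loglik(theta):
    """Placeholder for the (negative) concentrated log-likelihood of the
    kriging hyperparameters theta; replace with the real fitness function."""
    return np.sum((np.log10(theta) - np.array([0.5, -1.0, 0.2]))**2)

bounds = [(1e-3, 1e3)] * 3                      # hyperparameter search box

# Stage 1: global search (DE here; PSO plays the same role in the paper).
coarse = differential_evolution(neg_concentrated_loglik, bounds, seed=0,
                                maxiter=100, tol=1e-6)

# Stage 2: gradient-based refinement from the global estimate.
refined = minimize(neg_concentrated_loglik, coarse.x, method="L-BFGS-B",
                   bounds=bounds)

print("DE estimate:", coarse.x, "-> L-BFGS-B refined:", refined.x)
```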

  10. Quality assurance for high dose rate brachytherapy treatment planning optimization: using a simple optimization to verify a complex optimization.

    PubMed

    Deufel, Christopher L; Furutani, Keith M

    2014-02-07

    As dose optimization for high dose rate brachytherapy becomes more complex, it becomes increasingly important to have a means of verifying that optimization results are reasonable. A method is presented for using a simple optimization as quality assurance for the more complex optimization algorithms typically found in commercial brachytherapy treatment planning systems. Quality assurance tests may be performed during commissioning, at regular intervals, and/or on a patient specific basis. A simple optimization method is provided that optimizes conformal target coverage using an exact, variance-based, algebraic approach. Metrics such as dose volume histogram, conformality index, and total reference air kerma agree closely between simple and complex optimizations for breast, cervix, prostate, and planar applicators. The simple optimization is shown to be a sensitive measure for identifying failures in a commercial treatment planning system that are possibly due to operator error or weaknesses in planning system optimization algorithms. Results from the simple optimization are surprisingly similar to the results from a more complex, commercial optimization for several clinical applications. This suggests that there are only modest gains to be made from making brachytherapy optimization more complex. The improvements expected from sophisticated linear optimizations, such as PARETO methods, will largely be in making systems more user friendly and efficient, rather than in finding dramatically better source strength distributions.
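
    The "simple optimization" idea can be illustrated by a plain non-negative least-squares fit of dwell times to prescribed dose points, whose resulting dose can then be compared against the commercial plan; the dose-rate kernel, prescription, and simulated commercial dwell times below are invented for illustration and are not the authors' algebraic formulation.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# Hypothetical dose-rate matrix: dose to each of 30 target points per unit
# dwell time at each of 8 source positions, roughly 1/r^2 shaped (cGy/s).
r = rng.uniform(0.5, 3.0, size=(30, 8))
dose_rate = 1.0 / r**2

prescription = np.full(30, 500.0)               # prescribed dose (cGy) per point

# Simple optimization: non-negative dwell times matching the prescription.
dwell_simple, residual = nnls(dose_rate, prescription)

# QA comparison against the (here simulated) commercial plan's dwell times.
dwell_commercial = dwell_simple * rng.normal(1.0, 0.02, size=8)
dose_diff = dose_rate @ (dwell_commercial - dwell_simple)
print("max target dose difference (cGy):", np.abs(dose_diff).max())
```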

  11. Quality assurance for high dose rate brachytherapy treatment planning optimization: using a simple optimization to verify a complex optimization

    NASA Astrophysics Data System (ADS)

    Deufel, Christopher L.; Furutani, Keith M.

    2014-02-01

    As dose optimization for high dose rate brachytherapy becomes more complex, it becomes increasingly important to have a means of verifying that optimization results are reasonable. A method is presented for using a simple optimization as quality assurance for the more complex optimization algorithms typically found in commercial brachytherapy treatment planning systems. Quality assurance tests may be performed during commissioning, at regular intervals, and/or on a patient specific basis. A simple optimization method is provided that optimizes conformal target coverage using an exact, variance-based, algebraic approach. Metrics such as dose volume histogram, conformality index, and total reference air kerma agree closely between simple and complex optimizations for breast, cervix, prostate, and planar applicators. The simple optimization is shown to be a sensitive measure for identifying failures in a commercial treatment planning system that are possibly due to operator error or weaknesses in planning system optimization algorithms. Results from the simple optimization are surprisingly similar to the results from a more complex, commercial optimization for several clinical applications. This suggests that there are only modest gains to be made from making brachytherapy optimization more complex. The improvements expected from sophisticated linear optimizations, such as PARETO methods, will largely be in making systems more user friendly and efficient, rather than in finding dramatically better source strength distributions.

  12. Integrated multidisciplinary design optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    The NASA/Army research plan for developing the logic elements for helicopter rotor design optimization by integrating appropriate disciplines and accounting for important interactions among the disciplines is discussed. The paper describes the optimization formulation in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and an initial effort at defining the interdisciplinary coupling is summarized. Results are presented on the achievements made in the rotor aerodynamic performance optimization for minimum hover horsepower, rotor dynamic optimization for vibration reduction, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.

  13. Chopped random-basis quantum optimization

    SciTech Connect

    Caneva, Tommaso; Calarco, Tommaso; Montangero, Simone

    2011-08-15

    In this work, we describe in detail the chopped random basis (CRAB) optimal control technique recently introduced to optimize time-dependent density matrix renormalization group simulations [P. Doria, T. Calarco, and S. Montangero, Phys. Rev. Lett. 106, 190501 (2011)]. Here, we study the efficiency of this control technique in optimizing different quantum processes and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using less resources. We propose the CRAB optimization as a general and versatile optimal control technique.
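
    CRAB expands the time-dependent control in a small, randomized truncated basis (typically a few Fourier components with randomized frequencies) and optimizes only the expansion coefficients with a direct-search method. The sketch below shows that parametrization on a toy scalar figure of merit; the target pulse and cost function are stand-ins, not a DMRG or quantum-dynamics simulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
T, n_t, n_modes = 1.0, 200, 4
t = np.linspace(0, T, n_t)
# Randomized ("chopped") frequencies around the principal harmonics.
freqs = (np.arange(1, n_modes + 1) + rng.uniform(-0.5, 0.5, n_modes)) * np.pi / T

def pulse(coeffs):
    """Guess pulse (constant here) times a truncated randomized Fourier correction."""
    a, b = coeffs[:n_modes], coeffs[n_modes:]
    correction = 1.0 + sum(a[k] * np.sin(freqs[k] * t) + b[k] * np.cos(freqs[k] * t)
                           for k in range(n_modes))
    return correction

def cost(coeffs):
    """Stand-in figure of merit: distance of the pulse from a target shape."""
    target = np.sin(np.pi * t / T) ** 2
    return np.mean((pulse(coeffs) - target) ** 2)

res = minimize(cost, x0=np.zeros(2 * n_modes), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-10})
print("final figure of merit:", res.fun)
```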

  14. [Optimization of radiological scoliosis assessment].

    PubMed

    Enríquez, Goya; Piqueras, Joaquim; Catalá, Ana; Oliva, Glòria; Ruiz, Agustí; Ribas, Montserrat; Duran, Carmina; Rodrigo, Carlos; Rodríguez, Eugenia; Garriga, Victoria; Maristany, Teresa; García-Fontecha, César; Baños, Joan; Muchart, Jordi; Alava, Fernando

    2014-07-01

    Most cases of scoliosis are idiopathic (80%) and occur more frequently in adolescent girls. Plain radiography is the imaging method of choice, both for the initial study and for follow-up studies, but has the disadvantage of using ionizing radiation. The breasts are exposed to x-rays during these repeated examinations. The authors present a range of recommendations to optimize the radiographic technique, for both conventional and digital x-ray settings, in order to prevent unnecessary patient radiation exposure and to reduce the risk of breast cancer in patients with scoliosis. With analogue systems, leaded breast protectors should always be used, and with any radiographic equipment, analogue or digital, the examination should be performed in the postero-anterior projection with optimized low-dose techniques. The ALARA (as low as reasonably achievable) rule should always be followed to achieve diagnostic-quality images with the lowest feasible dose.

  15. Optimal design of airlift fermenters

    SciTech Connect

    Moresi, M.

    1981-11-01

    In this article, a model of a draft-tube airlift fermenter (ALF), based on perfect back-mixing of the liquid and plug flow for the gas bubbles, has been developed to optimize the design and operation of fermentation units at different working capacities. With reference to a whey fermentation by yeasts, the economic optimization leads to a slim ALF with an aspect ratio of about 15. As far as power expended per unit of oxygen transfer is concerned, the responses of the model are highly influenced by kLa. However, a safer use of the model has been suggested in order to assess the feasibility of the fermentation process under study. (Refs. 39).

  16. Turbulent optimization of toroidal configurations

    NASA Astrophysics Data System (ADS)

    Mynick, H.; Xanthopoulos, P.; Faber, B.; Lucia, M.; Rorvig, M.; Talmadge, J. N.

    2014-09-01

    Recent progress in ‘turbulent optimization’ of toroidal configurations is described, using a method recently developed for evolving such configurations to ones having reduced turbulent transport. The method uses the GENE gyrokinetic code to compute the radial heat flux Qgk, and the STELLOPT optimization code with a theory-based ‘proxy’ figure of merit Qpr to stand in for Qgk for computational speed. Improved expressions for Qpr have been developed, involving further geometric quantities beyond those in the original proxy, which can also be used as ‘control knobs’ to reduce Qgk. Use of a global search algorithm has led to the discovery of turbulent-optimized configurations not found by the standard, local algorithm usually employed, as has use of a mapping capability which STELLOPT has been extended to provide, of figures of merit over the search space.

  17. Robust Optimization of Biological Protocols

    PubMed Central

    Flaherty, Patrick; Davis, Ronald W.

    2015-01-01

    When conducting high-throughput biological experiments, it is often necessary to develop a protocol that is both inexpensive and robust. Standard approaches are either not cost-effective or arrive at an optimized protocol that is sensitive to experimental variations. We show here a novel approach that directly minimizes the cost of the protocol while ensuring the protocol is robust to experimental variation. Our approach uses a risk-averse conditional value-at-risk criterion in a robust parameter design framework. We demonstrate this approach on a polymerase chain reaction protocol and show that our improved protocol is less expensive than the standard protocol and more robust than a protocol optimized without consideration of experimental variation. PMID:26417115
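
    A hedged sketch of the risk-averse idea: minimize nominal protocol cost subject to a conditional value-at-risk (CVaR) constraint on a sampled performance loss, using the Rockafellar-Uryasev sample estimate. The reagent-cost model, yield model, and loss samples are fabricated placeholders, not the PCR protocol or formulation of the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
noise = rng.normal(0.0, 0.05, size=(500, 2))         # sampled experimental variation

def protocol_cost(x):
    """Nominal reagent cost (arbitrary linear model)."""
    return 3.0 * x[0] + 1.0 * x[1]

def loss(x, eps):
    """Performance shortfall under perturbed conditions (toy yield model)."""
    yield_ = 1.0 - np.exp(-(x[0] + eps[:, 0]) * (x[1] + eps[:, 1]))
    return 0.9 - yield_                               # positive = below target yield

def cvar(x, alpha=0.95):
    """Sample estimate of CVaR_alpha of the loss (Rockafellar-Uryasev form)."""
    z = loss(x, noise)
    var = np.quantile(z, alpha)
    return var + np.mean(np.maximum(z - var, 0.0)) / (1.0 - alpha)

res = minimize(protocol_cost, x0=[1.0, 1.0], method="SLSQP",
               bounds=[(0.1, 5.0), (0.1, 5.0)],
               constraints=[{"type": "ineq", "fun": lambda x: -cvar(x)}])
print("robust protocol settings:", res.x, " cost:", protocol_cost(res.x))
```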

  18. Structural optimization - Challenges and opportunities

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    A review of developments in structural optimization techniques and their interface with growing computer capabilities is presented. Structural design steps comprise functional definition of an object, an evaluation phase wherein external influences are quantified, selection of the design concept, material, object geometry, and the internal layout, and quantification of the physical characteristics. Optimization of a fully stressed design is facilitated by use of nonlinear mathematical programming which permits automated definition of the physics of a problem. Design iterations terminate when convergence is acquired between mathematical and physical criteria. A constrained minimum algorithm has been formulated using an Augmented Lagrangian approach and a generalized reduced gradient to obtain fast convergence. Various approximation techniques are mentioned. The synergistic application of all the methods surveyed requires multidisciplinary teamwork during a design effort.

  19. Process optimization in optical fabrication

    NASA Astrophysics Data System (ADS)

    Faehnle, Oliver

    2016-03-01

    Predictable and stable fabrication processes are essential for reliable cost and quality management in optical fabrication technology. This paper reports on strategies to generate and control optimum sets of process parameters for, e.g., subaperture polishing of small optics (featuring clear apertures smaller than 2 mm). Emphasis is placed on distinguishing between machine and process optimization, demonstrating that it is possible to set up the ductile mode grinding process by means other than controlling critical depth of cut. Finally, a recently developed in situ testing technique is applied to monitor surface quality on-machine while abrasively working the surface under test enabling an online optimization of polishing processes eventually minimizing polishing time and fabrication cost.

  20. Optimal transport and the placenta

    SciTech Connect

    Morgan, Simon; Xia, Qinglan; Salafia, Carolym

    2010-01-01

    The goal of this paper is to investigate the expected effects of (i) placental size, (ii) placental shape and (iii) the position of insertion of the umbilical cord on the work done by the foetal heart in pumping blood across the placenta. We use optimal transport theory and modeling to quantify the expected effects of these factors. Total transport cost and the shape factor contribution to cost are given by the optimal transport model. Total placental transport cost is highly correlated with birth weight, placental weight, FPR, and the metabolic scaling factor beta. The shape factor is also highly correlated with birth weight and, after adjustment for placental weight, is highly correlated with the metabolic scaling factor beta.

  1. Optimal designs for copula models

    PubMed Central

    Perrone, E.; Müller, W.G.

    2016-01-01

    Copula modelling has in the past decade become a standard tool in many areas of applied statistics. However, a largely neglected aspect concerns the design of related experiments: in particular, whether the estimation of copula parameters can be enhanced by optimizing experimental conditions, and how robust the parameter estimates are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Some examples illustrate that in practical situations considerable gains in design efficiency can be achieved. A natural comparison between different copula models with respect to design efficiency is provided as well. PMID:27453616

  2. Integrated Energy System Dispatch Optimization

    SciTech Connect

    Firestone, Ryan; Stadler, Michael; Marnay, Chris

    2006-06-16

    On-site cogeneration of heat and electricity, thermal and electrical storage, and curtailing/rescheduling demand options are often cost-effective for commercial and industrial sites. This collection of equipment and responsive consumption can be viewed as an integrated energy system (IES). The IES can best meet the site's cost or environmental objectives when controlled in a coordinated manner. However, continuously determining this optimal IES dispatch is beyond the expectations for operators of smaller systems. A new algorithm is proposed in this paper to approximately solve the real-time dispatch optimization problem for a generic IES containing an on-site cogeneration system subject to random outages, limited curtailment opportunities, an intermittent renewable electricity source, and thermal storage. An example demonstrates how this algorithm can be used in simulation to estimate the value of IES components.

  3. Optimal intervention strategies for tuberculosis

    NASA Astrophysics Data System (ADS)

    Bowong, Samuel; Aziz Alaoui, A. M.

    2013-06-01

    This paper deals with the problem of optimal control of a deterministic model of tuberculosis (abbreviated as TB for tubercle bacillus). We first present and analyze an uncontrolled tuberculosis model which incorporates the essential biological and epidemiological features of the disease. The model is shown to exhibit the phenomenon of backward bifurcation, where a stable disease-free equilibrium co-exists with one or more stable endemic equilibria when the associated basic reproduction number is less than unity. Based on this continuous model, the tuberculosis control problem is formulated and solved as an optimal control problem, indicating how control terms for chemoprophylaxis and detection should be introduced into the population to reduce the number of individuals with active TB. The results provide a framework for designing cost-effective strategies for TB with two intervention methods.

  4. What May Visualization Processes Optimize?

    PubMed

    Chen, Min; Golan, Amos

    2016-12-01

    In this paper, we present an abstract model of visualization and inference processes, and describe an information-theoretic measure for optimizing such processes. In order to obtain such an abstraction, we first examined six classes of workflows in data analysis and visualization, and identified four levels of typical visualization components, namely disseminative, observational, analytical and model-developmental visualization. We noticed a common phenomenon at different levels of visualization, that is, the transformation of data spaces (referred to as alphabets) usually corresponds to the reduction of maximal entropy along a workflow. Based on this observation, we establish an information-theoretic measure of cost-benefit ratio that may be used as a cost function for optimizing a data visualization process. To demonstrate the validity of this measure, we examined a number of successful visualization processes in the literature, and showed that the information-theoretic measure can mathematically explain the advantages of such processes over possible alternatives.

  5. Optimality Principles of Undulatory Swimming

    NASA Astrophysics Data System (ADS)

    Nangia, Nishant; Bale, Rahul; Patankar, Neelesh

    2015-11-01

    A number of dimensionless quantities derived from a fish's kinematic and morphological parameters have been used to describe the hydrodynamics of swimming. In particular, body/caudal fin swimmers have been found to swim within a relatively narrow range of these quantities in nature, e.g., Strouhal number or the optimal specific wavelength. It has been hypothesized or shown that these constraints arise due to maximization of swimming speed, efficiency, or cost of transport in certain domains of this large dimensionless parameter space. Using fully resolved simulations of undulatory patterns, we investigate the existence of various optimality principles in fish swimming. Using scaling arguments, we relate various dimensionless parameters to each other. Based on these findings, we make design recommendations on how kinematic parameters for a swimming robot or vehicle should be chosen. This work is supported by NSF Grants CBET-0828749, CMMI-0941674, CBET-1066575 and the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1324585.

  6. Gain optimization with nonlinear controls

    NASA Technical Reports Server (NTRS)

    Slater, G. L.; Kandadai, R. D.

    1982-01-01

    An algorithm has been developed for the analysis and design of controls for nonlinear systems. The technical approach is to use statistical linearization to model the nonlinear dynamics of a system. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to determine optimal feedback laws. The primary application of this report is centered on the design of controls for nominally linear systems in which the controls are saturated or limited by fixed constraints. The analysis is general, however, and the numerical computation requires only that the specific nonlinearity be considered in the analysis.

  7. Safety optimization through risk management

    NASA Astrophysics Data System (ADS)

    Wright, K.; Peltonen, P.

    The paper discusses the overall process of system safety optimization in the space program environment and addresses in particular methods that enhance the efficiency of this activity. Effective system safety optimization is achieved by concentrating the available engineering and safety assurance resources on the main risk contributors. The qualitative risk contributor identification by means of the hazard analyses and the FMECA constitutes the basis for the system safety process. The risk contributors are ranked firstly on a qualitative basis according to the consequence severities. This ranking is then refined by mishap propagation/recovery time considerations and by probabilistic means (PRA). Finally, in order to broaden and extend the use of risk contributor ranking as a managerial tool in project resource assignment, quality-, manufacturing- and operations-related critical characteristics, i.e. risk influencing factors, are identified for managerial visibility.

  8. Sensor placement optimization in buildings

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Tisato, Francesco

    2012-01-01

    In this work we address the problem of optimal sensor placement for a given region and task. An important issue in designing sensor arrays is the appropriate placement of the sensors such that they achieve a predefined goal. There are many problems that could be considered in the placement of multiple sensors. In this work we focus on the four problems identified by Hörster and Lienhart. To solve these problems, we propose an algorithm based on Direct Search, which is able to approach the global optimal solution within reasonable time and memory consumption. The algorithm is experimentally evaluated and the results are presented on two real floorplans. The experimental results show that our DS algorithm is able to improve on the results given by the best-performing heuristic introduced by Hörster and Lienhart. The algorithm is then extended to work also on continuous solution spaces and on 3D problems.

  9. Optimal concentrations in nectar feeding

    PubMed Central

    Kim, Wonjung; Gilet, Tristan; Bush, John W. M.

    2011-01-01

    Nectar drinkers must feed quickly and efficiently due to the threat of predation. While the sweetest nectar offers the greatest energetic rewards, the sharp increase of viscosity with sugar concentration makes it the most difficult to transport. We here demonstrate that the sugar concentration that optimizes energy transport depends exclusively on the drinking technique employed. We identify three nectar drinking techniques: active suction, capillary suction, and viscous dipping. For each, we deduce the dependence of the volume intake rate on the nectar viscosity and thus infer an optimal sugar concentration consistent with laboratory measurements. Our results provide the first rationale for why suction feeders typically pollinate flowers with lower sugar concentration nectar than their counterparts that use viscous dipping. PMID:21949358

  10. Optimal mollifiers for spherical deconvolution

    NASA Astrophysics Data System (ADS)

    Hielscher, Ralf; Quellmalz, Michael

    2015-08-01

    This paper deals with the inversion of the spherical Funk-Radon transform, and, more generally, with the inversion of spherical convolution operators from the point of view of statistical inverse problems. This means we consider discrete data perturbed by white noise and aim at estimators with optimal mean square error for functions out of a Sobolev ball. To this end we analyze a specific class of estimators built upon the spherical hyperinterpolation operator, spherical designs and the mollifier approach. Eventually, we determine optimal mollifier functions with respect to the noise level, the number of data points and the smoothness of the original function. We complete this paper by providing a fast algorithm for the numerical computation of the estimator, which is based on the fast spherical Fourier transform, and by illustrating our theoretical results with numerical experiments.

  11. Route Optimization for Multiple Searchers

    DTIC Science & Technology

    2009-09-04

    The report introduces SP1-L, a first linearization of the search-planning model SP1, indexed by the number of looks i on a target path (i = 0, 1, ..., JT). SP1-L is a mixed-integer linear program, and the solution procedure maintains lower and upper bounds on the optimal value of SP1 within user-specified tolerances.

  12. Search for the optimal diet.

    PubMed

    Mullin, Gerard E

    2010-12-01

    Since the beginning of time, we have been searching for diets that satisfy our palates while simultaneously optimizing health and well-being. Every year, there are hundreds of new diet books on the market that make a wide range of promises but rarely deliver. Unfortunately, consumers are gullible and believe much of the marketing hype because they are desperately seeking ways to maximize their health. As a result, they continue to purchase these diet books, sending many of them all the way to the bestseller list. Because many of these meal plans are not sustainable and are questionable in their approaches, the consumer is ultimately left to continue searching, only able to choose from the newest "fad" promoted by publicists rather than being grounded in science. Thus, the search for the optimal diet continues to be the "holy grail" for many of us today, presenting a challenge for nutritionists and practitioners to provide sound advice to consumers.

  13. Structural optimization: Challenges and opportunities

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1984-01-01

    A review of developments in structural optimization techniques and their interface with growing computer capabilities is presented. Structural design steps comprise functional definition of an object, an evaluation phase wherein external influences are quantified, selection of the design concept, material, object geometry, and the internal layout, and quantification of the physical characteristics. Optimization of a fully stressed design is facilitated by use of nonlinear mathematical programming which permits automated definition of the physics of a problem. Design iterations terminate when convergence is acquired between mathematical and physical criteria. A constrained minimum algorithm has been formulated using an Augmented Lagrangian approach and a generalized reduced gradient to obtain fast convergence. Various approximation techniques are mentioned. The synergistic application of all the methods surveyed requires multidisciplinary teamwork during a design effort.

  14. Optimizing Marine Security Guard Assignments

    DTIC Science & Technology

    2011-06-01

    The thesis lists Marine Security Guard detachments by region (for example, Bangkok, Thailand in East Asia and Pacific; Fort Lauderdale, Florida in Western Hemisphere - South; Frankfurt, Germany in Western Europe and Scandinavia). Each stationing plan satisfies a myriad of unit requirements, such as building and land availability, and each assignment solution optimizes the assignment of enlisted Marines to billets. The EAM-GLOBAL model seeks the best Marine-billet fit while balancing staffing shortages.

  15. Assessment of Optimal Interrogation Approaches

    DTIC Science & Technology

    2007-05-01

    Final report (March 2006 - May 2007, contract H9C101-6-0051), prepared by David E. Smith. DACA asked the researchers to gather information from "expert" interrogators (referred to as "superior" interrogators) and to identify the common approaches and techniques employed by the majority of interrogators.

  16. Optimization of color LC displays

    NASA Astrophysics Data System (ADS)

    Kosmowski, Bogdan B.

    1995-08-01

    The advancement of liquid crystal display (LCD) technology and improvements in optical and electro-optical properties have enabled a broad expansion of the LCD application field. The rapid development of multimedia techniques and new applications in the automotive, office, and medical domains have driven demand for color displays, in which information is presented with a color code. The need to satisfy many contradictory and extreme requirements makes the optimization of color LC displays a difficult problem. Most LCDs used nowadays are twisted nematic, super twisted nematic, and active matrix thin film transistor LCDs. The characterization of achromatic black/white LCDs is made by means of photometric measuring methods, with quantitative measures such as luminance, reflectance, contrast, and contrast ratio as functions of driving voltage, viewing angle, temperature, etc. The characterization of color LCDs is based on the spectral distributions of the transmittance or reflectance; the quantitative measures are chromaticity coordinates and luminance factors defined according to the colorimetric systems CIE 1931, CIE 1976, CIELUV, and CIELAB. The color difference (Delta)E in the CIELUV system is applied as an optimization parameter for the color display module. The spectral properties of all optical elements of the display module are analyzed and their influence on the set of optical factors of the LCD is evaluated. The correlation between technological parameters and optical characteristics of the LCD has been investigated. The choice of the optimization criterion is discussed and an optimization algorithm is proposed. Results of the color display evaluation for some examples with different preconditions are presented.
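
    The color difference (Delta)E*uv used as the optimization parameter is simply the Euclidean distance in CIELUV coordinates; a minimal helper is shown below, with made-up coordinates for a target color and a displayed color.

```python
import numpy as np

def delta_e_uv(color1, color2):
    """CIE 1976 colour difference in CIELUV: Euclidean distance in (L*, u*, v*)."""
    return float(np.linalg.norm(np.asarray(color1) - np.asarray(color2)))

# Hypothetical target and measured display colours in (L*, u*, v*).
target = (50.0, 20.0, -30.0)
measured = (51.2, 18.5, -28.7)
print("Delta E*uv =", round(delta_e_uv(target, measured), 2))
```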

  17. Optimization of reinforced concrete slabs

    NASA Technical Reports Server (NTRS)

    Ferritto, J. M.

    1979-01-01

    Reinforced concrete cells composed of concrete slabs and used to limit the effects of accidental explosions during hazardous explosives operations are analyzed. An automated design procedure which considers the dynamic nonlinear behavior of the reinforced concrete of arbitrary geometrical and structural configuration subjected to dynamic pressure loading is discussed. The optimum design of the slab is examined using an interior penalty function. The optimization procedure is presented and the results are discussed and compared with finite element analysis.

  18. A Unified Approach to Optimization

    DTIC Science & Technology

    2014-10-02

    The project covers dynamic programming, logic-based Benders decomposition (LBBD), and the unification of exact and heuristic methods. LBBD has been used for some years to combine constraint programming (CP) and mixed-integer programming (MIP). Unlike classical Benders decomposition, the subproblem can be any optimization problem, and Benders cuts are generated by solving the inference dual of the subproblem.

  19. Optimization of Cylindrical Hall Thrusters

    SciTech Connect

    Yevgeny Raitses, Artem Smirnov, Erik Granstedt, and Nathaniel J. Fisch

    2007-07-24

    The cylindrical Hall thruster features high ionization efficiency, quiet operation, and ion acceleration in a large volume-to-surface ratio channel with performance comparable with the state-of-the-art annular Hall thrusters. These characteristics were demonstrated in low and medium power ranges. Optimization of miniaturized cylindrical thrusters led to performance improvements in the 50-200W input power range, including plume narrowing, increased thruster efficiency, reliable discharge initiation, and stable operation.

  20. Optimization of Cylindrical Hall Thrusters

    SciTech Connect

    Yevgeny Raitses, Artem Smirnov, Erik Granstedt, and Nathaniel J. Fisch

    2007-11-27

    The cylindrical Hall thruster features high ionization efficiency, quiet operation, and ion acceleration in a large volume-to-surface ratio channel with performance comparable with the state-of-the-art annular Hall thrusters. These characteristics were demonstrated in low and medium power ranges. Optimization of miniaturized cylindrical thrusters led to performance improvements in the 50-200W input power range, including plume narrowing, increased thruster efficiency, reliable discharge initiation, and stable operation.

  1. Optimizing Sustainable Geothermal Heat Extraction

    NASA Astrophysics Data System (ADS)

    Patel, Iti; Bielicki, Jeffrey; Buscheck, Thomas

    2016-04-01

    Geothermal heat, though renewable, can be depleted over time if the rate of heat extraction exceeds the natural rate of renewal. As such, the sustainability of a geothermal resource is typically viewed as preserving the energy of the reservoir by weighing heat extraction against renewability. But heat that is extracted from a geothermal reservoir is used to provide a service to society and an economic gain to the provider of that service. For heat extraction used for market commodities, sustainability entails balancing the rate at which the reservoir temperature renews with the rate at which heat is extracted and converted into economic profit. We present a model for managing geothermal resources that combines simulations of geothermal reservoir performance with natural resource economics in order to develop optimal heat mining strategies. Similar optimal control approaches have been developed for managing other renewable resources, like fisheries and forests. We used the Non-isothermal Unsaturated-saturated Flow and Transport (NUFT) model to simulate the performance of a sedimentary geothermal reservoir under a variety of geologic and operational situations. The results of NUFT are integrated into the optimization model to determine the extraction path over time that maximizes the net present profit given the performance of the geothermal resource. Results suggest that the discount rate that is used to calculate the net present value of economic gain is a major determinant of the optimal extraction path, particularly for shallower and cooler reservoirs, where the regeneration of energy due to the natural geothermal heat flux is a smaller percentage of the amount of energy that is extracted from the reservoir.

  2. Exponential approximations in optimal design

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

    1990-01-01

    One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal, and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the number of exact analyses, which involve computationally expensive finite element analysis.
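
    A common textbook form of the one-point exponential (power-law) approximation referred to here, written for a response g about a design point x^0; this is a standard form rather than the exact expression used by the authors:

```latex
\tilde g(x) \;=\; g(x^{0}) \prod_{i=1}^{n}
      \left(\frac{x_i}{x_i^{0}}\right)^{p_i},
\qquad
p_i \;=\; \frac{x_i^{0}}{g(x^{0})}\,
          \left.\frac{\partial g}{\partial x_i}\right|_{x^{0}},
```

    so that the exponents reproduce the exact first derivatives at x^0, with p_i = 1 recovering the linear behaviour in x_i and p_i = -1 the reciprocal approximation.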

  3. Pharmacological optimization of tissue perfusion

    PubMed Central

    Mongardon, N.; Dyson, A.; Singer, M.

    2009-01-01

    After fluid resuscitation, vasoactive drug treatment represents the major cornerstone for correcting any major impairment of the circulation. However, debate still rages as to the choice of agent, dose, timing, targets, and monitoring modalities that should optimally be used to benefit the patient yet, at the same time, minimize harm. This review highlights these areas and some new pharmacological agents that broaden our therapeutic options. PMID:19460775

  4. Algorithms for optimal redundancy allocation

    SciTech Connect

    Vandenkieboom, J.; Youngblood, R.

    1993-01-01

    Heuristic and exact methods for solving the redundancy allocation problem are compared to an approach based on genetic algorithms. The various methods are applied to the bridge problem, which has been used as a benchmark in earlier work on optimization methods. Comparisons are presented in terms of the best configuration found by each method, and the computation effort which was necessary in order to find it.

  5. Optimal Search and Interdiction Planning

    DTIC Science & Technology

    2014-06-18

    The thesis notes that over 1,500 metric tons of illegal drugs are seized in transit to the United States annually (United Nations Office on Drugs and Crime, 2012), motivating optimal search and interdiction planning. The work was supported by the Office of Naval Research, Mathematical Optimization and Operations Research Program.

  6. Response Surface Model Building and Multidisciplinary Optimization Using D-Optimal Designs

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Lepsch, Roger A.; McMillin, Mark L.

    1998-01-01

    This paper discusses response surface methods for approximation model building and multidisciplinary design optimization. The response surface methods discussed are central composite designs, Bayesian methods, and D-optimal designs. An over-determined D-optimal design is applied to a configuration design and optimization study of a wing-body launch vehicle. Results suggest that over-determined D-optimal designs may provide an efficient approach for approximation model building and for multidisciplinary design optimization.
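
    The D-optimality criterion behind these designs maximizes the determinant of the information matrix X'X of the assumed regression model. The sketch below scores candidate designs for a hypothetical two-factor quadratic response surface and keeps the best of many random candidates, a crude stand-in for proper point-exchange algorithms.

```python
import numpy as np

rng = np.random.default_rng(5)

def model_matrix(points):
    """Quadratic response-surface model in two coded factors x1, x2."""
    x1, x2 = points[:, 0], points[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

def d_criterion(points):
    """D-optimality score: determinant of the information matrix X'X."""
    X = model_matrix(points)
    return np.linalg.det(X.T @ X)

n_runs, n_candidates = 10, 2000
best_design, best_score = None, -np.inf
for _ in range(n_candidates):
    design = rng.uniform(-1, 1, size=(n_runs, 2))    # coded factor levels in [-1, 1]
    score = d_criterion(design)
    if score > best_score:
        best_design, best_score = design, score

print("best |X'X| found over random candidates:", best_score)
```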

  7. Optimal control and optimal trajectories of regional macroeconomic dynamics based on the Pontryagin maximum principle

    NASA Astrophysics Data System (ADS)

    Bulgakov, V. K.; Strigunov, V. V.

    2009-05-01

    The Pontryagin maximum principle is used to prove a theorem concerning optimal control in regional macroeconomics. A boundary value problem for optimal trajectories of the state and adjoint variables is formulated, and optimal curves are analyzed. An algorithm is proposed for solving the boundary value problem of optimal control. The performance of the algorithm is demonstrated by computing an optimal control and the corresponding optimal trajectories.
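
    For a control problem of the generic type considered here, with dynamics \dot x = f(x,u,t) and payoff J = \int_0^T F(x,u,t)\,dt, the Pontryagin maximum principle yields the following Hamiltonian system and maximization condition (written in a generic form, not the specific regional macroeconomic model of the paper):

```latex
H(x,\psi,u,t) \;=\; F(x,u,t) + \psi^{\top} f(x,u,t),
\qquad
\dot x = \frac{\partial H}{\partial \psi}, \quad
\dot \psi = -\frac{\partial H}{\partial x}, \quad
u^{*}(t) = \arg\max_{u \in U} H\bigl(x^{*}(t),\psi(t),u,t\bigr),
```

    with the initial condition x(0) = x_0 and, for a free terminal state with no terminal payoff, the transversality condition \psi(T) = 0; together these form the two-point boundary value problem for the optimal trajectories mentioned in the abstract.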

  8. Radiation Shielding Optimization on Mars

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Mertens, Chris J.; Blattnig, Steve R.

    2013-01-01

    Future space missions to Mars will require radiation shielding to be optimized for deep space transit and an extended stay on the surface. In deep space, increased shielding levels and material optimization will reduce the exposure from most solar particle events (SPE) but are less effective at shielding against galactic cosmic rays (GCR). On the surface, the shielding provided by the Martian atmosphere greatly reduces the exposure from most SPE, and long-term GCR exposure is a primary concern. Previous work has shown that in deep space, additional shielding of common materials such as aluminum or polyethylene does not significantly reduce the GCR exposure. In this work, it is shown that on the Martian surface, almost any amount of aluminum shielding increases exposure levels for humans. The increased exposure levels are attributed to neutron production in the shield and Martian regolith as well as the electromagnetic cascade induced in the Martian atmosphere. This result is significant for optimization of vehicle and shield designs intended for the surface of Mars.

  9. [Procedure optimization in hospital management].

    PubMed

    Bauer, M; Hanss, R; Schleppers, A; Steinfath, M; Tonner, P H; Martin, J

    2004-05-01

    Starting January 1st 2004 the German diagnosis-related group (DRG) system was established for in-patient cases. Consequently, the detection and realization of cost-saving potentials are becoming more and more important. For a successful future, efficient allocation of resources is essential. Economically, anaesthesia-related time delays during perioperative work-flow should be minimized. Since numerous entities contribute to perioperative care, it is extremely complex to analyze and optimize this process flow. In this publication single steps leading to an optimized perioperative process flow will be presented: documentation of predefined time points, calculation of relevant time intervals and analysis of key numbers for complex settings. Single steps of the given process analysis will be demonstrated using data from surgical patients at the University Hospital Schleswig-Holstein, Campus Kiel. The attached data collection sheets can be used by interested hospital departments and are meant to serve as a template for further process analyses. Based on the shown analysis, an example will be given to develop an optimized work-flow as a standard operating procedure (SOP). The implementation of the SOP module in an interdisciplinary clinical pathway (CP), which defines efficient medical care from admission to discharge, is mainly responsible for decreased process costs but increased quality of care.

  10. Optimal segmentation and packaging process

    DOEpatents

    Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.

    1999-08-10

    A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D and D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation, and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded. 3 figs.

  11. Optimal Stopping with Information Constraint

    SciTech Connect

    Lempa, Jukka

    2012-10-15

    We study the optimal stopping problem proposed by Dupuis and Wang (Adv. Appl. Probab. 34:141-157, 2002). In this maximization problem of the expected present value of the exercise payoff, the underlying dynamics follow a linear diffusion. The decision maker is not allowed to stop at any time she chooses but rather on the jump times of an independent Poisson process. Dupuis and Wang (Adv. Appl. Probab. 34:141-157, 2002), solve this problem in the case where the underlying is a geometric Brownian motion and the payoff function is of American call option type. In the current study, we propose a mild set of conditions (covering the setup of Dupuis and Wang in Adv. Appl. Probab. 34:141-157, 2002) on both the underlying and the payoff and build and use a Markovian apparatus based on the Bellman principle of optimality to solve the problem under these conditions. We also discuss the interpretation of this model as optimal timing of an irreversible investment decision under an exogenous information constraint.

  12. Optimization of Micromachined Photon Devices

    SciTech Connect

    Datskos, P.G.; Datskou, I.; Evans, B.M., III; Rajic, S.

    1999-07-18

    The Oak Ridge National Laboratory has been instrumental in developing ultraprecision technologies for the fabrication of optical devices. We are currently extending our ultraprecision capabilities to the design, fabrication, and testing of micro-optics and MEMS devices. Techniques have been developed in our lab for fabricating micro-devices using single point diamond turning and ion milling. The devices we fabricated can be used in micro-scale interferometry, micro-positioners, micro-mirrors, and chemical sensors. In this paper, we focus on the optimization of microstructure performance using finite element analysis and the experimental validation of those results. We also discuss the fabrication of such structures and the optical testing of the devices. The performance is simulated using finite element analysis to optimize geometric and material parameters. The parameters we studied include bimaterial coating thickness effects; device length, width, and thickness effects, as well as changes in the geometry itself. This optimization results in increased sensitivity of these structures to absorbed incoming energy, which is important for photon detection or micro-mirror actuation. We have investigated and tested multiple geometries. The devices were fabricated using focused ion beam milling, and their response was measured using a chopped photon source and laser triangulation techniques. Our results are presented and discussed.

  13. Optimal performance of regenerative cryocoolers

    NASA Astrophysics Data System (ADS)

    de Boer, P. C. T.

    2011-02-01

    The key component of a regenerative cryocooler is its regenerative heat exchanger. This device is subject to losses due to imperfect heat transfer between the regenerator material and the gas, as well as due to viscous dissipation. The relative magnitudes of these losses can be characterized by the ratio of the Stanton number St to the Fanning friction factor f. Using available data for the ratio St/ f, results are developed for the optimal cooling rate and Carnot efficiency. The variations of pressure and temperature are taken to be sinusoidal in time, and to have small amplitudes. The results are applied to the case of the Stirling cryocooler, with flow being generated by pistons at both sides of the regenerator. The performance is found to be close to optimal at large ratio of the warm space volume to the regenerator void volume. The results are also applied to the Orifice Pulse Tube Refrigerator. In this case, optimal performance additionally requires a large ratio of the regenerator void volume to the cold space volume.

  14. Optimal control of overdamped systems

    NASA Astrophysics Data System (ADS)

    Zulkowski, Patrick R.; DeWeese, Michael R.

    2015-09-01

    Nonequilibrium physics encompasses a broad range of natural and synthetic small-scale systems. Optimizing transitions of such systems will be crucial for the development of nanoscale technologies and may reveal the physical principles underlying biological processes at the molecular level. Recent work has demonstrated that when a thermodynamic system is driven away from equilibrium then the space of controllable parameters has a Riemannian geometry induced by a generalized inverse diffusion tensor. We derive a simple, compact expression for the inverse diffusion tensor that depends solely on equilibrium information for a broad class of potentials. We use this formula to compute the minimal dissipation for two model systems relevant to small-scale information processing and biological molecular motors. In the first model, we optimally erase a single classical bit of information modeled by an overdamped particle in a smooth double-well potential. In the second model, we find the minimal dissipation of a simple molecular motor model coupled to an optical trap. In both models, we find that the minimal dissipation for the optimal protocol of duration τ is proportional to 1 /τ , as expected, though the dissipation for the erasure model takes a different form than what we found previously for a similar system.

  15. Feasible optimality implies Hack's Law

    NASA Astrophysics Data System (ADS)

    Rigon, Riccardo; Rodriguez-Iturbe, Ignacio; Rinaldo, Andrea

    1998-11-01

    We analyze the elongation (the scaling properties of drainage area with mainstream length) in optimal channel networks (OCNs) obtained through different algorithms searching for the minimum of a functional computing the total energy dissipation of the drainage system. The algorithms have different capabilities to overcome the imprinting of initial and boundary conditions, and thus they have different chances of attaining the global optimum. We find that suboptimal shapes, i.e., dynamically accessible states characterized by locally stationary total potential energy, show the robust type of elongation that is consistently observed in nature. This suggestive and directly measurable property is not found in the so-called ground state, i.e., the global minimum, whose features, including elongation, are known exactly. The global minimum is shown to be too regular and symmetric to be dynamically accessible in nature, owing to features and constraints of erosional processes. Hack's law is thus seen as a signature of feasible optimality, lending further support to the suggestion that optimality of the system as a whole explains the dynamic origin of fractal forms in nature.

  16. MOSCITO: a program system for MEMS optimization

    NASA Astrophysics Data System (ADS)

    Schneider, Peter; Schneider, Andre; Bastian, J.; Reitz, S.; Schwarz, Peter

    2002-04-01

    Computer-aided MEMS optimization regarding performance, power consumption, and reliability is an important design task due to high prototyping costs. In the MEMS design flow, a variety of specialized tools is available. FEM tools (e.g. ANSYS, CFD-ACE+) are widely used for simulation on the component level. Simulations on the system level are carried out with simplified models using simulators like Saber, ELDO, or Spice. A few simulators offer tool-specific optimization capabilities, but there is a lack of simulator-independent support for MEMS optimization. The paper presents a modular approach for simulation-based optimization, which aims at a flexible combination of simulators and optimization algorithms by partitioning the optimization cycle into separate modules for model generation, simulation, error calculation, and optimization. Available optimization algorithms include direct and indirect methods as well as stochastic approaches. Interfaces to the simulators ANSYS, ELDO, Saber, MATLAB, and SPICE are implemented. Thus the optimization task can be solved on different levels of model abstraction (FEM, ordinary differential equations, generalized networks, etc.). A graphical user interface (GUI) supports control and visualization of the optimization progress. The modules of the optimization system may communicate via the internet (web-based optimization, distributed optimization).
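
    To make the modular partitioning concrete, the sketch below wires stand-in modules for model generation, simulation, error calculation, and optimization into one loop. It is a minimal illustration only: the simulator, the least-squares error measure, and the parameter names are invented for the example and are not part of MOSCITO.

      # Minimal sketch of a modular, simulator-independent optimization loop
      # (the simulator and error measure below are made-up stand-ins; the
      #  module boundaries mirror the description above).
      import numpy as np
      from scipy.optimize import minimize

      def generate_model(params):
          """Model generation module: map design parameters to a simulator input."""
          return {"stiffness": params[0], "damping": params[1]}

      def run_simulation(model):
          """Simulation module: stand-in for an external solver (ANSYS, Saber, ...)."""
          t = np.linspace(0.0, 1.0, 200)
          response = np.exp(-model["damping"] * t) * np.cos(model["stiffness"] * t)
          return t, response

      def compute_error(simulated, target):
          """Error-calculation module: least-squares mismatch to a target response."""
          return float(np.sum((simulated - target) ** 2))

      # Synthetic "measured" target that the optimizer should reproduce.
      _, target = run_simulation({"stiffness": 12.0, "damping": 3.0})

      def objective(params):
          _, response = run_simulation(generate_model(params))
          return compute_error(response, target)

      # Optimization module: any algorithm can be plugged in here.
      result = minimize(objective, x0=[8.0, 1.0], method="Nelder-Mead")
      print("recovered parameters:", result.x)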

  17. Optimal scaling in ductile fracture

    NASA Astrophysics Data System (ADS)

    Fokoua Djodom, Landry

    This work is concerned with the derivation of optimal scaling laws, in the sense of matching lower and upper bounds on the energy, for a solid undergoing ductile fracture. The specific problem considered concerns a material sample in the form of an infinite slab of finite thickness subjected to prescribed opening displacements on its two surfaces. The solid is assumed to obey deformation-theory of plasticity and, in order to further simplify the analysis, we assume isotropic rigid-plastic deformations with zero plastic spin. When hardening exponents are given values consistent with observation, the energy is found to exhibit sublinear growth. We regularize the energy through the addition of nonlocal energy terms of the strain-gradient plasticity type. This nonlocal regularization has the effect of introducing an intrinsic length scale into the energy. We also put forth a physical argument that identifies the intrinsic length and suggests a linear growth of the nonlocal energy. Under these assumptions, ductile fracture emerges as the net result of two competing effects: whereas the sublinear growth of the local energy promotes localization of deformation to failure planes, the nonlocal regularization stabilizes this process, thus resulting in an orderly progression towards failure and a well-defined specific fracture energy. The optimal scaling laws derived here show that ductile fracture results from localization of deformations to void sheets, and that it requires a well-defined energy per unit fracture area. In particular, fractal modes of fracture are ruled out under the assumptions of the analysis. The optimal scaling laws additionally show that ductile fracture is cohesive in nature, i.e., it obeys a well-defined relation between tractions and opening displacements. Finally, the scaling laws supply a link between micromechanical properties and macroscopic fracture properties. In particular, they reveal the relative roles that surface energy and microplasticity

  18. Offshore wind farm layout optimization

    NASA Astrophysics Data System (ADS)

    Elkinton, Christopher Neil

    Offshore wind energy technology is maturing in Europe and is poised to make a significant contribution to the U.S. energy production portfolio. Building on the knowledge the wind industry has gained to date, this dissertation investigates the influences of different site conditions on offshore wind farm micrositing---the layout of individual turbines within the boundaries of a wind farm. For offshore wind farms, these conditions include, among others, the wind and wave climates, water depths, and soil conditions at the site. An analysis tool has been developed that is capable of estimating the cost of energy (COE) from offshore wind farms. For this analysis, the COE has been divided into several modeled components: major costs (e.g. turbines, electrical interconnection, maintenance, etc.), energy production, and energy losses. By treating these component models as functions of site-dependent parameters, the analysis tool can investigate the influence of these parameters on the COE. Some parameters result in simultaneous increases of both energy and cost. In these cases, the analysis tool was used to determine the value of the parameter that yielded the lowest COE and, thus, the best balance of cost and energy. The models have been validated and generally compare favorably with existing offshore wind farm data. The analysis technique was then paired with optimization algorithms to form a tool with which to design offshore wind farm layouts for which the COE was minimized. Greedy heuristic and genetic optimization algorithms have been tuned and implemented. The use of these two algorithms in series has been shown to produce the best, most consistent solutions. The influences of site conditions on the COE have been studied further by applying the analysis and optimization tools to the initial design of a small offshore wind farm near the town of Hull, Massachusetts. The results of an initial full-site analysis and optimization were used to constrain the boundaries of
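
    A greedy placement heuristic of the kind paired with the genetic algorithm above can be sketched in a few lines. The cost-of-energy surrogate, wake penalty, and all numbers below are illustrative assumptions, not the dissertation's models.

      # Greedy layout sketch: add turbines one at a time at the grid cell that
      # minimizes a toy cost-of-energy surrogate (costs and the wake penalty
      # are invented for illustration).
      import itertools
      import numpy as np

      GRID = list(itertools.product(range(5), range(5)))   # candidate positions (km)
      TURBINE_COST, FIXED_COST = 4.0, 20.0                 # arbitrary cost units
      BASE_ENERGY = 10.0                                   # energy per unwaked turbine

      def energy(layout):
          """Toy production model: each turbine loses output when close to others."""
          total = 0.0
          for i, (xi, yi) in enumerate(layout):
              loss = sum(np.exp(-np.hypot(xi - xj, yi - yj))
                         for j, (xj, yj) in enumerate(layout) if j != i)
              total += BASE_ENERGY * max(0.1, 1.0 - 0.5 * loss)
          return total

      def coe(layout):
          return (FIXED_COST + TURBINE_COST * len(layout)) / energy(layout)

      layout = []
      while True:
          candidates = [p for p in GRID if p not in layout]
          if not candidates:
              break
          best = min(candidates, key=lambda p: coe(layout + [p]))
          if layout and coe(layout + [best]) >= coe(layout):
              break                      # adding another turbine no longer lowers COE
          layout.append(best)

      print(f"{len(layout)} turbines, COE = {coe(layout):.3f}")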

  19. Shape optimization of peristaltic pumping

    NASA Astrophysics Data System (ADS)

    Walker, Shawn W.; Shelley, Michael J.

    2010-02-01

    Transport is a fundamental aspect of biology, and peristaltic pumping is a fundamental mechanism to accomplish it; it is also important to many industrial processes. We present a variational method for optimizing the wave shape of a peristaltic pump. Specifically, we optimize the wave profile of a two-dimensional channel containing a Navier-Stokes fluid with no assumption on the wave profile other than that it is a traveling wave (e.g. we do not assume it is the graph of a function). Hence, this is an infinite-dimensional optimization problem. The optimization criterion consists of minimizing the input fluid power (due to the peristaltic wave) subject to constraints on the average flux of fluid and the area of the channel. Sensitivities of the cost and constraints are computed variationally via shape differential calculus, and we use a sequential quadratic programming (SQP) method to find a solution of the first-order KKT conditions. We also use a merit-function-based line search in order to balance decreasing the cost against keeping the constraints satisfied when updating the channel shape. Our numerical implementation uses a finite element method for computing a solution of the Navier-Stokes equations and adjoint equations, as well as for the SQP method when computing perturbations of the channel shape. The walls of the channel are deformed by an explicit front-tracking approach. In computing functional sensitivities with respect to shape, we use L2-type projections for computing boundary stresses and for geometric quantities such as the tangent field on the channel walls and the curvature; we show error estimates for the boundary stress and tangent field approximations. As a result, we find optimized shapes that are not obvious and have not been previously reported in the peristaltic pumping literature. Specifically, we see highly asymmetric wave shapes that are far from being sine waves. Many examples are shown for a range of fluxes and Reynolds numbers up to Re=500.
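
    The paper's optimization is infinite-dimensional, but the structure of the problem (minimize input power subject to a flux constraint) can be illustrated with a finite-dimensional analogue using SciPy's SLSQP implementation of SQP. The "power" and "flux" functionals below are toy surrogates over a few wave-shape coefficients, not the Navier-Stokes functionals of the paper.

      # Finite-dimensional analogue of the constrained shape optimization
      # (a sketch only: the power and flux surrogates are invented).
      import numpy as np
      from scipy.optimize import minimize

      x = np.linspace(0.0, 2.0 * np.pi, 400)

      def wall(a):
          """Channel wall described by a few Fourier coefficients a_k."""
          return 1.0 + sum(ak * np.sin((k + 1) * x) for k, ak in enumerate(a))

      def power(a):
          """Toy input-power surrogate: penalizes steep wall slopes."""
          return float(np.trapz(np.gradient(wall(a), x) ** 2, x))

      def flux(a):
          """Toy flux surrogate: mean squared wall deflection."""
          return float(np.trapz((wall(a) - 1.0) ** 2, x) / (2.0 * np.pi))

      target_flux = 0.05
      res = minimize(power, x0=np.array([0.3, 0.1, 0.0]), method="SLSQP",
                     constraints=[{"type": "eq",
                                   "fun": lambda a: flux(a) - target_flux}])
      print("optimized coefficients:", res.x, "power:", res.fun)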

  20. Optimization Program for Drinking Water Systems

    EPA Pesticide Factsheets

    The Area-Wide Optimization Program (AWOP) provides tools and approaches for drinking water systems to meet water quality optimization goals and provide an increased – and sustainable – level of public health protection to their consumers.

  1. Roadmap to Long-Term Monitoring Optimization

    EPA Pesticide Factsheets

    This roadmap focuses on optimization of established long-term monitoring programs for groundwater. Tools and techniques discussed concentrate on methods for optimizing the monitoring frequency and spatial (three-dimensional) distribution of wells ...

  2. Risk Analysis for Resource Planning Optimization

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming

    2008-01-01

    This paper describes a systems engineering approach to resource planning by integrating mathematical modeling and constrained optimization, empirical simulation, and theoretical analysis techniques to generate an optimal task plan in the presence of uncertainties.

  3. A Framework for Designing Optimal Spacecraft Formations

    DTIC Science & Technology

    2002-09-01

    Indexed excerpt (table-of-contents and text fragments): the report covers the choice of reference frame and methods for solving optimal control problems. One fragment notes that the spacecraft state contains a minimum of six variables, with additional state variables possible depending on the model.

  4. Helicopter mission optimization study. [portable computer technology for flight optimization

    NASA Technical Reports Server (NTRS)

    Olson, J. R.

    1978-01-01

    The feasibility of using low-cost, portable computer technology to help a helicopter pilot optimize flight parameters to minimize fuel consumption and takeoff and landing noise was demonstrated. Eight separate computer programs were developed for use in the helicopter cockpit using a hand-held computer. The programs provide the helicopter pilot with the ability to calculate power required, minimum fuel consumption for both range and endurance, maximum speed, and a minimum-noise profile for both takeoff and landing. Each program is defined by a maximum of two magnetic cards. The helicopter pilot is required to key in the proper input parameters, such as gross weight, outside air temperature, or pressure altitude.

  5. HOPSPACK: Hybrid Optimization Parallel Search Package.

    SciTech Connect

    Gray, Genetha Anne.; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica L.

    2008-12-01

    In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.
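
    The following sketch illustrates the general idea of a pattern search whose poll points are evaluated in parallel; it mimics the concept only and is not HOPSPACK's actual interface, and the objective function is an invented example.

      # Sketch of a hybrid-style, parallel pattern search: evaluate a poll set
      # of trial points in parallel, keep the best, and shrink the step.
      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def objective(x):
          return float((x[0] - 1.0) ** 2 + 10.0 * (x[1] + 0.5) ** 2)

      def pattern_search(x0, step=1.0, tol=1e-6, workers=4):
          x, fx = np.asarray(x0, float), objective(x0)
          directions = np.vstack([np.eye(len(x0)), -np.eye(len(x0))])
          with ProcessPoolExecutor(max_workers=workers) as pool:
              while step > tol:
                  trials = [x + step * d for d in directions]
                  values = list(pool.map(objective, trials))   # parallel evaluations
                  best = int(np.argmin(values))
                  if values[best] < fx:
                      x, fx = trials[best], values[best]        # successful poll
                  else:
                      step *= 0.5                                # shrink and retry
          return x, fx

      if __name__ == "__main__":
          print(pattern_search([5.0, 5.0]))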

  6. Multiobjective Topology Optimization of Energy Absorbing Materials

    DTIC Science & Technology

    2015-08-01

    Indexed excerpt (text fragments): candidate designs are combined using an overlapping function; the resulting data structure is tree-shaped, so genetic programming is used as the optimizer. The forward problem is solved with a ... strain energy. Results demonstrate the efficacy of the proposed algorithm. Subject terms: topology optimization; Pareto optimization; genetic ...

  7. Design of Optimal Cyclers Using Solar Sails

    DTIC Science & Technology

    2002-12-01

    Indexed excerpt (text fragments): the report states the first- and second-order necessary (but not sufficient) conditions for optimality using the Hamiltonian; the optimal control solution is the one for which H is minimized, and for these problems H = -1 for all time. Given the results of the optimization and the initial conditions, the path of the sail could be propagated by means of a numeric ordinary differential equation solver.

  8. Optimization of modified volume Fresnel zone plates.

    PubMed

    Srisungsitthisunti, Pornsak; Ersoy, Okan K; Xu, Xianfan

    2009-10-01

    Modified volume Fresnel zone plates (MVFZPs) fabricated with laser direct writing were optimized for higher diffraction efficiencies. The Fresnel radii in each layer of a volume zone plate were iteratively adjusted by a simulation-based direct search optimization. The results show that optimization is effective but depends strongly on the starting diffraction efficiencies determined by the MVFZP parameters. The simulations indicate that the optimized MVFZP can achieve 93% diffraction efficiency.

  9. Program Aids Analysis And Optimization Of Design

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1994-01-01

    NETS/PROSSS (NETS Coupled With Programming System for Structural Synthesis) computer program developed to provide system for combining NETS (MSC-21588), neural-network application program, and CONMIN (Constrained Function Minimization, ARC-10836), optimization program. Enables user to reach nearly optimal design. Design then used as starting point in normal optimization process, possibly enabling user to converge to optimal solution in significantly fewer iterations. NETS/PROSSS written in C language and FORTRAN 77.

  10. Geometric Computational Mechanics and Optimal Control

    DTIC Science & Technology

    2011-12-02

    Indexed excerpt (text and citation fragments): further methods that depend on global optimization problems are in development, and preliminary versions of these results have appeared (e.g., de la Sociedad Española de Matemática Aplicada (SeMA), 50, 2010, pp. 61-81; K. Flaßkamp, S. Ober-Blöbaum, M. Kobilarov, "Solving optimal control ..."). Another fragment notes that, consequently, globally optimal methods for computing optimal trajectories for vehicles with complex dynamics were developed.

  11. Optimizing Dynamical Network Structure for Pinning Control

    NASA Astrophysics Data System (ADS)

    Orouskhani, Yasin; Jalili, Mahdi; Yu, Xinghuo

    2016-04-01

    Controlling the dynamics of a network from any initial state to a final desired state has many applications in different disciplines, from engineering to biology and the social sciences. In this work, we optimize the network structure for pinning control. The problem is formulated as four optimization tasks: i) optimizing the locations of driver nodes, ii) optimizing the feedback gains, iii) simultaneously optimizing the locations of driver nodes and the feedback gains, and iv) optimizing the connection weights. A newly developed population-based optimization technique (cat swarm optimization) is used as the optimization method. In order to verify the methods, we use both real-world networks and model scale-free and small-world networks. Extensive simulation results show that the optimal placement of driver nodes significantly outperforms heuristic methods, including placing drivers based on various centrality measures (degree, betweenness, closeness, and clustering coefficient). The pinning controllability is further improved by optimizing the feedback gains. We also show that one can significantly improve the controllability by optimizing the connection weights.
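
    A common way to score a candidate set of pinned (driver) nodes is the smallest eigenvalue of the grounded Laplacian obtained by deleting the pinned rows and columns; the sketch below uses that score to compare a degree heuristic with plain random search. The scoring convention and the random-search stand-in for cat swarm optimization are assumptions for illustration, not the paper's method.

      # Score driver-node sets by the smallest eigenvalue of the grounded
      # Laplacian (larger is better), on a small random undirected graph.
      import numpy as np

      rng = np.random.default_rng(0)
      n, p, n_drivers = 30, 0.15, 3
      A = (rng.random((n, n)) < p).astype(float)
      A = np.triu(A, 1)
      A = A + A.T                                         # undirected adjacency
      L = np.diag(A.sum(axis=1)) - A                      # graph Laplacian

      def score(drivers):
          keep = [i for i in range(n) if i not in set(drivers)]
          grounded = L[np.ix_(keep, keep)]
          return float(np.min(np.linalg.eigvalsh(grounded)))

      degree_pick = list(np.argsort(-A.sum(axis=1))[:n_drivers])

      best_random, best_score = None, -np.inf
      for _ in range(2000):                               # crude random search
          cand = list(rng.choice(n, size=n_drivers, replace=False))
          s = score(cand)
          if s > best_score:
              best_random, best_score = cand, s

      print("degree heuristic :", degree_pick, score(degree_pick))
      print("random search    :", best_random, best_score)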

  12. Enhanced Ocean Predictability Through Optimal Observing Strategies

    DTIC Science & Technology

    2003-09-30

    Indexed excerpt (text fragments): one objective is to use these methods to design optimal observing strategies, with special emphasis on drifter deployments, and to assess the predictability of the optimal deployment strategy; the original idea was to use ensemble methods for this analysis. PI: A. D. Kirwan, Jr., College of Marine Studies, University of Delaware.

  13. An algorithm for LQ optimal actuator location

    NASA Astrophysics Data System (ADS)

    Darivandi, Neda; Morris, Kirsten; Khajepour, Amir

    2013-03-01

    The locations of the control hardware are typically design variables in controller design for distributed parameter systems. In order to obtain the most efficient control system, the locations of the control hardware as well as the feedback gain should be optimized. These optimization problems are generally non-convex. In addition, the models for these systems typically have a large number of degrees of freedom. Consequently, existing optimization schemes for optimal actuator placement may be inaccurate or computationally impractical. In this paper, the feedback control is chosen to be an optimal linear quadratic regulator. The optimal actuator location problem is reformulated as a convex optimization problem. A subgradient-based optimization scheme, which leads to the global solution of the problem, is used to optimize the actuator locations. The optimization algorithm is applied to optimize the placement of piezoelectric actuators for vibration control of flexible structures. This method is compared with a genetic algorithm and is observed to be faster and more accurate. Experiments are performed to verify the efficacy of optimal actuator placement.
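
    As a point of comparison with the convex reformulation described above, a brute-force baseline simply solves the algebraic Riccati equation for every candidate actuator location and ranks locations by trace(P). The sketch below does exactly that for a small random system; it is not the paper's subgradient method, and the system matrices are invented.

      # Brute-force actuator placement baseline: for each candidate input
      # location, solve the continuous-time algebraic Riccati equation and
      # rank locations by trace(P).
      import numpy as np
      from scipy.linalg import solve_continuous_are

      rng = np.random.default_rng(1)
      n = 6
      A = rng.normal(size=(n, n))
      A -= (np.max(np.linalg.eigvals(A).real) + 0.5) * np.eye(n)   # make A stable
      Q, R = np.eye(n), np.eye(1)

      costs = []
      for loc in range(n):
          B = np.zeros((n, 1))
          B[loc, 0] = 1.0                         # actuator acts on one state
          P = solve_continuous_are(A, B, Q, R)
          costs.append(float(np.trace(P)))

      best = int(np.argmin(costs))
      print("LQR cost per location:", np.round(costs, 2))
      print("best actuator location:", best)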

  14. Educational Optimism among Parents: A Pilot Study

    ERIC Educational Resources Information Center

    Räty, Hannu; Kasanen, Kati

    2016-01-01

    This study explored parents' (N = 351) educational optimism in terms of their trust in the possibilities of school to develop children's intelligence. It was found that educational optimism could be depicted as a bipolar factor with optimism and pessimism on the opposing ends of the same dimension. Optimistic parents indicated more satisfaction…

  15. Optimal Designs for the Rasch Model

    ERIC Educational Resources Information Center

    Grasshoff, Ulrike; Holling, Heinz; Schwabe, Rainer

    2012-01-01

    In this paper, optimal designs will be derived for estimating the ability parameters of the Rasch model when difficulty parameters are known. It is well established that a design is locally D-optimal if the ability and difficulty coincide. But locally optimal designs require that the ability parameters to be estimated are known. To attenuate this…

  16. Genetic algorithms - What fitness scaling is optimal?

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Quintana, Chris; Fuentes, Olac

    1993-01-01

    The problem of choosing the best scaling function is formulated as a mathematical optimization problem and solved under different optimality criteria. A list of functions that are optimal under different criteria is presented; it includes both the functions that have proved best empirically and new functions that may be worth trying.
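
    One classical candidate is linear fitness scaling, f' = a*f + b, with a and b chosen so that the scaled mean equals the raw mean and the best individual receives a fixed multiple C of the mean. The sketch below implements only that scaling step; the value C = 1.8 is an illustrative assumption.

      # Linear fitness scaling for a genetic algorithm's selection step.
      import numpy as np

      def linear_scale(fitness, c_mult=1.8):
          f = np.asarray(fitness, dtype=float)
          f_avg, f_max = f.mean(), f.max()
          if np.isclose(f_max, f_avg):                 # flat population: no scaling
              return np.full_like(f, f_avg)
          a = (c_mult - 1.0) * f_avg / (f_max - f_avg)  # scaled max = C * mean
          b = f_avg * (1.0 - a)                         # scaled mean = raw mean
          return np.clip(a * f + b, 0.0, None)          # keep probabilities valid

      raw = [1.0, 2.0, 2.5, 9.0]
      print("raw   :", raw)
      print("scaled:", np.round(linear_scale(raw), 3))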

  17. Merits and limitations of optimality criteria method for structural optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Guptill, James D.; Berke, Laszlo

    1993-01-01

    The merits and limitations of the optimality criteria (OC) method for the minimum weight design of structures subjected to multiple load conditions under stress, displacement, and frequency constraints were investigated by examining several numerical examples. The examples were solved utilizing the Optimality Criteria Design Code that was developed for this purpose at NASA Lewis Research Center. This OC code incorporates OC methods available in the literature with generalizations for stress constraints, fully utilized design concepts, and hybrid methods that combine both techniques. Salient features of the code include multiple choices for Lagrange multiplier and design variable update methods, design strategies for several constraint types, variable linking, displacement and integrated force method analyzers, and analytical and numerical sensitivities. The performance of the OC method, on the basis of the examples solved, was found to be satisfactory for problems with few active constraints or with small numbers of design variables. For problems with large numbers of behavior constraints and design variables, the OC method appears to follow a subset of active constraints that can result in a heavier design. The computational efficiency of OC methods appears to be similar to some mathematical programming techniques.

  18. Optimization and geophysical inverse problems

    SciTech Connect

    Barhen, J.; Berryman, J.G.; Borcea, L.; Dennis, J.; de Groot-Hedlin, C.; Gilbert, F.; Gill, P.; Heinkenschloss, M.; Johnson, L.; McEvilly, T.; More, J.; Newman, G.; Oldenburg, D.; Parker, P.; Porto, B.; Sen, M.; Torczon, V.; Vasco, D.; Woodward, N.B.

    2000-10-01

    A fundamental part of geophysics is to make inferences about the interior of the earth on the basis of data collected at or near the surface of the earth. In almost all cases these measured data are only indirectly related to the properties of the earth that are of interest, so an inverse problem must be solved in order to obtain estimates of the physical properties within the earth. In February of 1999 the U.S. Department of Energy sponsored a workshop that was intended to examine the methods currently being used to solve geophysical inverse problems and to consider what new approaches should be explored in the future. The interdisciplinary area between inverse problems in geophysics and optimization methods in mathematics was specifically targeted as one where an interchange of ideas was likely to be fruitful. Thus about half of the participants were actively involved in solving geophysical inverse problems and about half were actively involved in research on general optimization methods. This report presents some of the topics that were explored at the workshop and the conclusions that were reached. In general, the objective of a geophysical inverse problem is to find an earth model, described by a set of physical parameters, that is consistent with the observational data. It is usually assumed that the forward problem, that of calculating simulated data for an earth model, is well enough understood so that reasonably accurate synthetic data can be generated for an arbitrary model. The inverse problem is then posed as an optimization problem, where the function to be optimized is variously called the objective function, misfit function, or fitness function. The objective function is typically some measure of the difference between observational data and synthetic data calculated for a trial model. However, because of incomplete and inaccurate data, the objective function often incorporates some additional form of regularization, such as a measure of smoothness
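
    The objective function described above, a data misfit plus a regularization such as a measure of smoothness, takes a standard Tikhonov form in the linear case. The sketch below solves such a problem with a first-difference roughness penalty; the forward operator, noise level, and regularization weight are synthetic stand-ins.

      # Tikhonov-regularized linear inversion: minimize ||G m - d||^2 + alpha ||D m||^2.
      import numpy as np

      rng = np.random.default_rng(2)
      n_data, n_model = 40, 60
      G = rng.normal(size=(n_data, n_model))                   # forward operator
      m_true = np.sin(np.linspace(0, 3 * np.pi, n_model))      # smooth "earth model"
      d = G @ m_true + 0.05 * rng.normal(size=n_data)          # noisy observations

      D = np.diff(np.eye(n_model), axis=0)                     # first-difference roughness
      alpha = 5.0                                              # regularization weight

      # Solve the normal equations of the regularized least-squares problem.
      m_est = np.linalg.solve(G.T @ G + alpha * D.T @ D, G.T @ d)
      print("relative model error:",
            np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))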

  19. Optimizing High Level Waste Disposal

    SciTech Connect

    Dirk Gombert

    2005-09-01

    If society is ever to reap the potential benefits of nuclear energy, technologists must close the fuel-cycle completely. A closed cycle equates to a continued supply of fuel and safe reactors, but also reliable and comprehensive closure of waste issues. High level waste (HLW) disposal in borosilicate glass (BSG) is based on 1970s era evaluations. This host matrix is very adaptable to sequestering a wide variety of radionuclides found in raffinates from spent fuel reprocessing. However, it is now known that the current system is far from optimal for disposal of the diverse HLW streams, and proven alternatives are available to reduce costs by billions of dollars. The basis for HLW disposal should be reassessed to consider extensive waste form and process technology research and development efforts, which have been conducted by the United States Department of Energy (USDOE), international agencies and the private sector. Matching the waste form to the waste chemistry and using currently available technology could increase the waste content in waste forms to 50% or more and double processing rates. Optimization of the HLW disposal system would accelerate HLW disposition and increase repository capacity. This does not necessarily require developing new waste forms, the emphasis should be on qualifying existing matrices to demonstrate protection equal to or better than the baseline glass performance. Also, this proposed effort does not necessarily require developing new technology concepts. The emphasis is on demonstrating existing technology that is clearly better (reliability, productivity, cost) than current technology, and justifying its use in future facilities or retrofitted facilities. Higher waste processing and disposal efficiency can be realized by performing the engineering analyses and trade-studies necessary to select the most efficient methods for processing the full spectrum of wastes across the nuclear complex. This paper will describe technologies being

  20. Optimality of human contour integration.

    PubMed

    Ernst, Udo A; Mandon, Sunita; Schinkel-Bielefeld, Nadja; Neitzel, Simon D; Kreiter, Andreas K; Pawelzik, Klaus R

    2012-01-01

    For processing and segmenting visual scenes, the brain is required to combine a multitude of features and sensory channels. It is neither known if these complex tasks involve optimal integration of information, nor according to which objectives computations might be performed. Here, we investigate if optimal inference can explain contour integration in human subjects. We performed experiments where observers detected contours of curvilinearly aligned edge configurations embedded into randomly oriented distractors. The key feature of our framework is to use a generative process for creating the contours, for which it is possible to derive a class of ideal detection models. This allowed us to compare human detection for contours with different statistical properties to the corresponding ideal detection models for the same stimuli. We then subjected the detection models to realistic constraints and required them to reproduce human decisions for every stimulus as well as possible. By independently varying the four model parameters, we identify a single detection model which quantitatively captures all correlations of human decision behaviour for more than 2000 stimuli from 42 contour ensembles with greatly varying statistical properties. This model reveals specific interactions between edges closely matching independent findings from physiology and psychophysics. These interactions imply a statistics of contours for which edge stimuli are indeed optimally integrated by the visual system, with the objective of inferring the presence of contours in cluttered scenes. The recurrent algorithm of our model makes testable predictions about the temporal dynamics of neuronal populations engaged in contour integration, and it suggests a strong directionality of the underlying functional anatomy.

  1. Energy optimization in DOD facilities

    SciTech Connect

    Roach, F.; Kirschner, C.; Salmon, R.

    1981-01-01

    A static linear programming formulation (management tool) of energy optimization problems on military bases has been developed to assist each of the military services in their planning activities and budgetary allocation decisions. Several objective functions have been defined, resulting in two types of model capabilities: minimization of capital costs (investments) subject to a number of energy and dollar constraints and the maximization of energy savings subject to capital and operating fund budget restrictions and minimum energy performance goals. The management tool defines various levels of aggregation in terms of: (1) geographical boundaries; (2) end-use energy demand; (3) building type characteristics; (4) conservation options; (5) renewable energy and alternative fuel technologies; and (6) a limited set of advanced energy technology options. Both a technical description of and a user's guide to the principal model components and operational attributes of the constructed DOD energy optimization model are presently being prepared. Two key questions are briefly reviewed within the context of preliminary results obtained from application of the developed model to two Air Force Logistics Command installations: (1) the geographical distribution of military construction dollars under a set of budgetary and energy performance constraints; and (2) the selection of energy supply technologies - conventional conservation, renewable, and advanced - that simultaneously meet demand at least cost and satisfy a set of conflicting energy and budgetary goals. Temporal aspects of the problem are handled on a year-by-year basis, with information from a previous year's optimal investment and associated energy savings included in each succeeding year's decision criteria. Benefits and costs of the budgetary and energy allocation results are evaluated as part of the allocation decisions.
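
    The first model capability, minimizing capital cost subject to energy-savings and budget constraints, corresponds to a small linear program. The sketch below illustrates it with three hypothetical conservation options; all coefficients and limits are invented for the example.

      # Toy linear program: choose investment levels x_i in three conservation
      # options to minimize capital cost subject to a minimum energy-savings
      # goal and an operating-budget limit.
      import numpy as np
      from scipy.optimize import linprog

      capital_cost = np.array([10.0, 25.0, 40.0])    # $ per unit of each option
      energy_saved = np.array([1.0, 3.0, 5.5])       # MWh saved per unit
      operating    = np.array([0.5, 0.8, 2.0])       # $/yr operating cost per unit

      # linprog minimizes c @ x subject to A_ub @ x <= b_ub
      res = linprog(c=capital_cost,
                    A_ub=np.vstack([-energy_saved, operating]),
                    b_ub=np.array([-100.0, 60.0]),   # save >= 100 MWh, operate <= $60/yr
                    bounds=[(0, 50)] * 3, method="highs")
      print("investment levels:", np.round(res.x, 2), "capital cost:", res.fun)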

  2. Treelength Optimization for Phylogeny Estimation

    PubMed Central

    Liu, Kevin; Warnow, Tandy

    2012-01-01

    The standard approach to phylogeny estimation uses two phases, in which the first phase produces an alignment on a set of homologous sequences, and the second phase estimates a tree on the multiple sequence alignment. POY, a method which seeks a tree/alignment pair minimizing the total treelength, is the most widely used alternative to this two-phase approach. The topological accuracy of trees computed under treelength optimization is, however, controversial. In particular, one study showed that treelength optimization using simple gap penalties produced poor trees and alignments, and suggested the possibility that if POY were used with an affine gap penalty, it might be able to be competitive with the best two-phase methods. In this paper we report on a study addressing this possibility. We present a new heuristic for treelength, called BeeTLe (Better Treelength), that is guaranteed to produce trees at least as short as POY. We then use this heuristic to analyze a large number of simulated and biological datasets, and compare the resultant trees and alignments to those produced using POY and also maximum likelihood (ML) and maximum parsimony (MP) trees computed on a number of alignments. In general, we find that trees produced by BeeTLe are shorter and more topologically accurate than POY trees, but that neither POY nor BeeTLe produces trees as topologically accurate as ML trees produced on standard alignments. These findings, taken as a whole, suggest that treelength optimization is not as good an approach to phylogenetic tree estimation as maximum likelihood based upon good alignment methods. PMID:22442677

  3. Hubble Systems Optimize Hospital Schedules

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Don Rosenthal, a former Ames Research Center computer scientist who helped design the Hubble Space Telescope's scheduling software, co-founded Allocade Inc. of Menlo Park, California, in 2004. Allocade's OnCue software helps hospitals reclaim unused capacity and optimize constantly changing schedules for imaging procedures. After starting to use the software, one medical center soon reported noticeable improvements in efficiency, including a 12 percent increase in procedure volume, 35 percent reduction in staff overtime, and significant reductions in backlog and technician phone time. Allocade now offers versions for outpatient and inpatient magnetic resonance imaging (MRI), ultrasound, interventional radiology, nuclear medicine, Positron Emission Tomography (PET), radiography, radiography-fluoroscopy, and mammography.

  4. Optimization of hydraulic turbine diffuser

    NASA Astrophysics Data System (ADS)

    Moravec, Prokop; Hliník, Juraj; Rudolf, Pavel

    2016-03-01

    A hydraulic turbine diffuser recovers pressure energy from the residual kinetic energy at the turbine runner outlet. The efficiency of this process is especially important for high specific speed turbines, where almost 50% of the available head is utilized within the diffuser. The magnitude of the coefficient of pressure recovery can be significantly influenced by designing a proper diffuser shape. The present paper focuses on mathematical shape optimization methods coupled with CFD. The first method is based on the direct-search Nelder-Mead algorithm, while the second method employs an adjoint solver and morphing. Results obtained with both methods are discussed and their advantages/disadvantages summarized.
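
    The first approach can be illustrated by running Nelder-Mead over a couple of shape parameters, with a cheap analytic surrogate standing in for the CFD evaluation of the pressure recovery coefficient; the surrogate and its optimum are purely illustrative.

      # Nelder-Mead over two diffuser shape parameters with a surrogate cp.
      import numpy as np
      from scipy.optimize import minimize

      def pressure_recovery(params):
          """Surrogate cp(opening angle [deg], length ratio): peaks near (7, 4)."""
          angle, length = params
          return 0.8 * np.exp(-((angle - 7.0) / 4.0) ** 2
                              - ((length - 4.0) / 2.5) ** 2)

      # Nelder-Mead minimizes, so optimize the negative pressure recovery.
      res = minimize(lambda p: -pressure_recovery(p), x0=[15.0, 2.0],
                     method="Nelder-Mead")
      print("optimal angle, length ratio:", np.round(res.x, 2),
            "cp =", round(-res.fun, 3))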

  5. Optimization of integrated polarization filters.

    PubMed

    Gagnon, Denis; Dumont, Joey; Déziel, Jean-Luc; Dubé, Louis J

    2014-10-01

    This study reports on the design of small footprint, integrated polarization filters based on engineered photonic lattices. Using a rods-in-air lattice as a basis for a TE filter and a holes-in-slab lattice for the analogous TM filter, we are able to maximize the degree of polarization of the output beams up to 98% with a transmission efficiency greater than 75%. The proposed designs allow not only for logical polarization filtering, but can also be tailored to output an arbitrary transverse beam profile. The lattice configurations are found using a recently proposed parallel tabu search algorithm for combinatorial optimization problems in integrated photonics.

  6. Optimal broadcasting of mixed states

    SciTech Connect

    Dang Guifang; Fan Heng

    2007-08-15

    The N to M (M ≥ N) universal quantum broadcasting of mixed states ρ^⊗N is proposed for a qubit system. The broadcasting of mixed states is universal and optimal in the sense that the shrinking factor is independent of the input state and achieves the upper bound. The quantum broadcasting of mixed qubits is a generalization of the universal quantum cloning machine for identical pure input states. A pure state decomposition of the identical mixed qubits ρ^⊗N is obtained.

  7. Optimization of spatial complex networks

    NASA Astrophysics Data System (ADS)

    Guillier, S.; Muñoz, V.; Rogan, J.; Zarama, R.; Valdivia, J. A.

    2017-02-01

    First, we estimate the connectivity properties of a predefined (fixed node locations) spatial network which optimizes a connectivity functional that balances construction and transportation costs. In this case we obtain a Gaussian distribution for the connectivity. However, when we consider these spatial networks in a growing process, we obtain a power law distribution for the connectivity. If the transportation costs in the functional involve the shortest geometrical path, we obtain a scaling exponent γ = 2.5. However, if the transportation costs in the functional involve just the shortest path, we obtain γ = 2.2. Both cases may be useful to analyze in some real networks.

  8. Pilot-optimal augmentation synthesis

    NASA Technical Reports Server (NTRS)

    Schmidt, D. K.

    1978-01-01

    An augmentation synthesis method usable in the absence of quantitative handling qualities specifications, and yet explicitly including design objectives based on pilot-rating concepts, is presented. The algorithm involves the unique approach of simultaneously solving for the stability augmentation system (SAS) gains, pilot equalization and pilot rating prediction via optimal control techniques. Simultaneous solution is required in this case since the pilot model (gains, etc.) depends upon the augmented plant dynamics, and the augmentation is obviously not a priori known. Another special feature is the use of the pilot's objective function (from which the pilot model evolves) to design the SAS.

  9. Promoting Optimal Care in Childbirth

    PubMed Central

    Lothian, Judith A.

    2014-01-01

    In 1996, the World Health Organization set out guidelines for normal birth. Since that time, birth in the United States has continued to be intervention intensive, the cesarean rate has skyrocketed, and maternal mortality, although low, is rising. At the same time, research continues to provide evidence for the benefits of supporting the normal physiologic process of labor and birth and the risks of interfering with this natural process. This article reviews the current state of U.S. maternity care and discusses research and advocacy efforts that address this issue. It also describes optimal care in childbirth and introduces the Lamaze International Six Healthy Birth Practices.

  10. Manifold Learning by Graduated Optimization.

    PubMed

    Gashler, M; Ventura, D; Martinez, T

    2011-12-01

    We present an algorithm for manifold learning called manifold sculpting, which utilizes graduated optimization to seek an accurate manifold embedding. An empirical analysis across a wide range of manifold problems indicates that manifold sculpting yields more accurate results than a number of existing algorithms, including Isomap, locally linear embedding (LLE), Hessian LLE (HLLE), and landmark maximum variance unfolding (L-MVU), and is significantly more efficient than HLLE and L-MVU. Manifold sculpting also has the ability to benefit from prior knowledge about expected results.

  11. Optimal breast cancer pathology manifesto.

    PubMed

    Tot, T; Viale, G; Rutgers, E; Bergsten-Nordström, E; Costa, A

    2015-11-01

    This manifesto was prepared by a European Breast Cancer (EBC) Council working group and launched at the European Breast Cancer Conference in Glasgow on 20 March 2014. It sets out optimal technical and organisational requirements for a breast cancer pathology service, in the light of concerns about variability and lack of patient-centred focus. It is not a guideline about how pathology services should be performed. It is a call for all in the cancer community--pathologists, oncologists, patient advocates, health administrators and policymakers--to check that services are available that serve the needs of patients in a high quality, timely way.

  12. Practical Aspects of Nonlinear Optimization.

    DTIC Science & Technology

    1981-06-19

    Indexed excerpt (report header, reference, and text fragments): AD-AIO 858, Massachusetts Inst. of Tech., Lexington, Lincoln Lab., "Practical Aspects of Nonlinear Optimization," June 1981, R. B. Holmes and J. W. Tolleson. Cited references include E. Levitan and B. Polyak, "Constrained Minimization Methods," USSR Comp. Math. and Math. Physics 6, 1 (1966), and J. May, "Solving Nonlinear ...". One text fragment defines a feasible set Q by constraints d_j, 1 ≤ j ≤ m, with the understanding that the Q so defined has a non-empty interior (is "solid"), and states that no qualitative assumptions are made on the objective.

  13. Optimal Implantable Cardioverter Defibrillator Programming.

    PubMed

    Shah, Bindi K

    Optimal programming of implantable cardioverter defibrillators (ICDs) is essential to appropriately treat ventricular tachyarrhythmias and to avoid unnecessary and inappropriate shocks. There have been a series of large clinical trials evaluating tailored programming of ICDs. We reviewed the clinical trials evaluating ICD therapies and detection, and the consensus statement on ICD programming. In doing so, we found that prolonged ICD detection times, higher rate cutoffs, and antitachycardia pacing (ATP) programming decreases inappropriate and painful therapies in a primary prevention population. The use of supraventricular tachyarrhythmia discriminators can also decrease inappropriate shocks. Tailored ICD programming using the knowledge gained from recent ICD trials can decrease inappropriate and unnecessary ICD therapies and decrease mortality.

  14. Optimal Implantable Cardioverter Defibrillator Programming.

    PubMed

    Shah, Bindi K

    2016-11-17

    Optimal programming of implantable cardioverter defibrillators (ICDs) is essential to appropriately treat ventricular tachyarrhythmias and to avoid unnecessary and inappropriate shocks. There have been a series of large clinical trials evaluating tailored programming of ICDs. We reviewed the clinical trials evaluating ICD therapies and detection, as well as the consensus statement on ICD programming. In so doing, we found that prolonged ICD detection times, higher rate cutoffs, and antitachycardia pacing programming decreases inappropriate and painful therapies in a primary prevention population. The use of supraventricular tachyarrhythmia discriminators can also decrease inappropriate shocks. Tailored ICD programming using the knowledge gained from recent ICD trials can decrease inappropriate and unnecessary ICD therapies, and decrease mortality.

  15. Surface passivation optimization using DIRECT

    NASA Astrophysics Data System (ADS)

    Graf, Peter A.; Kim, Kwiseon; Jones, Wesley B.; Wang, Lin-Wang

    2007-06-01

    We describe a systematic and efficient method of determining pseudo-atom positions and potentials for use in nanostructure calculations based on bulk empirical pseudopotentials (EPMs). Given a bulk EPM for binary semiconductor X, we produce parameters for pseudo-atoms necessary to passivate a nanostructure of X in preparation for quantum mechanical electronic structure calculations. These passivants are based on the quality of the wave functions of a set of small test structures that include the passivants. Our method is based on the global optimization method DIRECT. It enables and/or streamlines surface passivation for empirical pseudopotential calculations.
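
    For reference, a DIRECT global optimizer is available in SciPy (version 1.9 and later) as scipy.optimize.direct. The sketch below applies it to a toy multimodal surrogate standing in for the wave-function-quality objective of the paper; the parameters and bounds are invented for the example.

      # DIRECT global optimization of a toy two-parameter surrogate objective.
      import numpy as np
      from scipy.optimize import direct, Bounds

      def passivation_quality(x):
          """Toy multimodal objective over (bond length, potential strength)."""
          r, v = x
          return float(np.sin(3.0 * r) * np.cos(2.0 * v)
                       + 0.1 * (r - 1.5) ** 2 + 0.1 * v ** 2)

      res = direct(passivation_quality,
                   bounds=Bounds([0.5, -2.0], [3.0, 2.0]), maxiter=500)
      print("best parameters:", np.round(res.x, 3), "objective:", round(res.fun, 4))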

  16. Optimized microsystems-enabled photovoltaics

    DOEpatents

    Cruz-Campa, Jose Luis; Nielson, Gregory N.; Young, Ralph W.; Resnick, Paul J.; Okandan, Murat; Gupta, Vipin P.

    2015-09-22

    Technologies pertaining to designing microsystems-enabled photovoltaic (MEPV) cells are described herein. A first restriction for a first parameter of an MEPV cell is received. Subsequently, a selection of a second parameter of the MEPV cell is received. Values for a plurality of parameters of the MEPV cell are computed such that the MEPV cell is optimized with respect to the second parameter, wherein the values for the plurality of parameters are computed based at least in part upon the restriction for the first parameter.

  17. [Considerations for optimizing joint implants].

    PubMed

    Tensi, H M; Orloff, S; Gese, H; Hooputra, H

    1994-09-01

    Despite the increasing use of orthopaedic implants, there is still a lack of adequate testing procedures and legal guidelines. Examples of the consequences of this neglect are given. Modern techniques for the calculation of stresses (finite element method [FEM]) and the prediction of life cycle duration are presented. Such methods, applied in the development and manufacturing phases of standard and special implants, may ensure an adequate prosthetic life cycle, with particular emphasis being placed on the biomedical optimization of the implant/bone interface and surrounding bone.

  18. NBODY - a multipurpose trajectory optimization computer program

    NASA Technical Reports Server (NTRS)

    Strack, W. C.

    1974-01-01

    Documentation of the NBODY trajectory optimization program is presented in the form of a mathematical development plus a user's manual. Optimal multistage-launch ascent trajectories may be determined by variational thrust steering during the upper phase. Optimal low-thrust interplanetary spacecraft trajectories may also be calculated with solar power or constant power, all-propulsion or embedded coast arcs, fixed or optimal thrust angles, and a variety of terminal end conditions. A hybrid iteration scheme solves the boundary-value problem, while either transversality conditions or a univariate search scheme optimize vehicle or trajectory parameters.

  19. Integrated multidisciplinary design optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Mantay, Wayne R.

    1989-01-01

    The NASA/Army research plan for developing the logic elements for helicopter rotor design optimization by integrating appropriate disciplines and accounting for important interactions among the disciplines is discussed. The optimization formulation is described in terms of the objective function, design variables, and constraints. The analysis aspects are discussed, and an initial effort at defining the interdisciplinary coupling is summarized. Results are presented on the achievements made in the rotor dynamic optimization for vibration reduction, rotor structural optimization for minimum weight, and integrated aerodynamic load/dynamics optimization for minimum vibration and weight.

  20. Product Distributions for Distributed Optimization. Chapter 1

    NASA Technical Reports Server (NTRS)

    Bieniawski, Stefan R.; Wolpert, David H.

    2004-01-01

    With connections to bounded rational game theory, information theory, and statistical mechanics, Product Distribution (PD) theory provides a new framework for performing distributed optimization. Furthermore, PD theory extends and formalizes Collective Intelligence, thus connecting distributed optimization to distributed Reinforcement Learning (RL). This paper provides an overview of PD theory and details an algorithm for performing optimization derived from it. The approach is demonstrated on two unconstrained optimization problems, one with discrete variables and one with continuous variables. To highlight the connections between PD theory and distributed RL, the results are compared with those obtained using distributed-reinforcement-learning-inspired optimization approaches. The inter-relationship of the techniques is discussed.

  1. Structural optimization using Newton Modified Barrier Method

    NASA Astrophysics Data System (ADS)

    Khot, N. S.; Polyak, R.; Schneur, R.

    1992-09-01

    The Newton Modified Barrier Method (NMBM) was applied to a structural optimization problem with large numbers of design variables and constraints. This mathematical optimization algorithm was based on Modified Barrier Function (MBF) theory and a globally convergent step version of the Newton Method for smooth unconstrained optimization. To illustrate the convergence characteristics of this method for structural optimization, a truss structure with 721 design variables, with constraints on displacements and minimum size requirements, was solved. The convergence to the optimum was found to be monotonic. The rate of convergence was compared with that obtained by solving the same problem with ASTROS and an optimality criteria approach.
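
    The modified barrier function at the heart of the NMBM can be sketched on a toy problem: for constraints c_i(x) >= 0, minimize F(x) = f(x) - (1/k) * sum_i lam_i * ln(1 + k*c_i(x)) in x, then update lam_i <- lam_i / (1 + k*c_i(x)). In the sketch below the Newton inner solve of the paper is replaced by a generic derivative-free minimizer, and the small constrained problem is an assumption for illustration.

      # Modified barrier function iteration on a toy constrained problem:
      # minimize distance to (2, 2) subject to staying inside the unit disk.
      import numpy as np
      from scipy.optimize import minimize

      def f(x):                       # objective
          return (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2

      def c(x):                       # single inequality constraint c(x) >= 0
          return np.array([1.0 - x[0] ** 2 - x[1] ** 2])

      def mbf(x, lam, k):
          safe = np.maximum(1.0 + k * c(x), 1e-12)      # guard the log's domain
          return f(x) - np.sum(lam * np.log(safe)) / k

      x, lam, k = np.array([0.0, 0.0]), np.array([1.0]), 2.0
      for _ in range(10):                               # outer multiplier updates
          x = minimize(mbf, x, args=(lam, k), method="Nelder-Mead").x
          lam = lam / (1.0 + k * c(x))                  # MBF multiplier update

      print("x* ~", np.round(x, 3), " lambda* ~", np.round(lam, 3))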

  2. Enhanced ant colony optimization for multiscale problems

    NASA Astrophysics Data System (ADS)

    Hu, Nan; Fish, Jacob

    2016-03-01

    The present manuscript addresses the issue of computational complexity of optimizing nonlinear composite materials and structures at multiple scales. Several solutions are detailed to meet the enormous computational challenge of optimizing nonlinear structures at multiple scales including: (i) enhanced sampling procedure that provides superior performance of the well-known ant colony optimization algorithm, (ii) a mapping-based meshing of a representative volume element that unlike unstructured meshing permits sensitivity analysis on coarse meshes, and (iii) a multilevel optimization procedure that takes advantage of possible weak coupling of certain scales. We demonstrate the proposed optimization procedure on elastic and inelastic laminated plates involving three scales.

  3. A survey of compiler optimization techniques

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1972-01-01

    Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at the source-code level is also presented.

  4. Optimal design of compact spur gear reductions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lattime, S. B.; Kimmel, J. A.; Coe, H. H.

    1992-01-01

    The optimal design of compact spur gear reductions includes the selection of bearing and shaft proportions in addition to gear mesh parameters. Designs for single mesh spur gear reductions are based on optimization of system life, system volume, and system weight including gears, support shafts, and the four bearings. The overall optimization allows component properties to interact, yielding the best composite design. A modified feasible directions search algorithm directs the optimization through a continuous design space. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for optimization. After finding the continuous optimum, the designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearings on the optimal configurations.

  5. Approximating random quantum optimization problems

    NASA Astrophysics Data System (ADS)

    Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.

    2013-06-01

    We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.

  6. Constraint programming based biomarker optimization.

    PubMed

    Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng

    2015-01-01

    Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data is proven to be one of the successful solutions. Most of the existing feature selection algorithms do not allow interactive input from users during the optimization process of feature selection. This study addresses this question by fixing a few user-input features in the final selected feature subset and formulating these user-input features as constraints in a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs similarly to or much better than the existing feature selection algorithms, even with the constraints from both the literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for the interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.

  7. Evolutionary Optimization of Protein Folding

    PubMed Central

    Debès, Cédric; Wang, Minglei; Caetano-Anollés, Gustavo; Gräter, Frauke

    2013-01-01

    Nature has shaped the make up of proteins since their appearance, 3.8 billion years ago. However, the fundamental drivers of structural change responsible for the extraordinary diversity of proteins have yet to be elucidated. Here we explore if protein evolution affects folding speed. We estimated folding times for the present-day catalog of protein domains directly from their size-modified contact order. These values were mapped onto an evolutionary timeline of domain appearance derived from a phylogenomic analysis of protein domains in 989 fully-sequenced genomes. Our results show a clear overall increase of folding speed during evolution, with known ultra-fast downhill folders appearing rather late in the timeline. Remarkably, folding optimization depends on secondary structure. While alpha-folds showed a tendency to fold faster throughout evolution, beta-folds exhibited a trend of folding time increase during the last 1.5 billion years that began during the “big bang” of domain combinations. As a consequence, these domain structures are on average slow folders today. Our results suggest that fast and efficient folding of domains shaped the universe of protein structure. This finding supports the hypothesis that optimization of the kinetic and thermodynamic accessibility of the native fold reduces protein aggregation propensities that hamper cellular functions. PMID:23341762

  8. Optimal array of sand fences

    PubMed Central

    Lima, Izael A.; Araújo, Ascânio D.; Parteli, Eric J. R.; Andrade, José S.; Herrmann, Hans J.

    2017-01-01

    Sand fences are widely applied to prevent soil erosion by wind in areas affected by desertification. Sand fences also provide a way to reduce the emission rate of dust particles, which is triggered mainly by the impacts of wind-blown sand grains onto the soil and affects the Earth’s climate. Many different types of fence have been designed, and their effects on the sediment transport dynamics have been studied for many years. However, the search for the optimal array of fences has remained largely an empirical task. In order to achieve maximal soil protection using the minimal amount of fence material, a quantitative understanding of the flow profile over the relief encompassing the area to be protected, including all employed fences, is required. Here we use Computational Fluid Dynamics to calculate the average turbulent airflow through an array of fences as a function of the porosity, spacing and height of the fences. Specifically, we investigate the factors controlling the fraction of soil area over which the basal average wind shear velocity drops below the threshold for sand transport when the fences are applied. We introduce a cost function, given by the amount of material necessary to construct the fences. We find that, for typical sand-moving wind velocities, the optimal fence height (which minimizes this cost function) is around 50 cm, while using fences of height around 1.25 m leads to maximal cost. PMID:28338053

  9. Optimizing adherence to antiretroviral therapy

    PubMed Central

    Sahay, Seema; Reddy, K. Srikanth; Dhayarkar, Sampada

    2011-01-01

    HIV has now become a manageable chronic disease. However, treatment outcomes may be hampered by suboptimal adherence to ART. Adherence optimization is a concrete reality in the wake of ‘universal access’, and it is imperative to learn lessons from various studies and programmes. This review examines the current literature on ART scale up, treatment outcomes of the large-scale programmes and the role of adherence therein. Social, behavioural, biological and programme-related factors arise in the context of ART adherence optimization. While emphasis is laid on adherence, retention of patients under the care umbrella emerges as a major challenge. An in-depth understanding of patients’ health-seeking behaviour and the health care delivery system may be useful in improving adherence and retention of patients in the care continuum and programme. A theoretical framework to address the barriers and facilitators has been articulated to identify problematic areas in order to intervene with specific strategies. Empirically tested objective adherence measurement tools and approaches to assess adherence in clinical/programme settings are required. Strengthening of ART programmes would include appropriate policies for manpower and task sharing, integration of the traditional health sector, and innovations in counselling and community support. Implications for the use of a theoretical model to guide research, clinical practice, community involvement and policy as part of a human rights approach to HIV disease are suggested. PMID:22310817

  10. Optimal array of sand fences

    NASA Astrophysics Data System (ADS)

    Lima, Izael A.; Araújo, Ascânio D.; Parteli, Eric J. R.; Andrade, José S.; Herrmann, Hans J.

    2017-03-01

    Sand fences are widely applied to prevent soil erosion by wind in areas affected by desertification. Sand fences also provide a way to reduce the emission rate of dust particles, which is triggered mainly by the impacts of wind-blown sand grains onto the soil and affects the Earth’s climate. Many different types of fence have been designed, and their effects on sediment transport dynamics have been studied for many years. However, the search for the optimal array of fences has remained largely an empirical task. In order to achieve maximal soil protection using the minimal amount of fence material, a quantitative understanding of the flow profile over the relief encompassing the area to be protected, including all employed fences, is required. Here we use Computational Fluid Dynamics to calculate the average turbulent airflow through an array of fences as a function of the porosity, spacing and height of the fences. Specifically, we investigate the factors controlling the fraction of soil area over which the basal average wind shear velocity drops below the threshold for sand transport when the fences are applied. We introduce a cost function, given by the amount of material necessary to construct the fences. We find that, for typical sand-moving wind velocities, the optimal fence height (which minimizes this cost function) is around 50 cm, while using fences of height around 1.25 m leads to maximal cost.

  11. Optimization in fractional aircraft ownership

    NASA Astrophysics Data System (ADS)

    Septiani, R. D.; Pasaribu, H. M.; Soewono, E.; Fayalita, R. A.

    2012-05-01

    Fractional Aircraft Ownership is a new concept in flight ownership management in which each individual or corporation may own a fraction of an aircraft. In this system, the owners have the privilege to schedule their flights according to their needs. A fractional management company (FMC) manages all aspects of aircraft operations, including utilization of the FMC's aircraft in combination with outsourced aircraft. This gives the owners the right to enjoy the benefits of private aviation. However, an FMC may face complicated business requirements that neither commercial airlines nor charter airlines face. Here, optimization models are constructed to minimize the number of aircraft, in order to maximize profit and minimize daily operating cost. In this paper, three demand scenarios are made to represent different flight operations from different types of fractional owners. The problems are formulated as optimizations of profit and of daily operating cost, to find the optimum flight assignments satisfying the owners' weekly and daily demand, respectively. Numerical results are obtained with a genetic algorithm.
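
    A minimal genetic-algorithm sketch of this kind of assignment problem is given below: flight legs with departure and arrival times are assigned to aircraft so that no aircraft flies overlapping legs, while the number of aircraft used is minimized. The encoding, fitness weights, demand data and GA parameters are illustrative assumptions, not those of the paper's models.

```python
# Toy GA: assign flight legs to aircraft, penalizing overlapping legs on the
# same aircraft and minimizing the fleet size actually used.
import random

random.seed(1)
FLIGHTS = [(random.uniform(0, 20), random.uniform(1, 4)) for _ in range(15)]
FLIGHTS = [(s, s + d) for s, d in FLIGHTS]          # (departure, arrival) pairs
MAX_AIRCRAFT = 10

def fitness(assign):
    """Lower is better: aircraft used + heavy penalty for overlapping legs."""
    overlaps = 0
    for a in set(assign):
        legs = sorted(f for f, owner in zip(FLIGHTS, assign) if owner == a)
        overlaps += sum(1 for x, y in zip(legs, legs[1:]) if y[0] < x[1])
    return len(set(assign)) + 100 * overlaps

def evolve(pop_size=60, generations=200, p_mut=0.1):
    pop = [[random.randrange(MAX_AIRCRAFT) for _ in FLIGHTS] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            a, b = (min(random.sample(pop, 3), key=fitness) for _ in range(2))  # tournament selection
            cut = random.randrange(len(FLIGHTS))
            child = a[:cut] + b[cut:]                                           # one-point crossover
            child = [random.randrange(MAX_AIRCRAFT) if random.random() < p_mut else g
                     for g in child]                                            # mutation
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness)

best = evolve()
print("aircraft used:", len(set(best)), " fitness:", fitness(best))
```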

  12. The venom optimization hypothesis revisited.

    PubMed

    Morgenstern, David; King, Glenn F

    2013-03-01

    Animal venoms are complex chemical mixtures that typically contain hundreds of proteins and non-proteinaceous compounds, resulting in a potent weapon for prey immobilization and predator deterrence. However, because venoms are protein-rich, they come with a high metabolic price tag. The metabolic cost of venom is sufficiently high to result in secondary loss of venom whenever its use becomes non-essential to survival of the animal. The high metabolic cost of venom leads to the prediction that venomous animals may have evolved strategies for minimizing venom expenditure. Indeed, various behaviors have been identified that appear consistent with frugality of venom use. This has led to formulation of the "venom optimization hypothesis" (Wigger et al. (2002) Toxicon 40, 749-752), also known as "venom metering", which postulates that venom is metabolically expensive and therefore used frugally through behavioral control. Here, we review the available data concerning economy of venom use by animals with either ancient or more recently evolved venom systems. We conclude that the convergent nature of the evidence in multiple taxa strongly suggests the existence of evolutionary pressures favoring frugal use of venom. However, there remains an unresolved dichotomy between this economy of venom use and the lavish biochemical complexity of venom, which includes a high degree of functional redundancy. We discuss the evidence for biochemical optimization of venom as a means of resolving this conundrum.

  13. Optimal probabilistic dense coding schemes

    NASA Astrophysics Data System (ADS)

    Kögler, Roger A.; Neves, Leonardo

    2017-04-01

    Dense coding with non-maximally entangled states has been investigated in many different scenarios. We revisit this problem for protocols adopting the standard encoding scheme. In this case, the set of possible classical messages cannot be perfectly distinguished due to the non-orthogonality of the quantum states carrying them. So far, the decoding process has been approached in two ways: (i) the message is always inferred, but with an associated (minimum) error; (ii) the message is inferred without error, but only sometimes; in case of failure, nothing else is done. Here, we generalize these approaches and propose novel optimal probabilistic decoding schemes. The first uses quantum-state separation to increase the distinguishability of the messages with an optimal success probability. This scheme is shown to include (i) and (ii) as special cases and continuously interpolate between them, which enables the decoder to trade off between the level of confidence desired to identify the received messages and the success probability for doing so. The second scheme, called multistage decoding, applies only to qudits (d-level quantum systems with d > 2) and consists of further attempts at state identification if the first attempt fails. We show that this scheme is advantageous over (ii) as it increases the mutual information between the sender and receiver.
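
    The two decoding philosophies contrasted above can be made concrete with the textbook two-state case: for pure states with overlap s = |<psi_1|psi_2>| and equal priors, minimum-error (Helstrom) discrimination always answers but errs with probability (1 − sqrt(1 − s^2))/2, while optimal unambiguous discrimination never errs but fails with probability s. The snippet below only evaluates these two standard formulas; it is not the multi-message qudit scheme analysed in the paper.

```python
# Two-state toy illustration of decoding strategies (i) and (ii):
#  (i) minimum-error (Helstrom): always guess, err with (1 - sqrt(1 - s^2)) / 2
#  (ii) optimal unambiguous discrimination: never err, fail with probability s
import numpy as np

overlaps = np.linspace(0.0, 1.0, 6)
helstrom_error = (1.0 - np.sqrt(1.0 - overlaps**2)) / 2.0
unambiguous_success = 1.0 - overlaps

for s, pe, ps in zip(overlaps, helstrom_error, unambiguous_success):
    print(f"overlap {s:.1f}:  min-error P_err = {pe:.3f},  unambiguous P_success = {ps:.3f}")
```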

  14. Wall to Wall Optimal Transport

    NASA Astrophysics Data System (ADS)

    Chini, Gregory P.; Hassanzadeh, Pedram; Doering, Charles R.

    2013-11-01

    How much heat can be transported between impermeable fixed-temperature walls by incompressible flows with a given amount of kinetic energy or enstrophy? What do the optimal velocity fields look like? We employ variational calculus to address these questions in the context of steady 2D flows. The resulting nonlinear Euler-Lagrange equations are solved numerically, and in some cases analytically, to find the maximum possible Nusselt number Nu as a function of the Péclet number Pe, a measure of the flow's energy or enstrophy. We find that in the fixed-energy problem Nu ~ Pe, while in the fixed-enstrophy problem Nu ~ Pe^(10/17). In both cases, the optimal flow consists of an array of convection cells with aspect ratio Γ(Pe). Interpreting our results in terms of the Rayleigh number Ra for relevant buoyancy-driven problems, we find Nu ≤ 1 + 0.035 Ra and Γ ~ Ra^(-1/2) for porous medium convection (which occurs with fixed energy), and Nu ≤ 1 + 0.115 Ra^(5/12) and Γ ~ Ra^(-1/4) for Rayleigh-Bénard convection (which occurs with fixed enstrophy and for free-slip walls). This work was supported by NSF awards PHY-0855335, DMS-0927587, and PHY-1205219 (CRD) and DMS-0928098 (GPC). Much of this work was completed at the 2012 Geophysical Fluid Dynamics (GFD) Program at Woods Hole Oceanographic Institution.

  15. Optimal array of sand fences.

    PubMed

    Lima, Izael A; Araújo, Ascânio D; Parteli, Eric J R; Andrade, José S; Herrmann, Hans J

    2017-03-24

    Sand fences are widely applied to prevent soil erosion by wind in areas affected by desertification. Sand fences also provide a way to reduce the emission rate of dust particles, which is triggered mainly by the impacts of wind-blown sand grains onto the soil and affects the Earth's climate. Many different types of fence have been designed, and their effects on sediment transport dynamics have been studied for many years. However, the search for the optimal array of fences has remained largely an empirical task. In order to achieve maximal soil protection using the minimal amount of fence material, a quantitative understanding of the flow profile over the relief encompassing the area to be protected, including all employed fences, is required. Here we use Computational Fluid Dynamics to calculate the average turbulent airflow through an array of fences as a function of the porosity, spacing and height of the fences. Specifically, we investigate the factors controlling the fraction of soil area over which the basal average wind shear velocity drops below the threshold for sand transport when the fences are applied. We introduce a cost function, given by the amount of material necessary to construct the fences. We find that, for typical sand-moving wind velocities, the optimal fence height (which minimizes this cost function) is around 50 cm, while using fences of height around 1.25 m leads to maximal cost.

  16. Optimal cue integration in ants

    PubMed Central

    Wystrach, Antoine; Mangan, Michael; Webb, Barbara

    2015-01-01

    In situations with redundant or competing sensory information, humans have been shown to perform cue integration, weighting different cues according to their certainty in a quantifiably optimal manner. Ants have been shown to merge the directional information available from their path integration (PI) and visual memory, but as yet it is not clear that they do so in a way that reflects the relative certainty of the cues. In this study, we manipulate the variance of the PI home vector by allowing ants (Cataglyphis velox) to run different distances and testing their directional choice when the PI vector direction is put in competition with visual memory. Ants show progressively stronger weighting of their PI direction as PI length increases. The weighting is quantitatively predicted by modelling the expected directional variance of home vectors of different lengths and assuming optimal cue integration. However, a subsequent experiment suggests ants may not actually compute an internal estimate of the PI certainty, but are using the PI home vector length as a proxy. PMID:26400741
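
    The "quantifiably optimal" rule referred to above is the standard maximum-likelihood cue-combination formula, in which each cue is weighted by its inverse variance. The sketch below applies it to a PI heading and a visual-memory heading, with an assumed toy relationship in which PI directional variance shrinks as the home vector lengthens; angles are treated as approximately linear here, whereas the paper models the directional variance of home vectors explicitly.

```python
# Inverse-variance (maximum-likelihood) cue combination for two heading estimates.
import numpy as np

def combine(theta_pi, var_pi, theta_vis, var_vis):
    w_pi, w_vis = 1.0 / var_pi, 1.0 / var_vis
    theta = (w_pi * theta_pi + w_vis * theta_vis) / (w_pi + w_vis)
    var = 1.0 / (w_pi + w_vis)                   # combined estimate is more certain
    return theta, var

theta_pi, theta_vis = 0.0, np.deg2rad(60.0)      # PI vs visual-memory headings
var_vis = 0.10                                   # rad^2, assumed constant
for length in [1.0, 4.0, 16.0]:                  # metres run on the PI channel
    var_pi = 0.4 / length                        # toy: PI certainty grows with length
    theta, _ = combine(theta_pi, var_pi, theta_vis, var_vis)
    print(f"PI length {length:4.1f} m -> combined heading {np.degrees(theta):5.1f} deg")
```

    With these assumed numbers the combined heading swings from near the visual-memory direction toward the PI direction as the PI vector lengthens, mirroring the behavioural trend reported for the ants.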

  17. Optimal Defaults and Active Decisions*

    PubMed Central

    Carroll, Gabriel D.; Choi, James J.; Laibson, David; Madrian, Brigitte C.; Metrick, Andrew

    2009-01-01

    Defaults often have a large influence on consumer decisions. We identify an overlooked but practical alternative to defaults: requiring individuals to make an explicit choice for themselves. We study such “active decisions” in the context of 401(k) saving. We find that compelling new hires to make active decisions about 401(k) enrollment raises the initial fraction that enroll by 28 percentage points relative to a standard opt-in enrollment procedure, producing a savings distribution three months after hire that would take 30 months to achieve under standard enrollment. We also present a model of 401(k) enrollment and derive conditions under which the optimal enrollment regime is automatic enrollment (i.e., default enrollment), standard enrollment (i.e., default non-enrollment), or active decisions (i.e., no default and compulsory choice). Active decisions are optimal when consumers have a strong propensity to procrastinate and savings preferences are highly heterogeneous. Financial illiteracy, however, favors default enrollment over active decision enrollment. PMID:20041043

  18. Automatic discovery of optimal classes

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John; Freeman, Don; Self, Matthew

    1986-01-01

    A criterion, based on Bayes' theorem, is described that defines the optimal set of classes (a classification) for a given set of examples. This criterion is transformed into an equivalent minimum message length criterion with an intuitive information interpretation. This criterion does not require that the number of classes be specified in advance; the number is determined by the data. The minimum message length criterion includes the message length required to describe the classes, so there is a built-in bias against adding new classes unless they lead to a reduction in the message length required to describe the data. Unfortunately, the search space of possible classifications is too large to search exhaustively, so heuristic search methods, such as simulated annealing, are applied. Tutored learning and probabilistic prediction in particular cases are an important indirect result of optimal class discovery. Extensions to the basic class induction program include the ability to combine category and real-value data, hierarchical classes, independent classifications, and deciding for each class which attributes are relevant.
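
    The trade-off built into the message-length criterion can be illustrated with a rough two-part score: bits to state the class parameters plus bits to encode the data given those classes. Adding classes lowers the data term but raises the model term, so extra classes must pay for themselves. The sketch below uses Gaussian-mixture classes with a BIC-style 0.5·log2(N) cost per parameter as a stand-in; it is not the exact message-length formula of the paper, and the data are synthetic.

```python
# Rough two-part "message length" for k classes: model bits + data bits.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.5, size=(150, 2)) for loc in ([0, 0], [3, 0], [0, 3])])

def message_bits(k, X):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    d = X.shape[1]
    n_params = k * (d + d * (d + 1) // 2) + (k - 1)    # means + full covariances + weights
    model_bits = 0.5 * n_params * np.log2(len(X))      # bits to describe the classes
    data_bits = -gm.score(X) * len(X) / np.log(2)      # bits for the data given the classes
    return model_bits + data_bits

for k in range(1, 7):
    print(f"{k} classes: {message_bits(k, X):8.1f} bits")
```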

  19. Optimal cue integration in ants.

    PubMed

    Wystrach, Antoine; Mangan, Michael; Webb, Barbara

    2015-10-07

    In situations with redundant or competing sensory information, humans have been shown to perform cue integration, weighting different cues according to their certainty in a quantifiably optimal manner. Ants have been shown to merge the directional information available from their path integration (PI) and visual memory, but as yet it is not clear that they do so in a way that reflects the relative certainty of the cues. In this study, we manipulate the variance of the PI home vector by allowing ants (Cataglyphis velox) to run different distances and testing their directional choice when the PI vector direction is put in competition with visual memory. Ants show progressively stronger weighting of their PI direction as PI length increases. The weighting is quantitatively predicted by modelling the expected directional variance of home vectors of different lengths and assuming optimal cue integration. However, a subsequent experiment suggests ants may not actually compute an internal estimate of the PI certainty, but are using the PI home vector length as a proxy.

  20. The Optimal Partial Transport Problem

    NASA Astrophysics Data System (ADS)

    Figalli, Alessio

    2010-02-01

    Given two densities f and g, we consider the problem of transporting a fraction m ∈ [0, min{‖f‖_{L¹}, ‖g‖_{L¹}}] of the mass of f onto g minimizing a transportation cost. If the cost per unit of mass is given by |x − y|², we will see that uniqueness of solutions holds for m ∈ [‖f ∧ g‖_{L¹}, min{‖f‖_{L¹}, ‖g‖_{L¹}}]. This extends the result of Caffarelli and McCann in Ann. Math. (in print), where the authors consider two densities with disjoint supports. The free boundaries of the active regions are shown to be (n − 1)-rectifiable (provided the supports of f and g have Lipschitz boundaries), and under some weak regularity assumptions on the geometry of the supports they are also locally semiconvex. Moreover, assuming f and g supported on two bounded strictly convex sets Ω, Λ ⊂ ℝ^n, and bounded away from zero and infinity on their respective supports, C^{0,α}_loc regularity of the optimal transport map and local C¹ regularity of the free boundaries away from Ω ∩ Λ are shown. Finally, the optimal transport map extends to a global homeomorphism between the active regions.
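
    A discrete toy version of the partial transport problem helps to fix ideas: given point masses f and g, a quadratic cost, and a prescribed amount m of mass to move, the optimal partial plan solves a small linear program with row sums ≤ f, column sums ≤ g, and total transported mass equal to m. The point sets and masses below are illustrative assumptions; the continuous regularity theory of the paper is of course not captured by this sketch.

```python
# Discrete partial optimal transport as a linear program: move only a mass m of f
# onto g at minimal quadratic cost.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
xs, ys = rng.random((6, 2)), rng.random((5, 2))           # support points of f and g
f = np.full(len(xs), 1.0 / len(xs))                       # f has total mass 1
g = np.full(len(ys), 1.0 / len(ys))                       # g has total mass 1
m = 0.6                                                   # fraction of mass to move
C = ((xs[:, None, :] - ys[None, :, :]) ** 2).sum(-1)      # |x - y|^2 cost matrix

n, k = C.shape
A_rows = np.kron(np.eye(n), np.ones(k))                   # row sums of the plan gamma
A_cols = np.kron(np.ones(n), np.eye(k))                   # column sums of gamma
A_ub = np.vstack([A_rows, A_cols])                        # gamma @ 1 <= f, gamma.T @ 1 <= g
b_ub = np.concatenate([f, g])
A_eq = np.ones((1, n * k))                                 # total transported mass = m
b_eq = np.array([m])

res = linprog(C.ravel(), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
gamma = res.x.reshape(n, k)
print("transported mass:", gamma.sum(), " cost:", res.fun)
```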