PTV-based IMPT optimization incorporating planning risk volumes vs robust optimization
Liu, Wei; Li, Xiaoqiang; Zhu, Ron X.; Mohan, Radhe; Frank, Steven J.; Li, Yupeng
2013-02-15
Purpose: Robust optimization leads to intensity-modulated proton therapy (IMPT) plans that are less sensitive to uncertainties and superior in terms of organs-at-risk (OARs) sparing, target dose coverage, and homogeneity compared to planning target volume (PTV)-based optimized plans. Robust optimization incorporates setup and range uncertainties, which implicitly adds margins to both targets and OARs and is also able to compensate for perturbations in dose distributions within targets and OARs caused by uncertainties. In contrast, the traditional PTV-based optimization considers only setup uncertainties and adds a margin only to targets but no margins to the OARs. It also ignores range uncertainty. The purpose of this work is to determine if robustly optimized plans are superior to PTV-based plans simply because the latter do not assign margins to OARs during optimization. Methods: The authors retrospectively selected from their institutional database five patients with head and neck (H and N) cancer and one with prostate cancer for this analysis. Using their original images and prescriptions, the authors created new IMPT plans using three methods: PTV-based optimization, optimization based on the PTV and planning risk volumes (PRVs) (i.e., 'PTV+PRV-based optimization'), and robust optimization using the 'worst-case' dose distribution. The PRVs were generated by uniformly expanding OARs by 3 mm for the H and N cases and 5 mm for the prostate case. The dose-volume histograms (DVHs) from the worst-case dose distributions were used to assess and compare plan quality. Families of DVHs for each uncertainty for all structures of interest were plotted along with the nominal DVHs. The width of the 'bands' of DVHs was used to quantify the plan sensitivity to uncertainty. Results: Compared with conventional PTV-based and PTV+PRV-based planning, robust optimization led to a smaller bandwidth for the targets in the face of uncertainties {clinical target volume [CTV
Risk based treatment selection and optimization of contaminated site remediation
Heitzer, A.; Scholz, R.W.
1995-12-31
During the past few years numerous remediation technologies for the cleanup of contaminated sites have been developed. Because of the associated uncertainties concerning treatment reliability, it is important to develop strategies to characterize the risks of failing to achieve the cleanup requirements. For this purpose it is necessary to integrate existing knowledge on treatment efficacy and efficiency into the planning process for the management of contaminated sites. Based on field-scale experience data for the remediation of soils contaminated with petroleum hydrocarbons, two treatment technologies, biological land treatment and physico-chemical soil washing, were analyzed with respect to their general performance risks in achieving given cleanup standards. For a specific contamination scenario, efficient application ranges were identified using linear optimization in combination with sensitivity analysis. Various constraints, including cleanup standards, available financial budget, amount of contamination and others, were taken into account. While land treatment was found to be most efficient for higher cleanup standards and less contaminated soils, soil washing exhibited better efficiency for lower cleanup standards and more heavily contaminated soils. These results compare favorably with practical experience and indicate the utility of this approach in supporting decision making and planning for the general management of contaminated sites. In addition, the method allows for the simultaneous integration of various aspects such as risk-based characteristics of treatment technologies, cleanup standards, and more general ecological and economic remedial action objectives.
On the optimal risk based design of highway drainage structures
NASA Astrophysics Data System (ADS)
Tung, Y.-K.; Bao, Y.
1990-12-01
For a proposed highway bridge or culvert, the total cost to the public during its expected service life includes the capital investment in the structure, regular operation and maintenance costs, and various flood-related costs. The flood-related damage costs include items such as replacement and repair costs of the bridge or culvert, floodplain property damage, users' costs from traffic interruptions and detours, and others. As the design discharge increases, the required capital investment increases but the corresponding flood-related damage costs decrease. Hydraulic design of a bridge or culvert using a risk-based approach chooses, among the alternatives, the one associated with the least total expected cost. In this paper, the risk-based design procedure is applied to pipe culvert design, and the effect of hydrologic uncertainties, such as sample size and the type of flood distribution model, on the optimal design parameters, including design return period and total expected cost, is examined.
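The least-total-expected-cost selection described above can be sketched in a few lines. This is a hedged illustration, not the paper's model: the capital costs, damage figure, and service life below are invented, and expected damage is approximated as annual exceedance probability times damage times service life.

```python
# Risk-based hydraulic design sketch: choose the design return period T that
# minimizes total expected cost = capital cost + expected flood-related damage.
# All numbers are illustrative assumptions, not data from the study.

def total_expected_cost(return_period_yr, capital_cost, flood_damage,
                        service_life_yr=50):
    """Expected life-cycle cost when the design flood has exceedance prob 1/T."""
    annual_exceedance_prob = 1.0 / return_period_yr
    expected_damage = annual_exceedance_prob * flood_damage * service_life_yr
    return capital_cost + expected_damage

# Hypothetical alternatives: larger culverts cost more up front but are
# overtopped less often (capital_cost, damage_if_exceeded).
alternatives = {
    10:  (100_000, 400_000),
    25:  (140_000, 400_000),
    50:  (190_000, 400_000),
    100: (260_000, 400_000),
}

costs = {T: total_expected_cost(T, cap, dmg)
         for T, (cap, dmg) in alternatives.items()}
best_T = min(costs, key=costs.get)   # least total expected cost
```

With these toy numbers the trade-off favors the largest design, but a steeper capital-cost curve or smaller damages would shift the optimum to a shorter return period.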
Risk-based Multiobjective Optimization Model for Bridge Maintenance Planning
Yang, I-T.; Hsu, Y.-S.
2010-05-21
Determining the optimal maintenance plan is essential for successful bridge management. The optimization objectives are defined as minimizing life-cycle cost and maximizing performance indicators. Previous bridge maintenance models assumed that the process of bridge deterioration and the estimate of maintenance cost are deterministic, i.e., known with certainty. This assumption, however, is invalid, especially for estimates over the long time horizon of a bridge's life. In this study, we consider the risks associated with bridge deterioration and maintenance cost in determining the optimal maintenance plan. The decision variables include the strategic choice of essential maintenance (such as silane treatment and cathodic protection) and the intervals between periodic maintenance. An epsilon-constrained Particle Swarm Optimization algorithm is used to approximate the tradeoff between life-cycle cost and performance indicators. During the stochastic search for optimal solutions, Monte-Carlo simulation is used to evaluate the impact of risks on the objective values at an acceptance level of reliability. The proposed model can help decision makers select a compromise maintenance plan from a group of alternative choices, each of which leads to a different level of performance and life-cycle cost. A numerical example is used to illustrate the proposed model.
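The Monte-Carlo evaluation step at an acceptance level of reliability can be sketched as follows. This is a minimal stand-in for the paper's model: the deterioration and cost assumptions (a lognormal spread around a unit maintenance cost) and the 90% reliability level are invented for illustration.

```python
# Score a candidate maintenance interval by Monte-Carlo simulation, reading
# the life-cycle cost off at an acceptance level of reliability (here the
# 90th percentile of simulated cost) rather than the mean.
import random

def simulate_lifecycle_cost(interval_yr, horizon_yr=60, rng=None):
    rng = rng or random
    cost, t = 0.0, 0
    while t < horizon_yr:
        t += interval_yr
        # uncertain cost of one intervention: lognormal spread around 10 units
        cost += 10.0 * rng.lognormvariate(0.0, 0.3)
    return cost

def cost_at_reliability(interval_yr, reliability=0.9, n=2000, seed=42):
    rng = random.Random(seed)
    samples = sorted(simulate_lifecycle_cost(interval_yr, rng=rng)
                     for _ in range(n))
    return samples[int(reliability * n) - 1]   # empirical percentile

# Shorter intervals mean more interventions, hence a higher cost at the
# same reliability level; the full model trades this against performance.
```

In the study's setting this percentile-based score would be evaluated inside the particle swarm search for each candidate plan.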
NASA Astrophysics Data System (ADS)
Enzenhöfer, R.; Geiges, A.; Nowak, W.
2011-12-01
Advection-based well-head protection zones are commonly used to manage the contamination risk of drinking water wells. Given the insufficient knowledge about hazards and transport properties within the catchment, current Water Safety Plans recommend that catchment managers and stakeholders know, control and monitor all possible hazards within the catchments and make rational risk-based decisions. Our goal is to supply catchment managers with the required probabilistic risk information, and to generate tools that allow for optimal and rational allocation of resources between improved monitoring versus extended safety margins and risk mitigation measures. To support risk managers with the indispensable information, we address the epistemic uncertainty of advective-dispersive solute transport and well vulnerability (Enzenhoefer et al., 2011) within a stochastic simulation framework. Our framework can separate the uncertainty of contaminant location from the actual dilution of peak concentrations by resolving heterogeneity with high-resolution Monte-Carlo simulation. To keep computational costs low, we solve the reverse temporal moment transport equation; only in post-processing do we recover the time-dependent solute breakthrough curves and the deduced well vulnerability criteria from temporal moments by non-linear optimization. Our first step towards optimal risk management is optimal positioning of sampling locations and optimal choice of data types to best reduce the epistemic prediction uncertainty for well-head delineation, using the cross-bred Likelihood Uncertainty Estimator (CLUE, Leube et al., 2011) for optimal sampling design. Better monitoring leads to more reliable and realistic protection zones and thus helps catchment managers to better justify smaller, yet conservative safety margins. In order to allow an optimal choice of sampling strategies, we compare the trade-off in monitoring versus the delineation costs by accounting for ill
Gu, Xiao-he; He, Chun-yang; Pan, Yao-zhong; Li, Xiao-bing; Zhu, Wen-quan; Zhu, Xiu-fang
2007-05-01
Using remote sensing (RS) and geographic information system (GIS) methods, and based on estimates of the degradation degree, risk degree and ease of restoration of degraded grasslands, an ecological management index (EMI) model of grassland was established to explore practical ways of optimizing the management of degraded grassland. A case study in the Xilin River Basin of Inner Mongolia showed that this model could quantitatively analyze the degradation degree, risk degree and ease of restoration of the grasslands under different management levels, which is of significance for applying well-targeted measures and beneficial to the optimal allocation of resources in the management of degraded grassland. The EMI model can integrate most of the information of concern, making it widely applicable.
Defining Optimal Health Range for Thyroid Function Based on the Risk of Cardiovascular Disease.
Chaker, Layal; Korevaar, Tim I M; Rizopoulos, Dimitris; Collet, Tinh-Hai; Völzke, Henry; Hofman, Albert; Rodondi, Nicolas; Cappola, Anne R; Peeters, Robin P; Franco, Oscar H
2017-08-01
Reference ranges of thyroid-stimulating hormone (TSH) and free thyroxine (FT4) are defined by their distribution in apparently healthy populations (2.5th and 97.5th percentiles), irrespective of disease risk, and are used as cutoffs for defining and clinically managing thyroid dysfunction. To provide proof of concept in defining optimal health ranges of thyroid function based on cardiovascular disease (CVD) mortality risk. In all, 9233 participants from the Rotterdam Study (mean age, 65.0 years) were followed up (median, 8.8 years) from baseline to date of death or end of follow-up period (2012), whichever came first (689 cases of CVD mortality). We calculated 10-year absolute risks of CVD mortality (defined according to the SCORE project) using a Fine and Gray competing risk model per percentiles of TSH and FT4, modeled nonlinearly and with sex and age adjustments. Overall, FT4 level >90th percentile was associated with a predicted 10-year CVD mortality risk >7.5% (P = 0.005). In men, FT4 level >97th percentile was associated with a risk of 10.8% (P < 0.001). In participants aged ≥65 years, absolute risk estimates were <10.0% below the 30th percentile (∼14.5 pmol/L or 1.10 ng/dL) and ≥15.0% above the 97th percentile of FT4 (∼22 pmol/L or 1.70 ng/dL). We describe absolute 10-year CVD mortality risks according to thyroid function (TSH and FT4) and suggest that optimal health ranges for thyroid function can be defined according to disease risk and are possibly sex and age dependent. These results need to be replicated with sufficient samples and representative populations.
Risk-based framework for optimizing residual chlorine in large water distribution systems.
Sharif, Muhammad Nadeem; Farahat, Ashraf; Haider, Husnain; Al-Zahrani, Muhammad A; Rodriguez, Manuel J; Sadiq, Rehan
2017-07-01
Managing residual chlorine in large water distribution systems (WDS) to minimize human health risk is a daunting task. In this research, a novel risk-based framework is developed and implemented in a distribution network spanning 64 km² that supplies water to the city of Al-Khobar (Saudi Arabia) through 473 km of water mains. The framework integrates the planning of linear assets (i.e., pipes) and the placement of booster stations to optimize residual chlorine in the WDS. Failure mode and effects analysis is integrated with fuzzy set theory to perform the risk analysis. Vulnerability, in terms of the probability of pipe failure, is estimated from historical records of water main breaks. The consequence, in terms of residual chlorine availability, is associated with the exposed population depending on land use characteristics (i.e., defined through zoning). EPANET simulations are conducted to predict residual chlorine at each node of the network. A water quality index is used to assess the effectiveness of chlorination practice. Scenario analysis is also performed to evaluate the impact of changing the locations and number of booster stations, and of rehabilitating and/or replacing vulnerable water mains. The results revealed that the proposed methodology could help utility managers optimize residual chlorine effectively in large WDS.
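The FMEA-with-fuzzy-sets screening step can be sketched with triangular fuzzy ratings. This is a hedged toy, not the paper's actual inference system: the linguistic scale, the centroid defuzzification, and the pipe data are all invented to show the shape of the calculation (vulnerability from break records, consequence from exposed population).

```python
# Fuzzy FMEA-style risk screening sketch: each pipe gets a risk score from a
# vulnerability rating and a consequence rating, both given as triangular
# fuzzy numbers on [0, 10] and defuzzified by centroid. Illustrative only.

def centroid(tri):
    """Centroid (defuzzified value) of a triangular fuzzy number (a, b, c)."""
    a, b, c = tri
    return (a + b + c) / 3.0

RATINGS = {  # hypothetical linguistic scale
    "low":    (0.0, 0.0, 4.0),
    "medium": (2.0, 5.0, 8.0),
    "high":   (6.0, 10.0, 10.0),
}

def risk_score(vulnerability, consequence):
    return centroid(RATINGS[vulnerability]) * centroid(RATINGS[consequence])

# hypothetical pipes: (vulnerability from break records, consequence from
# exposed population in the zone)
pipes = {"P1": ("high", "medium"), "P2": ("low", "high"), "P3": ("medium", "medium")}
ranked = sorted(pipes, key=lambda p: risk_score(*pipes[p]), reverse=True)
```

The ranking would then feed decisions on booster placement and on which mains to rehabilitate first.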
A Risk-Based Multi-Objective Optimization Concept for Early-Warning Monitoring Networks
NASA Astrophysics Data System (ADS)
Bode, F.; Loschko, M.; Nowak, W.
2014-12-01
Groundwater is a resource for drinking water and hence needs to be protected from contamination. However, many well catchments include an inventory of known and unknown risk sources which cannot be eliminated, especially in urban regions. As a matter of risk control, all these risk sources should be monitored. A one-to-one monitoring situation for each risk source would lead to a cost explosion and is impossible for unknown risk sources. However, smart optimization concepts can help to find promising low-cost monitoring network designs. In this work we develop a concept to plan monitoring networks using multi-objective optimization. Our objectives are to maximize the probability of detecting all contaminations and the early warning time, and to minimize the installation and operating costs of the monitoring network. A qualitative risk ranking is used to prioritize the known risk sources for monitoring. The unknown risk sources can neither be located nor ranked; instead, we represent them by a virtual line of risk sources surrounding the production well. We classify risk sources into four categories: severe, medium and tolerable for known risk sources, and an extra category for the unknown ones. With that, early warning time and detection probability become individual objectives for each risk class. Thus, decision makers can identify monitoring networks which are valid for controlling the top risk sources, and evaluate the capabilities (or search for a least-cost upgrade) to also cover medium, tolerable and unknown risk sources. Monitoring networks which are valid for the remaining risk also cover all other risk sources, but the early-warning time suffers. The data provided to the optimization algorithm are calculated in a preprocessing step by a flow and transport model. Uncertainties due to hydro(geo)logical phenomena are taken into account by Monte-Carlo simulations. To avoid numerical dispersion during the transport simulations we use the
Optimal Temporal Risk Assessment
Balci, Fuat; Freestone, David; Simen, Patrick; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip
2011-01-01
Time is an essential feature of most decisions, because the reward earned from decisions frequently depends on the temporal statistics of the environment (e.g., on whether decisions must be made under deadlines). Accordingly, evolution appears to have favored a mechanism that predicts intervals in the seconds to minutes range with high accuracy on average, but significant variability from trial to trial. Importantly, the subjective sense of time that results is sufficiently imprecise that maximizing rewards in decision-making can require substantial behavioral adjustments (e.g., accumulating less evidence for a decision in order to beat a deadline). Reward maximization in many daily decisions therefore requires optimal temporal risk assessment. Here, we review the temporal decision-making literature, conduct secondary analyses of relevant published datasets, and analyze the results of a new experiment. The paper is organized in three parts. In the first part, we review literature and analyze existing data suggesting that animals take account of their inherent behavioral variability (their “endogenous timing uncertainty”) in temporal decision-making. In the second part, we review literature that quantitatively demonstrates nearly optimal temporal risk assessment with sub-second and supra-second intervals using perceptual tasks (with humans and mice) and motor timing tasks (with humans). We supplement this section with original research that tested human and rat performance on a task that requires finding the optimal balance between two time-dependent quantities for reward maximization. This optimal balance in turn depends on the level of timing uncertainty. Corroborating the reviewed literature, humans and rats exhibited nearly optimal temporal risk assessment in this task. In the third section, we discuss the role of timing uncertainty in reward maximization in two-choice perceptual decision-making tasks and review literature that implicates timing uncertainty
Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan
2013-01-01
The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve the integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental-economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of the optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model, in three periods under two scenarios. By balancing the optimal system returns and the corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of “low risk and high return efficiency” in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative with relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental-economic optimization scheme in integrated watershed management. PMID:24191144
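The risk-return trade-off curve idea can be illustrated with a one-variable toy. This is a heavily simplified sketch of the general interval-programming intuition, not the REILP formulation of the study: the interval coefficients, the linear resolution by an aspiration level, and the load cap are all invented.

```python
# Toy trade-off sketch: interval coefficients [lo, hi] are resolved by an
# aspiration (risk) level lam in [0, 1]; sweeping lam traces a curve from a
# conservative solution (lam=0) to an optimistic, higher-risk one (lam=1).

def realized(interval, lam):
    lo, hi = interval
    return lo + lam * (hi - lo)

pollutant_load_per_unit = (2.0, 4.0)   # interval coefficient (toy numbers)
return_per_unit = (1.0, 3.0)
load_cap = 12.0

tradeoff = []
for lam in [0.0, 0.25, 0.5, 0.75, 1.0]:
    # at higher lam the decision maker assumes a lower load coefficient
    # and a higher return coefficient, i.e., accepts more risk
    load = realized(pollutant_load_per_unit, 1.0 - lam)
    ret = realized(return_per_unit, lam)
    x = load_cap / load              # largest activity level meeting the cap
    tradeoff.append((lam, x * ret))  # (risk level, system return)
```

Plotting such a curve is what lets a decision maker pick the "low risk and high return efficiency" window mentioned in the abstract.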
Risk based approach for design and optimization of stomach specific delivery of rifampicin.
Vora, Chintan; Patadia, Riddhish; Mittal, Karan; Mashru, Rajashree
2013-10-15
The research envisaged focuses on a risk management approach for better recognizing risks, ways to mitigate them, and a proposed control strategy for the development of rifampicin gastroretentive tablets. Risk assessment using failure mode and effects analysis (FMEA) was done to depict the effects of specific failure modes related to each formulation/process variable. A Box-Behnken design was used to investigate the effect of the amount of sodium bicarbonate (X1), pore former HPMC (X2) and glyceryl behenate (X3) on percent drug release at the 1st hour (Q1), 4th hour (Q4) and 8th hour (Q8), and on floating lag time (min). Main effects and interaction plots were generated to study the effects of the variables. The optimized formulation was selected using the desirability function and overlay contour plots. The optimized formulation exhibited Q1 of 20.9%, Q4 of 59.1%, Q8 of 94.8% and a floating lag time of 4.0 min. The Akaike information criterion and model selection criterion revealed that release was best described by the Korsmeyer-Peppas power law. The residual plots demonstrated no non-normality, skewness or outliers. The composite desirability for the optimized formulation computed using equations and software was 0.84 and 0.86, respectively. FTIR, DSC and PXRD studies ruled out drug-polymer interaction due to thermal treatment.
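The desirability-function selection step works roughly as follows. The sketch below uses a Derringer-style two-sided desirability and a geometric-mean composite; the low/target/high specification limits are invented for illustration (only the response values 20.9/59.1/94.8 come from the abstract), so the resulting composite is not the study's 0.84 value.

```python
# Desirability-function sketch: map each response to [0, 1], then combine
# by geometric mean. Specification limits are hypothetical.

def d_target(y, low, target, high):
    """Two-sided desirability: 1 at the target, 0 outside (low, high)."""
    if y <= low or y >= high:
        return 0.0
    if y <= target:
        return (y - low) / (target - low)
    return (high - y) / (high - target)

def composite(ds):
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

responses = {"Q1": 20.9, "Q4": 59.1, "Q8": 94.8}     # % released (abstract)
specs = {"Q1": (10, 20, 30),                          # (low, target, high),
         "Q4": (45, 60, 75),                          # invented limits
         "Q8": (85, 95, 100)}
D = composite([d_target(responses[k], *specs[k]) for k in responses])
```

In an actual optimization, D would be evaluated over the Box-Behnken model's predicted responses and maximized over (X1, X2, X3).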
A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem
Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming
2015-01-01
Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereby reduce airspace congestion and flight delays. Conventional ATFM models assume the capacities of airports or airspace sectors are all predetermined. However, capacity uncertainties due to the dynamics of convective weather may make deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computational experiments using real-world air traffic network data with simulated weather data show that the presented model has far fewer constraints than a stochastic model with nonanticipative constraints, reducing the computational complexity. PMID:26180842
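The weighted biobjective 0-1 idea can be shown on a toy instance. This is not the paper's SATNFO model: the flights, costs, capacity scenario, and weights below are invented, and brute-force enumeration stands in for the integer program.

```python
# Toy weighted biobjective 0-1 sketch: each flight is either released or
# ground-held (0/1), and the objective combines delay cost with an
# operational-risk term from uncertain (weather-dependent) sector capacity.
from itertools import product

flights = ["F1", "F2", "F3"]
delay_cost = {"F1": 3.0, "F2": 2.0, "F3": 1.0}   # cost of holding on ground
p_capacity_drop = 0.3                            # P(weather cuts capacity to 1)
good_capacity = 2                                # nominal sector capacity

def objective(plan, w1=1.0, w2=10.0):
    """plan[i] == 1 means flight i is ground-held."""
    airborne = [f for f, held in zip(flights, plan) if not held]
    delay = sum(delay_cost[f] for f, held in zip(flights, plan) if held)
    # risk: expected number of airborne flights exceeding sector capacity
    over_bad = max(len(airborne) - 1, 0)
    over_good = max(len(airborne) - good_capacity, 0)
    risk = p_capacity_drop * over_bad + (1 - p_capacity_drop) * over_good
    return w1 * delay + w2 * risk

best = min(product([0, 1], repeat=len(flights)), key=objective)
```

Sweeping the weights w1, w2 traces the trade-off between delay and operational risk that the biobjective formulation captures.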
NASA Astrophysics Data System (ADS)
Xu, Jun
Topic 1. An Optimization-Based Approach for Facility Energy Management with Uncertainties. Effective energy management for facilities is becoming increasingly important in view of rising energy costs, government mandates on the reduction of energy consumption, and human comfort requirements. This part of the dissertation presents a daily energy management formulation and the corresponding solution methodology for HVAC systems. The problem is to minimize energy and demand costs through the control of HVAC units while satisfying human comfort, system dynamics, load limit constraints, and other requirements. The problem is difficult because the system is nonlinear, time-varying, building-dependent, and uncertain, and because direct control of a large number of HVAC components is difficult. In this work, HVAC setpoints are the control variables, developed on top of a Direct Digital Control (DDC) system. A method that combines Lagrangian relaxation, neural networks, stochastic dynamic programming, and heuristics is developed to predict the system dynamics and uncontrollable load, and to optimize the setpoints. Numerical testing and prototype implementation results show that our method can effectively reduce total costs, manage uncertainties, and shed load, and is computationally efficient. Furthermore, it is significantly better than existing methods. Topic 2. Power Portfolio Optimization in Deregulated Electricity Markets with Risk Management. In a deregulated electric power system, multiple markets of different time scales exist with various power supply instruments. A load serving entity (LSE) has multiple choices from these instruments to meet its load obligations. In view of the large amount of power involved, the complex market structure, risks in such volatile markets, stringent constraints to be satisfied, and the long time horizon, a power portfolio optimization problem is of critical importance but difficult for an LSE to serve the
Risk modelling in portfolio optimization
NASA Astrophysics Data System (ADS)
Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi
2013-09-01
Risk management is very important in portfolio optimization. The mean-variance model has been used in portfolio optimization to minimize investment risk: its objective is to minimize portfolio risk while achieving a target rate of return, with variance as the risk measure. The purpose of this study is to compare the portfolio composition as well as the performance of the optimal portfolio of the mean-variance model against an equally weighted portfolio, in which the proportions invested in each asset are equal. The results show that the compositions of the mean-variance optimal portfolio and the equally weighted portfolio differ, and that the mean-variance optimal portfolio performs better, achieving a higher performance ratio than the equally weighted portfolio.
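The comparison against an equally weighted portfolio can be made concrete in the two-asset case, where the minimum-variance weights have a closed form. The variances and covariance below are invented for illustration; this sketches the general mean-variance mechanics, not the study's data.

```python
# Two-asset mean-variance sketch: the minimum-variance weights generally
# differ from the equally weighted 50/50 split and give lower variance.
# Asset variances and covariance are illustrative.

var1, var2, cov12 = 0.04, 0.01, 0.002

def portfolio_variance(w1):
    w2 = 1.0 - w1
    return w1 * w1 * var1 + w2 * w2 * var2 + 2 * w1 * w2 * cov12

# closed-form minimum-variance weight on asset 1 for two assets
w_min = (var2 - cov12) / (var1 + var2 - 2 * cov12)
```

Here the optimizer tilts heavily toward the low-variance asset, which is exactly why the mean-variance composition departs from equal weighting.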
NASA Astrophysics Data System (ADS)
Zhang, N.; Ni, X. Y.; Huang, H.; Duarte, M.
2017-05-01
Knowledge of the characteristics of regional evacuation based on optimal information dissemination during hazardous gas leaks in metropolises plays a critical role in emergency planning. We established a risk analysis model for residents that combines optimal emergency information dissemination with evacuation simulation, in order to guide residents in making appropriate personal emergency response plans during a hazardous gas leak. The model considers eight influencing factors: type and flow rate of the hazardous gas, location of the leakage source, wind speed and direction, information acquirement time, leakage duration, state of windows (open/closed), and personal inhalation. Using Beijing as a case study, we calculated the risk for all grid cells and people and obtained the three-dimensional spatial risk distribution. Through microscopic simulation of personal evacuation under different conditions, detailed data were obtained to analyze personal decision-making. We found that residents near the leakage source are better off staying at home, because of the high concentration of leaked gas along their evacuation routes. Instead of evacuating, staying at home and adopting an optimal sheltering plan is very efficient if residents receive the emergency information before the hazardous gas has totally dispersed. For people who live far from the leakage source, evacuation is usually a good choice because they have more time to avoid high concentrations of hazardous gas.
Safety optimization through risk management
NASA Astrophysics Data System (ADS)
Wright, K.; Peltonen, P.
The paper discusses the overall process of system safety optimization in the space program environment and addresses in particular methods that enhance the efficiency of this activity. Effective system safety optimization is achieved by concentrating the available engineering and safety assurance resources on the main risk contributors. Qualitative risk contributor identification by means of hazard analyses and the FMECA constitutes the basis for the system safety process. The risk contributors are ranked first on a qualitative basis according to consequence severities. This ranking is then refined by mishap propagation/recovery time considerations and by probabilistic means (PRA). Finally, in order to broaden and extend the use of risk contributor ranking as a managerial tool in project resource assignment, quality, manufacturing and operations related critical characteristics, i.e., risk influencing factors, are identified for managerial visibility.
A risk-based coverage model for video surveillance camera control optimization
NASA Astrophysics Data System (ADS)
Zhang, Hongzhou; Du, Zhiguo; Zhao, Xingtao; Li, Peiyue; Li, Dehua
2015-12-01
A visual surveillance system for law enforcement or police case investigation differs from traditional applications in that it is designed to monitor pedestrians, vehicles, or potential accidents. In the present work, visual surveillance risk is defined as the uncertainty of the visual information about monitored targets and events, and risk entropy is introduced to model the requirements of the police surveillance task on the quality and quantity of video information. The proposed coverage model is applied to calculate the preset field-of-view (FoV) positions of PTZ cameras.
NASA Astrophysics Data System (ADS)
DeSena, J. T.; Martin, S. R.; Clarke, J. C.; Dutrow, D. A.; Newman, A. J.
2012-06-01
The algorithm to jointly optimize sensor schedules against search, track, and classify tasks is based on recent work by Papageorgiou and Raykin on risk-based sensor management. It uses a risk-based objective function and attempts to minimize and balance the risks of misclassifying and losing track of an object. It supports the requirement to generate tasking for metric and feature data concurrently and synergistically, and accounts for both tracking accuracy and object characterization, jointly, in computing the reward and cost for optimizing tasking decisions.
Risk and utility in portfolio optimization
NASA Astrophysics Data System (ADS)
Cohen, Morrel H.; Natoli, Vincent D.
2003-06-01
Modern portfolio theory (MPT) addresses the problem of determining the optimum allocation of investment resources among a set of candidate assets. In the original mean-variance approach of Markowitz, volatility is taken as a proxy for risk, conflating uncertainty with risk. There have been many subsequent attempts to alleviate that weakness, which typically combine utility and risk. We present here a modification of MPT based on the inclusion of separate risk and utility criteria. We define risk as the probability of failing to meet a pre-established investment goal. We define utility as the expectation of a utility function with positive and decreasing marginal value as a function of yield. The emphasis throughout is on long investment horizons for which risk-free assets do not exist. Analytic results are presented for a Gaussian probability distribution. Risk-utility relations are explored via empirical stock-price data, and an illustrative portfolio is optimized using the empirical data.
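The paper's risk criterion, the probability of failing to meet an investment goal, has a closed form under a Gaussian assumption on log-returns. The sketch below uses that standard shortfall-probability formula with invented parameters; it illustrates the definition, not the paper's specific analytic results.

```python
# Shortfall-risk sketch: risk = P(terminal wealth < goal), assuming
# log(W_T / W_0) ~ N(mu*T, sigma^2*T). Parameter values are illustrative.
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def shortfall_risk(w0, goal, mu, sigma, years):
    """Probability that wealth fails to reach the goal by the horizon."""
    z = (math.log(goal / w0) - mu * years) / (sigma * math.sqrt(years))
    return normal_cdf(z)

# e.g., doubling money over 20 years with 6% drift and 20% volatility
risk = shortfall_risk(w0=1.0, goal=2.0, mu=0.06, sigma=0.2, years=20)
```

With positive drift, lengthening the horizon lowers this particular shortfall probability, which is one reason the paper emphasizes long horizons without risk-free assets.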
NASA Astrophysics Data System (ADS)
Haro-Monteagudo, David; Solera, Abel; Andreu, Joaquín
2017-01-01
Droughts are a major threat to water resources systems management. Timely anticipation is crucial to defining strategies and measures that minimise their effects. Water managers make use of monitoring systems to characterise and assess drought risk by means of indices and indicators. However, few systems currently in operation are capable of providing early warning of the occurrence of a drought episode. This paper proposes a novel methodology to support and complement drought monitoring and early warning in regulated water resources systems. It is based on the combined use of two models, a water resources optimization model and a stochastic streamflow generation model, to generate a series of results that allow the future state of the system to be evaluated. The results for the period 1998-2009 in the Jucar River Basin (Spain) show that accounting for scenario change risk can benefit basin managers by providing them with information on the current and future drought situation at any given moment. Our results show that combining scenario change probabilities with the current drought monitoring system can represent a major advance towards improved drought management in the future, and add significant value to the existing national State Index (SI) approach for early warning purposes.
Preoperative optimization and risk assessment.
Nicholas, Joseph A
2014-05-01
Because most older adults with hip fractures require urgent surgical intervention, the preoperative medical evaluation focuses on the exclusion of the small number of contraindications to surgery, and rapid optimization of patients for operative repair. Although many geriatric fracture patients have significant chronic medical comorbidities, most patients can be safely stabilized for surgery with medical and orthopedic comanagement by anticipating a small number of common physiologic responses and perioperative complications. In addition to estimating perioperative risk, the team should focus on intravascular volume restoration, pain control, and avoidance of perioperative hypotension. Copyright © 2014 Elsevier Inc. All rights reserved.
Optimizing Processes to Minimize Risk
NASA Technical Reports Server (NTRS)
Loyd, David
2017-01-01
NASA, like other hazardous industries, has suffered catastrophic losses. Human error will likely never be completely eliminated as a factor in our failures. When risk cannot be eliminated, the focus must shift to mitigating the worst consequences and recovering operations. Bolstering processes to emphasize the role of integration and problem solving is key to success. Building an effective Safety Culture bolsters skill-based performance that minimizes risk and encourages successful engagement.
Copenhaver, Michael M.; Lee, I-Ching; Margolin, Arthur; Bruce, Robert D.; Altice, Frederick L.
2011-01-01
We conducted a preliminary study of the 4 session Community-Friendly Health Recovery Program for HIV-infected drug users (CHRP+) which was adapted from a 12 session evidence-based risk reduction and antiretroviral adherence intervention. Improvements were found in the behavioral skills required to properly adhere to HIV medication regimens. Enhancements were found in all measured aspects of sex-risk reduction outcomes including HIV knowledge, motivation to reduce sex-risk behavior, behavioral skills related to engaging in reduced sexual risk, and reduced risk behavior. Improvements in drug use outcomes included enhancements in risk reduction skills as well as reduced heroin and cocaine use. Intervention effects also showed durability from Post-intervention to the Follow-up assessment point. Females responded particularly well in terms of improvements in risk reduction skills and risk behavior. This study suggests that an evidence-based behavioral intervention may be successfully adapted for use in community-based clinical settings where HIV-infected drug users can be more efficiently reached. PMID:21302180
NASA Astrophysics Data System (ADS)
Yamout, Ghina M.; Hatfield, Kirk; Romeijn, H. Edwin
2007-07-01
The paper studies the effect of incorporating the conditional value-at-risk (CVaRα) in analyzing a water allocation problem versus using the frequently used expected value, two-stage modeling, scenario analysis, and linear optimization tools. Five models are developed to examine water resource allocation when available supplies are uncertain: (1) a deterministic expected value model, (2) a scenario analysis model, (3) a two-stage stochastic model with recourse, (4) a CVaRα objective function model, and (5) a CVaRα constraint model. The models are applied over a region of east central Florida. Results show the deterministic expected value model underestimates system costs and water shortage. Furthermore, the expected value model produces identical cost estimates for water supply distributions that have different standard deviations but identical means. The scenario analysis model again demonstrates that the expected value of results taken across many scenarios underestimates costs and water shortages. Using a two-stage stochastic mixed integer formulation with recourse permits an improved representation of uncertainties and real-life decision making, which in turn predicts higher costs. The inclusion of a CVaRα objective function in the latter provides for the optimization and control of high-risk events. Minimizing CVaRα does not, however, permit control of lower-risk events. Constraining CVaRα while minimizing cost, on the other hand, allows for the control of high-risk events while minimizing the costs of all events. Results show CVaRα exhibits continuous and consistent behavior with respect to the confidence level α when compared to value-at-risk (VaRα).
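For readers unfamiliar with the CVaRα construction used in models (4) and (5) above, a minimal empirical version over equally likely cost scenarios can be sketched as follows (an illustration of the general definition, not the authors' allocation model):

```python
def var_cvar(costs, alpha=0.95):
    """Empirical VaR and CVaR at confidence level alpha.

    VaR is the alpha-quantile of the scenario costs; CVaR is the average of
    the costs at or beyond that quantile, i.e. the worst (1 - alpha) tail.
    Assumes equally likely scenarios and alpha < 1.
    """
    ordered = sorted(costs)
    k = int(alpha * len(ordered))      # first index of the upper tail
    tail = ordered[k:]
    return ordered[k], sum(tail) / len(tail)
```

Because CVaR averages the whole tail, constraining it (model 5) bounds the expected severity of high-cost outcomes, whereas a plain expected-value objective ignores how bad the tail gets.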
Sugano, Yasutaka; Mizuta, Masahiro; Takao, Seishin; Shirato, Hiroki; Sutherland, Kenneth L; Date, Hiroyuki
2015-11-01
Radiotherapy of solid tumors has been performed with various fractionation regimens such as multi- and hypofractionations. However, the ability to optimize the fractionation regimen considering the physical dose distribution remains insufficient. This study aims to optimize the fractionation regimen, in which the authors propose a graphical method for selecting the optimal number of fractions (n) and dose per fraction (d) based on dose-volume histograms for tumor and normal tissues of organs around the tumor. Modified linear-quadratic models were employed to estimate the radiation effects on the tumor and an organ at risk (OAR), where the repopulation of the tumor cells and the linearity of the dose-response curve in the high dose range of the surviving fraction were considered. The minimization problem for the damage effect on the OAR was solved under the constraint that the radiation effect on the tumor is fixed by a graphical method. Here, the damage effect on the OAR was estimated based on the dose-volume histogram. It was found that the optimization of fractionation scheme incorporating the dose-volume histogram is possible by employing appropriate cell surviving models. The graphical method considering the repopulation of tumor cells and a rectilinear response in the high dose range enables them to derive the optimal number of fractions and dose per fraction. For example, in the treatment of prostate cancer, the optimal fractionation was suggested to lie in the range of 8-32 fractions with a daily dose of 2.2-6.3 Gy. It is possible to optimize the number of fractions and dose per fraction based on the physical dose distribution (i.e., dose-volume histogram) by the graphical method considering the effects on tumor and OARs around the tumor. This method may stipulate a new guideline to optimize the fractionation regimen for physics-guided fractionation.
Sugano, Yasutaka; Mizuta, Masahiro; Takao, Seishin; Shirato, Hiroki; Sutherland, Kenneth L.; Date, Hiroyuki
2015-11-15
Purpose: Radiotherapy of solid tumors has been performed with various fractionation regimens such as multi- and hypofractionations. However, the ability to optimize the fractionation regimen considering the physical dose distribution remains insufficient. This study aims to optimize the fractionation regimen, in which the authors propose a graphical method for selecting the optimal number of fractions (n) and dose per fraction (d) based on dose–volume histograms for tumor and normal tissues of organs around the tumor. Methods: Modified linear-quadratic models were employed to estimate the radiation effects on the tumor and an organ at risk (OAR), where the repopulation of the tumor cells and the linearity of the dose-response curve in the high dose range of the surviving fraction were considered. The minimization problem for the damage effect on the OAR was solved under the constraint that the radiation effect on the tumor is fixed by a graphical method. Here, the damage effect on the OAR was estimated based on the dose–volume histogram. Results: It was found that the optimization of fractionation scheme incorporating the dose–volume histogram is possible by employing appropriate cell surviving models. The graphical method considering the repopulation of tumor cells and a rectilinear response in the high dose range enables them to derive the optimal number of fractions and dose per fraction. For example, in the treatment of prostate cancer, the optimal fractionation was suggested to lie in the range of 8–32 fractions with a daily dose of 2.2–6.3 Gy. Conclusions: It is possible to optimize the number of fractions and dose per fraction based on the physical dose distribution (i.e., dose–volume histogram) by the graphical method considering the effects on tumor and OARs around the tumor. This method may stipulate a new guideline to optimize the fractionation regimen for physics-guided fractionation.
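The constrained minimization the authors describe can be illustrated with a simplified LQ sketch: fix the tumor biologically effective dose, solve the LQ quadratic for the dose per fraction d at each candidate fraction number n, and pick the (n, d) pair minimizing the OAR effect. This sketch omits the repopulation and high-dose linearity terms of the paper's modified model and assumes a single uniform OAR sparing factor; all parameter values and function names are illustrative assumptions.

```python
import math

def dose_per_fraction(n, bed_target, ab_tumor):
    """Solve n*d*(1 + d/ab) = bed_target for d (positive root of the LQ quadratic)."""
    return 0.5 * ab_tumor * (math.sqrt(1.0 + 4.0 * bed_target / (n * ab_tumor)) - 1.0)

def best_fractionation(bed_target=100.0, ab_tumor=1.5, ab_oar=3.0, sparing=0.8,
                       n_range=range(1, 41)):
    """Scan fraction numbers; pick n minimizing OAR BED while tumor BED is fixed."""
    best = None
    for n in n_range:
        d = dose_per_fraction(n, bed_target, ab_tumor)
        d_oar = sparing * d                      # OAR sees a spared fraction of d
        bed_oar = n * d_oar * (1.0 + d_oar / ab_oar)
        if best is None or bed_oar < best[2]:
            best = (n, d, bed_oar)
    return best
```

With a low tumor alpha/beta relative to the OAR, this toy model favors hypofractionation, which is consistent with the kind of trade-off the graphical method explores.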
RNA based evolutionary optimization
NASA Astrophysics Data System (ADS)
Schuster, Peter
1993-12-01
…Evolutionary optimization of two-letter sequences is thus more difficult than optimization in the world of natural RNA sequences with four bases. This fact might explain the usage of four bases in the genetic language of nature. Finally we study the mapping from RNA sequences into secondary structures and explore the topology of RNA shape space. We find that ‘neutral paths’ connecting neighbouring sequences with identical structures go very frequently through entire sequence space. Sequences folding into common structures are found everywhere in sequence space. Hence, evolution can migrate to almost every part of sequence space without ‘hill climbing’ and only small fractions of the entire number of sequences have to be searched in order to find suitable structures.
Risk Analysis for Resource Planning Optimization
NASA Technical Reports Server (NTRS)
Chueng, Kar-Ming
2008-01-01
The main purpose of this paper is to introduce a risk management approach that allows planners to quantify the risk-efficiency tradeoff in the presence of uncertainties and to make forward-looking choices in the development and execution of the plan. It demonstrates a planning and risk analysis framework that tightly integrates mathematical optimization, empirical simulation, and theoretical analysis techniques to solve complex problems.
NASA Astrophysics Data System (ADS)
Zhu, T.; Cai, X.
2013-12-01
Delays in the onset of the Indian summer monsoon are becoming increasingly frequent. Delayed monsoons and occasional monsoon failures seriously affect agricultural production in the northeast as well as other parts of India. In the Vaishali district of Bihar State, monsoon rainfall is very skewed and erratic, often concentrated in short durations. Farmers in Vaishali reported that a delayed monsoon affected paddy planting and consequently delayed the cropping cycle, putting crops at risk of 'terminal heat.' The canal system in the district does not function due to lack of maintenance; irrigation relies almost entirely on groundwater. Many small farmers choose not to irrigate when monsoon onset is delayed because of high diesel prices, leading to reduced production or even crop failure. Some farmers adapt to delayed monsoon onset by planting short-duration rice, which gives them the flexibility to plant the next season's crops. Other sporadic autonomous adaptation activities were observed as well, with various levels of success. Adaptation recommendations and effective policy interventions are much needed. To explore robust options for adapting to the changing monsoon regime, we build a stochastic programming model to optimize revenues of farmer groups categorized by landholding size, subject to stochastic monsoon onset and rainfall amount. An imperfect probabilistic long-range forecast informs the model's onset and rainfall amount probabilities; the 'skill' of the forecast is measured using probabilities of correctly predicting past events, derived through hindcasting. Crop production functions are determined using a self-calibrating Positive Mathematical Programming approach. The stochastic programming model aims to emulate the decision-making behavior of representative farmer agents through choices in adaptation, including crop mix, planting dates, irrigation, and use of weather information. A set of technological and policy intervention scenarios is tested
Wheeler, Ward C
2003-08-01
The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. c2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
Optimal security investments and extreme risk.
Mohtadi, Hamid; Agiwal, Swati
2012-08-01
In the aftermath of 9/11, concern over security increased dramatically in both the public and the private sector. Yet, no clear algorithm exists to inform firms on the amount and the timing of security investments to mitigate the impact of catastrophic risks. The goal of this article is to devise an optimum investment strategy for firms to mitigate exposure to catastrophic risks, focusing on how much to invest and when to invest. The latter question addresses the issue of whether postponing a risk mitigating decision is an optimal strategy or not. Accordingly, we develop and estimate both a one-period model and a multiperiod model within the framework of extreme value theory (EVT). We calibrate these models using probability measures for catastrophic terrorism risks associated with attacks on the food sector. We then compare our findings with the purchase of catastrophic risk insurance. © 2012 Society for Risk Analysis.
Spalding, Aaron C.; Jee, Kyung-Wook; Vineberg, Karen; Jablonowski, Marla; Fraass, Benedick A.; Pan, Charlie C.; Lawrence, Theodore S.; Ten Haken, Randall K.; Ben-Josef, Edgar
2007-02-15
Radiotherapy for pancreatic cancer is limited by the tolerance of local organs at risk (OARs) and frequent overlap of the planning target volume (PTV) and OAR volumes. Using lexicographic ordering (LO), a hierarchical optimization technique, with generalized equivalent uniform dose (gEUD) cost functions, we studied the potential of intensity modulated radiation therapy (IMRT) to increase the dose to pancreatic tumors and to areas of vascular involvement that preclude surgical resection [surgical boost volume (SBV)]. We compared 15 forward planned three-dimensional conformal (3DCRT) and IMRT treatment plans for locally advanced unresectable pancreatic cancer. We created IMRT plans optimized using LO with gEUD-based cost functions that account for the contribution of each part of the resulting inhomogeneous dose distribution. LO-IMRT plans allowed substantial PTV dose escalation compared with 3DCRT; median increase from 52 Gy to 66 Gy (a=-5,p<0.005) and median increase from 50 Gy to 59 Gy (a=-15,p<0.005). LO-IMRT also allowed increases to 85 Gy in the SBV, regardless of a value, along with significant dose reductions in OARs. We conclude that LO-IMRT with gEUD cost functions could allow dose escalation in pancreas tumors with concomitant reduction in doses to organs at risk as compared with traditional 3DCRT.
…assigned to the operational support airlift mission, located at Andrews Air Force Base, Maryland, and Scott Air Force Base, Illinois. The missions flown… Scott and Andrews AFB is the optimal assignment. If nine total assets were optimized, five would be assigned to Scott AFB and four to Andrews AFB.
Vassalli, Giuseppe; Klersy, Catherine; De Servi, Stefano; Galatius, Soeren; Erne, Paul; Eberli, Franz; Rickli, Hans; Hornig, Burkhard; Bertel, Osmund; Bonetti, Piero; Moccetti, Tiziano; Kaiser, Christoph; Pfisterer, Matthias; Pedrazzini, Giovanni
2016-03-01
The randomized BASKET-PROVE study showed no significant differences between sirolimus-eluting stents (SES), everolimus-eluting stents (EES), and bare-metal stents (BMS) with respect to the primary end point, rates of death from cardiac causes, or myocardial infarction (MI) at 2 years of follow-up, in patients requiring stenting of a large coronary artery. Clinical risk factors may affect clinical outcomes after percutaneous coronary interventions. We present a retrospective analysis of the BASKET-PROVE data addressing the question as to whether the optimal type of stent can be predicted based on a cumulative clinical risk score. A total of 2,314 patients (mean age 66 years) who underwent coronary angioplasty and implantation of ≥1 stents that were ≥3.0 mm in diameter were randomly assigned to receive SES, EES, or BMS. A cumulative clinical risk score was derived using a Cox model that included age, gender, cardiovascular risk factors (hypercholesterolemia, hypertension, family history of cardiovascular disease, diabetes, smoking), presence of ≥2 comorbidities (stroke, peripheral artery disease, chronic kidney disease, chronic rheumatic disease), a history of MI or coronary revascularization, and clinical presentation (stable angina, unstable angina, ST-segment elevation MI). An aggregate drug-eluting stent (DES) group (n = 1,549) comprising 775 patients receiving SES and 774 patients receiving EES was compared to 765 patients receiving BMS. Rates of death from cardiac causes or nonfatal MI at 2 years of follow-up were significantly increased in patients who were in the high tertile of risk stratification for the clinical risk score compared to those who were in the aggregate low-mid tertiles. In patients with a high clinical risk score, rates of death from cardiac causes or nonfatal MI were lower in patients receiving DES (2.4 per 100 person-years, 95% CI 1.6-3.6) compared with BMS (5.5 per 100 person-years, 95% CI 3.7-8.2, hazard ratio 0.45, 95% CI 0
Risk Assessment: Evidence Base
NASA Technical Reports Server (NTRS)
Johnson-Throop, Kathy A.
2007-01-01
Human systems PRA (Probabilistic Risk Assessment): a) provides quantitative measures of probability, consequence, and uncertainty; and b) communicates risk and informs decision-making. The human health risks rated highest in the ISS PRA are based on a 1997 assessment of clinical events in analog operational settings. Much work remains to analyze the remaining human health risks identified in the Bioastronautics Roadmap.
Medical Device Risk Management For Performance Assurance Optimization and Prioritization.
Gaamangwe, Tidimogo; Babbar, Vishvek; Krivoy, Agustina; Moore, Michael; Kresta, Petr
2015-01-01
Performance assurance (PA) is an integral component of clinical engineering medical device risk management. For that reason, the clinical engineering (CE) community has made concerted efforts to define appropriate risk factors and develop quantitative risk models for efficient data processing and improved PA program operational decision making. However, a common framework that relates the various processes of a quantitative risk system does not exist. This article provides a perspective that focuses on medical device quality and risk-based elements of the PA program, which include device inclusion/exclusion, schedule optimization, and inspection prioritization. A PA risk management framework is provided, and previous quantitative models that have contributed to the advancement of PA risk management are examined. A general model for quantitative risk systems is proposed, and further perspective on possible future directions in the area of PA technology is also provided.
Risk perception, fuzzy representations and comparative optimism.
Brown, Stephen L; Morley, Andy M
2007-11-01
Rather than a unitary value, individuals may represent health risk as a fuzzy entity that permits them to make a number of specific possible estimates. Comparative optimism might be explained by people flexibly using such a set to derive optimistic risk estimates. Student participants were asked to rate the likelihood of eight harmful alcohol-related outcomes occurring to themselves and to an average student. Participants made either unitary estimates or estimates representing the upper and lower bounds of a set denoting 'realistic probability' estimates. Personal risk estimates were lower when made as unitary estimates than when calculated from the mid-points of the bounded estimates. Unitary estimates of personal risk made after the bounded estimates were lower than initial unitary estimates. There were no effects for estimates made with regard to the average student. Risk may be internally represented as a fuzzy set, and comparative optimism may exist partly because this set allows people the opportunity to make optimistic unitary estimates for personal risk within what they see as realistic parameters.
Risk perceptions, optimism, and natural hazards.
Smith, V Kerry
2008-12-01
This article uses the panel survey developed for the Health and Retirement Study to evaluate whether Hurricane Andrew in 1992 altered longevity expectations of respondents who lived in Dade County, Florida, the location experiencing the majority of about 20 billion dollars of damage. Longevity expectations have been used as a proxy measure for both individual subjective risk assessments and dispositional optimism. The panel structure allows comparison of those respondents' longevity assessments when the timing of their survey responses bracket Andrew with those of individuals where it does not. After controlling for health effects, the results indicate a significant reduction in longevity expectations due to the information respondents appear to have associated with the storm.
Copenhaver, Michael M; Lee, I-Ching; Margolin, Arthur; Bruce, Robert D; Altice, Frederick L
2011-01-01
The authors conducted a preliminary study of the 4-session Holistic Health for HIV (3H+), which was adapted from a 12-session evidence-based risk reduction and antiretroviral adherence intervention. Improvements were found in the behavioral skills required to properly adhere to HIV medication regimens. Enhancements were found in all measured aspects of sex-risk reduction outcomes, including HIV knowledge, motivation to reduce sex-risk behavior, behavioral skills related to engaging in reduced sexual risk, and reduced risk behavior. Improvements in drug use outcomes included enhancements in risk reduction skills as well as reduced heroin and cocaine use. Intervention effects also showed durability from post-intervention to the follow-up assessment point. Females responded particularly well in terms of improvements in risk reduction skills and risk behavior. This study suggests that an evidence-based behavioral intervention may be successfully adapted for use in community-based clinical settings where HIV-infected drug users can be more efficiently reached.
Research on optimization-based design
NASA Technical Reports Server (NTRS)
Balling, R. J.; Parkinson, A. R.; Free, J. C.
1989-01-01
Research on optimization-based design is discussed. Illustrative examples are given for cases involving continuous optimization with discrete variables and optimization with tolerances. Approximation of computationally expensive and noisy functions, electromechanical actuator/control system design using decomposition and application of knowledge-based systems and optimization for the design of a valve anti-cavitation device are among the topics covered.
Risk Analysis for Resource Planning Optimization
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming
2008-01-01
This paper describes a systems engineering approach to resource planning by integrating mathematical modeling and constrained optimization, empirical simulation, and theoretical analysis techniques to generate an optimal task plan in the presence of uncertainties.
Short-term Reservoir Optimization by Stochastic Optimization for Mitigation Downstream Flood Risks
NASA Astrophysics Data System (ADS)
Schwanenberg, Dirk; Assis Dos Reis, Alberto; Kuwajima, Julio; Alvarado Montero, Rodolfo; Mainardi Fan, Fernando
2014-05-01
An important objective of the operation of multi-purpose reservoirs is the mitigation of flood risks in downstream river reaches. Under the assumption of reservoirs with finite storage capacities, a key factor for their effective use during flood events is the proper timing of detention measures under consideration of forecast uncertainty. Operational flow forecasting systems support this task by providing deterministic or probabilistic inflow forecasts and decision support components for assessing optimum release strategies. We focus on the decision support component and propose a deterministic optimization procedure and its extension to stochastic optimization procedures based on the non-adaptive Sample Average Approximation (SAA) approach and an adaptive multi-stage stochastic optimization with underlying scenario trees. These techniques are used to compute release trajectories of the reservoirs over a finite forecast horizon of up to 14 days by integrating a nonlinear gradient-based optimization algorithm and a model of the water system. The latter consists of simulation components for pool routing and kinematic or diffusive wave models for the downstream river reaches, including a simulation mode and a reverse adjoint mode for the efficient computation of first-order derivatives. The framework has been implemented for a reservoir system operated by the Brazilian Companhia Energética de Minas Gerais S.A. (CEMIG). We present results obtained for the operation of the Três Marias reservoir in the Brazilian state of Minas Gerais, with a catchment area of nearly 55,000 km2, an installed capacity of 396 MW, and operation restrictions due to downstream flood risk. The focus of our discussion is the impact of sparsely available ground data, forecast uncertainty, and its consideration in the optimization procedure. We compare the performance of the above-mentioned optimization techniques and conclude that the stochastic methods are superior.
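The non-adaptive SAA idea, optimizing one release strategy against a sample of equally likely inflow scenarios, can be sketched with a toy single-reservoir mass balance. All names, thresholds, and the quadratic flood penalty below are illustrative assumptions, not CEMIG's system or the authors' adjoint-based optimizer; a grid search over a constant release stands in for the gradient-based algorithm.

```python
def saa_release(inflow_scenarios, s0, s_max, q_flood, candidates):
    """Sample Average Approximation: choose a constant release minimizing the
    mean squared exceedance of a downstream flood threshold over scenarios."""
    def cost(r):
        total = 0.0
        for scenario in inflow_scenarios:
            s = s0
            for q_in in scenario:
                s = s + q_in - r
                spill = max(s - s_max, 0.0)      # forced release when full
                s = min(max(s, 0.0), s_max)
                downstream = r + spill
                total += max(downstream - q_flood, 0.0) ** 2
        return total / len(inflow_scenarios)
    return min(candidates, key=cost)
```

Releasing too little fills the pool and forces damaging spills in wet scenarios; releasing too much exceeds the downstream threshold directly. The SAA objective averages both failure modes over the scenario sample.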
Discrete Variables Function Optimization Using Accelerated Biogeography-Based Optimization
NASA Astrophysics Data System (ADS)
Lohokare, M. R.; Pattnaik, S. S.; Devi, S.; Panigrahi, B. K.; Das, S.; Jadhav, D. G.
Biogeography-Based Optimization (BBO) is a bio-inspired, population-based optimization algorithm, formulated mainly to optimize functions of discrete variables. However, the convergence of BBO to the optimum value is slow, as it lacks exploration ability. The proposed Accelerated Biogeography-Based Optimization (ABBO) technique is an improved version of BBO. In this paper, the authors accelerate the original BBO to enhance its exploitation and exploration ability through a modified mutation operator and a clear-duplicate operator. This significantly improves the convergence characteristics of the original algorithm. To validate the performance of ABBO, experiments were conducted on unimodal and multimodal benchmark functions of discrete variables. The results show excellent performance compared with other modified BBOs and other optimization techniques such as the stud genetic algorithm (SGA) and ant colony optimization (ACO). The results are also analyzed using paired t-tests.
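A minimal BBO-style loop for discrete variables can be sketched as follows. This is a generic sketch of the base algorithm (rank-based immigration/emigration rates, elitism, uniform mutation), not the authors' ABBO operators; all parameters are illustrative.

```python
import random

def bbo_maximize(fitness, n_vars, values, pop_size=20, gens=60, p_mut=0.05, seed=1):
    """Minimal Biogeography-Based Optimization for discrete variables.
    Worse habitats immigrate features from better ones; mutation keeps diversity."""
    rng = random.Random(seed)
    pop = [[rng.choice(values) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)       # best habitat first
        new_pop = [pop[0][:]]                     # elitism: keep the best
        for i in range(1, pop_size):
            lam = i / (pop_size - 1)              # worse rank -> higher immigration
            child = pop[i][:]
            for v in range(n_vars):
                if rng.random() < lam:
                    # roulette over emigration rates (better rank -> higher weight)
                    weights = [pop_size - 1 - j for j in range(pop_size)]
                    src = rng.choices(range(pop_size), weights=weights)[0]
                    child[v] = pop[src][v]
                if rng.random() < p_mut:
                    child[v] = rng.choice(values)
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)
```

On a simple discrete benchmark such as onemax (maximize the number of ones in a bit string), this loop steadily migrates good bits through the population; ABBO's modified mutation and clear-duplicate operators aim to speed up exactly this convergence.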
Robust optimization based upon statistical theory.
Sobotta, B; Söhn, M; Alber, M
2010-08-01
Organ movement is still the biggest challenge in cancer treatment despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method that the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. On a multitude of geometry instances sampled from this model, a dose metric is evaluated. The resulting pdf of this dose metric is termed outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance. This is in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method also seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, thus helping the expert to find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained where the probability of exceeding a given OAR maximum and that of falling short of a given target goal can be minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery. Any model that quantifies organ movement and deformation in terms of probability distributions can be used as basis for the algorithm. Thus, it can generate dose
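The core construction here, evaluating a dose metric over many sampled geometry instances and optimizing the mean and variance of the resulting outcome distribution rather than a single nominal value, can be sketched generically. The function names and the mean-plus-weighted-spread objective below are illustrative assumptions, not the authors' algorithm.

```python
import statistics

def outcome_distribution(metric, motion_samples):
    """Evaluate a dose metric on sampled geometry instances; return the
    empirical mean and standard deviation (the 'outcome distribution')."""
    values = [metric(s) for s in motion_samples]
    return statistics.mean(values), statistics.pstdev(values)

def robust_objective(metric, motion_samples, weight=1.0):
    """For an OAR metric to be minimized: penalize both the expected value
    and the spread, so plans with uncertain outcomes are disfavored."""
    mean, std = outcome_distribution(metric, motion_samples)
    return mean + weight * std
```

In contrast to optimizing a nominal PTV value on one geometry, this objective sees both the expected treatment outcome and the residual uncertainty, which is what lets the optimizer balance OAR-sparing risk against tumor-control risk.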
Constraint programming based biomarker optimization.
Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng
2015-01-01
Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data has proven to be one of the successful solutions. Most existing feature selection algorithms do not allow interactive input from users during the optimization process. This study addresses that limitation by fixing a few user-chosen features in the final selected feature subset, formulating these user-input features as constraints in a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs similarly to or much better than existing feature selection algorithms, even with constraints drawn from both the literature and existing algorithms. An fsCoP biomarker may be a compelling candidate for further wet-lab validation, since it satisfies both the classification optimization function and existing biomedical knowledge. fsCoP may also be used for interactive exploration of bio-OMIC big data by iteratively adding user-defined constraints to the model.
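The central idea, treating a handful of user-chosen features as hard constraints while the remaining slots are filled to optimize a relevance criterion, can be sketched as follows. This is a simplified greedy stand-in for the constrained-programming formulation, with an illustrative correlation-based score rather than fsCoP's actual objective; all names are invented:

```python
import numpy as np

def constrained_feature_selection(X, y, fixed, k, score_fn=None):
    """Select k features, always including the user-fixed ones.

    Remaining slots are filled greedily by a per-feature relevance
    score (absolute Pearson correlation with the label by default).
    """
    if score_fn is None:
        def score_fn(col, y):
            c = np.corrcoef(col, y)[0, 1]
            return 0.0 if np.isnan(c) else abs(c)
    scores = np.array([score_fn(X[:, j], y) for j in range(X.shape[1])])
    selected = list(fixed)            # user constraints: always kept
    for j in np.argsort(-scores):     # fill remaining slots, best first
        if len(selected) >= k:
            break
        if j not in selected:
            selected.append(int(j))
    return sorted(selected)
```

Interactive exploration then amounts to re-running the selection with a growing `fixed` list as the user pins features of interest.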
Mubayi, V.
1995-05-01
The consequences of severe accidents at nuclear power plants can be limited by various protective actions, including emergency responses and long-term measures, to reduce exposures of affected populations. Each of these protective actions involves costs to society. The costs of the long-term protective actions depend on the criterion adopted for the allowable level of long-term exposure. This criterion, called the "long-term interdiction limit," is expressed in terms of the projected dose to an individual over a certain time period from the long-term exposure pathways. The two measures of offsite consequences, latent cancers and costs, are inversely related, and the choice of an interdiction limit is, in effect, a trade-off between these two measures. By monetizing the health effects (through ascribing a monetary value to life lost), the costs of the two consequence measures vary with the interdiction limit, the health-effect costs increasing as the limit is relaxed and the protective-action costs decreasing. The minimum of the total cost curve can be used to calculate an optimal long-term interdiction limit. The calculation of such an optimal limit is presented for each of five US nuclear power plants that were analyzed for severe accident risk in the NUREG-1150 program by the Nuclear Regulatory Commission.
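The trade-off described above, monetized health-effect costs rising with the interdiction limit while protective-action costs fall, can be illustrated numerically. The cost functions below are invented for illustration and are not the NUREG-1150 figures; the optimal limit is simply the minimum of the total-cost curve:

```python
import numpy as np

# Illustrative (not NUREG-1150) cost curves as functions of the
# long-term interdiction limit L (projected individual dose):
#   - monetized health-effect cost rises as the limit is relaxed,
#   - protective-action (interdiction) cost falls.
def health_cost(L):
    return 2.0 * L             # $M, grows with allowed exposure

def protection_cost(L):
    return 50.0 / (1.0 + L)    # $M, shrinks as less land is interdicted

limits = np.linspace(0.1, 10.0, 1000)
total = health_cost(limits) + protection_cost(limits)
L_opt = limits[np.argmin(total)]   # minimum of the total-cost curve
```

For these invented curves the minimum can be checked analytically: setting the derivative 2 - 50/(1+L)^2 to zero gives L = 4.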
Towards Risk Based Design for NASA's Missions
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; Barrientos, Francesca; Meshkat, Leila
2004-01-01
This paper describes the concept of Risk Based Design in the context of NASA's low-volume, high-cost missions. The concept of accounting for risk in the design lifecycle has been discussed and proposed under several research topics, including reliability, risk analysis, optimization, uncertainty, decision-based design, and robust design. This work aims to identify and develop methods to enable and automate a means to characterize and optimize risk, and to use risk as a tradeable resource to make robust and reliable decisions, in the context of the uncertain and ambiguous stage of early conceptual design. This paper first presents a survey of the related topics explored in the design research community as they relate to risk based design. Then, a summary of the topics from the NASA-led Risk Colloquium is presented, followed by current efforts within NASA to account for risk in early design. Finally, a list of "risk elements", identified for early-phase conceptual design at NASA, is presented. The purpose is to lay the foundation and develop a roadmap for future work and collaborations for research to eliminate and mitigate these risk elements in early-phase design.
Optimal Allocation for the Estimation of Attributable Risk
This paper derives an expression for the optimum sampling allocation, under the minimum variance criterion, of the estimated attributable risk for case-control studies. Various optimal strategies are examined using alternative exposure-specific disease rates. Keywords: odds ratio, relative risk, attributable risk.
Optimal inverse functions created via population-based optimization.
Jennings, Alan L; Ordóñez, Raúl
2014-06-01
Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
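A minimal sketch of the resulting inverse function, using an invented two-input toy system rather than the paper's population of agents: each desired output is achieved at minimum cost by a constrained optimization, and spline interpolation over these optima yields a continuous map from desired output to optimal input:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.interpolate import CubicSpline

# Toy multiple-input, single-output system (illustrative stand-ins):
output = lambda u: u[0] + u[1]           # y = u1 + u2
cost   = lambda u: u[0]**2 + 2*u[1]**2   # effort to minimize

# For a sweep of desired outputs, find the locally optimal input
# that achieves each output exactly (equality constraint).
targets = np.linspace(0.0, 6.0, 13)
opt_inputs = []
for y_star in targets:
    res = minimize(cost, x0=[y_star, 0.0],
                   constraints={'type': 'eq',
                                'fun': lambda u, y=y_star: output(u) - y})
    opt_inputs.append(res.x)
opt_inputs = np.array(opt_inputs)

# Spline interpolation gives a continuous inverse function:
# desired output -> locally optimal input, usable in real time.
inverse = CubicSpline(targets, opt_inputs)
u = inverse(3.0)   # analytically u = (2, 1) for this toy cost
```

An operator would then adjust only the set point (the desired output) while `inverse` supplies the matching optimal inputs.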
Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P.
1996-12-01
This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising.
Risk-optimized proton therapy to minimize radiogenic second cancers
NASA Astrophysics Data System (ADS)
Rechner, Laura A.; Eley, John G.; Howell, Rebecca M.; Zhang, Rui; Mirkovic, Dragan; Newhauser, Wayne D.
2015-05-01
Proton therapy confers substantially lower predicted risk of second cancer compared with photon therapy. However, no previous studies have used an algorithmic approach to optimize beam angle or fluence modulation for proton therapy to minimize those risks. The objectives of this study were to demonstrate the feasibility of risk-optimized proton therapy and to determine the combination of beam angles and fluence weights that minimizes the risk of second cancer in the bladder and rectum for a prostate cancer patient. We used 6 risk models to predict excess relative risk of second cancer. Treatment planning utilized a combination of a commercial treatment planning system and an in-house risk-optimization algorithm. When normal-tissue dose constraints were incorporated in treatment planning, the risk model that incorporated the effects of fractionation, initiation, inactivation, repopulation and promotion selected a combination of anterior and lateral beams, which lowered the relative risk by 21% for the bladder and 30% for the rectum compared to the lateral-opposed beam arrangement. Different results were obtained with the other risk models.
Reliability based design optimization: Formulations and methodologies
NASA Astrophysics Data System (ADS)
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed
Economics-based optimization of unstable flows
NASA Astrophysics Data System (ADS)
Huberman, B. A.; Helbing, D.
1999-07-01
As an example for the optimization of unstable flows, we present an economics-based method for deciding the optimal rates at which vehicles are allowed to enter a highway. It exploits the naturally occurring fluctuations of traffic flow and is flexible enough to adapt in real time to the transient flow characteristics of road traffic. Simulations based on realistic parameter values show that this strategy is feasible for naturally occurring traffic, and that even far from optimality, injection policies can improve traffic flow. Moreover, the same method can be applied to the optimization of flows of gases and granular media.
Algorithmic Differentiation for Calculus-based Optimization
NASA Astrophysics Data System (ADS)
Walther, Andrea
2010-10-01
For numerous applications, the computation and provision of exact derivative information plays an important role in optimizing the system under consideration, and quite often also in simulating it. This presentation introduces the technique of Algorithmic Differentiation (AD), a method to compute derivatives of arbitrary order within working precision. Quite often, exploiting additional problem structure is indispensable for successfully coupling these derivatives with state-of-the-art optimization algorithms. The talk discusses two important situations where the problem-inherent structure allows a calculus-based optimization. Examples from aerodynamics and nano optics illustrate these advanced optimization approaches.
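A minimal forward-mode AD sketch using dual numbers conveys the core mechanism (production AD tools are far more general): each value carries its derivative alongside it, and the chain rule is applied operation by operation at working precision:

```python
import math

class Dual:
    """Minimal forward-mode AD value: carries f and df together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__
    def sin(self):
        # chain rule: d(sin f) = cos(f) * df
        return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

def f(x):
    return x * x.sin() + 3 * x   # f(x) = x sin x + 3x

x = Dual(2.0, 1.0)               # seed dx/dx = 1
y = f(x)                         # y.val = f(2), y.dot = f'(2), exactly
```

Unlike finite differences, the derivative here is exact to working precision: y.dot equals sin(2) + 2 cos(2) + 3 up to rounding.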
Anticoagulation in the older adult: optimizing benefit and reducing risk.
Ko, Darae; Hylek, Elaine M
2014-09-01
The risk for both arterial and venous thrombosis increases with age. Despite the increasing burden of strokes related to atrial fibrillation (AF) and venous thromboembolism (VTE) among older adults, the use of anticoagulant therapy is limited in this population due to the parallel increase in risk of serious hemorrhage. Understanding the risks and their underlying mechanisms would help to mitigate adverse events and improve persistence with these life-saving therapies. The objectives of this review are to: (1) elucidate the age-related physiologic changes that render this high risk subgroup susceptible to hemorrhage, (2) identify mutable risk factors and hazards contributing to an increased bleeding risk in older individuals, and (3) discuss interventions to optimize anticoagulation therapy in this population.
NASA Technical Reports Server (NTRS)
Defigueiredo, R. J. P.
1974-01-01
General classes of nonlinear and linear transformations were investigated for the reduction of the dimensionality of the classification (feature) space so that, for a prescribed dimension m of this space, the increase of the misclassification risk is minimized.
Caparros-Midwood, Daniel; Barr, Stuart; Dawson, Richard
2017-02-23
Future development in cities needs to manage increasing populations, climate-related risks, and sustainable development objectives such as reducing greenhouse gas emissions. Planners therefore face a challenge of multidimensional, spatial optimization in order to balance potential tradeoffs and maximize synergies between risks and other objectives. To address this, a spatial optimization framework has been developed. This uses a spatially implemented genetic algorithm to generate a set of Pareto-optimal results that provide planners with the best set of trade-off spatial plans for six risk and sustainability objectives: (i) minimize heat risks, (ii) minimize flooding risks, (iii) minimize transport travel costs to minimize associated emissions, (iv) maximize brownfield development, (v) minimize urban sprawl, and (vi) prevent development of greenspace. The framework is applied to Greater London (U.K.) and shown to generate spatial development strategies that are optimal for specific objectives and differ significantly from the existing development strategies. In addition, the analysis reveals tradeoffs between different risks as well as between risk and sustainability objectives. While increases in heat or flood risk can be avoided, there are no strategies that do not increase at least one of these. Tradeoffs between risk and other sustainability objectives can be more severe, for example, minimizing heat risk is only possible if future development is allowed to sprawl significantly. The results highlight the importance of spatial structure in modulating risks and other sustainability objectives. However, not all planning objectives are suited to quantified optimization and so the results should form part of an evidence base to improve the delivery of risk and sustainability management in future urban development.
Risk-Reliability Programming for Optimal Water Quality Control
NASA Astrophysics Data System (ADS)
Simonovic, Slobodan P.; Orlob, Gerald T.
1984-06-01
A risk-reliability programming approach is developed for optimal allocation of releases for control of water quality downstream of a multipurpose reservoir. Additionally, the approach allows the evaluation of optimal risk/reliability values. Risk is defined as a probability of not satisfying constraints given in probabilistic form, e.g., encroachment of water quality reservation on that for flood control. The objective function includes agricultural production losses that are functions of water quality, and risk-losses associated with encroachment of the water quality control functions on reservations for flood control, fisheries, and irrigation. The approach is demonstrated using data from New Melones Reservoir on the Stanislaus River in California. Results indicate that an optimum water quality reservation exists for a given set of quality targets and loss functions. Additional analysis is presented to determine the sensitivity of optimization results to agricultural production loss functions and the influence of statistically different river flows on the optimal reservoir storage for water quality control. Results indicate the dependence of an optimum water quality reservation on agricultural production losses and hydrologic conditions.
Optimal risk management of human alveolar echinococcosis with vermifuge.
Kato, Naoto; Kotani, Koji; Ueno, Seiya; Matsuda, Hiroyuki
2010-12-07
In this study, we develop a bioeconomic model of human alveolar echinococcosis (HAE) and formulate the optimal strategies for managing the infection risks in humans by applying optimal control theory. The model has the following novel features: (i) the complex transmission cycle of HAE has been tractably incorporated into the framework of optimal control problems and (ii) the volume of vermifuge spreading to manage the risk is considered a control variable. With this model, we first obtain the stability conditions for the transmission dynamics under the condition of constant control. Second, we explicitly introduce a control variable of vermifuge spreading into the analysis by considering the associated control costs. In this optimal control problem, we have successfully derived a set of conditions for a bang-bang control and singular control, which are mainly characterized by the prevalence of infection in voles and foxes and the remaining time of control. The analytical results are demonstrated by numerical analysis and we discuss the effects of the parameter values on the optimal strategy and the transmission cycle. We find that when the prevalence of infection in foxes is low and the prevalence of infection in voles is sufficiently high, the optimal strategy is to expend no effort in vermifuge spreading.
Risk based management of piping systems
Conley, M.J.; Aller, J.E.; Tallin, A.; Weber, B.J.
1996-07-01
The API Piping Inspection Code is the first such Code to require classification of piping based on the consequences of failure, and to use this classification to influence inspection activity. Since this Code was published, progress has been made in the development of tools to improve on this approach by determining not only the consequences of failure, but also the likelihood of failure. "Risk" is defined as the product of the consequence and the likelihood. Measuring risk provides the means to formally manage risk by matching the inspection effort (costs) to the benefits of reduced risk. Using such a cost/benefit analysis allows the optimization of inspection budgets while meeting societal demands for reduction of the risk associated with process plant piping. This paper presents an overview of the tools developed to measure risk, and the methods to determine the effects of past and future inspections on the level of risk. The methodology is being developed as an industry-sponsored project under the direction of an API committee. The intent is to develop an API Recommended Practice that will be linked to In-Service Inspection Standards and the emerging Fitness for Service procedures. Actual studies using a similar approach have shown that a very high percentage of the risk due to piping in an operating facility is associated with relatively few pieces of piping. This permits inspection efforts to be focused on those piping systems that will result in the greatest risk reduction.
Quantifying fatigue risk in model-based fatigue risk management.
Rangan, Suresh; Van Dongen, Hans P A
2013-02-01
The question of what is a maximally acceptable level of fatigue risk is hotly debated in model-based fatigue risk management in commercial aviation and other transportation modes. A quantitative approach to addressing this issue, referred to by the Federal Aviation Administration with regard to its final rule for commercial aviation "Flightcrew Member Duty and Rest Requirements," is to compare predictions from a mathematical fatigue model against a fatigue threshold. While this accounts for duty time spent at elevated fatigue risk, it does not account for the degree of fatigue risk and may, therefore, result in misleading schedule assessments. We propose an alternative approach based on the first-order approximation that fatigue risk is proportional to both the duty time spent below the fatigue threshold and the distance of the fatigue predictions to the threshold--that is, the area under the curve (AUC). The AUC approach is straightforward to implement for schedule assessments in commercial aviation and also provides a useful fatigue metric for evaluating thousands of scheduling options in industrial schedule optimization tools.
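The proposed AUC metric can be computed directly: integrate, over duty time, how far the performance predictions fall below the fatigue threshold. The numbers below are illustrative, not from any validated fatigue model:

```python
import numpy as np

def fatigue_auc(times, predictions, threshold):
    """Area between the threshold and performance predictions,
    counted only where predictions fall below the threshold
    (duty time at elevated fatigue, weighted by severity)."""
    deficit = np.maximum(threshold - np.asarray(predictions, float), 0.0)
    steps = np.diff(np.asarray(times, float))
    # trapezoidal integration of the deficit over duty time
    return float(np.sum((deficit[1:] + deficit[:-1]) / 2.0 * steps))

# Illustrative duty period: hourly performance predictions on a
# 0-100 scale against a threshold of 70.
hours = np.arange(0, 5)                   # 0..4 h into the duty period
preds = np.array([90, 80, 60, 50, 70])    # dips below threshold mid-duty
risk = fatigue_auc(hours, preds, threshold=70.0)
```

A simple threshold count would score this duty period only by the two hours spent below 70; the AUC also reflects how deep the dip was.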
Particle swarm optimization based space debris surveillance network scheduling
NASA Astrophysics Data System (ADS)
Jiang, Hai; Liu, Jing; Cheng, Hao-Wen; Zhang, Yao
2017-02-01
The increasing number of space debris has created an orbital debris environment that poses increasing impact risks to existing space systems and human space flights. For the safety of in-orbit spacecrafts, we should optimally schedule surveillance tasks for the existing facilities to allocate resources in a manner that most significantly improves the ability to predict and detect events involving affected spacecrafts. This paper analyzes two criteria that mainly affect the performance of a scheduling scheme and introduces an artificial intelligence algorithm into the scheduling of tasks of the space debris surveillance network. A new scheduling algorithm based on the particle swarm optimization algorithm is proposed, which can be implemented in two different ways: individual optimization and joint optimization. Numerical experiments with multiple facilities and objects are conducted based on the proposed algorithm, and simulation results have demonstrated the effectiveness of the proposed algorithm.
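A minimal particle swarm optimizer illustrates the mechanism the scheduler builds on; the objective here is a toy surrogate, not the paper's surveillance-network cost, and all parameter values are generic defaults:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer (illustrative, not the paper's
    surveillance-network scheduler): particles blend inertia, their
    personal best, and the swarm's global best into velocity updates."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Toy scheduling surrogate: minimize a smooth cost over 3 decision variables.
best_x, best_f = pso(lambda p: np.sum((p - 1.0) ** 2), dim=3)
```

In a scheduling setting, each particle would instead encode a candidate assignment of surveillance tasks, with the objective scoring detection and prediction performance.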
Optimization of multi-constrained structures based on optimality criteria
NASA Technical Reports Server (NTRS)
Rizzi, P.
1976-01-01
A weight-reduction algorithm is developed for the optimal design of structures subject to several multibehavioral inequality constraints. The structural weight is considered to depend linearly on the design variables. The algorithm incorporates a simple recursion formula derived from the Kuhn-Tucker necessary conditions for optimality, together with a procedure for deleting nonactive constraints based on the Gauss-Seidel iterative method for linear systems. A number of example problems are studied, including typical truss structures and simplified wings subject to static loads and with constraints imposed on stresses and displacements. For one of the latter structures, constraints on the fundamental natural frequency and flutter speed are also imposed. The results obtained show that the method is fast, efficient, and general when compared to other competing techniques. Extension of the method to include equality constraints and nonlinear merit functions is also discussed.
Risk-based decisionmaking (Panel)
Smith, T.H.
1995-12-31
By means of a panel discussion and extensive audience interaction, explore the current challenges and progress to date in applying risk considerations to decisionmaking related to low-level waste. This topic is especially timely because of the proposed legislation pertaining to risk-based decisionmaking and because of the increased emphasis placed on radiological performance assessments of low-level waste disposal.
Chaparian, A.; Kanani, A.; Baghbanian, M.
2014-01-01
The objectives of this paper were the calculation and comparison of effective doses, risks of exposure-induced cancer, and dose reduction in the gonads for male and female patients in different projections of several X-ray examinations. Radiographs of the lumbar spine [in the eight projections of anteroposterior (AP), posteroanterior (PA), right lateral (RLAT), left lateral (LLAT), right anterior-posterior oblique (RAO), left anterior-posterior oblique (LAO), right posterior-anterior oblique (RPO), and left posterior-anterior oblique (LPO)], abdomen (in the two projections of AP and PA), and pelvis (in the two projections of AP and PA) were investigated. A solid-state dosimeter was used to measure the entrance skin exposure. A Monte Carlo program was used to calculate the effective doses, risks of radiation-induced cancer, and gonadal doses for the different projections. Results showed that the PA projection in abdomen, lumbar spine, and pelvis radiography produced 50%-57% lower effective doses than the AP projection and a 50%-60% reduction in radiation risk. Use of the LAO projection in lumbar spine examinations yielded a 53% lower effective dose than the RPO projection and 56% and 63% reductions in radiation risk for males and females, respectively; the RAO projection yielded a 28% lower effective dose than the LPO projection and 52% and 39% risk reductions for males and females, respectively. Regarding dose reduction in the gonads, using the PA rather than the AP position in radiography of the abdomen, lumbar spine, and pelvis reduced ovarian doses in women by 38%, 31%, and 25%, respectively, and testicular doses in men by 76%, 86%, and 94%, respectively. For the oblique projections of the lumbar spine examination, employing LAO rather than RPO and RAO rather than LPO produced 22% and 13% reductions in ovarian doses and 66% and 54% reductions in testicular doses
Optimal Hops-Based Adaptive Clustering Algorithm
NASA Astrophysics Data System (ADS)
Xuan, Xin; Chen, Jian; Zhen, Shanshan; Kuo, Yonghong
This paper proposes an optimal hops-based adaptive clustering algorithm (OHACA). The algorithm sets an energy selection threshold before clusters form, so that nodes with less energy are more likely to go to sleep immediately. In the setup phase, OHACA introduces an adaptive mechanism to adjust cluster heads and balance load, and optimal distance theory is applied to discover a practical optimal routing path that minimizes the total transmission energy. Simulation results show that OHACA prolongs network lifetime, improves the energy utilization rate, and transmits more data because of its energy balancing.
Optimal Antiviral Switching to Minimize Resistance Risk in HIV Therapy
Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan
2011-01-01
The development of resistant strains of HIV is the most significant barrier to effective long-term treatment of HIV infection. The most common causes of resistance development are patient noncompliance and pre-existence of resistant strains. In this paper, methods of antiviral regimen switching are developed that minimize the risk of pre-existing resistant virus emerging during therapy switches necessitated by virological failure. Two distinct cases are considered; a single previous virological failure and multiple virological failures. These methods use optimal control approaches on experimentally verified mathematical models of HIV strain competition and statistical models of resistance risk. It is shown that, theoretically, order-of-magnitude reduction in risk can be achieved, and multiple previous virological failures enable greater success of these methods in reducing the risk of subsequent treatment failures. PMID:22073250
Park, Jong-Ho; Ovbiagele, Bruce
2015-01-01
Background Optimal combination of secondary stroke prevention treatment including antihypertensives, antithrombotic agents, and lipid modifiers is associated with reduced recurrent vascular risk including stroke. It is unclear whether optimal combination treatment has a differential impact on stroke patients based on level of vascular risk. Methods We analyzed a clinical trial dataset comprising 3680 recent non-cardioembolic stroke patients aged ≥35 years and followed for 2 years. Patients were categorized by appropriateness level 0 to III depending on the number of the drugs prescribed divided by the number of drugs potentially indicated for each patient (0=none of the indicated medications prescribed and III=all indicated medications prescribed [optimal combination treatment]). High-risk was defined as having a history of stroke or coronary heart disease (CHD) prior to the index stroke event. Independent associations of medication appropriateness level with a major vascular event (stroke, CHD, or vascular death), ischemic stroke, and all-cause death were analyzed. Results Compared with level 0, for major vascular events, the HR of level III in the low-risk group was 0.51 (95% CI: 0.20–1.28) and 0.32 (0.14–0.70) in the high-risk group; for stroke, the HR of level III in the low-risk group was 0.54 (0.16–1.77) and 0.25 (0.08–0.85) in the high-risk group; and for all-cause death, the HR of level III in the low-risk group was 0.66 (0.09–5.00) and 0.22 (0.06–0.78) in the high-risk group. Conclusion Optimal combination treatment is related to a significantly lower risk of future vascular events and death among high-risk patients after a recent non-cardioembolic stroke. PMID:26044963
Decompression schedule optimization with an isoprobabilistic risk of decompression sickness.
Horn, Beverley J; Wake, Graeme C; Anthony, T Gavin
2006-01-01
Divers use decompression schedules to reduce the probability of occurrence of decompression sickness when returning to the surface at the end of a dive. The probability of decompression sickness resulting from these schedules varies across different dives and the models used to generate them. Usually the diver is unaware of this variance in risk. This paper describes an investigation into the feasibility of producing optimized iso-probabilistic decompression schedules that minimize the time it takes for a diver to reach the surface. The decompression schedules were optimized using the sequential quadratic programming method (SQP), which minimizes the ascent time for a given probability of decompression sickness. The U.S. linear-exponential multi-gas model was used to calculate an estimate of the probability of decompression sickness for a given dive. In particular 1.3-bar oxygen in helium rebreather bounce dives to between 18 m and 81 m were considered and compared against the UK Navy QinetiQ 90 tables for a similar estimate of probability of decompression sickness. The SQP method reliably produced schedules with fast and stable convergence to an optimized solution. Comparison of the optimized decompression schedules with the QinetiQ 90 schedules showed similar stop times for shallow dives to 18 m. For dives with a maximum depth of 39 m to 81 m, optimizing the decompression resulted in savings in decompression time of up to 30 min. This paper has shown that it is feasible to produce optimized iso-probabilistic decompression tables given a reliable risk model for decompression sickness and appropriate dive trials.
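The optimization setup can be sketched with SciPy's SLSQP implementation: minimize total stop time subject to a cap on the predicted probability of decompression sickness. The risk model below is an invented exponential stand-in, not the linear-exponential multi-gas model, and all parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for a DCS risk model (NOT the LE multi-gas model):
# predicted risk falls exponentially with total decompression time (min).
def p_dcs(stops):
    return 0.1 * np.exp(-np.sum(stops) / 10.0)

target_risk = 0.02                          # iso-probability level
ascent_time = lambda stops: np.sum(stops)   # objective: surface fastest

# Three stop times, each nonnegative; SLSQP drives the schedule to the
# shortest total time that still meets the risk target.
res = minimize(ascent_time, x0=[10.0, 10.0, 10.0], method='SLSQP',
               bounds=[(0, None)] * 3,
               constraints={'type': 'ineq',
                            'fun': lambda s: target_risk - p_dcs(s)})
total = res.fun   # shortest schedule meeting the risk target
```

For this toy model the answer is analytic: the constraint binds at a total time of 10 ln(5) ≈ 16.1 min, which the optimizer recovers; with a credible risk model the same formulation yields the iso-probabilistic schedules described above.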
Optimal CO2 mitigation under damage risk valuation
NASA Astrophysics Data System (ADS)
Crost, Benjamin; Traeger, Christian P.
2014-07-01
The current generation has to set mitigation policy under uncertainty about the economic consequences of climate change. This uncertainty governs both the level of damages for a given level of warming, and the steepness of the increase in damage per warming degree. Our model of climate and the economy is a stochastic version of a model employed in assessing the US Social Cost of Carbon (DICE). We compute the optimal carbon taxes and CO2 abatement levels that maximize welfare from economic consumption over time under different risk states. In accordance with recent developments in finance, we separate preferences about time and risk to improve the model's calibration of welfare to observed market interest. We show that introducing the modern asset pricing framework doubles optimal abatement and carbon taxation. Uncertainty over the level of damages at a given temperature increase can result in a slight increase of optimal emissions as compared to using expected damages. In contrast, uncertainty governing the steepness of the damage increase in temperature results in a substantially higher level of optimal mitigation.
Coverage-based constraints for IMRT optimization
NASA Astrophysics Data System (ADS)
Mescher, H.; Ulrich, S.; Bangert, M.
2017-09-01
Radiation therapy treatment planning requires an incorporation of uncertainties in order to guarantee an adequate irradiation of the tumor volumes. In current clinical practice, uncertainties are accounted for implicitly with an expansion of the target volume according to generic margin recipes. Alternatively, it is possible to account for uncertainties by explicit minimization of objectives that describe worst-case treatment scenarios, the expectation value of the treatment or the coverage probability of the target volumes during treatment planning. In this note we show that approaches relying on objectives to induce a specific coverage of the clinical target volumes are inevitably sensitive to variation of the relative weighting of the objectives. To address this issue, we introduce coverage-based constraints for intensity-modulated radiation therapy (IMRT) treatment planning. Our implementation follows the concept of coverage-optimized planning that considers explicit error scenarios to calculate and optimize patient-specific probabilities q(\hat{d}, \hat{v}) of covering a specific target volume fraction \hat{v} with a certain dose \hat{d}. Using a constraint-based reformulation of coverage-based objectives we eliminate the trade-off between coverage and competing objectives during treatment planning. In-depth convergence tests including 324 treatment plan optimizations demonstrate the reliability of coverage-based constraints for varying levels of probability, dose and volume. General clinical applicability of coverage-based constraints is demonstrated for two cases. A sensitivity analysis regarding penalty variations within this planning study based on IMRT treatment planning using (1) coverage-based constraints, (2) coverage-based objectives, (3) probabilistic optimization, (4) robust optimization and (5) conventional margins illustrates the potential benefit of coverage-based constraints that do not require tedious adjustment of target volume objectives.
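The scenario-based coverage probability q(d̂, v̂) described above can be sketched directly: it is the fraction of discrete error scenarios in which at least a fraction v̂ of target voxels receives at least dose d̂. The scenario doses below are made-up numbers for illustration.

```python
# Minimal sketch of the coverage probability q(d_hat, v_hat) over error scenarios.

def coverage_prob(scenario_doses, d_hat, v_hat):
    """scenario_doses: list of per-scenario voxel dose lists (Gy).
    Returns the fraction of scenarios covering >= v_hat of voxels with >= d_hat."""
    covered = 0
    for doses in scenario_doses:
        frac = sum(1 for d in doses if d >= d_hat) / len(doses)
        if frac >= v_hat:
            covered += 1
    return covered / len(scenario_doses)

# Three setup/range error scenarios, four target voxels each (illustrative):
scenarios = [[60.0, 61.0, 59.0, 60.0],
             [60.0, 58.0, 57.0, 59.0],
             [62.0, 61.0, 60.0, 61.0]]
q = coverage_prob(scenarios, d_hat=59.0, v_hat=0.75)  # 2 of 3 scenarios qualify
```

A coverage-based constraint then simply requires q to exceed a prescribed probability level during optimization, rather than trading coverage off against other objectives.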
Gu, C; Rao, D C
1997-07-01
We are concerned here with practical issues in the application of extreme sib-pair (ESP) methods to quantitative traits. Two important factors, namely the way extreme trait values are defined and the proportions in which different types of ESPs are pooled in the analysis, are shown to determine the power and the cost effectiveness of a study design. We found that, in general, combining reasonable numbers of both extremely discordant and extremely concordant sib pairs that were available in the sample is more powerful and more cost effective than pursuing only a single type of ESP. We also found that dividing trait values with a less extreme threshold at one end or at both ends of the trait distribution leads to more cost-effective designs. The notion of generalized relative risk ratios (the lambda methods, as described in the first part of this series of two articles) is used to calculate the power and sample size for various choices of polychotomization of trait values and for the combination of different types of ESPs. A balance then can be struck among these choices, to attain an optimum design.
Vehicle Shield Optimization and Risk Assessment of Future NEO Missions
NASA Technical Reports Server (NTRS)
Nounu, Hatem, N.; Kim, Myung-Hee; Cucinotta, Francis A.
2011-01-01
Future human space missions target far destinations such as Near Earth Objects (NEO) or Mars that require an extended stay in hostile radiation environments in deep space. The continuous assessment of exploration vehicles is needed to iteratively optimize the designs for shielding protection and calculating the risks associated with such long missions. We use a predictive software capability that calculates the risks to humans inside a spacecraft. The software uses the CAD software Pro/Engineer and Fishbowl tool kit to quantify the radiation shielding properties of the spacecraft geometry by calculating the areal density seen at a certain point (the dose point) inside the spacecraft. The shielding results are used by NASA-developed software, BRYNTRN, to quantify the organ doses received in a human body located in the vehicle during possible solar particle events (SPEs) during such prolonged space missions. The organ doses are used to quantify the risks posed on the astronauts' health and life using NASA Space Cancer Model software. An illustration of the shielding optimization and risk calculation on an exploration vehicle design suitable for a NEO mission is provided in this study. The vehicle capsule is made of an aluminum shell and an airlock with hydrogen-rich carbon composite end caps. The capsule contains sets of racks that surround a working and living area. A water shelter is provided in the middle of the vehicle to enhance the shielding in case of SPE. The mass distribution is optimized to minimize radiation hotspots and an assessment of the risks associated with a NEO mission is calculated.
Requirements based system risk modeling
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Cornford, Steven; Feather, Martin
2004-01-01
The problem that we address in this paper is assessing the expected degree of success of the system or mission based on the degree to which each requirement is satisfied and the relative weight of the requirements. We assume a complete list of the requirements, the relevant risk elements and their probability of occurrence and the quantified effect of the risk elements on the requirements. In order to assess the degree to which each requirement is satisfied, we need to determine the effect of the various risk elements on the requirement.
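The weighting scheme described above can be sketched as follows: each requirement's degree of satisfaction is degraded by the risk elements that affect it, and the expected mission success is the weight-averaged satisfaction. The multiplicative independence assumption and all numbers below are illustrative, not taken from the paper.

```python
# Hedged sketch of requirements-based risk scoring (illustrative assumptions).

def expected_success(weights, risk_prob, effect):
    """weights: requirement -> relative weight (should sum to 1).
    risk_prob: risk element -> probability of occurrence.
    effect: requirement -> {risk element: fractional loss if the risk occurs}."""
    score = 0.0
    for req, w in weights.items():
        satisfaction = 1.0
        for risk, loss in effect.get(req, {}).items():
            # Expected degradation, assuming independent risk elements:
            satisfaction *= 1.0 - risk_prob[risk] * loss
        score += w * satisfaction
    return score

weights   = {"R1": 0.6, "R2": 0.4}
risk_prob = {"thermal": 0.1, "radiation": 0.2}
effect    = {"R1": {"thermal": 0.5}, "R2": {"radiation": 1.0}}
success   = expected_success(weights, risk_prob, effect)  # 0.6*0.95 + 0.4*0.8
```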
Game Theory and Risk-Based Levee System Design
NASA Astrophysics Data System (ADS)
Hui, R.; Lund, J. R.; Madani, K.
2014-12-01
Risk-based analysis has been developed for optimal levee design for economic efficiency. Along many rivers, two levees on opposite riverbanks act as a simple levee system. Being rational and self-interested, land owners on each river bank would tend to independently optimize their levees with risk-based analysis, resulting in a Pareto-inefficient levee system design from the social planner's perspective. Game theory is applied in this study to analyze the decision-making process in a simple levee system in which the land owners on each river bank develop their design strategies using risk-based economic optimization. For each land owner, the annual expected total cost includes expected annual damage cost and annualized construction cost. The non-cooperative Nash equilibrium is identified and compared to the social planner's optimal distribution of flood risk and damage cost throughout the system, which results in the minimum total flood cost for the system. The social planner's optimal solution is not feasible without an appropriate level of compensation for the transferred flood risk to guarantee and improve conditions for all parties. Therefore, cooperative game theory is then employed to develop an economically optimal design that can be implemented in practice. By examining the game in the reversible and irreversible decision making modes, the cost of decision-making myopia is calculated to underline the significance of considering the externalities and evolution path of dynamic water resource problems for optimal decision making.
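The Nash-versus-social-planner comparison above can be sketched numerically: each bank owner picks a levee height minimizing own annualized cost (construction plus expected damage), where a taller opposite levee transfers flood risk across the river. All coefficients below are illustrative assumptions, not calibrated to any real river.

```python
# Toy two-player levee game: best-response iteration vs. the social optimum.
HEIGHTS = range(16)                      # candidate levee heights (m)
C, D, BASE, K = 1.0, 100.0, 0.5, 0.1     # assumed cost/damage/risk parameters

def cost(h_own, h_other):
    """Annualized cost: construction + damage * flood probability; the opposite
    levee raises own flood probability by the factor (1 + K*h_other)."""
    return C * h_own + D * BASE * (1.0 + K * h_other) / (1.0 + h_own)

def best_response(h_other):
    return min(HEIGHTS, key=lambda h: cost(h, h_other))

# Sequential best-response iteration to a Nash equilibrium:
h1 = h2 = 0
for _ in range(50):
    h1 = best_response(h2)
    h2 = best_response(h1)
nash_total = cost(h1, h2) + cost(h2, h1)

# Social planner: minimize the system-wide total cost over all height pairs.
social_total = min(cost(a, b) + cost(b, a) for a in HEIGHTS for b in HEIGHTS)
```

In this toy setting the risk-transfer externality drives both owners to overbuild, so the Nash outcome is strictly costlier than the planner's solution, mirroring the Pareto inefficiency the abstract describes.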
Liu, Hai-Long; Wang, Jiang; Lin, Dai-Zong; Liu, Hong
2014-01-01
Idiosyncratic adverse drug reactions (IDR) induce severe medical complications or even death in patients. Alert structure in drugs can be metabolized as reactive metabolite (RM) in the bodies, which is one of the major factors to induce IDR. Structure modification and avoidance of alert structure in the drug candidates is an efficient method for reducing toxicity risks in drug design. This review briefly summarized the recent development of the methodologies for structure optimization strategy to reduce the toxicity risks of drug candidates. These methods include blocking metabolic site, altering metabolic pathway, reducing activity, bioisosterism, and prodrug.
Surrogate-based Analysis and Optimization
NASA Technical Reports Server (NTRS)
Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin
2005-01-01
A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
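The surrogate workflow described above — sample an expensive model at a few design points, fit a cheap approximation, and optimize the approximation instead — can be sketched in one dimension. The "expensive" objective here is an illustrative stand-in for a high-fidelity simulation.

```python
# Minimal 1-D surrogate-based optimization sketch (illustrative objective).

def expensive_model(x):
    """Stand-in for a costly high-fidelity simulation."""
    return (x - 2.0) ** 2 + 1.0

# Design of experiments: three sample points determine a quadratic exactly.
xs = [0.0, 1.0, 3.0]
ys = [expensive_model(x) for x in xs]

# Newton divided-difference fit: p(x) = c0 + c1*(x-x0) + c2*(x-x0)*(x-x1)
c0 = ys[0]
c1 = (ys[1] - ys[0]) / (xs[1] - xs[0])
c2 = ((ys[2] - ys[1]) / (xs[2] - xs[1]) - c1) / (xs[2] - xs[0])

# Minimizer of the quadratic surrogate (its vertex, valid since c2 > 0):
x_star = 0.5 * (xs[0] + xs[1]) - c1 / (2.0 * c2)
```

In practice the surrogate is refined iteratively: the expensive model is re-evaluated at x_star, the new sample is added to the design, and the fit-optimize cycle repeats until convergence.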
Optimization of vehicle deceleration to reduce occupant injury risks in frontal impact.
Mizuno, Koji; Itakura, Takuya; Hirabayashi, Satoko; Tanaka, Eiichi; Ito, Daisuke
2014-01-01
In vehicle frontal impacts, vehicle acceleration has a large effect on occupant loadings and injury risks. In this research, an optimal vehicle crash pulse was determined systematically to reduce injury measures of rear seat occupants by using mathematical simulations. The vehicle crash pulse was optimized based on a vehicle deceleration-deformation diagram under the conditions that the initial velocity and the maximum vehicle deformation were constant. Initially, a spring-mass model was used to understand the fundamental parameters for optimization. In order to investigate the optimization under a more realistic situation, the vehicle crash pulse was also optimized using a multibody model of a Hybrid III dummy seated in the rear seat for the objective functions of chest acceleration and chest deflection. A sled test using a Hybrid III dummy was carried out to confirm the simulation results. Finally, the optimal crash pulses determined from the multibody simulation were applied to a human finite element (FE) model. The optimized crash pulse to minimize the occupant deceleration had a concave shape: a high deceleration in the initial phase, low in the middle phase, and high again in the final phase. This crash pulse shape depended on the occupant restraint stiffness. The optimized crash pulse determined from the multibody simulation was comparable to that from the spring-mass model. From the sled test, it was demonstrated that the optimized crash pulse was effective for the reduction of chest acceleration. The crash pulse was also optimized for the objective function of chest deflection. The optimized crash pulse in the final phase was lower than that obtained for the minimization of chest acceleration. In the FE analysis of the human FE model, the optimized pulse for the objective function of the Hybrid III chest deflection was effective in reducing rib fracture risks. The optimized crash pulse has a concave shape and is dependent on the occupant restraint stiffness.
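The spring-mass idealization mentioned above can be sketched as a point-mass occupant coupled to the decelerating vehicle through a linear restraint spring; the crash pulse enters as the prescribed vehicle deceleration. The pulse shape, mass, and stiffness below are illustrative, not the paper's values.

```python
# Toy spring-mass occupant model under a constant vehicle crash pulse.
M, KR = 75.0, 4.0e4              # occupant mass (kg), restraint stiffness (N/m)
V0, A_V, DT = 15.0, 150.0, 1e-4  # impact speed (m/s), vehicle decel (m/s^2), step (s)

x_v, v_v = 0.0, V0               # vehicle position/velocity
x_o, v_o = 0.0, V0               # occupant position/velocity
peak_defl = peak_acc = 0.0

t = 0.0
while t < 0.3:                   # integrate 300 ms with explicit Euler
    a_vehicle = -A_V if v_v > 0.0 else 0.0  # constant pulse until the vehicle stops
    s = max(x_o - x_v, 0.0)                 # restraint loads only when occupant moves forward
    a_occ = -KR * s / M
    x_v += v_v * DT; v_v += a_vehicle * DT
    x_o += v_o * DT; v_o += a_occ * DT
    peak_defl = max(peak_defl, s)
    peak_acc = max(peak_acc, abs(a_occ))
    t += DT
```

Optimizing the pulse then means reshaping a_vehicle(t) over the fixed deformation to minimize the occupant's peak acceleration or restraint deflection; with a constant pulse, the occupant's peak loading substantially exceeds the vehicle deceleration, which is what the concave optimal pulse mitigates.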
Optimization-based Dynamic Human Lifting Prediction
2008-06-01
Mathai, Anith; Beck, Steve; Marler, Timothy; Yang, Jingzhou; Arora, Jasbir S.; Abdel-Malek, Karim (Virtual Soldier Research Program, Center for Computer Aided...)
Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki
2013-01-01
A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on an expectation over a sum of an indicator function, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
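The risk-allocation idea above can be sketched with a small example: a joint failure-probability budget is decomposed (via Boole's inequality) into per-stage risk bounds whose sum respects the budget, and the allocation is chosen to minimize cost. The logarithmic per-stage cost model below is an illustrative assumption that admits a closed-form Lagrangian solution, not the paper's EDL cost.

```python
# Sketch of risk allocation under an assumed per-stage cost a_i * (-ln delta_i).
# Lagrangian stationarity a_i/delta_i = lambda gives delta_i proportional to a_i.
import math

def allocate_risk(a, delta_total):
    """Optimal allocation for cost sum_i a_i*(-ln delta_i): delta_i ∝ a_i."""
    s = sum(a)
    return [delta_total * ai / s for ai in a]

def total_cost(a, deltas):
    return sum(ai * -math.log(d) for ai, d in zip(a, deltas))

a = [1.0, 3.0]                  # illustrative stage cost sensitivities
DELTA = 0.1                     # overall probability-of-failure budget
opt = allocate_risk(a, DELTA)   # -> [0.025, 0.075]
uniform = [DELTA / len(a)] * len(a)
```

Stages whose cost is more sensitive to the risk bound receive a larger share of the budget, which is exactly the advantage of risk allocation over a uniform (or arbitrary-penalty) treatment of failure probability.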
78 FR 43829 - Risk-Based Capital Guidelines; Market Risk
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-22
... CFR Parts 208 and 225 RIN 7100 AD-98 Risk-Based Capital Guidelines; Market Risk AGENCY: Board of... Governors of the Federal Reserve System (Board) proposes to revise its market risk capital rule (market risk... Organization for Economic Cooperation and Development (OECD), which are referenced in the Board's market risk...
78 FR 76521 - Risk-Based Capital Guidelines; Market Risk
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-18
... RIN 7100 AD-98 Risk-Based Capital Guidelines; Market Risk AGENCY: Board of Governors of the Federal...) is adopting a final rule that revises its market risk capital rule (market risk rule) to address... Cooperation and Development (OECD), which are referenced in the Board's market risk rule; to clarify the...
Parameter optimization toward optimal microneedle-based dermal vaccination.
van der Maaden, Koen; Varypataki, Eleni Maria; Yu, Huixin; Romeijn, Stefan; Jiskoot, Wim; Bouwstra, Joke
2014-11-20
Microneedle-based vaccination has several advantages over vaccination by using conventional hypodermic needles. Microneedles are used to deliver a drug into the skin in a minimally-invasive and potentially pain free manner. Moreover, the skin is a potent immune organ that is highly suitable for vaccination. However, there are several factors that influence the penetration ability of the skin by microneedles and the immune responses upon microneedle-based immunization. In this study we assessed several different microneedle arrays for their ability to penetrate ex vivo human skin by using trypan blue and (fluorescently or radioactively labeled) ovalbumin. Next, these different microneedles and several factors, including the dose of ovalbumin, the effect of using an impact-insertion applicator, skin location of microneedle application, and the area of microneedle application, were tested in vivo in mice. The penetration ability and the dose of ovalbumin that is delivered into the skin were shown to be dependent on the use of an applicator and on the microneedle geometry and size of the array. Besides microneedle penetration, these factors influenced the immune responses upon microneedle-based vaccination in vivo. It was shown that the ovalbumin-specific antibody responses upon microneedle-based vaccination could be increased up to 12-fold when an impact-insertion applicator was used, up to 8-fold when microneedles were applied over a larger surface area, and up to 36-fold dependent on the location of microneedle application. Therefore, these influencing factors should be considered to optimize microneedle-based dermal immunization technologies.
Risk-based Spacecraft Fire Safety Experiments
NASA Technical Reports Server (NTRS)
Apostolakis, G.; Catton, I.; Issacci, F.; Paulos, T.; Jones, S.; Paxton, K.; Paul, M.
1992-01-01
Viewgraphs on risk-based spacecraft fire safety experiments are presented. Spacecraft fire risk can never be reduced to a zero probability. Probabilistic risk assessment is a tool to reduce risk to an acceptable level.
Shape optimization of pulsatile ventricular assist devices using FSI to minimize thrombotic risk
NASA Astrophysics Data System (ADS)
Long, C. C.; Marsden, A. L.; Bazilevs, Y.
2014-10-01
In this paper we perform shape optimization of a pediatric pulsatile ventricular assist device (PVAD). The device simulation is carried out using fluid-structure interaction (FSI) modeling techniques within a computational framework that combines FEM for fluid mechanics and isogeometric analysis for structural mechanics modeling. The PVAD FSI simulations are performed under realistic conditions (i.e., flow speeds, pressure levels, boundary conditions, etc.), and account for the interaction of air, blood, and a thin structural membrane separating the two fluid subdomains. The shape optimization study is designed to reduce thrombotic risk, a major clinical problem in PVADs. Thrombotic risk is quantified in terms of particle residence time in the device blood chamber. Methods to compute particle residence time in the context of moving spatial domains are presented in a companion paper published in the same issue (Comput Mech, doi: 10.1007/s00466-013-0931-y, 2013). The surrogate management framework, a derivative-free pattern search optimization method that relies on surrogates for increased efficiency, is employed in this work. For the optimization study shown here, particle residence time is used to define a suitable cost or objective function, while four adjustable design optimization parameters are used to define the device geometry. The FSI-based optimization framework is implemented in a parallel computing environment, and deployed with minimal user intervention. Using five SEARCH/POLL steps the optimization scheme identifies a PVAD design with significantly better throughput efficiency than the original device.
Hatjimihail, Aristides T
2009-06-09
An open problem in clinical chemistry is the estimation of the optimal sampling time intervals for the application of statistical quality control (QC) procedures that are based on the measurement of control materials. This is a probabilistic risk assessment problem that requires reliability analysis of the analytical system, and the estimation of the risk caused by the measurement error. Assuming that the states of the analytical system are the reliability state, the maintenance state, the critical-failure modes and their combinations, we can define risk functions based on the mean time of the states, their measurement error and the medically acceptable measurement error. Consequently, a residual risk measure rr can be defined for each sampling time interval. The rr depends on the state probability vectors of the analytical system, the state transition probability matrices before and after each application of the QC procedure and the state mean time matrices. The optimal sampling time intervals can be defined as those that minimize a QC-related cost measure while keeping the rr acceptable. I developed an algorithm that estimates the rr for any QC sampling time interval of a QC procedure applied to analytical systems with an arbitrary number of critical-failure modes, assuming any failure time and measurement error probability density function for each mode. Furthermore, given the acceptable rr, it can estimate the optimal QC sampling time intervals. It is possible to rationally estimate the optimal QC sampling time intervals of an analytical system to sustain an acceptable residual risk with the minimum QC-related cost. For the optimization, the reliability analysis of the analytical system and the risk analysis of the measurement error are needed.
Huxley, Rachel R.; Lopez, Faye L.; Folsom, Aaron R.; Agarwal, Sunil K.; Loehr, Laura R.; Soliman, Elsayed Z.; Maclehose, Rich; Konety, Suma; Alonso, Alvaro
2011-01-01
Background Atrial fibrillation (AF) is an important risk factor for stroke and overall mortality but information about the preventable burden of AF is lacking. The aim of this study was to determine what proportion of the burden of AF in African-Americans and whites could theoretically be avoided by the maintenance of an optimal risk profile. Methods and Results This study included 14,598 middle-aged, Atherosclerosis Risk in Communities Study cohort members. Previously established AF risk factors, namely high blood pressure, elevated body mass index, diabetes, cigarette smoking, and prior cardiac disease, were categorized into 'optimal', 'borderline' and 'elevated' levels. Based on their risk factor levels, individuals were classified into one of these three groups. The population attributable fraction of AF due to having a non-optimal risk profile was estimated separately for African-American and white men and women. During a mean follow-up of 17.1 years, 1520 cases of incident AF were identified. The age-adjusted incidence rates were highest in white men and lowest in African-American women (7.45 and 3.67 per 1000 person-years, respectively). The overall prevalence of an optimal risk profile was 5.4% but this varied according to race and gender: 10% in white women versus 1.6% in African-American men. Overall, 56.5% of AF cases could be explained by having ≥ 1 elevated risk factors, of which elevated blood pressure was the most important contributor. Conclusions As with other forms of cardiovascular disease, more than half of the AF burden is potentially avoidable through the optimization of cardiovascular risk factor levels. PMID:21444879
The Role of Text Messaging in Cardiovascular Risk Factor Optimization.
Klimis, Harry; Khan, Mohammad Ehsan; Kok, Cindy; Chow, Clara K
2017-01-01
Many cases of cardiovascular disease (CVD) may be avoidable through lowering behavioural risk factors such as smoking and physical inactivity. Mobile health (mHealth) provides a novel opportunity to deliver cardiovascular prevention programs in a format that is potentially scalable. Here, we provide an overview of text messaging-based mHealth interventions in cardiovascular prevention. Text messaging-based interventions appear effective on a range of behavioural risk factors and can effect change on multiple risk factors (e.g., smoking, weight, blood pressure) simultaneously. For many texting studies, there are challenges in interpretation, as many texting interventions are part of larger complex interventions, making it difficult to determine the benefits of the separate components. Whilst there is evidence for text messaging improving cardiovascular risk factor levels in the short-term, future studies are needed to examine the durability of these effects and whether they can be translated to improvements in clinical care and outcomes.
Isotretinoin Oil-Based Capsule Formulation Optimization
Tsai, Pi-Ju; Huang, Chi-Te; Lee, Chen-Chou; Li, Chi-Lin; Huang, Yaw-Bin; Tsai, Yi-Hung; Wu, Pao-Chu
2013-01-01
The purpose of this study was to develop and optimize an isotretinoin oil-based capsule with a specific dissolution pattern. A three-factor-constrained mixture design was used to prepare the systemic model formulations. The independent factors were the components of the oil-based capsule, including beeswax (X1), hydrogenated coconut oil (X2), and soybean oil (X3). The drug release percentages at 10, 30, 60, and 90 min were selected as responses. The effect of the formulation factors on the responses was examined by using response surface methodology (RSM). Multiple-response optimization was performed to search for the appropriate formulation with a specific release pattern. It was found that the interaction effect of these formulation factors (X1X2, X1X3, and X2X3) showed more potential influence than that of the main factors (X1, X2, and X3). An optimal predicted formulation with Y10 min, Y30 min, Y60 min, and Y90 min release values of 12.3%, 36.7%, 73.6%, and 92.7% at X1, X2, and X3 of 5.75, 15.37, and 78.88, respectively, was developed. The new formulation was prepared and subjected to the dissolution test. The similarity factor f2 was 54.8, indicating that the dissolution pattern of the new optimized formulation showed equivalence to the predicted profile. PMID:24068886
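The similarity factor f2 used above to compare dissolution profiles has a standard closed form, f2 = 50·log10(100 / sqrt(1 + (1/n)·Σ(R_t − T_t)²)), where R and T are percent released at matched time points and f2 ≥ 50 indicates equivalence. A small sketch (the "measured" profile below is a hypothetical illustration, not data from the paper):

```python
# Similarity factor f2 for comparing two dissolution profiles (% released).
import math

def f2(reference, test):
    n = len(reference)
    msd = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

predicted = [12.3, 36.7, 73.6, 92.7]   # predicted optimal profile from the abstract
measured  = [15.0, 40.0, 70.0, 90.0]   # hypothetical measured profile
score = f2(predicted, measured)
```

Identical profiles give f2 = 100, and the score falls as the mean squared difference between profiles grows.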
Functional approximation and optimal specification of the mechanical risk index.
Kaiser, Mark J; Pulsipher, Allan G
2005-10-01
The mechanical risk index (MRI) is a numerical measure that quantifies the complexity of drilling a well. The purpose of this article is to examine the role of the component factors of the MRI and its structural and parametric assumptions. A meta-modeling methodology is applied to derive functional expressions of the MRI, and it is shown that the MRI can be approximated in terms of a linear functional. The variation between the MRI measure and its functional specification is determined empirically, and for a reasonable design space, the functional specification is shown to be a good approximating representation. A drilling risk index is introduced to quantify the uncertainty in the time and cost associated with drilling a well. A general methodology is outlined to create an optimal MRI specification.
LP based approach to optimal stable matchings
Teo, Chung-Piaw; Sethuraman, J.
1997-06-01
We study the classical stable marriage and stable roommates problems using a polyhedral approach. We propose a new LP formulation for the stable roommates problem. This formulation is non-empty if and only if the underlying roommates problem has a stable matching. Furthermore, for certain special weight functions on the edges, we construct a 2-approximation algorithm for the optimal stable roommates problem. Our technique uses a crucial geometry of the fractional solutions in this formulation. For the stable marriage problem, we show that a related geometry allows us to express any fractional solution in the stable marriage polytope as a convex combination of stable marriage solutions. This leads to a genuinely simple proof of the integrality of the stable marriage polytope. Based on these ideas, we devise a heuristic to solve the optimal stable roommates problem. The heuristic combines the power of rounding and cutting-plane methods. We present some computational results based on preliminary implementations of this heuristic.
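The paper works with LP formulations; as a runnable companion to the stable marriage problem it studies, here is the classical Gale-Shapley deferred-acceptance algorithm (not the paper's LP technique) together with a blocking-pair stability check. The preference lists are illustrative.

```python
# Gale-Shapley deferred acceptance for stable marriage, plus a stability check.

def gale_shapley(men_prefs, women_prefs):
    free = list(men_prefs)                       # proposing side
    next_choice = {m: 0 for m in men_prefs}
    engaged = {}                                 # woman -> man
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:   # w prefers m to current partner
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                       # w rejects m; he proposes again
    return {m: w for w, m in engaged.items()}

def is_stable(match, men_prefs, women_prefs):
    rank_w = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    partner_of_w = {w: m for m, w in match.items()}
    for m, prefs in men_prefs.items():
        for w in prefs[:prefs.index(match[m])]:  # women m prefers to his match
            if rank_w[w][m] < rank_w[w][partner_of_w[w]]:
                return False                     # blocking pair (m, w) found
    return True

men   = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
matching = gale_shapley(men, women)
```

The integrality result cited in the abstract guarantees that an LP over the stable marriage polytope always yields such integral stable matchings; deferred acceptance is the combinatorial way to produce one.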
Optimization-based controller design for rotorcraft
NASA Technical Reports Server (NTRS)
Tsing, N.-K.; Fan, M. K. H.; Barlow, J.; Tits, A. L.; Tischler, M. B.
1993-01-01
An optimization-based methodology for linear control system design is outlined by considering the design of a controller for a UH-60 rotorcraft in hover. A wide range of design specifications is taken into account: internal stability, decoupling between longitudinal and lateral motions, handling qualities, and rejection of windgusts. These specifications are investigated while taking into account physical limitations in the swashplate displacements and rates of displacement. The methodology crucially relies on user-machine interaction for tradeoff exploration.
Optimal interference code based on machine learning
NASA Astrophysics Data System (ADS)
Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua
2016-10-01
In this paper, we analyze the characteristics of pseudo-random codes, using the m sequence as a case study. Drawing on coding theory, we introduce the jamming methods and simulate the interference effect and its probability model in MATLAB. Based on the length of time the adversary needs to decode, we derive the optimal formula and optimal coefficients by machine learning and obtain a new optimal interference code. In the recognition phase, the study judges the effect of interference by simulating the decoding time of the laser seeker over the decoding period. In the tracking phase, we simulate the interference process using laser active deception jamming, and the model is again simulated in MATLAB to improve interference performance. We determine the least number of pulse intervals that must be received, from which the precise interval number of the laser pointer for m-sequence encoding can be concluded. To find the shortest interval, we use the greatest-common-divisor method; combining this with the coding regularity found earlier, we restore the pulse intervals of the pseudo-random code that has been received. Finally, we can control the time period of the laser interference, obtain the optimal interference code, and increase the probability of successful interference.
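The m sequence at the center of this analysis is generated by a maximal-length linear-feedback shift register (LFSR). A minimal sketch, with the primitive polynomial x⁴ + x³ + 1 and seed chosen for illustration: the period is 2⁴ − 1 = 15 and one period exhibits the balance property (eight 1s, seven 0s).

```python
# Fibonacci LFSR generating one period of an m sequence (poly x^4 + x^3 + 1).

def m_sequence(taps=(3, 2), seed=(1, 0, 0, 0)):
    """Return one full period of the LFSR output (taps are 0-based indices)."""
    state = list(seed)
    out, seen = [], set()
    while tuple(state) not in seen:              # stop when the state cycle closes
        seen.add(tuple(state))
        out.append(state[-1])                    # output the last stage
        fb = state[taps[0]] ^ state[taps[1]]     # feedback for x^4 + x^3 + 1
        state = [fb] + state[:-1]                # shift right, feed back on the left
    return out

seq = m_sequence()
period = len(seq)
```

It is exactly this regularity (fixed period, balance, known feedback structure) that lets a receiver restore the pulse intervals of an intercepted pseudo-random code, as the abstract describes.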
Base distance optimization for SQUID gradiometers
Garachtchenko, A.; Matlashov, A.; Kraus, R.
1998-12-31
The measurement of magnetic fields generated by weak nearby biomagnetic sources is affected by ambient noise generated by distant sources both internal and external to the subject under study. External ambient noise results from sources with numerous origins, many of which are unpredictable in nature. Internal noise sources are biomagnetic in nature and result from muscle activity (such as the heart, eye blinks, respiration, etc.), pulsation associated with blood flow, surgical implants, etc. Any magnetic noise will interfere with measurements of magnetic sources of interest, such as magnetoencephalography (MEG), in various ways. One of the most effective methods of reducing the magnetic noise measured by the SQUID sensor is to use properly designed superconducting gradiometers. Here, the authors optimized the baseline length of SQUID-based symmetric axial gradiometers using computer simulation. The signal-to-noise ratio (SNR) was used as the optimization criterion. They found that in most cases the optimal baseline is not equal to the depth of the primary source; rather, it has a more complex dependence on the gradiometer balance and the ambient magnetic noise. They studied both first- and second-order gradiometers in simulated shielded environments and only second-order gradiometers in a simulated unshielded environment. The noise source was simulated as a distant dipolar source for the shielded cases. They present optimal gradiometer baseline lengths for the various simulated situations below.
Fuzzy based risk register for construction project risk assessment
NASA Astrophysics Data System (ADS)
Kuchta, Dorota; Ptaszyńska, Ewa
2017-07-01
The paper presents a fuzzy-based risk register used to identify risks that appear in construction projects and to assess their attributes. Risk is considered here as a possible event with negative consequences for the project [4]. We use different risk attributes in the proposed risk register, with the values of the risk attributes generated by using fuzzy numbers. Specific risk attributes have different importance for project managers of construction projects; to compare them, we use methods of fuzzy-number ranking. The main strengths of the proposed concept in managing construction projects are also presented in the paper.
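The fuzzy-number ranking step described above can be sketched with triangular fuzzy numbers (a, b, c) ranked by their centroid (a + b + c)/3. Both the centroid method and the example risk values are illustrative assumptions; the paper compares several fuzzy-number ranking methods.

```python
# Minimal fuzzy risk-register sketch: triangular fuzzy severities ranked by centroid.

def centroid(tfn):
    a, b, c = tfn
    return (a + b + c) / 3.0

def rank_risks(risks):
    """risks: name -> triangular fuzzy severity (a, b, c); returns names, worst first."""
    return sorted(risks, key=lambda name: centroid(risks[name]), reverse=True)

risks = {
    "ground conditions": (4.0, 6.0, 8.0),   # centroid 6.0
    "design changes":    (2.0, 5.0, 7.0),   # centroid ~4.67
    "weather delays":    (1.0, 3.0, 5.0),   # centroid 3.0
}
order = rank_risks(risks)
```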
Optimal perfusion during cardiopulmonary bypass: an evidence-based approach.
Murphy, Glenn S; Hessel, Eugene A; Groom, Robert C
2009-05-01
In this review, we summarize the best available evidence to guide the conduct of adult cardiopulmonary bypass (CPB) to achieve "optimal" perfusion. At present, there is considerable controversy about the appropriate management of physiologic variables during CPB. Low-risk patients tolerate mean arterial blood pressures of 50-60 mm Hg without apparent complications, although limited data suggest that higher-risk patients may benefit from mean arterial blood pressures >70 mm Hg. The optimal hematocrit on CPB has not been defined, with large database investigations demonstrating that both severe hemodilution and transfusion of packed red blood cells increase the risk of adverse postoperative outcomes. Oxygen delivery is determined by the pump flow rate and the arterial oxygen content; organ injury may therefore be prevented during more severe hemodilutional anemia by increasing pump flow rates. Furthermore, the optimal temperature during CPB likely varies with physiologic goals, and recent data suggest that aggressive rewarming practices may contribute to neurologic injury. The design of the components of the CPB circuit may also influence tissue perfusion and outcomes. Although there are theoretical advantages of centrifugal blood pumps over roller pumps, it has been difficult to demonstrate that the use of centrifugal pumps improves clinical outcomes. Heparin coating of the CPB circuit may attenuate inflammatory and coagulation pathways but has not been clearly demonstrated to reduce major morbidity and mortality. Similarly, no distinct clinical benefits have been observed when open venous reservoirs have been compared with closed systems. In conclusion, there are currently limited data upon which to confidently base strong recommendations on how to conduct optimal CPB. There is a critical need for randomized trials assessing clinically significant outcomes, particularly in high-risk patients.
EUD-based biological optimization for carbon ion therapy
Brüningk, Sarah C.; Kamp, Florian; Wilkens, Jan J.
2015-11-15
therapy, the optimization by biological objective functions resulted in slightly superior treatment plans in terms of final EUD for the organs at risk (OARs) compared to voxel-based optimization approaches. This observation was made independent of the underlying objective function metric. An absolute gain in OAR sparing was observed for quadratic objective functions, whereas intersecting DVHs were found for logistic approaches. Even for considerable under- or overestimation of the effect or dose-volume parameters used during optimization, the resulting treatment plans were of similar quality to those from voxel-based optimization. Conclusions: EUD-based optimization with either of the presented concepts can be applied successfully to treatment plan optimization. This makes EUD-based optimization for carbon ion therapy a useful tool for optimizing more directly for biological outcome while still properly accounting for voxel-to-voxel variations of the biological effectiveness. This may be advantageous in terms of computational cost during treatment plan optimization and also enables a straightforward comparison of different fractionation schemes or treatment modalities.
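For orientation, the generalized EUD that such objective functions build on can be computed in a few lines. This is a minimal sketch of the standard dose-based gEUD; the paper's effect-based EUD with voxel-dependent biological effectiveness is considerably more involved:

```python
import numpy as np

def gEUD(doses, a):
    """Generalized equivalent uniform dose of a voxelized dose array.
    a = 1 gives the mean dose; large a approaches the maximum dose,
    as appropriate for serial organs at risk."""
    d = np.asarray(doses, dtype=float)
    return np.mean(d ** a) ** (1.0 / a)

oar_dose = [10.0, 20.0, 30.0, 40.0]
print(gEUD(oar_dose, 1))  # mean dose: 25.0
print(gEUD(oar_dose, 8))  # pulled toward the 40.0 hot spot
```

A quadratic OAR objective of the kind compared in the abstract could then penalize, say, (gEUD - EUD_max)^2 whenever a tolerance EUD_max is exceeded.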
Assessment of Medical Risks and Optimization of their Management using Integrated Medical Model
NASA Technical Reports Server (NTRS)
Fitts, Mary A.; Madurai, Siram; Butler, Doug; Kerstman, Eric; Risin, Diana
2008-01-01
The Integrated Medical Model (IMM) Project is a software-based technique that will identify and quantify the medical needs and health risks of exploration crew members during space flight and evaluate the effectiveness of potential mitigation strategies. The IMM Project employs an evidence-based approach that will quantify the probability and consequences of defined in-flight medical risks, mitigation strategies, and tactics to optimize crew member health. Using stochastic techniques, the IMM will ultimately inform decision makers at both programmatic and institutional levels and will enable objective assessment of crew health and optimization of mission success using data from relevant cohort populations and from the astronaut population. The objectives of the project include: 1) identification and documentation of conditions that may occur during exploration missions (the Baseline Medical Conditions List, BMCL); 2) assessment of the likelihood of conditions in the BMCL occurring during exploration missions (incidence rate); 3) determination of the risk associated with these conditions, quantified in terms of end states (Loss of Crew, Loss of Mission, Evacuation); 4) optimization of in-flight hardware mass, volume, power, bandwidth, and cost for a given level of risk or uncertainty; and 5) validation of the methodologies used.
Cancer risk assessment: Optimizing human health through linear dose-response models.
Calabrese, Edward J; Shamoun, Dima Yazji; Hanekamp, Jaap C
2015-07-01
This paper proposes that generic cancer risk assessments be based on the integration of the linear non-threshold (LNT) and hormetic dose-response models, since optimal hormetic beneficial responses are estimated to occur at the dose associated with a 10^-4 risk level based on the use of an LNT model as applied to animal cancer studies. Adoption of the 10^-4 risk estimate provides a theoretical and practical integration of two competing risk assessment models whose predictions cannot be validated in human population studies or with standard chronic animal bioassay data. This model integration offers both substantial protection of the population from cancer effects (i.e., the functional utility of the LNT model) and the possibility of significant reductions in cancer incidence should the hormetic dose-response model predictions be correct. The dose yielding the 10^-4 cancer risk therefore yields the optimized, toxicologically based "regulatory sweet spot". Copyright © 2015 Elsevier Ltd. All rights reserved.
Smell Detection Agent Based Optimization Algorithm
NASA Astrophysics Data System (ADS)
Vinod Chandra, S. S.
2016-09-01
In this paper, a novel nature-inspired optimization algorithm is proposed in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves creation of a surface with smell trails and subsequent iteration of the agents to resolve a path. It can be applied to computational problems that involve path finding, and its implementation can be treated as a shortest-path solver for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for NP-hard problems related to path discovery, as well as for many practical optimization problems, and its derivation can be extended to general shortest-path problems.
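A single-agent toy version of the idea can be sketched as follows. This is a simplification for illustration (the paper uses many interacting agents on a smell surface): "smell" is diffused outward from the target, and an agent then greedily follows the strengthening scent.

```python
from collections import deque

def smell_field(graph, target):
    """Breadth-first 'smell' diffusion: lower value = stronger smell."""
    smell = {target: 0}
    queue = deque([target])
    while queue:
        node = queue.popleft()
        for nb in graph[node]:
            if nb not in smell:
                smell[nb] = smell[node] + 1
                queue.append(nb)
    return smell

def agent_path(graph, start, target):
    """The agent repeatedly steps to the neighbour smelling most strongly."""
    smell = smell_field(graph, target)
    path = [start]
    while path[-1] != target:
        path.append(min(graph[path[-1]], key=smell.get))
    return path

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
         "D": ["B", "C", "E"], "E": ["D"]}
print(agent_path(graph, "A", "E"))  # a shortest path: ['A', 'B', 'D', 'E']
```

Because the diffusion step assigns each node its hop distance to the target, greedy scent-following recovers a shortest path, which is why the abstract can treat the algorithm as a shortest-path solver.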
Optimization of Seismicity-Based Forecasts
NASA Astrophysics Data System (ADS)
Tiampo, Kristy F.; Shcherbakov, Robert
2013-01-01
In this paper, the extent to which seismicity-based earthquake forecasting methods can be improved is examined. Two methods that use the statistics and locations of past smaller earthquakes to forecast the locations of future large earthquakes, the pattern informatics (PI) index and the Benioff relative intensity (RI), are employed for both global and regional forecasting. Two approaches to forecast parameter estimation, the TM metric and threshold optimization, are applied to these methods and the results evaluated. Application of the TM metric allows estimation of both the training and forecast time intervals as well as the minimum magnitude cutoff and spatial discretization. The threshold optimization scheme is employed to formulate a binary forecast that maximizes the Peirce skill score. The combined application of these techniques successfully forecasts the large events that occurred in Haiti, Chile, and California in 2010, on both global and regional scales.
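The Peirce skill score that the threshold optimization maximizes is computed from the binary-forecast contingency table; the counts below are hypothetical:

```python
def peirce_skill_score(hits, false_alarms, misses, correct_negatives):
    """Peirce skill score (true skill statistic): hit rate minus
    false-alarm rate; 1 is a perfect forecast, 0 is no skill."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate - false_alarm_rate

print(peirce_skill_score(hits=8, false_alarms=20,
                         misses=2, correct_negatives=170))
# 0.8 - 20/190, about 0.695
```

Threshold optimization then amounts to scanning candidate alarm thresholds and keeping the one whose resulting contingency table maximizes this score.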
Risk analysis of heat recovery steam generator with semi quantitative risk based inspection API 581
NASA Astrophysics Data System (ADS)
Prayogo, Galang Sandy; Haryadi, Gunawan Dwi; Ismail, Rifky; Kim, Seon Jin
2016-04-19
Corrosion is a major problem that most often occurs in power plants. The heat recovery steam generator (HRSG) is a piece of equipment that poses a high risk to the power plant: corrosion damage can force the HRSG, and hence the plant, to stop operating, and it can also threaten the safety of employees. The Risk Based Inspection (RBI) guidelines of the American Petroleum Institute (API) 581 were used for the risk analysis of HRSG 1. With this methodology, the risk caused by unexpected failure can be estimated as a function of the probability and consequence of failure. This paper presents a case study of risk analysis in the HRSG, starting with a summary of the basic principles and procedures of risk assessment and applying corrosion RBI for process industries. The risk level of each HRSG component was analyzed: the HP superheater has a medium-high risk (4C), the HP evaporator a medium-high risk (4C), and the HP economizer a medium risk (3C). The semi-quantitative risk assessment following the API 581 standard thus places the existing equipment at medium risk; in fact, there is no critical problem in the equipment components. The damage mechanism prominent throughout the equipment is thinning. The evaluation of the risk approach was carried out with the aim of reducing risk by optimizing the risk assessment activities.
Risk-based planning analysis for a single levee
NASA Astrophysics Data System (ADS)
Hui, Rui; Jachens, Elizabeth; Lund, Jay
2016-04-01
Traditional risk-based analysis for levee planning focuses primarily on overtopping failure. Although many levees fail before overtopping, few planning studies explicitly include intermediate geotechnical failures in flood risk analysis. This study develops a risk-based model for two simplified levee failure modes: overtopping failure and overall intermediate geotechnical failure from through-seepage, determined by the levee cross section represented by levee height and crown width. Overtopping failure is based only on water level and levee height, while through-seepage failure depends on many geotechnical factors as well, mathematically represented here as a function of levee crown width using levee fragility curves developed from professional judgment or analysis. These levee planning decisions are optimized to minimize the annual expected total cost, which sums expected (residual) annual flood damage and annualized construction costs. Applicability of this optimization approach to planning new levees or upgrading existing levees is demonstrated preliminarily for a levee on a small river protecting agricultural land, and a major levee on a large river protecting a more valuable urban area. Optimized results show higher likelihood of intermediate geotechnical failure than overtopping failure. The effects of uncertainty in levee fragility curves, economic damage potential, construction costs, and hydrology (changing climate) are explored. Optimal levee crown width is more sensitive to these uncertainties than height, while the derived general principles and guidelines for risk-based optimal levee planning remain the same.
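The optimization described above can be caricatured as a two-variable grid search. Every number below (the failure curves, damage potential, unit cost, and capital recovery factor) is an invented placeholder, not a value from the study:

```python
import math

def annual_expected_cost(height, width, damage=5e7,
                         unit_cost=1e5, crf=0.05):
    """Annualized construction cost plus expected annual flood damage."""
    p_overtop = math.exp(-height)            # toy overtopping exceedance
    p_seep = 0.3 * math.exp(-0.5 * width)    # toy fragility vs crown width
    p_fail = p_overtop + (1 - p_overtop) * p_seep
    construction = crf * unit_cost * height * width
    return construction + p_fail * damage

# Grid search over levee height and crown width in 0.5-unit steps.
best = min(((h / 2, w / 2) for h in range(2, 21) for w in range(2, 41)),
           key=lambda hw: annual_expected_cost(*hw))
print(best)
```

Even in this toy setting the optimized cross section leaves through-seepage more likely than overtopping, echoing the paper's qualitative finding that intermediate geotechnical failure is the dominant residual mode.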
White, Melanie J; Cunningham, Lauren C; Titchener, Kirsteen
2011-07-01
This study aimed to determine whether two brief, low cost interventions would reduce young drivers' optimism bias for their driving skills and accident risk perceptions. This tendency for such drivers to perceive themselves as more skillful and less prone to driving accidents than their peers may lead to less engagement in precautionary driving behaviours and a greater engagement in more dangerous driving behaviour. 243 young drivers (aged 17-25 years) were randomly allocated to one of three groups: accountability, insight or control. All participants provided both overall and specific situation ratings of their driving skills and accident risk relative to a typical young driver. Prior to completing the questionnaire, those in the accountability condition were first advised that their driving skills and accident risk would be later assessed via a driving simulator. Those in the insight condition first underwent a difficult computer-based hazard perception task designed to provide participants with insight into their potential limitations when responding to hazards in difficult and unpredictable driving situations. Participants in the control condition completed only the questionnaire. Results showed that the accountability manipulation was effective in reducing optimism bias in terms of participants' comparative ratings of their accident risk in specific situations, though only for less experienced drivers. In contrast, among more experienced males, participants in the insight condition showed greater optimism bias for overall accident risk than their counterparts in the accountability or control groups. There were no effects of the manipulations on drivers' skills ratings. The differential effects of the two types of manipulations on optimism bias relating to one's accident risk in different subgroups of the young driver sample highlight the importance of targeting interventions for different levels of experience. Accountability interventions may be beneficial for
Pixel-based OPC optimization based on conjugate gradients.
Ma, Xu; Arce, Gonzalo R
2011-01-31
Optical proximity correction (OPC) methods are resolution enhancement techniques (RETs) used extensively in the semiconductor industry to improve the resolution and pattern fidelity of optical lithography. In pixel-based OPC (PBOPC), the mask is divided into small pixels, each of which is modified during the optimization process. Two critical issues in PBOPC are the computational complexity of the optimization process and the manufacturability of the optimized mask. Most current OPC optimization methods apply the steepest descent (SD) algorithm to improve image fidelity, augmented by regularization penalties to reduce the complexity of the mask. Although simple to implement, the SD algorithm converges slowly. The existing regularization penalties, moreover, fall short of meeting the mask rule check (MRC) requirements often used in semiconductor manufacturing. This paper focuses on developing OPC optimization algorithms based on the conjugate gradient (CG) method, which exhibits much faster convergence than the SD algorithm. The image formation process is represented by a Fourier series expansion model that approximates the partially coherent system as a sum of coherent systems. To obtain more desirable manufacturability properties of the mask pattern, an MRC penalty is proposed to enlarge the linear size of the sub-resolution assist features (SRAFs), as well as the distances between the SRAFs and the main body of the mask. Finally, a projection method is developed to further reduce the complexity of the optimized mask pattern.
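The CG machinery underlying the approach is standard. As a hedged illustration, here is linear CG on a small quadratic; the actual PBOPC cost is a nonquadratic image-fidelity functional, so the paper's optimizer is a nonlinear CG variant with line search:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=50):
    """Linear CG for min 0.5 x^T A x - b^T x with A symmetric
    positive definite, i.e. solving A x = b."""
    x = x0.astype(float).copy()
    r = b - A @ x              # residual, also the negative gradient
    d = r.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)
        x += alpha * d
        r_new = r - alpha * Ad
        d = r_new + ((r_new @ r_new) / (r @ r)) * d   # conjugate direction
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b, np.zeros(2))
print(x)  # solves A x = b, here in at most 2 CG steps
```

The conjugacy of successive search directions is what gives CG its speed advantage over SD on ill-conditioned problems, which is the motivation the abstract cites.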
Man, Jun; Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao; Wu, Laosheng
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
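As one concrete example of the information metrics named above, the relative entropy between Gaussian prior and posterior parameter distributions has a closed form. The two-parameter numbers below are hypothetical, chosen only to show a design that sharpens the posterior:

```python
import numpy as np

def gaussian_relative_entropy(mu_post, cov_post, mu_prior, cov_prior):
    """Relative entropy (KL divergence) D(posterior || prior) between
    Gaussians: the information a candidate design is expected to add."""
    k = len(mu_prior)
    inv_prior = np.linalg.inv(cov_prior)
    diff = mu_post - mu_prior
    return 0.5 * (np.trace(inv_prior @ cov_post)
                  + diff @ inv_prior @ diff - k
                  + np.log(np.linalg.det(cov_prior) / np.linalg.det(cov_post)))

# Hypothetical 2-parameter case: the design halves each posterior
# standard deviation and shifts one mean.
prior_mu, prior_cov = np.zeros(2), np.eye(2)
post_mu, post_cov = np.array([0.5, 0.0]), 0.25 * np.eye(2)
print(gaussian_relative_entropy(post_mu, post_cov, prior_mu, prior_cov))
```

A sequential design loop of the kind proposed would evaluate this quantity (estimated from the ensemble statistics) for each candidate measurement location and pick the maximizer.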
Wave based optimization of distributed vibration absorbers
NASA Astrophysics Data System (ADS)
Johnson, Marty; Batton, Brad
2005-09-01
The concept of distributed vibration absorbers, or DVAs, has been investigated in recent years as a method of vibration control and sound radiation control for large flexible structures. These devices are composed of a distributed compliant layer with a distributed mass layer. When such a device is placed onto a structure it forms a sandwich-panel configuration with a very soft core. In this configuration the main effect of the DVA is to create forces normal to the surface of the structure; it can be used at low frequencies either to add damping, where constrained-layer damping treatments are not very effective, or to pin the structure over a narrow frequency bandwidth (i.e., the large-input-impedance/vibration-absorber approach). This paper analyzes the behavior of these devices using a wave-based approach and finds an optimal damping level for the control of broadband disturbances in panels. The optimal design is calculated by solving the differential equations for waves propagating in coupled plates. It is shown that the optimal damping calculated for the infinite case acts as a good "rule of thumb" for designing DVAs to control the vibration of finite panels. This is borne out in both numerical simulations and experiments.
GPU-based ultrafast IMRT plan optimization.
Men, Chunhua; Gu, Xuejun; Choi, Dongju; Majumdar, Amitava; Zheng, Ziyi; Mueller, Klaus; Jiang, Steve B
2009-11-07
The widespread adoption of on-board volumetric imaging in cancer radiotherapy has stimulated research efforts to develop online adaptive radiotherapy techniques to handle the inter-fraction variation of the patient's geometry. Such efforts face major technical challenges to perform treatment planning in real time. To overcome this challenge, we are developing a supercomputing online re-planning environment (SCORE) at the University of California, San Diego (UCSD). As part of the SCORE project, this paper presents our work on the implementation of an intensity-modulated radiation therapy (IMRT) optimization algorithm on graphics processing units (GPUs). We adopt a penalty-based quadratic optimization model, which is solved by using a gradient projection method with Armijo's line search rule. Our optimization algorithm has been implemented in CUDA for parallel GPU computing as well as in C for serial CPU computing for comparison purposes. A prostate IMRT case with various beamlet and voxel sizes was used to evaluate our implementation. On an NVIDIA Tesla C1060 GPU card, we have achieved speedup factors of 20-40 without losing accuracy, compared to the results from an Intel Xeon 2.27 GHz CPU. For a specific nine-field prostate IMRT case with 5 x 5 mm(2) beamlet size and 2.5 x 2.5 x 2.5 mm(3) voxel size, our GPU implementation takes only 2.8 s to generate an optimal IMRT plan. Our work has therefore solved a major problem in developing online re-planning technologies for adaptive radiotherapy.
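A serial sketch of the optimization scheme named above (gradient projection with Armijo backtracking) is shown on a toy nonnegative least-squares problem; the quadratic model and parameters are placeholders, not the paper's clinical objective:

```python
import numpy as np

def gradient_projection(f, grad, project, x0, sigma=1e-4, beta=0.5, iters=200):
    """Projected gradient descent with Armijo backtracking, a serial
    sketch of the scheme the paper parallelizes on the GPU."""
    x = project(x0)
    for _ in range(iters):
        g = grad(x)
        t = 1.0
        x_new = project(x - t * g)
        # Armijo rule: shrink the step until sufficient decrease holds.
        while f(x_new) > f(x) + sigma * g @ (x_new - x) and t > 1e-12:
            t *= beta
            x_new = project(x - t * g)
        if np.allclose(x_new, x):
            break
        x = x_new
    return x

# Toy problem: least squares with a nonnegativity constraint, mimicking
# the requirement that beamlet intensities stay >= 0.
c = np.array([1.0, -2.0])
f = lambda x: 0.5 * np.sum((x - c) ** 2)
grad = lambda x: x - c
project = lambda x: np.maximum(x, 0.0)
print(gradient_projection(f, grad, project, np.zeros(2)))  # [1. 0.]
```

The projection step is what keeps intensities feasible at every iterate, and it is cheap enough to map well onto per-beamlet GPU threads.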
Application of Chance-Constrained Stochastic Optimization for Mitigating Downstream Flood Risks
NASA Astrophysics Data System (ADS)
Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Mainardi Fan, Fernando; Assis dos Reis, Alberto
2015-04-01
Ensemble forecasting is a growing field in the operation of multi-purpose reservoirs and the mitigation of flood risks in downstream river reaches. The assessment of uncertainty over the prediction horizon adds value to an operational flow forecasting system: it provides probabilistic inflow forecasts which, combined with decision support systems, determine optimum release strategies. One way of doing this is through scenario-tree-based stochastic optimization, in which the ensemble forecast is converted into a scenario tree and optimized by adaptive multi-stage stochastic optimization. The inequality constraints typically applied to this problem require full compliance of every ensemble trajectory and neglect the uncertainty distribution. We propose the application of chance-constrained optimization to overcome this problem, allowing a more flexible approach that does not depend on the number of ensemble members but rather on the distribution of uncertainty. This technique is used to compute release trajectories of the reservoirs over a finite forecast horizon of up to 14 days by integrating a nonlinear gradient-based optimization algorithm and a model of the water system. The latter consists of simulation components for pool routing and kinematic or diffusive wave models for the downstream river reaches, including a simulation mode and a reverse adjoint mode for the efficient computation of first-order derivatives. This framework has been implemented for a reservoir system operated by the Brazilian Companhia Energética de Minas Gerais S.A. (CEMIG). We present results obtained for the operation of the Tres Marias reservoir in the Brazilian state of Minas Gerais, with a catchment area of nearly 55,000 km2. The focus of our discussion is the impact of chance constraints on the optimization procedure and its flexibility for extending the number of ensemble forecasts, thus providing a more accurate representation of uncertainty. We compare the
Adaptive control based on retrospective cost optimization
NASA Astrophysics Data System (ADS)
Santillo, Mario A.
This dissertation studies adaptive control of multi-input, multi-output, linear, time-invariant, discrete-time systems that are possibly unstable and nonminimum phase. We consider both gradient-based adaptive control as well as retrospective-cost-based adaptive control. Retrospective cost optimization is a measure of performance at the current time based on a past window of data and without assumptions about the command or disturbance signals. In particular, retrospective cost optimization acts as an inner loop to the adaptive control algorithm by modifying the performance variables based on the difference between the actual past control inputs and the recomputed past control inputs based on the current control law. We develop adaptive control algorithms that are effective for systems that are nonminimum phase. We consider discrete-time adaptive control since these control laws can be implemented directly in embedded code without requiring an intermediate discretization step. Furthermore, the adaptive controllers in this dissertation are developed under minimal modeling assumptions. In particular, the adaptive controllers require knowledge of the sign of the high-frequency gain and a sufficient number of Markov parameters to approximate the nonminimum-phase zeros (if any). No additional modeling information is necessary. The adaptive controllers presented in this dissertation are developed for full-state-feedback stabilization, static-output-feedback stabilization, as well as dynamic compensation for stabilization, command following, disturbance rejection, and model reference adaptive control. Lyapunov-based stability and convergence proofs are provided for special cases. We present numerical examples to illustrate the algorithms' effectiveness in handling systems that are unstable and/or nonminimum phase and to provide insight into the modeling information required for controller implementation.
Decision making in flood risk based storm sewer network design.
Sun, S A; Djordjević, S; Khu, S T
2011-01-01
It is widely recognised that flood risk needs to be taken into account when designing a storm sewer network. Flood risk is generally a combination of flood consequences and flood probabilities. This paper explores decision making in flood-risk-based storm sewer network design. A multi-objective optimization is proposed to find the Pareto front of optimal designs in terms of low construction cost and low flood risk; the decision-making process then selects the best design from this Pareto front. The traditional way of designing a storm sewer system based on a predefined design storm is used as one of the decision-making criteria. Additionally, three commonly used risk-based criteria, i.e., the expected-flood-risk criterion, the Hurwicz criterion, and the stochastic-dominance-based criterion, are investigated and applied in this paper. Different decisions are made according to different criteria as a result of the different concerns the criteria represent. The proposed procedure is applied to a simple storm sewer network design to demonstrate its effectiveness, and the different criteria are compared.
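To make the criterion comparison concrete, here is a toy example in which the expected-risk criterion and the Hurwicz criterion pick different designs from the same candidate set; the outcome costs and probabilities are invented for illustration:

```python
# Hypothetical flood-damage outcomes (costs) for three candidate designs.
designs = {
    "D1": [10, 40, 90],
    "D2": [20, 30, 60],
    "D3": [5, 50, 120],
}
probs = [0.7, 0.2, 0.1]   # probabilities of the three flood scenarios

def expected_cost(outcomes):
    """Expected-risk criterion: probability-weighted average cost."""
    return sum(p * c for p, c in zip(probs, outcomes))

def hurwicz(outcomes, alpha=0.4):
    """Hurwicz criterion for costs: alpha weighs the best (min) outcome,
    1 - alpha the worst, ignoring the probabilities entirely."""
    return alpha * min(outcomes) + (1 - alpha) * max(outcomes)

print(min(designs, key=lambda d: expected_cost(designs[d])))  # D1
print(min(designs, key=lambda d: hurwicz(designs[d])))        # D2
```

The disagreement is the point: the expected-risk criterion rewards D1's good performance in the likeliest scenario, while the probability-blind Hurwicz criterion prefers D2's bounded worst case.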
NASA Astrophysics Data System (ADS)
Peng, Rui; Li, Yan-Fu; Zhang, Jun-Guang; Li, Xiang
2015-07-01
Most existing research on software release time determination assumes that parameters of the software reliability model (SRM) are deterministic and the reliability estimate is accurate. In practice, however, there exists a risk that the reliability requirement cannot be guaranteed due to the parameter uncertainties in the SRM, and such risk can be as high as 50% when the mean value is used. It is necessary for the software project managers to reduce the risk to a lower level by delaying the software release, which inevitably increases the software testing costs. In order to incorporate the managers' preferences over these two factors, a decision model based on multi-attribute utility theory (MAUT) is developed for the determination of optimal risk-reduction release time.
Optimal network solution for proactive risk assessment and emergency response
NASA Astrophysics Data System (ADS)
Cai, Tianxing
Coupled with continuous development in the field of industrial operation management, the requirement for operation optimization in large-scale manufacturing networks has provoked growing research interest in engineering. Compared with the traditional approach of taking remedial measures after an emergency event or abnormal situation has occurred, current operation control calls for more proactive risk assessment to set up early warning systems and comprehensive emergency response planning. Among all industries, the chemical and energy industries are more likely to face abnormal and emergency situations due to their own characteristics. The purpose of this study is therefore to develop methodologies that aid emergency response planning and proactive risk assessment in these two industries. The efficacy of the developed methodologies is demonstrated via two real industrial problems. The first case handles energy network dispatch optimization under an emergency of local energy shortage caused by extreme conditions such as earthquake, tsunami, or hurricane, which may cause local areas to suffer delayed rescues, widespread power outages, tremendous economic losses, and even public safety threats. In such urgent events of local energy shortage, agile energy dispatching through an effective energy transportation network, targeting the minimum energy recovery time, should be a top priority. The second case is a scheduling methodology to coordinate multiple chemical plants' start-ups in order to minimize regional air-quality impacts under extreme meteorological conditions. The objective is to reschedule the multi-plant start-up sequence to achieve the minimum total delay relative to the expected start-up time of each plant. These approaches can provide quantitative decision support for multiple stakeholders, including government and environment agencies, chemical industry, energy industry and local
Integrated testing strategies can be optimal for chemical risk classification.
Raseta, Marko; Pitchford, Jon; Cussens, James; Doe, John
2017-08-01
There is an urgent need to refine strategies for testing the safety of chemical compounds. This need arises both from the financial and ethical costs of animal tests and from the opportunities presented by new in-vitro and in-silico alternatives. Here we explore the mathematical theory underpinning the formulation of optimal testing strategies in toxicology. We show how the costs and imprecisions of the various tests, and the variability in exposures and responses of individuals, can be assembled rationally to form a Markov Decision Problem. We compute the corresponding optimal policies using well-developed theory based on Dynamic Programming, thereby identifying and overcoming some methodological and logical inconsistencies in current toxicological testing. By illustrating our methods on two simple but readily generalisable examples, we show how so-called integrated testing strategies, where information of different precisions from different sources is combined and where different initial test outcomes lead to different sets of future tests, can arise naturally as optimal policies. Copyright © 2017 Elsevier Inc. All rights reserved.
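The dynamic-programming machinery can be sketched on a toy sequential-testing MDP. The states, tests, probabilities, and costs below are invented for illustration; the paper's model is far richer:

```python
# States: 'uncertain' (classification unresolved), 'resolved', 'done'.
# Each entry maps (state, action) -> [(prob, next_state, reward), ...];
# rewards are negative test costs.
transitions = {
    ("uncertain", "cheap_test"):   [(0.6, "resolved", -1.0),
                                    (0.4, "uncertain", -1.0)],
    ("uncertain", "precise_test"): [(1.0, "resolved", -5.0)],
    ("resolved", "classify"):      [(1.0, "done", 0.0)],
}
states = ["uncertain", "resolved", "done"]

def value_iteration(transitions, states, iters=200):
    """Dynamic programming: V(s) = max_a sum_p p * (reward + V(s'))."""
    V = {s: 0.0 for s in states}
    policy = {}
    for _ in range(iters):
        for s in states:
            acts = {a: trs for (s_, a), trs in transitions.items() if s_ == s}
            if not acts:
                continue
            qs = {a: sum(p * (r + V[ns]) for p, ns, r in trs)
                  for a, trs in acts.items()}
            policy[s] = max(qs, key=qs.get)
            V[s] = qs[policy[s]]
    return V, policy

V, policy = value_iteration(transitions, states)
print(policy["uncertain"], round(V["uncertain"], 3))  # cheap_test -1.667
```

The optimal policy here is to repeat the cheap, imprecise test (expected cost 1/0.6, about 1.67) rather than buy the single precise test at cost 5: a minimal instance of an integrated strategy emerging from the optimization rather than being imposed.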
Optimizing mesenchymal stem cell-based therapeutics.
Wagner, Joseph; Kean, Thomas; Young, Randell; Dennis, James E; Caplan, Arnold I
2009-10-01
Mesenchymal stem cell (MSC)-based therapeutics are showing significant benefit in multiple clinical trials conducted by both academic and commercial organizations, but obstacles remain for their large-scale commercial implementation. Recent studies have attempted to optimize MSC-based therapeutics by either enhancing their potency or increasing their delivery to target tissues. Overexpression of trophic factors or in vitro exposure to potency-enhancing factors are two approaches that are demonstrating success in preclinical animal models. Delivery enhancement strategies involving tissue-specific cytokine pathways or binding sites are also showing promise. Each of these strategies has its own set of distinct advantages and disadvantages when viewed with a mindset of ultimate commercialization and clinical utility.
Combinatorial Algorithms for Portfolio Optimization Problems - Case of Risk Moderate Investor
NASA Astrophysics Data System (ADS)
Juarna, A.
2017-03-01
The portfolio optimization problem is that of finding an optimal combination of n stocks from N ≥ n available stocks that gives maximal aggregate return and minimal aggregate risk. In this paper, N = 43 stocks are taken from the IDX (Indonesia Stock Exchange) group of the 45 most-traded stocks, known as the LQ45, with p = 24 monthly returns for each stock spanning the interval 2013-2014. The problem is a combinatorial one, and its algorithm is constructed from two considerations: a risk-moderate type of investor and a maximum allowed correlation coefficient between every two eligible stocks. The main output of the algorithm is a set of curves of three portfolio attributes (the size, the ratio of return to risk, and the percentage of negative correlation coefficients between chosen stock pairs) as functions of the maximum allowed correlation coefficient between each two stocks. The output curves show that the portfolio contains three stocks with a return-to-risk ratio of 14.57 if the maximum allowed correlation coefficient between every two eligible stocks is negative, and 19 stocks with a maximum return-to-risk ratio of 25.48 when the maximum allowed correlation coefficient is 0.17.
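The selection rule the abstract describes, admitting a stock only when its correlation with every stock already chosen stays within a threshold, can be sketched as a greedy pass over stocks ranked by return-to-risk ratio. The toy return series and the greedy ranking are illustrative assumptions; the paper's exact algorithm is not reproduced here:

```python
import statistics as st

def corr(x, y):
    # Pearson correlation (population moments).
    mx, my = st.mean(x), st.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (st.pstdev(x) * st.pstdev(y))

def select_portfolio(returns, max_corr):
    """Greedy stand-in for the paper's combinatorial algorithm: rank stocks by
    return-to-risk ratio, then admit a stock only if its correlation with every
    stock already chosen does not exceed max_corr."""
    ranked = sorted(returns,
                    key=lambda k: st.mean(returns[k]) / st.pstdev(returns[k]),
                    reverse=True)
    chosen = []
    for k in ranked:
        if all(corr(returns[k], returns[j]) <= max_corr for j in chosen):
            chosen.append(k)
    return chosen

# Toy monthly return series (illustrative, not LQ45 data).
returns = {
    "A": [0.02, 0.03, 0.01, 0.04],
    "B": [0.02, 0.03, 0.01, 0.04],   # perfectly correlated with A
    "C": [0.04, 0.03, 0.05, 0.02],   # perfectly anti-correlated with A
}
```

Requiring a negative pairwise correlation excludes one of the two correlated stocks, mirroring the trade-off the paper traces between portfolio size and the correlation threshold.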
Optimizing footwear for older people at risk of falls.
Menant, Jasmine C; Steele, Julie R; Menz, Hylton B; Munro, Bridget J; Lord, Stephen R
2008-01-01
Footwear influences balance and the subsequent risk of slips, trips, and falls by altering somatosensory feedback to the foot and ankle and modifying frictional conditions at the shoe/floor interface. Walking indoors barefoot or in socks and walking indoors or outdoors in high-heel shoes have been shown to increase the risk of falls in older people. Other footwear characteristics such as heel collar height, sole hardness, and tread and heel geometry also influence measures of balance and gait. Because many older people wear suboptimal shoes, maximizing safe shoe use may offer an effective fall prevention strategy. Based on findings of a systematic literature review, older people should wear shoes with low heels and firm slip-resistant soles both inside and outside the home. Future research should investigate the potential benefits of tread sole shoes for preventing slips and whether shoes with high collars or flared soles can enhance balance when challenging tasks are undertaken.
77 FR 53059 - Risk-Based Capital Guidelines: Market Risk
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-30
... complexity. The agencies have incorporated a number of changes into the final rule based on feedback received... would generate a risk-based capital requirement for a specific covered position or portfolio of covered positions that is not commensurate with the risks of the covered position or portfolio. In...
CFD based draft tube hydraulic design optimization
NASA Astrophysics Data System (ADS)
McNabb, J.; Devals, C.; Kyriacou, S. A.; Murry, N.; Mullins, B. F.
2014-03-01
The draft tube design of a hydraulic turbine, particularly in low to medium head applications, plays an important role in determining the efficiency and power characteristics of the overall machine, since an important proportion of the available energy, being in kinetic form leaving the runner, needs to be recovered by the draft tube into static head. For large units, these efficiency and power characteristics can equate to large sums of money when considering the anticipated selling price of the energy produced over the machine's life-cycle. This same draft tube design is also a key factor in determining the overall civil costs of the powerhouse, primarily in excavation and concreting, which can amount to similar orders of magnitude as the price of the energy produced. Therefore, there is a need to find the optimum compromise between these two conflicting requirements. In this paper, an elaborate approach is described for dealing with this optimization problem. First, the draft tube's detailed geometry is defined as a function of a comprehensive set of design parameters (about 20 of which a subset is allowed to vary during the optimization process) and are then used in a non-uniform rational B-spline based geometric modeller to fully define the wetted surfaces geometry. Since the performance of the draft tube is largely governed by 3D viscous effects, such as boundary layer separation from the walls and swirling flow characteristics, which in turn governs the portion of the available kinetic energy which will be converted into pressure, a full 3D meshing and Navier-Stokes analysis is performed for each design. What makes this even more challenging is the fact that the inlet velocity distribution to the draft tube is governed by the runner at each of the various operating conditions that are of interest for the exploitation of the powerhouse. In order to determine these inlet conditions, a combined steady-state runner and an initial draft tube analysis, using a
Risk-based maintenance of ethylene oxide production facilities.
Khan, Faisal I; Haddara, Mahmoud R
2004-05-20
This paper discusses a methodology for the design of an optimum inspection and maintenance program. The methodology, called risk-based maintenance (RBM), is based on integrating a reliability approach and a risk assessment strategy to obtain an optimum maintenance schedule. First, the likely equipment failure scenarios are formulated. Of the many likely failure scenarios, the most probable ones are subjected to a detailed study. Detailed consequence analysis is done for the selected scenarios. Subsequently, these failure scenarios are subjected to a fault tree analysis to determine their probabilities. Finally, risk is computed by combining the results of the consequence and probability analyses. The calculated risk is compared against known acceptable criteria. The frequencies of the maintenance tasks are obtained by minimizing the estimated risk. A case study involving an ethylene oxide production facility is presented. Of the five most hazardous units considered, the pipeline used for the transportation of ethylene is found to have the highest risk. Using available failure data and a lognormal reliability distribution function, human health risk factors are calculated. Both societal and individual risk factors exceed the acceptable risk criteria. To determine an optimal maintenance interval, a reverse fault tree analysis is used. The maintenance interval is determined such that the original high risk is brought down to an acceptable level. A sensitivity analysis is also undertaken to study the impact of changing the reliability distribution, as well as of errors in the distribution parameters, on the maintenance interval.
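The core RBM step, choosing the longest maintenance interval whose estimated risk stays within an acceptable criterion, can be sketched as a one-dimensional search. The exponential reliability model and all numbers below are illustrative stand-ins for the paper's lognormal fit and fault-tree probabilities:

```python
import math

def failure_prob(t, mtbf):
    # Exponential reliability model (illustrative stand-in for the paper's
    # lognormal distribution): probability of failure within time t.
    return 1.0 - math.exp(-t / mtbf)

def max_interval(mtbf, consequence, acceptable_risk):
    """Largest maintenance interval t such that
    risk(t) = failure_prob(t) * consequence <= acceptable_risk (bisection).
    All cost and risk units are illustrative."""
    lo, hi = 0.0, 10.0 * mtbf
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if failure_prob(mid, mtbf) * consequence <= acceptable_risk:
            lo = mid        # risk still acceptable: interval can grow
        else:
            hi = mid        # risk too high: shorten the interval
    return lo
```

The bisection invariant keeps the returned interval on the acceptable side of the risk criterion, which mirrors the paper's goal of bringing a high original risk down to an acceptable level.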
Garner, Melissa J; McGregor, Bonnie A; Murphy, Karly M; Koenig, Alex L; Dolan, Emily D; Albano, Denise
2015-12-01
Breast cancer risk is a chronic stressor associated with depression. Optimism is associated with lower levels of depression among breast cancer survivors. However, to our knowledge, no studies have explored the relationship between optimism and depression among women at risk for breast cancer. We hypothesized that women at risk for breast cancer who have higher levels of optimism would report lower levels of depression and that social support would mediate this relationship. Participants (N = 199) with elevated distress were recruited from the community and completed self-report measures of depression, optimism, and social support. Participants were grouped based on their family history of breast cancer. Path analysis was used to examine the cross-sectional relationship between optimism, social support, and depressive symptoms in each group. Results indicated that the variance in depressive symptoms was partially explained through direct paths from optimism and social support among women with a family history of breast cancer. The indirect path from optimism to depressive symptoms via social support was significant (β = -.053; 90% CI = -.099 to -.011, p = .037) in this group. However, among individuals without a family history of breast cancer, the indirect path from optimism to depressive symptoms via social support was not significant. These results suggest that social support partially mediates the relationship between optimism and depression among women at risk for breast cancer. Social support may be an important intervention target to reduce depression among women at risk for breast cancer. Copyright © 2015 John Wiley & Sons, Ltd.
2012-02-24
GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainties associated with bringing these new types of energy into the grid. Sandia’s software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia’s formula can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.
Risks and Risk-Based Regulation in Higher Education Institutions
ERIC Educational Resources Information Center
Huber, Christian
2009-01-01
Risk-based regulation is a relatively new mode of governance. Not only does it offer a way of controlling institutions from the outside but it also provides the possibility of making an organisation's achievements visible/visualisable. This paper comments on a list of possible risks that higher education institutions have to face. In a second…
Doing our best: optimization and the management of risk.
Ben-Haim, Yakov
2012-08-01
Tools and concepts of optimization are widespread in decision-making, design, and planning. There is a moral imperative to "do our best." Optimization underlies theories in physics and biology, and economic theories often presume that economic agents are optimizers. We argue that in decisions under uncertainty, what should be optimized is robustness rather than performance. We discuss the equity premium puzzle from financial economics, and explain that the puzzle can be resolved by using the strategy of satisficing rather than optimizing. We discuss design of critical technological infrastructure, showing that satisficing of performance requirements--rather than optimizing them--is a preferable design concept. We explore the need for disaster recovery capability and its methodological dilemma. The disparate domains--economics and engineering--illuminate different aspects of the challenge of uncertainty and of the significance of robust-satisficing.
Quantified Risk Ranking Model for Condition-Based Risk and Reliability Centered Maintenance
NASA Astrophysics Data System (ADS)
Chattopadhyaya, Pradip Kumar; Basu, Sushil Kumar; Majumdar, Manik Chandra
2017-06-01
In the recent past, the risk and reliability centered maintenance (RRCM) framework was introduced, shifting the methodological focus from reliability and probabilities (expected values) to reliability, uncertainty, and risk. In this paper the authors present a novel methodology for quantifying risk and ranking critical items to prioritize maintenance actions on the basis of condition-based risk and reliability centered maintenance (CBRRCM). The critical items are identified through criticality analysis of the RPN values of the items of a system, and the maintenance significant precipitating factors (MSPF) of the items are evaluated. The criticality of risk is assessed using three risk coefficients. The likelihood risk coefficient treats the probability as a fuzzy number. The abstract risk coefficient deduces risk influenced by uncertainty and sensitivity, besides other factors. The third, the hazardous risk coefficient, accounts for anticipated hazards that may occur in the future; here risk is deduced from criteria of consequences on safety, environment, maintenance, and economics, with corresponding costs for the consequences. The characteristic values of all three risk coefficients are obtained with a particular test. With a few more tests on the system, the values may change significantly within the controlling range of each coefficient, hence random-number simulation is used to obtain one distinctive value for each coefficient. The risk coefficients are statistically added to obtain the final risk coefficient of each critical item, and the final rankings of the critical items are then estimated. The prioritized ranking of critical items using the developed mathematical model for risk assessment should be useful in optimizing financial losses and the timing of maintenance actions.
Bell-Curve Based Evolutionary Optimization Algorithm
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.
1998-01-01
The paper presents an optimization algorithm that falls into the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most genetic algorithms (GAs) in research and applications in America, some alternatives, also in the category of evolutionary algorithms but using a direct, geometrical approach, have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, with the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
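The geometrical construct described above can be sketched in two dimensions: a weighted point on the parent-connecting line, perturbed by normal deviations along and across that line. Parameter names and defaults are assumptions for illustration, not the paper's settings:

```python
import math
import random

def bcb_child(p1, p2, w=0.5, sigma_par=0.1, sigma_orth=0.1, rng=random):
    """One offspring via the bell-curve construct (2-D for clarity): a weighted
    point on the line joining parents p1 and p2, deviated along and across the
    line by normally distributed amounts. Parameter defaults are illustrative."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    norm = math.hypot(dx, dy) or 1.0
    ux, uy = dx / norm, dy / norm              # unit vector along the line
    ox, oy = -uy, ux                           # unit vector orthogonal to it
    mx, my = p1[0] + w * dx, p1[1] + w * dy    # weighted point between parents
    d_par = rng.gauss(0.0, sigma_par)          # parallel deviation
    d_orth = rng.gauss(0.0, sigma_orth)        # orthogonal deviation
    return (mx + d_par * ux + d_orth * ox,
            my + d_par * uy + d_orth * oy)
```

The two sigma parameters are exactly the controllability knobs the abstract highlights: widening either normal distribution spreads offspring further along or away from the parent-connecting line.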
Taber, Jennifer M; Klein, William M P; Ferrer, Rebecca A; Lewis, Katie L; Biesecker, Leslie G; Biesecker, Barbara B
2015-07-01
Dispositional optimism and risk perceptions are each associated with health-related behaviors and decisions and other outcomes, but little research has examined how these constructs interact, particularly in consequential health contexts. The predictive validity of risk perceptions for health-related information seeking and intentions may be improved by examining dispositional optimism as a moderator, and by testing alternate types of risk perceptions, such as comparative and experiential risk. Participants (n = 496) had their genomes sequenced as part of a National Institutes of Health pilot cohort study (ClinSeq®). Participants completed a cross-sectional baseline survey of various types of risk perceptions and intentions to learn genome sequencing results for differing disease risks (e.g., medically actionable, nonmedically actionable, carrier status) and to use this information to change their lifestyle/health behaviors. Risk perceptions (absolute, comparative, and experiential) were largely unassociated with intentions to learn sequencing results. Dispositional optimism and comparative risk perceptions interacted, however, such that individuals higher in optimism reported greater intentions to learn all 3 types of sequencing results when comparative risk was perceived to be higher than when it was perceived to be lower. This interaction was inconsistent for experiential risk and absent for absolute risk. Independent of perceived risk, participants high in dispositional optimism reported greater interest in learning risks for nonmedically actionable disease and carrier status, and greater intentions to use genome information to change their lifestyle/health behaviors. The relationship between risk perceptions and intentions may depend on how risk perceptions are assessed and on degree of optimism. (c) 2015 APA, all rights reserved.
Risk-Sensitive Optimal Feedback Control Accounts for Sensorimotor Behavior under Uncertainty
Nagengast, Arne J.; Braun, Daniel A.; Wolpert, Daniel M.
2010-01-01
Many aspects of human motor behavior can be understood using optimality principles such as optimal feedback control. However, these proposed optimal control models are risk-neutral; that is, they are indifferent to the variability of the movement cost. Here, we propose the use of a risk-sensitive optimal controller that incorporates movement cost variance either as an added cost (risk-averse controller) or as an added value (risk-seeking controller) to model human motor behavior in the face of uncertainty. We use a sensorimotor task to test the hypothesis that subjects are risk-sensitive. Subjects controlled a virtual ball undergoing Brownian motion towards a target. Subjects were required to minimize an explicit cost, in points, that was a combination of the final positional error of the ball and the integrated control cost. By testing subjects on different levels of Brownian motion noise and relative weighting of the position and control cost, we could distinguish between risk-sensitive and risk-neutral control. We show that subjects change their movement strategy pessimistically in the face of increased uncertainty in accord with the predictions of a risk-averse optimal controller. Our results suggest that risk-sensitivity is a fundamental attribute that needs to be incorporated into optimal feedback control models. PMID:20657657
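The mean-variance form of risk sensitivity the authors describe can be illustrated in a few lines: a risk-averse controller adds the movement-cost variance to the expected cost, while a risk-seeking one subtracts it. The theta parameter and the two hypothetical strategies are illustrative assumptions:

```python
from statistics import mean, pvariance

def risk_sensitive_cost(cost_samples, theta):
    """Mean-variance risk sensitivity: theta > 0 penalizes cost variability
    (risk-averse), theta < 0 rewards it (risk-seeking), and theta = 0 recovers
    the risk-neutral expected cost. theta is an illustrative parameter."""
    return mean(cost_samples) + theta * pvariance(cost_samples)

# Two hypothetical movement strategies with equal expected cost:
safe = [10.0, 10.0, 10.0, 10.0]    # consistent outcomes
risky = [2.0, 18.0, 2.0, 18.0]     # same mean cost, high variability
```

A risk-neutral controller is indifferent between the two strategies; a risk-averse one prefers the consistent strategy, which is the pessimistic shift under uncertainty that the study reports in subjects.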
Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic
NASA Astrophysics Data System (ADS)
Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat
2017-03-01
The problem of investing in financial assets is to choose a combination of portfolio weights that maximizes expected return and minimizes risk. This paper discusses the modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic. It is assumed that the asset returns follow a certain distribution and that the risk of the portfolio is measured using the Value-at-Risk (VaR). Optimization of the portfolio is then carried out on the Mean-VaR model using a matrix-algebra approach together with the Lagrange multiplier and Kuhn-Tucker methods. The result of the modeling is a weighting-vector equation that depends on the mean return vector of the assets, the identity vector, the covariance matrix between asset returns, and a risk-tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed. From the return data of these five stocks, the weight composition vector and the efficient-surface graph of the portfolio are obtained. The weight composition vector and efficient-surface charts can be used as a guide for investors in investment decisions.
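The flavor of such closed-form weighting-vector equations can be sketched with the textbook mean-variance problem under a risk-tolerance factor tau, solved with a single Lagrange multiplier for the budget constraint. This is a stand-in for the paper's Mean-VaR derivation, whose exact form is not reproduced in the abstract:

```python
def mean_variance_weights(mu, cov, tau):
    """Two-asset closed form for  max_w  tau * mu'w - 0.5 * w' cov w
    subject to the weights summing to 1, via a Lagrange multiplier.
    tau is the investor's risk tolerance (textbook stand-in for the
    paper's Mean-VaR weighting-vector equation)."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]   # explicit 2x2 inverse

    def mat_vec(m, v):
        return [m[0][0] * v[0] + m[0][1] * v[1],
                m[1][0] * v[0] + m[1][1] * v[1]]

    inv_mu = mat_vec(inv, mu)             # cov^-1 mu
    inv_one = mat_vec(inv, [1.0, 1.0])    # cov^-1 1
    lam = (1.0 - tau * sum(inv_mu)) / sum(inv_one)   # enforces 1'w = 1
    return [tau * im + lam * io for im, io in zip(inv_mu, inv_one)]
```

At tau = 0 this reduces to the minimum-variance portfolio; increasing tau tilts the weights toward the higher-return asset, tracing out the efficient surface the abstract mentions.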
Study of a risk-based piping inspection guideline system.
Tien, Shiaw-Wen; Hwang, Wen-Tsung; Tsai, Chih-Hung
2007-02-01
A risk-based inspection system and a piping inspection guideline model were developed in this study. The research procedure consists of two parts: building a risk-based inspection model for piping and constructing a risk-based piping inspection guideline model. Field visits to the plant were conducted to develop the risk-based inspection and strategic analysis system. A knowledge-based model was built in accordance with international standards and local government regulations, and the rational unified process was applied to reduce discrepancies in the development of the models. The models were designed to analyze damage factors, damage models, and potential damage positions of piping in petrochemical plants. The purpose of this study was to provide inspection-related personnel with optimal planning tools for piping inspections, enabling effective prediction of potential piping risks and enhancing the degree of safety that plant operations in the petrochemical industry can be expected to achieve. A risk analysis was conducted on the piping system of a petrochemical plant. The outcome indicated that most of the risk resulted from a small number of pipelines.
Nuclear insurance risk assessment using risk-based methodology
Wendland, W.G.
1992-01-01
This paper presents American Nuclear Insurers' (ANI's) and Mutual Atomic Energy Liability Underwriters' (MAELU's) process and experience for conducting nuclear insurance risk assessments using a risk-based methodology. The process is primarily qualitative and uses traditional insurance risk assessment methods and an approach developed under the auspices of the American Society of Mechanical Engineers (ASME) in which ANI/MAELU is an active sponsor. This process assists ANI's technical resources in identifying where to look for insurance risk in an industry in which insurance exposure tends to be dynamic and nonactuarial. The process is an evolving one that also seeks to minimize the impact on insureds while maintaining a mutually agreeable risk tolerance.
Optimal reliability-based planning of experiments for POD curves
Soerensen, J.D.; Faber, M.H.; Kroon, I.B.
1995-12-31
Optimal planning of crack detection tests is considered. The tests are used to update the information on the reliability of inspection techniques, modeled by probability of detection (P.O.D.) curves. It is shown how cost-optimal and reliability-based test plans can be obtained using First Order Reliability Methods in combination with life-cycle cost-optimal inspection and maintenance planning. The methodology is based on preposterior analysis from Bayesian decision theory. An illustrative example is shown.
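The preposterior idea, valuing a test plan by the expected reduction in uncertainty before any data are observed, can be sketched for a single P.O.D. probability with a Beta prior. The Beta-Binomial model and all costs below are illustrative assumptions, not the paper's FORM-based formulation:

```python
from math import comb, exp, lgamma

def _log_beta(x, y):
    return lgamma(x) + lgamma(y) - lgamma(x + y)

def beta_binom_pmf(k, n, a, b):
    # Predictive probability of observing k detections in n trials under a
    # Beta(a, b) prior on the probability of detection.
    return comb(n, k) * exp(_log_beta(k + a, n - k + b) - _log_beta(a, b))

def expected_posterior_variance(n, a=1.0, b=1.0):
    """Preposterior value of an n-trial test plan: the posterior variance of
    the detection probability, averaged over all outcomes that could occur."""
    total = 0.0
    for k in range(n + 1):
        a2, b2 = a + k, b + n - k
        post_var = a2 * b2 / ((a2 + b2) ** 2 * (a2 + b2 + 1.0))
        total += beta_binom_pmf(k, n, a, b) * post_var
    return total

def optimal_n_tests(cost_per_test, cost_of_uncertainty, n_max=50):
    """Cost-optimal number of crack-detection trials: trade the cost of
    testing against the expected remaining uncertainty (illustrative costs)."""
    return min(range(n_max + 1),
               key=lambda n: n * cost_per_test
                             + cost_of_uncertainty * expected_posterior_variance(n))
```

Because the expected posterior variance shrinks with each additional trial while test cost grows linearly, the total cost has an interior minimum, which is the cost-optimal test plan in this simplified setting.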
NASA Technical Reports Server (NTRS)
Leitner, Jesse
2016-01-01
This presentation conveys an approach for risk-based safety and mission assurance applied to cubesats. This presentation accompanies a NASA Goddard standard in development that provides guidance for building a mission success plan for cubesats based on the risk tolerance and resources available.
Research on particle swarm optimization algorithm based on optimal movement probability
NASA Astrophysics Data System (ADS)
Ma, Jianhong; Zhang, Han; He, Baofeng
2017-01-01
The particle swarm optimization (PSO) algorithm improves control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used to train feed-forward neural networks, the search efficiency is low and the algorithm easily falls into local convergence. An improved particle swarm optimization algorithm based on the gradient descent of error back-propagation is therefore proposed. Particles are ranked by fitness so that the optimization problem is considered as a whole, and error back-propagation gradient descent is used to train the BP neural network. Each particle updates its velocity and position according to its individual optimum and the global optimum; by making particles learn more from the social optimum and less from their individual optimum, the algorithm avoids trapping particles in local optima, while the gradient information accelerates the local search ability of PSO and improves search efficiency. Simulation results show that the algorithm converges rapidly toward the global optimal solution in the initial stage and remains close to it thereafter; for the same running time, it has a faster convergence speed and better search performance, especially in the later stages of the search.
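For reference, the baseline particle swarm scheme that the proposed algorithm modifies can be sketched in a few lines: each particle is pulled toward its personal best and the swarm's global best. The inertia weight and acceleration constants below are conventional defaults, not the paper's settings:

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Baseline particle swarm optimizer (conventional constants; the paper's
    gradient-assisted variant is not reproduced). Minimizes f over [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:                # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:               # and global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

def sphere(x):
    return sum(v * v for v in x)
```

The c1 and c2 terms are exactly the individual-optimum and social-optimum learning components the abstract discusses; the improved algorithm rebalances them and injects gradient information.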
Cytogenetic bases for risk inference
Bender, M A
1980-01-01
Various environmental pollutants are suspected of being capable of causing cancers or genetic defects even at low levels of exposure. In order to estimate risk from exposure to these pollutants, it would be useful to have some indicator of exposure. It is suggested that chromosomes are ideally suited for this purpose. Through the phenomena of chromosome aberrations and sister chromatid exchanges (SCE), chromosomes respond to virtually all carcinogens and mutagens. Aberrations and SCE are discussed in the context of their use as indicators of increased risk to health from chemical pollutants. (ACR)
NASA Astrophysics Data System (ADS)
Lee, Y. G.; Koo, J. H.
2015-12-01
Solar UV radiation in the wavelength range between 280 and 400 nm has both positive and negative influences on the human body. Surface UV radiation is the main natural source of vitamin D, promoting bone and musculoskeletal health and reducing the risk of a number of cancers and other medical conditions. However, overexposure to surface UV radiation is significantly associated with the majority of skin cancers, in addition to other negative health effects such as sunburn, skin aging, and some forms of eye cataracts. Therefore, it is important to estimate the optimal UV exposure time, representing a balance between reducing negative health effects and maximizing sufficient vitamin D production. Previous studies calculated erythemal UV and vitamin-D UV from measured and modelled spectral irradiances, respectively, by weighting with the CIE erythema and vitamin D3 generation functions (Kazantzidis et al., 2009; Fioletov et al., 2010). In particular, McKenzie et al. (2009) suggested an algorithm to estimate vitamin-D-producing UV from erythemal UV (or the UV index) and determined the optimum conditions of UV exposure for skin type Ⅱ according to Fitzpatrick (1988). Recently, there have been various demands for assessing the risks and benefits of surface UV radiation for public health in Korea, so it is necessary to estimate the optimal UV exposure time suited to the skin types of East Asians. This study examined the relationship between erythemally weighted UV (UVEry) and vitamin D weighted UV (UVVitD) over Korea during 2004-2012. The temporal variations of the ratio (UVVitD/UVEry) were also analyzed, and the ratio as a function of UV index was applied in estimating the optimal UV exposure time. In summer, with high surface UV radiation, a short exposure time led to sufficient vitamin D but also to erythema, and vice versa in winter; thus, a balancing exposure time in winter was enough to maximize UV benefits and minimize UV risks.
Risk based ASME Code requirements
Gore, B.F.; Vo, T.V.; Balkey, K.R.
1992-09-01
The objective of this ASME Research Task Force is to develop and to apply a methodology for incorporating quantitative risk analysis techniques into the definition of in-service inspection (ISI) programs for a wide range of industrial applications. An additional objective, directed towards the field of nuclear power generation, is ultimately to develop a recommendation for comprehensive revisions to the ISI requirements of Section XI of the ASME Boiler and Pressure Vessel Code. This will require development of a firm technical basis for such requirements, which does not presently exist. Several years of additional research will be required before this can be accomplished. A general methodology suitable for application to any industry has been defined and published. It has recently been refined and further developed during application to the field of nuclear power generation. In the nuclear application, probabilistic risk assessment (PRA) techniques and information have been incorporated. With additional analysis, PRA information is used to determine the consequence of a component rupture (increased reactor core damage probability). A procedure has also been recommended for using the resulting quantified risk estimates to determine target component rupture probability values to be maintained by inspection activities. Structural risk and reliability analysis (SRRA) calculations are then used to determine characteristics which an inspection strategy must possess in order to maintain component rupture probabilities below target values. The methodology, results of example applications, and plans for future work are discussed.
Kennedy, W.E. Jr.
1992-06-01
The problems encountered during facility or land cleanup operations will provide challenges both to technology and regulatory agencies. Inevitably, the decisions of the federal agencies regulating cleanup activities have been controversial. The major dilemma facing government and industry is how to accomplish cleanup in a cost-effective manner while minimizing the risks to workers and the public.
Wu, Zong-Sheng; Fu, Wei-Ping; Xue, Ru
2015-01-01
The teaching-learning-based optimization (TLBO) algorithm, proposed in recent years, simulates the teaching-learning phenomenon of a classroom to effectively solve global optimization of multidimensional, linear, and nonlinear problems over continuous spaces. In this paper, an improved teaching-learning-based optimization algorithm is presented, called the nonlinear inertia weighted teaching-learning-based optimization (NIWTLBO) algorithm. This algorithm introduces a nonlinear inertia weighted factor into the basic TLBO to control the memory rate of learners and uses a dynamic inertia weighted factor to replace the original random number in the teacher and learner phases. The proposed algorithm is tested on a number of benchmark functions, and performance comparisons are provided against the basic TLBO and some other well-known optimization algorithms. The experimental results show that the proposed algorithm has a faster convergence rate and better performance than the basic TLBO and several other algorithms. PMID:26421005
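A minimal sketch of the inertia-weighted TLBO idea follows. The nonlinear weighting schedule, population size, and merging of the teacher and learner phases into one update are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def sphere(x):
    """Benchmark objective: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def niwtlbo(f, dim=5, pop=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (pop, dim))
    fit = np.array([f(x) for x in X])
    for t in range(iters):
        # nonlinear inertia weight decaying from 1 to 0 (assumed schedule):
        # it scales the learner's memory of its current position
        w = 1.0 - (t / iters) ** 2
        teacher = X[np.argmin(fit)]
        mean = X.mean(axis=0)
        for i in range(pop):
            TF = rng.integers(1, 3)  # teaching factor in {1, 2}
            # teacher phase with inertia-weighted memory of X[i]
            new = w * X[i] + rng.random(dim) * (teacher - TF * mean)
            # learner phase: learn from a random peer
            j = rng.integers(pop)
            if fit[j] < fit[i]:
                new = new + rng.random(dim) * (X[j] - X[i])
            else:
                new = new + rng.random(dim) * (X[i] - X[j])
            fn = f(new)
            if fn < fit[i]:  # greedy acceptance
                X[i], fit[i] = new, fn
    return float(fit.min())

best = niwtlbo(sphere)
```

The greedy acceptance guarantees monotone improvement; the decaying weight trades early exploration for late exploitation, which is the mechanism the abstract credits for the faster convergence.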
Optimizing Assurance: The Risk Regulation System in Relationships
ERIC Educational Resources Information Center
Murray, Sandra L.; Holmes, John G.; Collins, Nancy L.
2006-01-01
A model of risk regulation is proposed to explain how people balance the goal of seeking closeness to a romantic partner against the opposing goal of minimizing the likelihood and pain of rejection. The central premise is that confidence in a partner's positive regard and caring allows people to risk seeking dependence and connectedness. The risk…
Optimal guidance law for cooperative attack of multiple missiles based on optimal control theory
NASA Astrophysics Data System (ADS)
Sun, Xiao; Xia, Yuanqing
2012-08-01
This article considers the problem of optimal guidance laws for cooperative attack of multiple missiles based on the optimal control theory. New guidance laws are presented such that multiple missiles attack a single target simultaneously. Simulation results show the effectiveness of the proposed algorithms.
Optimal separable bases and molecular collisions
Poirier, Lionel W.
1997-12-01
A new methodology is proposed for the efficient determination of Green's functions and eigenstates for quantum systems of two or more dimensions. For a given Hamiltonian, the best possible separable approximation is obtained from the set of all Hilbert space operators. It is shown that this determination itself, as well as the solution of the resultant approximation, are problems of reduced dimensionality for most systems of physical interest. Moreover, the approximate eigenstates constitute the optimal separable basis, in the sense of self-consistent field theory. These distorted waves give rise to a Born series with optimized convergence properties. Analytical results are presented for an application of the method to the two-dimensional shifted harmonic oscillator system. The primary interest, however, is quantum reactive scattering in molecular systems. For numerical calculations, the use of distorted waves corresponds to numerical preconditioning. The new methodology therefore gives rise to an optimized preconditioning scheme for the efficient calculation of reactive and inelastic scattering amplitudes, especially at intermediate energies. This scheme is particularly suited to discrete variable representations (DVRs) and iterative sparse matrix methods commonly employed in such calculations. State-to-state and cumulative reactive scattering results obtained via the optimized preconditioner are presented for the two-dimensional collinear H + H_{2} → H_{2} + H system. Computational time and memory requirements for this system are drastically reduced in comparison with other methods, and results are obtained for previously prohibitive energy regimes.
Optimal management of low-risk gestational trophoblastic neoplasia.
Goldstein, Donald P; Berkowitz, Ross S; Horowitz, Neil S
2015-01-01
Low-risk gestational trophoblastic neoplasia is a highly curable form of gestational trophoblastic neoplasia that arises largely from molar pregnancy and, on rare occasions, from other types of gestations. Risk is defined as the risk of developing drug resistance as determined by the WHO Prognostic Scoring System. All patients with non-metastatic disease and patients with risk scores <7 are considered to have low-risk disease. The sequential use of methotrexate and actinomycin D is associated with a complete remission rate of 80%. The most commonly utilized regimen for the treatment of patients resistant to single-agent chemotherapy is a multiagent regimen consisting of etoposide, methotrexate, actinomycin D, vincristine, and cyclophosphamide. The measurement of human chorionic gonadotropin provides an accurate and reliable tumor marker for diagnosis, for monitoring the effects of chemotherapy, and for follow-up to detect recurrence. Pregnancy is allowed after 12 months of normal serum tumor marker levels. Pregnancy outcomes are similar to those of the normal population.
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2017-02-01
In the present paper, the minimal investment risk for a portfolio optimization problem with imposed budget and investment concentration constraints is considered using replica analysis. Since the minimal investment risk is influenced by the investment concentration constraint (as well as the budget constraint), it is intuitive that the minimal investment risk for the problem with an investment concentration constraint can be larger than that without the constraint (that is, with only the budget constraint). Moreover, a numerical experiment shows the effectiveness of our proposed analysis. In contrast, the standard operations research approach failed to identify accurately the minimal investment risk of the portfolio optimization problem.
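The subset relation behind the abstract's claim (adding a concentration constraint cannot lower the minimal risk) can be checked numerically. The sketch below is a toy setup with an empirical covariance, a budget constraint on the weight sum, and an equality constraint on the sum of squared weights; all numbers are illustrative, and the solver stands in for the replica-analysis treatment.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N, T = 8, 50
R = rng.normal(size=(T, N))   # return scenarios (toy data)
C = R.T @ R / T               # empirical covariance matrix

def risk(w):
    """Quadratic investment risk proxy."""
    return 0.5 * w @ C @ w

budget = {"type": "eq", "fun": lambda w: w.sum() - N}
w0 = np.ones(N)               # equal-weight start, satisfies the budget

# Budget constraint only
r_budget = minimize(risk, w0, constraints=[budget]).fun

# Budget + investment concentration constraint: fix sum of squared weights
# (2N is feasible since Cauchy-Schwarz forces sum w_i^2 >= N)
conc = {"type": "eq", "fun": lambda w: (w ** 2).sum() - 2 * N}
r_both = minimize(risk, w0, constraints=[budget, conc]).fun
```

Because the doubly constrained feasible set is a subset of the budget-only set, the constrained minimum can only be equal or larger, matching the abstract's intuition.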
Prioritized Reliability-Risk$ Optimization for a Hydro-Thermal System (CddHoward Consulting Ltd)
NASA Astrophysics Data System (ADS)
Howard, C. D.; Howard, J. C.
2010-12-01
This paper provides a real-world example of a hydro-economic model for risk estimation and impact assessment. Ensembles of forecasted reservoir inflows were used to drive an integrated hydro-thermal, two-reservoir, stochastic, long-term economic optimization model. The optimization objective was to minimize long-term thermal energy purchases by recommending the best current operating decisions within policies on long-term risk (probabilistic cost). Risk was determined as a function of reliability in meeting the forecasted energy load. (Topics illustrated: reservoir end-of-month rule curves, reliability, thermal risk, and hydro revenue.)
CFD Optimization on Network-Based Parallel Computer System
NASA Technical Reports Server (NTRS)
Cheung, Samson H.; VanDalsem, William (Technical Monitor)
1994-01-01
Combining multiple engineering workstations into a network-based heterogeneous parallel computer allows the application of aerodynamic optimization with advanced computational fluid dynamics codes, which is computationally expensive on a mainframe supercomputer. This paper introduces a nonlinear quasi-Newton optimizer designed for this network-based heterogeneous parallel computer, running on the Parallel Virtual Machine software, and presents the methodology behind coupling a parabolized Navier-Stokes flow solver to the nonlinear optimizer. This parallel optimization package has been applied to reduce the wave drag of a body of revolution and of a wing/body configuration, achieving drag reductions of 5% to 6%.
Defining a region of optimization based on engine usage data
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-08-04
Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.
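A small sketch of the core idea, identifying the range of the most commonly detected operating conditions and gating the optimization routine on it, is given below. Variable names, the histogram approach, and the coverage threshold are assumptions for illustration, not the patented method.

```python
import numpy as np

def optimization_region(samples, coverage=0.8, bins=20):
    """Return (lo, hi) spanning the most frequently observed operating points.

    Bins are ranked by occupancy and accumulated until `coverage` of all
    samples is covered; the region is the span of the kept bins.
    """
    counts, edges = np.histogram(samples, bins=bins)
    order = np.argsort(counts)[::-1]   # most common bins first
    total, kept, acc = counts.sum(), [], 0
    for b in order:
        kept.append(b)
        acc += counts[b]
        if acc >= coverage * total:
            break
    lo = min(edges[b] for b in kept)
    hi = max(edges[b + 1] for b in kept)
    return lo, hi

def should_optimize(operating_point, region):
    """Initiate the optimization routine only inside the defined region."""
    lo, hi = region
    return lo <= operating_point <= hi

# Simulated engine-speed log (rpm), clustered around a cruising point
rng = np.random.default_rng(0)
speeds = rng.normal(2000.0, 150.0, 5000)
region = optimization_region(speeds)
```

Gating on the high-occupancy region concentrates calibration effort where it pays off most, which is the rationale the abstract describes.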
An approximation based global optimization strategy for structural synthesis
NASA Technical Reports Server (NTRS)
Sepulveda, A. E.; Schmit, L. A.
1991-01-01
A global optimization strategy for structural synthesis based on approximation concepts is presented. The methodology involves the solution of a sequence of highly accurate approximate problems using a global optimization algorithm. The global optimization algorithm implemented consists of a branch and bound strategy based on the interval evaluation of the objective function and constraint functions, combined with a local feasible directions algorithm. The approximate design optimization problems are constructed using first order approximations of selected intermediate response quantities in terms of intermediate design variables. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.
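A minimal one-dimensional sketch of interval-based branch and bound follows: boxes whose interval lower bound exceeds the incumbent value cannot contain the optimum and are pruned. The objective and the interval arithmetic are illustrative stand-ins for the structural-synthesis problem.

```python
def sq_interval(lo, hi):
    """Interval extension of x**2 over [lo, hi]."""
    cands = (lo * lo, hi * hi)
    return (0.0 if lo <= 0.0 <= hi else min(cands), max(cands))

def f(x):
    """Toy objective with global minimum 0 at x = 1."""
    return (x - 1.0) ** 2

def f_interval(lo, hi):
    """Interval bound of f over [lo, hi]."""
    return sq_interval(lo - 1.0, hi - 1.0)

def branch_and_bound(lo, hi, tol=1e-6):
    best = f(lo)                     # incumbent upper bound
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        flo, _ = f_interval(a, b)
        if flo > best:               # interval lower bound prunes the box
            continue
        mid = 0.5 * (a + b)
        best = min(best, f(mid))     # update incumbent at the midpoint
        if b - a > tol:              # bisect until the box is small enough
            boxes += [(a, mid), (mid, b)]
    return best

minimum = branch_and_bound(-4.0, 6.0)
```

The same prune-or-bisect loop generalizes to boxes in several design variables, with the local feasible-directions step supplying tighter incumbents.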
Knowledge-based representations of risk beliefs.
Tonn, B E; Travis, C B; Goeltz, R T; Phillippi, R H
1990-03-01
Beliefs about risks associated with two risk agents, AIDS and toxic waste, are modeled using knowledge-based methods and elicited from subjects via interactive computer technology. A concept net is developed to organize subject responses concerning the consequences of the risk agents. It is found that death and adverse personal emotional and sociological consequences are most associated with AIDS. Toxic waste is most associated with environmental problems. These consequence profiles are quite dissimilar, although past work in risk perception would have judged the risk agents as being quite similar. Subjects frequently used causal semantics to represent their beliefs and "% of time" instead of "probability" to represent likelihoods. The news media is the most prevalent source of risk information although experiences of acquaintances appear more credible. The results suggest that "broadly based risk" communication may be ineffective because people differ in their conceptual representation of risk beliefs. In general, the knowledge-based approach to risk perception representation has great potential to increase our understanding of important risk topics.
Risk analysis based on hazards interactions
NASA Astrophysics Data System (ADS)
Rossi, Lauro; Rudari, Roberto; Trasforini, Eva; De Angeli, Silvia; Becker, Joost
2017-04-01
Despite an increasing need for open, transparent, and credible multi-hazard risk assessment methods, models, and tools, the availability of comprehensive risk information needed to inform disaster risk reduction is limited, and the level of interaction across hazards is not systematically analysed. Risk assessment methodologies for different hazards often produce risk metrics that are not comparable. Hazard interactions (the consecutive occurrence of two or more different events) are generally neglected, resulting in strongly underestimated risk assessments in the most exposed areas. This study presents cases of interaction between different hazards, showing how subsidence can affect coastal and river flood risk (Jakarta and Bandung, Indonesia) or how flood risk is modified after a seismic event (Italy). The analysis of well-documented real study cases, based on a combination of Earth Observation and in-situ data, serves as a basis for the formalisation of a multi-hazard methodology, identifying gaps and research frontiers. Multi-hazard risk analysis is performed through the RASOR platform (Rapid Analysis and Spatialisation Of Risk). A scenario-driven query system allows users to simulate future scenarios based on existing and assumed conditions, to compare them with historical scenarios, and to model multi-hazard risk both before and during an event (www.rasor.eu).
Risk preferences: consequences for test and treatment thresholds and optimal cutoffs.
Felder, Stefan; Mayrhofer, Thomas
2014-01-01
Risk attitudes include risk aversion as well as higher-order risk preferences such as prudence and temperance. This article analyzes the effects of such preferences on medical test and treatment decisions, represented either by test and treatment thresholds or-when the test result is not given-by optimal cutoff values for diagnostic tests. For a risk-averse decision maker, effective treatment is a risk-reducing strategy since it prevents the low health outcome of forgoing treatment in the sick state. Compared with risk neutrality, risk aversion thus lowers both the test and the treatment threshold and decreases the optimal test cutoff value. Risk vulnerability, which combines risk aversion, prudence, and temperance, is relevant if there is a comorbidity risk: thresholds and optimal cutoff values decrease even more. Since common utility functions imply risk vulnerability, our findings suggest that diagnostics in low prevalence settings (e.g., screening) may be considered more beneficial when risk preferences are taken into account.
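The threshold logic in the abstract can be made concrete with a small numerical sketch: the treatment threshold is the disease probability at which expected utility of treating equals that of not treating, and a concave (risk-averse) utility pushes it down because treatment removes the low sick-state outcome. Health-outcome values below are illustrative assumptions.

```python
def threshold(u):
    """Disease probability p* where EU(treat) = EU(no treat), by bisection.

    Outcomes (arbitrary health units, assumed): healthy = 100, sick
    untreated = 20, treated in either state = 80, so treatment is the
    risk-reducing option.
    """
    healthy, sick, treated = 100.0, 20.0, 80.0
    lo, hi = 0.0, 1.0
    for _ in range(60):
        p = 0.5 * (lo + hi)
        eu_no = (1 - p) * u(healthy) + p * u(sick)
        eu_treat = u(treated)
        if eu_no > eu_treat:
            lo = p       # below threshold: still better not to treat
        else:
            hi = p
    return 0.5 * (lo + hi)

p_neutral = threshold(lambda x: x)         # risk-neutral (linear utility)
p_averse = threshold(lambda x: x ** 0.5)   # risk-averse (concave utility)
```

With these numbers the risk-neutral threshold is 20/80 = 0.25, and the concave utility lowers it, mirroring the article's conclusion that risk aversion lowers test and treatment thresholds.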
Shape optimization for contact problems based on isogeometric analysis
NASA Astrophysics Data System (ADS)
Horn, Benjamin; Ulbrich, Stefan
2016-08-01
We consider the shape optimization for mechanical connectors. To avoid the gap between the representation in CAD systems and the finite element simulation used by mathematical optimization, we choose an isogeometric approach for the solution of the contact problem within the optimization method. This leads to a shape optimization problem governed by an elastic contact problem. We handle the contact conditions using the mortar method and solve the resulting contact problem with a semismooth Newton method. The optimization problem is nonconvex and nonsmooth due to the contact conditions. To reduce the number of simulations, we use a derivative based optimization method. With the adjoint approach the design derivatives can be calculated efficiently. The resulting optimization problem is solved with a modified Bundle Trust Region algorithm.
Risk-Based Explosive Safety Analysis
2016-11-30
Safety siting of energetic liquids and propellants can be greatly aided by the use of risk-based methodologies. The low probability of exposed personnel and the … risk-based analysis of scenario 2 would likely determine that the hazard of death or injury to any single person is low due to the separation distance
[Radiotherapy quality and risk manager role optimization in 2017].
Ponsard, N; Brusadin, G; Schick, U
2017-10-01
The quality and risk manager works within a regulated framework that delimits his or her missions. Nevertheless, the variety among centers generates heterogeneous situations regarding the positioning and range of action of this role. A well-defined framework is needed in order to ratify the legitimacy and recognition of the quality and risk manager's main function. Copyright © 2017 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
Optimal trajectories based on linear equations
NASA Technical Reports Server (NTRS)
Carter, Thomas E.
1990-01-01
The principal results of a recent theory of fuel-optimal space trajectories for linear differential equations are presented. Both impulsive and bounded-thrust problems are treated. A new form of the Lawden primer vector is found that is identical for both problems. For this reason, starting iterates from the solution of the impulsive problem are highly effective in the solution of the two-point boundary-value problem associated with bounded thrust. These results were applied to the problem of fuel-optimal maneuvers of a spacecraft near a satellite in circular orbit using the Clohessy-Wiltshire equations. For this case, two-point boundary-value problems were solved using a microcomputer, and optimal trajectory shapes were displayed. The results of this theory can also be applied if the satellite is in an arbitrary Keplerian orbit through the use of the Tschauner-Hempel equations. A new form of the solution of these equations has been found that is identical for elliptical, parabolic, and hyperbolic orbits except in the way that a certain integral is evaluated. For elliptical orbits this integral is evaluated through the use of the eccentric anomaly; an analogous evaluation is performed for hyperbolic orbits.
Copenhaver, Michael M; Lee, I-Ching
2006-11-01
Research on behavioral HIV risk reduction interventions for injection drug users (IDUs) has focused on primary outcomes (e.g., reduced injection drug use, increased condom use) but has not fully examined the respective roles played by intervention components on these primary outcomes. In this paper, we present a structural equation modeling (SEM) approach in which we specify the causal pathways leading from theory-based intervention components to risk reduction outcomes among a sample of primarily IDUs (n = 226) participating in an inner-city community-based methadone maintenance program. Similar pathways were found leading to both drug- and sexual-related risk reduction outcomes. Findings suggest the importance of targeting participants' risk reduction motivation and behavioral skills versus employing more passive informational strategies. Findings also indicate that our intervention may be optimized by focusing more on participants' risk reduction motivation within the sexual-related content and placing equivalent emphasis on participants' risk reduction knowledge, motivation, and behavioral skills within the drug-related content. By quantifying the specific linkage between intervention components and risk reduction outcomes, our SEM findings offer empirical guidance for optimizing this intervention. This strategy may also serve as a useful theory- and data-driven means to inform the refinement of other behavioral interventions.
Mice can count and optimize count-based decisions.
Çavdaroğlu, Bilgehan; Balcı, Fuat
2016-06-01
Previous studies showed that rats and pigeons can count their responses, and the resultant count-based judgments exhibit the scalar property (also known as Weber's law), a psychophysical property that also characterizes interval-timing behavior. Animals were found to take a nearly normative account of these well-established endogenous uncertainty characteristics in their time-based decision-making. On the other hand, no study has yet tested the implications of the scalar property of numerosity representations for reward-rate maximization in count-based decision-making. The current study tested mice on a task that required them to press one lever a minimum number of times before pressing a second lever to collect the armed reward (fixed consecutive number schedule, FCN). Emitting fewer than the required number of responses reset the response count without reinforcement, whereas emitting at least the minimum number of responses reset the counter with reinforcement. Each mouse was tested on three different FCN schedules (FCN10, FCN20, FCN40). The number of responses emitted on the first lever before pressing the second lever constituted the main unit of analysis. Our findings showed, for the first time, that mice count their responses with the scalar property. We then defined the reward-rate-maximizing numerical decision strategies in this task based on subject-based estimates of the endogenous counting uncertainty. Our results showed that mice learn to maximize the reward rate by incorporating the uncertainty in their numerosity judgments into their count-based decisions. Our findings extend the scope of optimal temporal risk assessment to the domain of count-based decision-making.
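The optimality analysis can be sketched numerically: with scalar counting noise (standard deviation proportional to the target count), the reward-rate-maximizing strategy overshoots the FCN minimum to avoid costly resets. The Weber fraction and trial overhead below are illustrative assumptions, not the paper's fitted values.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def reward_rate(target, minimum=10, weber=0.16, overhead=5.0):
    """Expected reward per response on an FCN schedule.

    The emitted count is modelled as Normal(target, weber * target); a trial
    succeeds when the count reaches `minimum`, and each trial costs the
    target presses plus a fixed overhead.
    """
    sd = weber * target
    p_success = 1.0 - norm_cdf((minimum - target) / sd)
    return p_success / (target + overhead)

# Optimal target for FCN10 under these assumptions: scan candidate targets
best = max(range(10, 40), key=reward_rate)
```

Aiming exactly at the minimum yields only a 50% success rate under symmetric noise, so the maximizing target lies above it, the same qualitative pattern the mice exhibited.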
Gao, Y; Liu, B; Kalra, M; Caracappa, P; Liu, T; Li, X; Xu, X
2015-06-15
Purpose: X-rays from CT scans can increase cancer risk to patients. The lifetime attributable risk of cancer incidence for adult patients has been investigated and shown to decrease as patients age. However, a new risk model shows an increasing risk trend for several radiosensitive organs in middle-aged patients. This study investigates the feasibility of a general method for optimizing tube current modulation (TCM) functions to minimize risk by reducing radiation dose to radiosensitive organs of patients. Methods: Organ-based TCM has been investigated in the literature for eye lens dose and breast dose. Adopting the concept of organ-based TCM, this study seeks to find an optimized tube current for minimal total risk to breasts and lungs by reducing dose to these organs. The contributions of each CT view to organ dose are determined through view-by-view simulation of the CT scan using a GPU-based fast Monte Carlo code, ARCHER. A linear programming problem is established for tube current optimization, with the Monte Carlo results as weighting factors at each view. A pre-determined dose is used as the upper dose boundary, and the tube current of each view is optimized to minimize the total risk. Results: An optimized tube current was found to minimize the total risk to lungs and breasts: compared to a fixed current, the risk was reduced by 13%, with breast dose reduced by 38% and lung dose reduced by 7%. The average tube current was maintained during optimization to preserve image quality. In addition, dose to other organs in the chest region was only slightly affected, with relative changes in dose smaller than 10%. Conclusion: Optimized tube current plans can be generated to minimize cancer risk to lungs and breasts while maintaining image quality. In the future, various risk models and a greater number of projections per rotation will be simulated on phantoms of different gender and age. National Institutes of Health R01EB015478.
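The linear-programming formulation described above can be sketched directly. Random numbers stand in for the per-view Monte Carlo dose weighting factors, and the risk weights and current bounds are assumptions; the one real constraint kept from the abstract is a fixed average tube current.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
views = 24
# Per-mA dose contributions to each organ, by view (stand-ins for the
# ARCHER Monte Carlo weighting factors)
breast = rng.uniform(0.5, 1.5, views)
lung = rng.uniform(0.5, 1.5, views)
risk = 2.0 * breast + 1.0 * lung   # linear risk proxy; weights are assumed

res = linprog(
    c=risk,                                   # minimize total organ risk
    A_eq=[np.ones(views)], b_eq=[views * 1.0],  # keep mean current at 1.0
    bounds=[(0.2, 2.0)] * views,              # per-view current limits
)
optimized_risk = res.fun
fixed_risk = float(risk.sum())                # uniform current of 1.0 per view
```

Because the uniform-current plan is feasible, the LP solution can only match or beat it, which is the mechanism behind the reported 13% risk reduction at unchanged average current.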
Drukker, C A; Nijenhuis, M V; Bueno-de-Mesquita, J M; Retèl, V P; van Harten, W H; van Tinteren, H; Wesseling, J; Schmidt, M K; Van't Veer, L J; Sonke, G S; Rutgers, E J T; van de Vijver, M J; Linn, S C
2014-06-01
Clinical guidelines for breast cancer treatment differ in their selection of patients at a high risk of recurrence who are eligible to receive adjuvant systemic treatment (AST). The 70-gene signature is a molecular tool to better guide AST decisions. The aim of this study was to evaluate whether adding the 70-gene signature to clinical risk prediction algorithms can optimize outcome prediction and consequently treatment decisions in early stage, node-negative breast cancer patients. A 70-gene signature was available for 427 patients participating in the RASTER study (cT1-3N0M0). Median follow-up was 61.6 months. Based on 5-year distant-recurrence free interval (DRFI) probabilities survival areas under the curve (AUC) were calculated and compared for risk estimations based on the six clinical risk prediction algorithms: Adjuvant! Online (AOL), Nottingham Prognostic Index (NPI), St. Gallen (2003), the Dutch National guidelines (CBO 2004 and NABON 2012), and PREDICT plus. Also, survival AUC were calculated after adding the 70-gene signature to these clinical risk estimations. Systemically untreated patients with a high clinical risk estimation but a low risk 70-gene signature had an excellent 5-year DRFI varying between 97.1 and 100 %, depending on the clinical risk prediction algorithms used in the comparison. The best risk estimation was obtained in this cohort by adding the 70-gene signature to CBO 2012 (AUC: 0.644) and PREDICT (AUC: 0.662). Clinical risk estimations by all clinical algorithms improved by adding the 70-gene signature. Patients with a low risk 70-gene signature have an excellent survival, independent of their clinical risk estimation. Adding the 70-gene signature to clinical risk prediction algorithms improves risk estimations and therefore might improve the identification of early stage node-negative breast cancer patients for whom AST has limited value. In this cohort, the PREDICT plus tool in combination with the 70-gene signature provided the
Risk based inspection for atmospheric storage tank
NASA Astrophysics Data System (ADS)
Nugroho, Agus; Haryadi, Gunawan Dwi; Ismail, Rifky; Kim, Seon Jin
2016-04-01
Corrosion is an attack that occurs on a metallic material as a result of reaction with its environment. It causes atmospheric storage tank leakage, material loss, environmental pollution, and equipment failure, affects the service life of process equipment, and ultimately leads to financial damage. Corrosion risk measurement has become a vital part of asset management for operating any aging asset at a plant. This paper provides six case studies dealing with high-speed diesel atmospheric storage tank parts at a power plant. A summary of the basic principles and procedures of corrosion risk analysis and RBI applicable to the process industries is given prior to the study. A semi-quantitative method based on the API 581 Base Resource Document was employed. The risk associated with corrosion on the equipment, in terms of its likelihood and its consequences, is discussed. The corrosion risk analysis outcome was used to formulate a risk-based inspection (RBI) method that should be a part of atmospheric storage tank operation at the plant. RBI devotes most inspection resources to 'High Risk' and 'Medium Risk' items and less to the 'Low Risk' shell. Risk categories of the evaluated equipment are illustrated through the case study outcomes.
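A semi-quantitative RBI ranking of the kind described reduces to a likelihood-consequence matrix. The sketch below illustrates that structure; the category labels and score boundaries are assumptions for illustration, not values from API 581.

```python
# Ordinal scales (assumed 5x5 matrix)
LIKELIHOOD = {"remote": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}
CONSEQUENCE = {"minor": 1, "moderate": 2, "serious": 3, "major": 4, "severe": 5}

def risk_category(likelihood, consequence):
    """Map a (likelihood, consequence) pair to an inspection priority band."""
    score = LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]
    if score >= 15:
        return "High Risk"    # gets the bulk of inspection resources
    if score >= 6:
        return "Medium Risk"
    return "Low Risk"         # e.g., the tank shell in the study

cat_shell = risk_category("low", "moderate")
cat_roof = risk_category("high", "major")
```

Ranking equipment this way lets inspection effort concentrate on the 'High Risk' and 'Medium Risk' cells, as the abstract notes.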
Cost-Benefit Analysis for Optimization of Risk Protection Under Budget Constraints.
Špačková, Olga; Straub, Daniel
2015-05-01
Cost-benefit analysis (CBA) is commonly applied as a tool for deciding on risk protection. With CBA, one can identify risk mitigation strategies that lead to an optimal tradeoff between the costs of the mitigation measures and the achieved risk reduction. In practical applications of CBA, the strategies are typically evaluated through efficiency indicators such as the benefit-cost ratio (BCR) and the marginal cost (MC) criterion. In many of these applications, the BCR is not consistently defined, which, as we demonstrate in this article, can lead to the identification of suboptimal solutions. This is of particular relevance when the overall budget for risk reduction measures is limited and an optimal allocation of resources among different subsystems is necessary. We show that this problem can be formulated as a hierarchical decision problem, where the general rules and decisions on the available budget are made at a central level (e.g., central government agency, top management), whereas the decisions on the specific measures are made at the subsystem level (e.g., local communities, company division). It is shown that the MC criterion provides optimal solutions in such hierarchical optimization. Since most practical applications only include a discrete set of possible risk protection measures, the MC criterion is extended to this situation. The findings are illustrated through a hypothetical numerical example. This study was prepared as part of our work on the optimal management of natural hazard risks, but its conclusions also apply to other fields of risk management.
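For a discrete set of measures under a fixed budget, the marginal-cost criterion amounts to funding measures in order of risk reduction per unit cost. The greedy sketch below illustrates that hierarchical allocation across subsystems; the measure list and budget are invented numbers, and greedy selection is only a heuristic for the underlying knapsack problem.

```python
def mc_allocate(measures_by_subsystem, budget):
    """Fund discrete measures by decreasing marginal benefit-cost ratio.

    measures_by_subsystem maps subsystem name -> list of (cost, risk
    reduction) options; the central level fixes only the total budget.
    """
    pool = [(cost, red, sub)
            for sub, ms in measures_by_subsystem.items()
            for cost, red in ms]
    chosen, spent, reduced = [], 0.0, 0.0
    for cost, red, sub in sorted(pool, key=lambda m: m[1] / m[0], reverse=True):
        if spent + cost <= budget:   # fund it if the budget still allows
            chosen.append((sub, cost, red))
            spent += cost
            reduced += red
    return chosen, spent, reduced

# Illustrative measures: (cost, achieved risk reduction), same units
measures = {
    "levee": [(4.0, 10.0), (6.0, 9.0)],
    "warning_system": [(2.0, 6.0), (3.0, 3.0)],
}
chosen, spent, reduced = mc_allocate(measures, budget=6.0)
```

Ranking by the ratio rather than by an inconsistently defined BCR is what prevents the suboptimal picks the article warns about; with these numbers the warning system's cheap measure is funded before the levee's larger one.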
Threat Based Risk Assessment for Enterprise Networks
2016-02-15
Threat-Based Risk Assessment for Enterprise Networks. Richard P. Lippmann and James F. Riordan. Protecting enterprise networks requires … To make sure that risks from all current threats are addressed, many organizations adopt a best-practices approach by installing popular … effective when performed by skilled security practitioners who understand an enterprise network, can enumerate all threats and their likelihoods, and …
NASA Astrophysics Data System (ADS)
Cai, Lanlan; Li, Peng; Luo, Qi; Zhai, Pengcheng; Zhang, Qingjie
2017-01-01
As no single thermoelectric material has presented a high figure-of-merit (ZT) over a very wide temperature range, segmented thermoelectric generators (STEGs), where the p- and n-legs are formed of different thermoelectric material segments joined in series, have been developed to improve the performance of thermoelectric generators. A crucial but difficult problem in a STEG design is to determine the optimal values of the geometrical parameters, like the relative lengths of each segment and the cross-sectional area ratio of the n- and p-legs. Herein, a multi-parameter and nonlinear optimization method, based on the Improved Powell Algorithm in conjunction with the discrete numerical model, was implemented to solve the STEG's geometrical optimization problem. The multi-parameter optimal results were validated by comparison with the optimal outcomes obtained from the single-parameter optimization method. Finally, the effect of the hot- and cold-junction temperatures on the geometry optimization was investigated. Results show that the optimal geometry parameters for maximizing the specific output power of a STEG are different from those for maximizing the conversion efficiency. Data also suggest that the optimal geometry parameters and the interfacial temperatures of the adjacent segments optimized for maximum specific output power or conversion efficiency vary with changing hot- and cold-junction temperatures. Through the geometry optimization, the CoSb3/Bi2Te3-based STEG can obtain a maximum specific output power up to 1725.3 W/kg and a maximum efficiency of 13.4% when operating at a hot-junction temperature of 823 K and a cold-junction temperature of 298 K.
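The multi-parameter geometry search can be illustrated with Powell's method, which the abstract's Improved Powell Algorithm builds on. The two design variables (segment length fraction, n/p cross-sectional area ratio) are real; the smooth efficiency surrogate below is an invented stand-in for the discrete numerical thermoelectric model.

```python
from scipy.optimize import minimize

def neg_efficiency(params):
    """Negative of a toy efficiency surrogate (assumed single-peak shape).

    Peak placed at length fraction 0.6 and area ratio 1.2 purely for
    illustration; the real optimum comes from the discrete TE model.
    """
    frac, area_ratio = params
    return -(1.0 - (frac - 0.6) ** 2 - 0.5 * (area_ratio - 1.2) ** 2)

# Derivative-free Powell search from a deliberately poor starting geometry
res = minimize(neg_efficiency, x0=[0.3, 0.8], method="Powell")
frac_opt, ratio_opt = res.x
```

Because the objective for maximum specific output power differs from that for maximum efficiency, the same search would be rerun with a different surrogate for each design goal, yielding different optimal geometries, as the abstract reports.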
Performance investigation of multigrid optimization for DNS-based optimal control problems
NASA Astrophysics Data System (ADS)
Nita, Cornelia; Vandewalle, Stefan; Meyers, Johan
2016-11-01
Optimal control theory in Direct Numerical Simulation (DNS) or Large-Eddy Simulation (LES) of turbulent flow involves large computational cost and memory overhead for the optimization of the controls. In this context, the minimization of the cost functional is typically achieved by employing gradient-based iterative methods such as quasi-Newton, truncated Newton, or nonlinear conjugate gradient. In the current work, we investigate the multigrid optimization strategy (MGOpt) in order to speed up the convergence of the damped L-BFGS algorithm for DNS-based optimal control problems. The method consists of a hierarchy of optimization problems defined on different representation levels, aiming to reduce the computational resources associated with improving the cost functional on the finest level. We examine the MGOpt efficiency for the optimization of an internal volume force distribution with the goal of reducing the turbulent kinetic energy or increasing the energy extraction in a turbulent wall-bounded flow; these problems are related, respectively, to drag reduction in boundary layers and to energy extraction in large wind farms. Results indicate that in some cases the multigrid optimization method requires up to a factor of two fewer DNS and adjoint DNS evaluations than single-grid damped L-BFGS. The authors acknowledge support from OPTEC (OPTimization in Engineering Center of Excellence, KU Leuven, Grant No. PFV/10/002).
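As a rough illustration of the MGOpt idea (the paper uses damped L-BFGS with DNS-based adjoints; this sketch substitutes plain gradient descent on a toy quadratic tracking cost), a coarse level is solved cheaply and its solution is prolonged to warm-start the fine level:

```python
import math

def cost(u, target):
    """Quadratic tracking cost, standing in for the turbulence objective."""
    return sum((ui - ti) ** 2 for ui, ti in zip(u, target))

def grad_descent(u, target, lr=0.4, iters=50):
    """Gradient descent on the tracking cost (stand-in for damped L-BFGS)."""
    for _ in range(iters):
        u = [ui - lr * 2 * (ui - ti) for ui, ti in zip(u, target)]
    return u

def prolong(u):
    """Linear interpolation from a coarse grid to one twice as fine."""
    fine = []
    for a, b in zip(u, u[1:] + u[-1:]):
        fine += [a, (a + b) / 2]
    return fine

# Hypothetical target control profile on the fine grid.
fine_target = [math.sin(math.pi * i / 15) for i in range(16)]
coarse_target = fine_target[::2]

# Nested iteration: solve cheaply on the coarse level, then refine on the
# fine level starting from the prolonged coarse solution.
u_coarse = grad_descent([0.0] * 8, coarse_target, iters=30)
u_fine = grad_descent(prolong(u_coarse), fine_target, iters=10)
```

The payoff mirrors the paper's finding: fewer fine-level (expensive) iterations are needed because the coarse level has already done most of the work.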
Optimizing solubility: kinetic versus thermodynamic solubility temptations and risks.
Saal, Christoph; Petereit, Anna Christine
2012-10-09
The aim of this study was to assess the usefulness of kinetic and thermodynamic solubility data in guiding medicinal chemistry during lead optimization. The solubility of 465 research compounds was measured using a kinetic and a thermodynamic solubility assay. In the thermodynamic assay, polarized-light microscopy was used to investigate whether the result referred to the crystalline or to the amorphous compound. Comparison of the kinetic and thermodynamic data showed that kinetic measurements frequently indicated considerably higher solubility than thermodynamic measurements. This observation is ascribed to the fact that a kinetic solubility assay typically delivers results which refer to the amorphous compound, whereas results from thermodynamic solubility determinations more frequently refer to a crystalline phase. Accordingly, thermodynamic solubility data, especially when used together with an assessment of the solid-state form, are deemed to be more useful in guiding solubility optimization for research compounds.
Design Tool Using a New Optimization Method Based on a Stochastic Process
NASA Astrophysics Data System (ADS)
Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio
Conventional optimization methods are based on a deterministic approach since their purpose is to find out an exact solution. However, such methods have initial condition dependence and the risk of falling into local solution. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require techniques based on experiences. We applied the new optimization method to a hang glider design. In this problem, both the hang glider design and its flight trajectory were optimized. The numerical calculation results prove that performance of the method is sufficient for practical use.
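The path-integral method described above returns the solution as an expected value (stochastic average) rather than as the endpoint of a single descent trajectory. A minimal sketch of that idea, using a hypothetical multimodal objective and a Boltzmann weighting whose temperature is an assumed tuning parameter:

```python
import math
import random

def stochastic_optimize(f, lo, hi, temperature=0.05, samples=20000, seed=1):
    """Estimate the minimizer of f as a Boltzmann-weighted expected value
    over uniform random samples: no initial guess, no gradient, and deep
    wells dominate the average regardless of where sampling 'starts'."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(samples):
        x = rng.uniform(lo, hi)
        w = math.exp(-f(x) / temperature)
        num += w * x
        den += w
    return num / den

# Test function with a global minimum at x = 2 and a local minimum near x = -1.5.
f = lambda x: (x - 2) ** 2 * (x + 1.5) ** 2 / 10 + 0.1 * (x - 2) ** 2
best = stochastic_optimize(f, -3.0, 3.0)
```

The local well near x = -1.5 receives exponentially small weight, so the stochastic average lands near the global minimizer, illustrating the claimed insensitivity to initial conditions.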
Optimization-Based Management of Energy Systems
2011-05-11
[Table residue: optimized energy mixes under renewable-usage constraints for NC, CO, OK, NY, and TX. Unlimited grid purchases are allowed in every state; solar PV ranges 0-35 MW, wind turbines 50-70 MW, and CHP (microturbines plus absorption chillers) 5-35 MW of microturbines. Chart residue: annual cost broken into total cost, grid energy cost, grid demand cost, heating cost, CHP natural gas cost, and diesel cost.]
Optimization-based Dynamic Human Walking Prediction
2007-01-01
[Search-snippet residue: fragmentary Robotica citations on biped gait synthesis and trajectory optimization (Chevallereau and Aoustin, 2001; Mu and Wu, 2003; Sardain and Bessonnet on forces acting on a biped robot).]
Risk-Based Comparison of Carbon Capture Technologies
Engel, David W.; Dalton, Angela C.; Dale, Crystal; Jones, Edward
2013-05-01
In this paper, we describe an integrated probabilistic risk assessment methodological framework and a decision-support tool suite for implementing systematic comparisons of competing carbon capture technologies. Culminating from a collaborative effort among national laboratories under the Carbon Capture Simulation Initiative (CCSI), the risk assessment framework and the decision-support tool suite encapsulate three interconnected probabilistic modeling and simulation components. The technology readiness level (TRL) assessment component identifies specific scientific and engineering targets required by each readiness level and applies probabilistic estimation techniques to calculate the likelihood of graded as well as nonlinear advancement in technology maturity. The technical risk assessment component focuses on identifying and quantifying risk contributors, especially stochastic distributions for significant risk contributors, performing scenario-based risk analysis, and integrating with carbon capture process model simulations and optimization. The financial risk component estimates the long-term return on investment based on energy retail pricing, production cost, operating and power replacement cost, plant construction and retrofit expenses, and potential tax relief, expressed probabilistically as net present value distributions over various forecast horizons.
Optimal fractional order PID design via Tabu Search based algorithm.
Ateş, Abdullah; Yeroglu, Celaleddin
2016-01-01
This paper presents an optimization method based on the Tabu Search Algorithm (TSA) to design a Fractional-Order Proportional-Integral-Derivative (FOPID) controller. All FOPID parameters are computed from random initial conditions using the proposed optimization method. Illustrative examples demonstrate the performance of the proposed FOPID controller design method.
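A minimal tabu search over a discretized parameter grid illustrates the TSA mechanics: move to the best non-tabu neighbor each iteration, keeping a short-term memory of visited points to escape local minima. The five-dimensional (Kp, Ki, Kd, lambda, mu) cost below is a stand-in quadratic, not the closed-loop FOPID cost from the paper:

```python
import itertools

def tabu_search(cost, start, step=0.25, iters=200, tabu_len=15):
    """Minimal tabu search on a uniform grid of step `step` per parameter."""
    current = tuple(start)
    best = current
    tabu = [current]
    for _ in range(iters):
        # All grid neighbors that change at least one coordinate by +/- step.
        neighbors = [tuple(c + d * step for c, d in zip(current, delta))
                     for delta in itertools.product((-1, 0, 1), repeat=len(current))
                     if any(delta)]
        candidates = [n for n in neighbors if n not in tabu] or neighbors
        current = min(candidates, key=cost)
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)          # forget the oldest tabu entry
        if cost(current) < cost(best):
            best = current
    return best

# Hypothetical tuning cost for (Kp, Ki, Kd, lambda, mu): a quadratic bowl whose
# minimum stands in for the best closed-loop response found by simulation.
ideal = (2.0, 1.0, 0.5, 0.9, 1.1)
cost = lambda p: sum((a - b) ** 2 for a, b in zip(p, ideal))
tuned = tabu_search(cost, start=(1.0, 0.0, 0.0, 1.0, 1.0))
```

In a real design loop, `cost` would run a closed-loop simulation of the FOPID controller and score the step response.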
Optimization of FPGA-based Moore FSM
NASA Astrophysics Data System (ADS)
Barkalov, Aleksander; Titarenko, Larysa; Chmielewski, Sławomir
2014-10-01
A method is proposed for hardware reduction in FPGA-based Moore FSMs. It is based on using two sources of codes and reduces the number of LUTs in the FSM circuit. The results of the investigations are shown.
Adjoint-based optimization of fish swimming gaits
NASA Astrophysics Data System (ADS)
Floryan, Daniel; Rowley, Clarence W.; Smits, Alexander J.
2016-11-01
We study a simplified model of fish swimming, namely a flat plate periodically pitching about its leading edge. Using gradient-based optimization, we seek periodic gaits that are optimal in regards to a particular objective (e.g. maximal thrust). The two-dimensional immersed boundary projection method is used to investigate the flow states, and its adjoint formulation is used to efficiently calculate the gradient of the objective function needed for optimization. The adjoint method also provides sensitivity information, which may be used to elucidate the physics responsible for optimality. Supported under ONR MURI Grants N00014-14-1-0533, Program Manager Bob Brizzolara.
Optimization of agricultural field workability predictions for improved risk management
USDA-ARS?s Scientific Manuscript database
Risks introduced by weather variability are key considerations in agricultural production. The sensitivity of agriculture to weather variability is of special concern in the face of climate change. In particular, the availability of workable days is an important consideration in agricultural practic...
Feng, Qiang; Chen, Yiran; Sun, Bo; Li, Songjie
2014-01-01
An optimization method for condition-based maintenance (CBM) of an aircraft fleet considering prognostics uncertainty is proposed. The CBM and dispatch process of the fleet is analyzed first, and the alternative strategy sets for a single aircraft are given. Then, the optimization problem of fleet CBM with lower maintenance cost and dispatch risk is translated into the combinatorial optimization problem of single-aircraft strategies. The remaining useful life (RUL) distribution of each key line-replaceable module (LRM) is transformed into the failure probability of the aircraft, and the fleet health status matrix is established. The calculation method for mission costs and risks based on the health status matrix and maintenance matrix is given. Further, an optimization method for fleet dispatch and CBM under acceptable risk is proposed based on an improved genetic algorithm. Finally, a fleet of 10 aircraft is studied to verify the proposed method. The results show that the method achieves optimization and control of the aircraft fleet oriented to mission success.
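The fleet-level combinatorial problem can be sketched with a small genetic algorithm over per-aircraft strategies. The strategy costs, risk numbers, and constraints below are invented for illustration; the paper's health-status and maintenance matrices are not reproduced:

```python
import random

# Hypothetical per-aircraft strategies: (maintenance cost, mission risk, dispatched?)
STRATEGIES = {
    "dispatch": (1.0, 0.08, True),
    "maintain": (4.0, 0.01, False),
    "standby":  (0.5, 0.03, False),
}
NAMES = list(STRATEGIES)
N_AIRCRAFT, NEED = 10, 6      # fleet size, aircraft required for the mission
RISK_LIMIT = 0.6              # acceptable total dispatch risk

def fitness(plan):
    """Total cost plus heavy penalties for violating risk or dispatch demands."""
    cost = sum(STRATEGIES[s][0] for s in plan)
    risk = sum(STRATEGIES[s][1] for s in plan)
    flown = sum(STRATEGIES[s][2] for s in plan)
    penalty = 100 * max(0, risk - RISK_LIMIT) + 100 * max(0, NEED - flown)
    return cost + penalty

def ga(pop_size=40, gens=60, seed=3):
    """Elitist GA with one-point crossover and single-gene mutation."""
    rng = random.Random(seed)
    pop = [[rng.choice(NAMES) for _ in range(N_AIRCRAFT)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, N_AIRCRAFT)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:
                child[rng.randrange(N_AIRCRAFT)] = rng.choice(NAMES)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

plan = ga()
```

The penalty terms play the role of the "acceptable risk" constraint: infeasible plans survive selection only long enough to be repaired by crossover and mutation.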
Optimization and analysis of a CFJ-airfoil using adaptive meta-model based design optimization
NASA Astrophysics Data System (ADS)
Whitlock, Michael D.
Although strong potential for the Co-Flow Jet (CFJ) flow separation control system has been demonstrated in the existing literature, little effort has been applied towards optimization of the design for a given application. The high-dimensional design space makes any optimization computationally intensive. This work presents the optimization of a CFJ airfoil as applied to a low Reynolds number regime using meta-model based design optimization (MBDO). The approach consists of computational fluid dynamics (CFD) analysis coupled with a surrogate model derived using Kriging. A genetic algorithm (GA) is then used to perform optimization on the efficient surrogate model. MBDO was shown to be an effective and efficient approach to solving the CFJ design problem. The final solution set was found to decrease drag by 100% while increasing lift by 42%. When validated, the final solution was found to be within one standard deviation of the CFD model it represents.
Ant colony optimization-based firewall anomaly mitigation engine.
Penmatsa, Ravi Kiran Varma; Vatsavayi, Valli Kumari; Samayamantula, Srinivas Kumar
2016-01-01
A firewall is the most essential component of network perimeter security. Due to human error and the involvement of multiple administrators in configuring firewall rules, there exist common anomalies in firewall rulesets such as Shadowing, Generalization, Correlation, and Redundancy. There is a need for research on efficient ways of resolving such anomalies. The challenge is also to see that the reordered or resolved ruleset conforms to the organization's framed security policy. This study proposes an ant colony optimization (ACO)-based anomaly resolution and reordering of firewall rules called ACO-based firewall anomaly mitigation engine. Modified strategies are also introduced to automatically detect these anomalies and to minimize manual intervention of the administrator. Furthermore, an adaptive reordering strategy is proposed to aid faster reordering when a new rule is appended. The proposed approach was tested with different firewall policy sets. The results were found to be promising in terms of the number of conflicts resolved, with minimal availability loss and marginal security risk. This work demonstrated the application of a metaheuristic search technique, ACO, in improving the performance of a packet-filter firewall with respect to mitigating anomalies in the rules, and at the same time demonstrated conformance to the security policy.
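Shadowing, one of the four anomalies listed, is straightforward to detect mechanically. The sketch below uses exact-match fields with "*" wildcards (a simplification; real engines compare address and port ranges) and only detects the anomaly rather than performing the paper's ACO-based reordering:

```python
def subsumes(a, b):
    """True if field value a matches everything field value b matches."""
    return a == "*" or a == b

def shadowed_rules(rules):
    """Return indices of rules fully shadowed by an earlier rule with a
    different action (one of the classic firewall ruleset anomalies)."""
    anomalies = []
    for j, rj in enumerate(rules):
        for ri in rules[:j]:
            fields_ok = all(subsumes(ri[f], rj[f])
                            for f in ("proto", "src", "dst", "port"))
            if fields_ok and ri["action"] != rj["action"]:
                anomalies.append(j)
                break
    return anomalies

# Hypothetical ruleset: rule 2 can never fire because rule 0 already denies
# every packet it would allow.
rules = [
    {"proto": "tcp", "src": "*", "dst": "10.0.0.0/8", "port": "*",  "action": "deny"},
    {"proto": "udp", "src": "*", "dst": "*",          "port": "53", "action": "allow"},
    {"proto": "tcp", "src": "*", "dst": "10.0.0.0/8", "port": "80", "action": "allow"},
]
print(shadowed_rules(rules))  # [2]
```

An ACO-style engine would then search for a reordering or resolution of the flagged rules that preserves the framed security policy.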
Optimal Predator Risk Assessment by the Sonar-Jamming Arctiine Moth Bertholdia trigona
Corcoran, Aaron J.; Wagner, Ryan D.; Conner, William E.
2013-01-01
Nearly all animals face a tradeoff between seeking food and mates and avoiding predation. Optimal escape theory holds that an animal confronted with a predator should only flee when benefits of flight (increased survival) outweigh the costs (energetic costs, lost foraging time, etc.). We propose a model for prey risk assessment based on the predator's stage of attack. Risk level should increase rapidly from when the predator detects the prey to when it commits to the attack. We tested this hypothesis using a predator – the echolocating bat – whose active biosonar reveals its stage of attack. We used a prey defense – clicking used for sonar jamming by the tiger moth Bertholdia trigona – that can be readily studied in the field and laboratory and is enacted simultaneously with evasive flight. We predicted that prey employ defenses soon after being detected and targeted, and that prey defensive thresholds discriminate between legitimate predatory threats and false threats where a nearby prey is attacked. Laboratory and field experiments using playbacks of ultrasound signals and naturally behaving bats, respectively, confirmed our predictions. Moths clicked soon after bats detected and targeted them. Also, B. trigona clicking thresholds closely matched predicted optimal thresholds for discriminating legitimate and false predator threats for bats using search and approach phase echolocation – the period when bats are searching for and assessing prey. To our knowledge, this is the first quantitative study to correlate the sensory stimuli that trigger defensive behaviors with measurements of signals provided by predators during natural attacks in the field. We propose theoretical models for explaining prey risk assessment depending on the availability of cues that reveal a predator's stage of attack. PMID:23671686
Trajectory optimization based on differential inclusion
NASA Technical Reports Server (NTRS)
Seywald, Hans
1993-01-01
A method for generating finite-dimensional approximations to the solutions of optimal control problems is introduced. By employing a description of the dynamical system in terms of its attainable sets instead of differential equations, the controls are completely eliminated from the system model. Besides reducing the dimensionality of the discretized problem compared to state-of-the-art collocation methods, this approach also alleviates the search for initial guesses from which standard gradient search methods are able to converge. The mechanics of the new method are illustrated on a simple double integrator problem. The performance of the new algorithm is demonstrated on a 1-D rocket ascent problem ('Goddard Problem') in the presence of a dynamic pressure constraint.
Power Grid De-icing Optimal Plan Based on Fractional Sieve Method
NASA Astrophysics Data System (ADS)
Xie, Guangbin; Lin, Meihan; Li, Huaqiang
2017-05-01
Aiming at the reduced reliability and increased security risk of the system during the DC de-icing period, a decision-making model based on the fractional sieve method was proposed. This model introduced risk assessment theory and took into account a comprehensive failure probability model of protection actions and ice cover. Considering the de-icing conditions, a DC de-icing strategy model was proposed with the objective function of minimizing both the load shedding and the operating risk. The objective function was optimized by the particle swarm optimization algorithm and the fractional sieve method. Simulation results on the IEEE 30-bus system indicated that the load loss caused by de-icing and the operational risk of the system could be effectively reduced by the proposed model. It provides a reference for power departments making de-icing plans.
Image quality optimization using an x-ray spectra model-based optimization method
NASA Astrophysics Data System (ADS)
Gordon, Clarence L., III
2000-04-01
Several x-ray parameters must be optimized to deliver exceptional fluoroscopic and radiographic x-ray Image Quality (IQ) for the large variety of clinical procedures and patient sizes performed on a cardiac/vascular x-ray system. The optimal choice varies as a function of the objective of the medical exam, the patient size, local regulatory requirements, and the operational range of the system. As a result, many distinct combinations are required to successfully operate the x-ray system and meet the clinical imaging requirements. Presented here is a new, configurable, and automatic method to perform x-ray technique and IQ optimization using an x-ray spectral model-based simulation of the x-ray generation and detection system. This method incorporates many aspects and requirements of the clinical environment, and a complete description of the specific x-ray system. First, the algorithm requires specific inputs: clinically relevant performance objectives, system hardware configuration, and system operational range. Second, the optimization is performed for a Primary Optimization Strategy versus patient thickness, e.g., maximum contrast. Finally, in the case where there are multiple operating points which meet the Primary Optimization Strategy, a Secondary Optimization Strategy, e.g., to minimize patient dose, is utilized to determine the final set of optimal x-ray techniques.
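The primary/secondary strategy cascade can be shown with a toy candidate table; the kVp/mA points, contrast scores, and dose numbers below are invented, not outputs of the spectral model:

```python
# Candidate operating points for one patient thickness (hypothetical numbers).
candidates = [
    {"kvp": 60, "ma": 4.0, "contrast": 0.90, "dose": 3.1},
    {"kvp": 70, "ma": 3.0, "contrast": 0.84, "dose": 2.2},
    {"kvp": 70, "ma": 3.5, "contrast": 0.90, "dose": 2.6},
    {"kvp": 80, "ma": 2.5, "contrast": 0.75, "dose": 1.6},
]

def select_technique(points, dose_limit):
    """Primary strategy: maximize contrast within the operating range.
    Secondary strategy: among ties on the primary objective, minimize dose."""
    feasible = [p for p in points if p["dose"] <= dose_limit]
    best_contrast = max(p["contrast"] for p in feasible)
    tied = [p for p in feasible if p["contrast"] == best_contrast]
    return min(tied, key=lambda p: p["dose"])

choice = select_technique(candidates, dose_limit=3.0)
print(choice["kvp"], choice["ma"])  # 70 3.5
```

In the described system this selection would be repeated per patient thickness, with the candidate table generated by the x-ray spectral model rather than hand-entered.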
Performance optimization of web-based medical simulation.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2013-01-01
This paper presents a technique for performance optimization of multimodal interactive web-based medical simulation. A web-based simulation framework is promising for easy access and wide dissemination of medical simulation. However, the real-time performance of the simulation depends strongly on hardware capability on the client side, and providing consistent simulation across different hardware is critical for reliable medical simulation. This paper proposes a nonlinear mixed integer programming model to optimize the performance of visualization and physics computation while considering hardware capability and application-specific constraints. The optimization model identifies and parameterizes the rendering and computing capabilities of the client hardware using an exploratory proxy code. The parameters are utilized to determine the optimized simulation conditions, including texture sizes, mesh sizes, and canvas resolution. The test results show that the optimization model not only achieves a desired frame rate but also resolves visual artifacts due to low-performance hardware.
Stochastic structural and reliability based optimization of tuned mass damper
NASA Astrophysics Data System (ADS)
Mrabet, E.; Guedri, M.; Ichchou, M. N.; Ghanmi, S.
2015-08-01
The purpose of the current work is to present and discuss a technique for optimizing the parameters of a vibration absorber in the presence of uncertain bounded structural parameters. The technique used in the optimization is an interval extension based on a Taylor expansion of the objective function. The technique permits the transformation of the problem, initially non-deterministic, into two independent deterministic sub-problems. Two optimization strategies are considered: Stochastic Structural Optimization (SSO) and Reliability Based Optimization (RBO). It has been demonstrated through two different structures that the technique is valid for the SSO problem, even for high levels of uncertainty, and is less suitable for the RBO problem, especially when considering high levels of uncertainty.
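A first-order Taylor interval extension splits the uncertain problem into two deterministic bounds, as the abstract describes. The surrogate response and the ±5% parameter interval below are assumptions, and the first-order bound is an approximation of the objective's range, not a rigorous enclosure:

```python
def taylor_interval(f, df, center, radius):
    """First-order Taylor interval extension of f over
    [center - radius, center + radius]: the uncertain problem becomes two
    deterministic sub-problems (a lower- and an upper-bound evaluation)."""
    mid = f(center)
    spread = abs(df(center)) * radius
    return mid - spread, mid + spread

# Hypothetical TMD objective: peak response vs. absorber frequency ratio,
# with the ratio only known to +/- 0.05 because of structural uncertainty.
f = lambda r: (r - 1.0) ** 2 + 0.2   # surrogate response near resonance
df = lambda r: 2 * (r - 1.0)

lo, hi = taylor_interval(f, df, center=0.9, radius=0.05)
```

Optimizing the absorber parameters against `hi` (the worst-case bound) instead of the nominal `f` is the deterministic reformulation the technique enables.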
Optimization of a photovoltaic pumping system based on the optimal control theory
Betka, A.; Attali, A.
2010-07-15
This paper suggests how an optimal operation of a photovoltaic pumping system based on an induction motor driving a centrifugal pump can be realized. The optimization problem consists in maximizing the daily pumped water quantity via the optimization of the motor efficiency at every operating point. The proposed structure allows, at the same time, the minimization of machine losses, field-oriented control, and maximum power tracking of the photovoltaic array. This is attained based on multi-input multi-output optimal regulator theory. The effectiveness of the proposed algorithm is demonstrated by simulation, and the obtained results are compared to those of a system working with a constant air-gap flux. (author)
Risk Classification and Risk-based Safety and Mission Assurance
NASA Technical Reports Server (NTRS)
Leitner, Jesse A.
2014-01-01
Recent activities to revamp and emphasize the need to streamline processes and activities for Class D missions across the agency have led to various interpretations of Class D, including the lumping of a variety of low-cost projects into Class D; sometimes terms such as "Class D minus" are used. In this presentation, mission risk classifications will be traced to official requirements and definitions as a measure to ensure that projects and programs align with the guidance and requirements that are commensurate with their defined risk posture. As part of this, the full suite of risk classifications, formal and informal, will be defined, followed by an introduction to the new GPR 8705.4 that is currently under review. GPR 8705.4 lays out guidance for the mission success activities performed at Classes A-D for NPR 7120.5 projects as well as for projects not under NPR 7120.5. Furthermore, the trends in stepping from Class A into higher risk posture classifications will be discussed. The talk will conclude with a discussion about risk-based safety and mission assurance at GSFC.
Bandari, Daniel S; Sternaman, Debora; Chan, Theodore; Prostko, Chris R; Sapir, Tamar
2012-01-01
Multiple sclerosis (MS) is a complex, chronic, and often disabling neurological disease. Despite the recent incorporation of new treatment approaches early in the disease course, care providers still face difficult decisions as to which therapy will lead to optimal outcomes and when to initiate or escalate therapies. Such decisions require proper assessment of relative risks, costs, and benefits of new and emerging therapies, as well as addressing challenges with adherence to achieve optimal management and outcomes. At the 24th Annual Meeting Expo of the Academy of Managed Care Pharmacy (AMCP), held in San Francisco on April 18, 2012, a 4-hour activity titled "Analyzing and Applying the Evidence to Improve Cost-Benefit and Risk-Benefit Outcomes in Multiple Sclerosis" was conducted in association with AMCP's Continuing Professional Education Partner Program (CPEPP). The practicum, led by the primary authors of this supplement, featured didactic presentations, a roundtable session, and an expert panel discussion detailing research evidence, ideas, and discussion topics central to MS and its applications to managed care. To review (a) recent advances in MS management, (b) strategies to optimize the use of disease-modifying therapies for MS, (c) costs of current MS therapies, (d) strategies to promote adherence and compliance to disease-modifying therapies, and (e) potential strategies for managed care organizations to improve care of their MS patient populations and optimize clinical and economic outcomes. Advances in magnetic resonance imaging and newer therapies have allowed earlier diagnosis and reduction of relapses, reduction in progression of disability, and reduction in total cost of care in the long term. Yet, even with the incorporation of new disease-modifying therapies into the treatment armamentarium of MS, challenges remain for patients, providers, caregivers, and managed care organizations as they have to make informed decisions based on the properties, risks, costs, and benefits
Ragas, Ad M J; Huijbregts, Mark A J; Henning-de Jong, Irmgard; Leuven, Rob S E W
2009-01-01
Environmental risk assessment is typically uncertain due to different perceptions of the risk problem and limited knowledge about the physical, chemical, and biological processes underlying the risk. The present paper provides a systematic overview of the implications of different types of uncertainty for risk management, with a focus on risk-based management of river basins. Three different types of uncertainty are distinguished: 1) problem definition uncertainty, 2) true uncertainty, and 3) variability. Methods to quantify and describe these types of uncertainty are discussed and illustrated in 4 case studies. The case studies demonstrate that explicit regulation of uncertainty can improve risk management (e.g., by identification of the most effective risk reduction measures, optimization of the use of resources, and improvement of the decision-making process). It is concluded that the involvement of nongovernmental actors as prescribed by the European Union Water Framework Directive (WFD) provides challenging opportunities to address problem definition uncertainty and those forms of true uncertainty that are difficult to quantify. However, the WFD guidelines for derivation and application of environmental quality standards could be improved by the introduction of a probabilistic approach to deal with true uncertainty and a better scientific basis for regulation of variability.
Data mining and tree-based optimization
Grossman, R.; Bodek, H.; Northcutt, D.; Poor, V.
1996-12-31
Consider a large collection of objects, each of which has a large number of attributes of several different sorts. We assume that there are data attributes representing data, attributes which are to be statistically estimated or predicted from these, and attributes which can be controlled or set. A motivating example is to assign a credit score to a credit card prospect indicating the likelihood that the prospect will make credit card payments, and then to set a credit limit for each prospect in such a way as to maximize the overall expected revenue from the entire collection of prospects. In the terminology above, the credit score is called a predictive attribute and the credit limit a control attribute. The methodology we describe in the paper uses data mining to provide more accurate estimates of the predictive attributes and to provide more optimal settings of the control attributes. We briefly describe how to parallelize these computations. We also briefly comment on some of the data management issues which arise for these types of problems in practice. We propose using object warehouses to provide low-overhead, high-performance access to large collections of objects as an underlying foundation for our data mining algorithms.
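The credit example maps directly onto a predictive attribute (payment probability) and a control attribute (credit limit). Everything numeric below, including the stand-in scoring model, the interest and loss rates, and the limit menu, is hypothetical:

```python
def payment_probability(income, debt):
    """Stand-in predictive model; a real system would fit this from data
    attributes with a tree-based or other data mining method."""
    return max(0.05, min(0.95, 0.5 + 0.004 * income - 0.01 * debt))

def best_limit(income, debt, limits=(1000, 2000, 5000)):
    """Set the control attribute: expected revenue = interest earned if the
    prospect pays, minus the exposure lost if they default."""
    p = payment_probability(income, debt)
    revenue = lambda L: p * 0.15 * L - (1 - p) * 0.5 * L
    return max(limits, key=revenue)

print(best_limit(income=100, debt=10))  # high score -> largest limit
print(best_limit(income=20, debt=40))   # low score  -> smallest limit
```

Better estimates of the predictive attribute sharpen the expected-revenue comparison, which is exactly why the paper couples data mining to the control-attribute optimization.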
Optimal diabatic states based on solvation parameters
NASA Astrophysics Data System (ADS)
Alguire, Ethan; Subotnik, Joseph E.
2012-11-01
A new method for obtaining diabatic electronic states of a molecular system in a condensed environment is proposed and evaluated. This technique, which we denote as Edmiston-Ruedenberg (ER)-ɛ diabatization, forms diabatic states as a linear combination of adiabatic states by minimizing an approximation to the total coupling between states in a medium with temperature T and with a characteristic Pekar factor C. ER-ɛ diabatization represents an improvement upon previous localized diabatization methods for two reasons: first, it is sensitive to the energy separation between adiabatic states, thus accounting for fluctuations in energy and effectively preventing over-mixing. Second, it responds to the strength of system-solvent interactions via parameters for the dielectric constant and temperature of the medium, which is physically reasonable. Here, we apply the ER-ɛ technique to both intramolecular and intermolecular excitation energy transfer systems. We find that ER-ɛ diabatic states satisfy three important properties: (1) they have small derivative couplings everywhere; (2) they have small diabatic couplings at avoided crossings, and (3) they have negligible diabatic couplings everywhere else. As such, ER-ɛ states are good candidates for so-called "optimal diabatic states."
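For two states, forming diabatic states as a linear combination of adiabatic states reduces to choosing a single rotation angle. The sketch below maximizes a Boys-style charge-localization target, |mu_11 - mu_22|^2, rather than the actual ER-epsilon functional (which adds the energy-gap and solvent sensitivity discussed above); the adiabatic dipole matrix is invented:

```python
import math

def rotate_dipole(mu, theta):
    """Diagonal dipole elements of the two rotated (candidate diabatic) states
    |1'> = c|1> + s|2>, |2'> = -s|1> + c|2>."""
    c, s = math.cos(theta), math.sin(theta)
    m11 = c * c * mu[0][0] + 2 * c * s * mu[0][1] + s * s * mu[1][1]
    m22 = s * s * mu[0][0] - 2 * c * s * mu[0][1] + c * c * mu[1][1]
    return m11, m22

def boys_angle(mu, steps=1800):
    """Scan the mixing angle for the rotation that maximizes charge
    localization, i.e. |mu_11 - mu_22|^2 (Boys-style localized diabatization)."""
    return max((t * math.pi / steps for t in range(steps)),
               key=lambda th: (rotate_dipole(mu, th)[0]
                               - rotate_dipole(mu, th)[1]) ** 2)

# Hypothetical adiabatic dipole matrix (a.u.) for two mixed charge-transfer states.
mu = [[1.0, 2.0], [2.0, 3.0]]
theta = boys_angle(mu)
```

ER-epsilon replaces this localization target with a coupling functional weighted by the Pekar factor C and temperature T, which is what gives it the energy-gap and solvent sensitivity claimed in the abstract.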
Optimizing ring-based CSR sources
Byrd, J.M.; De Santis, S.; Hao, Z.; Martin, M.C.; Munson, D.V.; Li, D.; Nishimura, H.; Robin, D.S.; Sannibale, F.; Schlueter, R.D.; Schoenlein, R.; Jung, J.Y.; Venturini, M.; Wan, W.; Zholents, A.A.; Zolotorev, M.
2004-01-01
Coherent synchrotron radiation (CSR) is a fascinating phenomenon recently observed in electron storage rings and shows tremendous promise as a high power source of radiation at terahertz frequencies. However, because of the properties of the radiation and the electron beams needed to produce it, there are a number of interesting features of the storage ring that can be optimized for CSR. Furthermore, CSR has been observed in three distinct forms: as steady pulses from short bunches, bursts from growth of spontaneous modulations in high current bunches, and from micro modulations imposed on a bunch from laser slicing. These processes have their relative merits as sources and can be improved via the ring design. The terahertz (THz) and sub-THz region of the electromagnetic spectrum lies between the infrared and the microwave. This boundary region is beyond the normal reach of optical and electronic measurement techniques and sources associated with these better-known neighbors. Recent research has demonstrated a relatively high power source of THz radiation from electron storage rings: coherent synchrotron radiation (CSR). Besides offering high power, CSR enables broadband optical techniques to be extended to nearly the microwave region, and has inherently sub-picosecond pulses. As a result, new opportunities for scientific research and applications are enabled across a diverse array of disciplines: condensed matter physics, medicine, manufacturing, and space and defense industries. CSR will have a strong impact on THz imaging, spectroscopy, femtosecond dynamics, and driving novel non-linear processes. CSR is emitted by bunches of accelerated charged particles when the bunch length is shorter than the wavelength being emitted. When this criterion is met, all the particles emit in phase, and a single-cycle electromagnetic pulse results with an intensity proportional to the square of the number of particles in the bunch. It is this quadratic dependence that can
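The coherence criterion in the final sentences has a standard quantitative form: for N particles the emitted power scales as N plus N(N-1) times the squared bunch form factor, which approaches 1 once the bunch is much shorter than the wavelength. A minimal sketch, assuming a Gaussian bunch profile; all numbers are illustrative, not taken from this abstract:

```python
import math

def csr_power_factor(n_particles, wavelength, bunch_sigma):
    """Total emitted power relative to a single particle, Gaussian bunch.

    The incoherent part scales with N; the coherent part scales with
    N*(N-1) times the squared bunch form factor, which approaches 1 when
    the bunch is much shorter than the emitted wavelength.
    """
    k = 2.0 * math.pi / wavelength
    form_factor = math.exp(-(k * bunch_sigma) ** 2 / 2.0)  # Gaussian bunch
    return n_particles + n_particles * (n_particles - 1) * form_factor ** 2

# bunch shorter than the wavelength -> coherent, ~N^2 enhancement
short_bunch = csr_power_factor(1e9, wavelength=1e-3, bunch_sigma=1e-5)
# bunch longer than the wavelength -> incoherent, ~N
long_bunch = csr_power_factor(1e9, wavelength=1e-3, bunch_sigma=1e-2)
```

With a realistic particle count (~1e9 per bunch) the coherent case exceeds the incoherent one by roughly the particle number, which is why CSR is such a bright THz source.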
Probabilistic-based approach to optimal filtering
Hannachi
2000-04-01
The signal-to-noise ratio maximizing approach in optimal filtering provides a robust tool to detect signals in the presence of colored noise. The method fails, however, when the data present a regimelike behavior. An approach is developed in this manuscript to recover local (in phase space) behavior in an intermittent system with regimelike behavior. The method is first formulated in its general form within a Gaussian framework, given an estimate of the noise covariance, and requires that the signal minimize the noise probability distribution on isosurfaces of the data probability distribution, i.e., for any given value of that distribution. The extension to the non-Gaussian case is provided through the use of finite mixture models for data that show regimelike behavior. The method yields the correct signal when applied in a simplified manner to synthetic time series with and without regimes, compared to the signal-to-noise ratio approach, and helps identify the right frequency of the oscillation spells in the classical Lorenz system and its variants.
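In the Gaussian framework, the signal-to-noise maximizing filter reduces to the leading generalized eigenvector of the data and noise covariances, i.e., the direction maximizing the Rayleigh quotient v'Cd v / v'Cn v. A minimal 2x2 sketch with invented covariances (power iteration on Cn⁻¹Cd, not the authors' mixture-model extension):

```python
def snr_filter_2x2(Cd, Cn, iters=200):
    """Leading generalized eigenvector of (Cd, Cn) via power iteration on
    Cn^{-1} Cd; it maximizes v'Cd v / v'Cn v, the variance of the
    filtered data relative to the filtered noise."""
    det = Cn[0][0] * Cn[1][1] - Cn[0][1] * Cn[1][0]
    Cn_inv = [[Cn[1][1] / det, -Cn[0][1] / det],
              [-Cn[1][0] / det, Cn[0][0] / det]]
    # M = Cn^{-1} Cd
    M = [[sum(Cn_inv[i][k] * Cd[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    v = [1.0, 1.0]
    for _ in range(iters):
        w = [M[0][0] * v[0] + M[0][1] * v[1],
             M[1][0] * v[0] + M[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    return v

# noise is strong along the second axis; the signal adds variance to the first
Cn = [[1.0, 0.0], [0.0, 4.0]]
Cd = [[5.0, 0.0], [0.0, 4.0]]
filt = snr_filter_2x2(Cd, Cn)  # converges to the first coordinate axis
```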
Experimental Eavesdropping Based on Optimal Quantum Cloning
NASA Astrophysics Data System (ADS)
Bartkiewicz, Karol; Lemr, Karel; Černoch, Antonín; Soubusta, Jan; Miranowicz, Adam
2013-04-01
The security of quantum cryptography is guaranteed by the no-cloning theorem, which implies that an eavesdropper copying transmitted qubits in unknown states causes their disturbance. Nevertheless, in real cryptographic systems some level of disturbance has to be allowed to cover, e.g., transmission losses. An eavesdropper can attack such systems by replacing a noisy channel with a better one and by performing approximate cloning of the transmitted qubits, which disturbs them, but only below the noise level assumed by the legitimate users. We experimentally demonstrate such symmetric individual eavesdropping on the quantum key distribution protocols of Bennett and Brassard (BB84) and the trine-state spherical code of Renes (R04) with two-level probes prepared using a recently developed photonic multifunctional quantum cloner [Lemr et al., Phys. Rev. A 85, 050307(R) (2012)]. We demonstrate that our optimal cloning device, with its high success rate, makes the eavesdropping possible by hiding it in the usual transmission losses. We believe that this experiment can stimulate the quest for other operational applications of quantum cloning.
Two-level optimization of composite wing structures based on panel genetic optimization
NASA Astrophysics Data System (ADS)
Liu, Boyang
load. The resulting response surface is used for wing-level optimization. In general, complex composite structures consist of several laminates. A common problem in the design of such structures is that some plies in the adjacent laminates terminate in the boundary between the laminates. These discontinuities may cause stress concentrations and may increase manufacturing difficulty and cost. We developed measures of continuity of two adjacent laminates. We studied tradeoffs between weight and continuity through a simple composite wing design. Finally, we compared the two-level optimization to a single-level optimization based on flexural lamination parameters. The single-level optimization is efficient and feasible for a wing consisting of unstiffened panels.
NASA Astrophysics Data System (ADS)
Paasche, H.; Tronicke, J.
2012-04-01
In many near surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e., the quality of the model it has currently found, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is then not possible. Instead, only statements about the Pareto
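The swarm mechanics summarized above — each particle steered toward its own best position and the current leader's best position — can be sketched in a few lines. The inertia and acceleration weights below are common textbook defaults, and the tomographic misfit is replaced by a simple test function:

```python
import random

def pso(objective, dim, n_particles=20, iters=200, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer (minimization)."""
    lo, hi = bounds
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social weights
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's own best
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # the swarm leader
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:               # improved its own best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:              # and possibly the leader
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)
best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

In a joint-inversion setting the single `objective` would be replaced by multiple misfits, which is exactly where the unique-leader problem described in the abstract arises.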
Optimal policy for value-based decision-making
Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre
2016-01-01
For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down. PMID:27535638
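The collapsing decision boundaries that the optimal policy requires can be illustrated with a toy drift diffusion simulation; all parameters below are illustrative assumptions, not fitted values from the paper:

```python
import random

def ddm_trial(drift, bound0, collapse, dt=0.001, sigma=1.0):
    """One drift-diffusion trial with a linearly collapsing bound.

    Returns (choice, reaction_time); choice +1 is the higher-value option
    when drift > 0.
    """
    x, t = 0.0, 0.0
    while True:
        bound = max(bound0 - collapse * t, 0.01)  # boundary shrinks over time
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
        x += drift * dt + sigma * (dt ** 0.5) * random.gauss(0.0, 1.0)
        t += dt

random.seed(1)
trials = [ddm_trial(drift=1.0, bound0=1.5, collapse=0.5) for _ in range(200)]
accuracy = sum(1 for choice, _ in trials if choice == +1) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
```

The collapsing bound trades accuracy for time: late in a trial less evidence is required to commit, which is the qualitative signature of the optimal value-based policy described above.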
Quantum-based algorithm for optimizing artificial neural networks.
Tzyy-Chyang Lu; Gwo-Ruey Yu; Jyh-Ching Juang
2013-08-01
This paper presents a quantum-based algorithm for evolving artificial neural networks (ANNs). The aim is to design an ANN with few connections and high classification performance by simultaneously optimizing the network structure and the connection weights. Unlike most previous studies, the proposed algorithm uses quantum bit representation to codify the network. As a result, the connectivity bits do not indicate the actual links but the probability of the existence of the connections, thus alleviating mapping problems and reducing the risk of throwing away a potential candidate. In addition, in the proposed model, each weight space is decomposed into subspaces in terms of quantum bits. Thus, the algorithm performs a region-by-region exploration, and evolves gradually to find promising subspaces for further exploitation. This is helpful to provide a set of appropriate weights when evolving the network structure and to alleviate the noisy fitness evaluation problem. The proposed model is tested on four benchmark problems, namely the breast cancer, iris, heart, and diabetes problems. The experimental results show that the proposed algorithm can produce compact ANN structures with good generalization ability compared to other algorithms.
Using Quantile and Asymmetric Least Squares Regression for Optimal Risk Adjustment.
Lorenz, Normann
2016-06-13
In this paper, we analyze optimal risk adjustment for direct risk selection (DRS). Integrating insurers' activities for risk selection into a discrete choice model of individuals' health insurance choice shows that DRS has the structure of a contest. For the contest success function (csf) used in most of the contest literature (the Tullock-csf), optimal transfers for a risk adjustment scheme have to be determined by means of a restricted quantile regression, irrespective of whether insurers are primarily engaged in positive DRS (attracting low risks) or negative DRS (repelling high risks). This is at odds with the common practice of determining transfers by means of a least squares regression. However, this common practice can be rationalized for a new csf, but only if positive and negative DRSs are equally important; if they are not, optimal transfers have to be calculated by means of a restricted asymmetric least squares regression. Using data from German and Swiss health insurers, we find considerable differences between the three types of regressions. Optimal transfers therefore critically depend on which csf represents insurers' incentives for DRS and, if it is not the Tullock-csf, whether insurers are primarily engaged in positive or negative DRS. Copyright © 2016 John Wiley & Sons, Ltd.
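The restricted asymmetric least squares regression mentioned above generalizes the mean the way quantile regression generalizes the median: squared deviations above the estimate get weight tau, those below get 1 - tau. A minimal location-only sketch (without the paper's restrictions, and with invented cost data):

```python
def expectile(values, tau, iters=200):
    """Asymmetric least squares location estimate (an expectile).

    Squared deviations above the estimate are weighted tau and those
    below it 1 - tau; tau = 0.5 recovers the ordinary mean, just as the
    0.5 quantile is the median.
    """
    m = sum(values) / len(values)
    for _ in range(iters):  # iteratively reweighted least squares
        w = [tau if v > m else 1.0 - tau for v in values]
        m = sum(wi * vi for wi, vi in zip(w, values)) / sum(w)
    return m

claims = [1.0, 2.0, 3.0, 10.0]      # invented costs with one high-risk case
symmetric = expectile(claims, 0.5)  # equals the mean
upper = expectile(claims, 0.9)      # leans toward the expensive cases
```

In the paper's terms, choosing tau away from 0.5 is what tilts the transfers when positive and negative risk selection are not equally important.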
A new efficient optimal path planner for mobile robot based on Invasive Weed Optimization algorithm
NASA Astrophysics Data System (ADS)
Mohanty, Prases K.; Parhi, Dayal R.
2014-12-01
Planning of the shortest/optimal route is essential for the efficient operation of an autonomous mobile robot or vehicle. In this paper Invasive Weed Optimization (IWO), a new meta-heuristic algorithm, has been implemented for solving the path planning problem of a mobile robot in partially or totally unknown environments. This meta-heuristic optimization is based on the colonizing property of weeds. First, we framed an objective function that satisfies the conditions of obstacle avoidance and the target-seeking behavior of the robot in partially or completely unknown environments. Depending upon the value of the objective function of each weed in the colony, the robot avoids obstacles and proceeds towards the destination. The optimal trajectory is generated with this navigational algorithm when the robot reaches its destination. The effectiveness, feasibility, and robustness of the proposed algorithm have been demonstrated through a series of simulation and experimental results. Finally, it has been found that the developed path planning algorithm can be effectively applied to many kinds of complex situations.
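The colonizing mechanism — fitter weeds scatter more seeds, over a radius that shrinks as the run progresses — can be sketched as a generic minimizer. The population sizes and spread schedule below are illustrative assumptions, and the robot-specific objective is replaced by a simple test function:

```python
import random

def iwo(objective, dim, iters=100, max_pop=25, seeds=(1, 5),
        sigma=(0.5, 0.001), bounds=(-5.0, 5.0)):
    """Minimal Invasive Weed Optimization (minimization)."""
    lo, hi = bounds
    plants = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(10)]
    for it in range(iters):
        frac = it / (iters - 1)
        sd = sigma[0] * (1 - frac) + sigma[1] * frac  # shrinking seed spread
        scored = sorted(plants, key=objective)
        best_f, worst_f = objective(scored[0]), objective(scored[-1])
        offspring = []
        for p in scored:
            # fitter weeds scatter more seeds
            ratio = 1.0 if worst_f == best_f else \
                (worst_f - objective(p)) / (worst_f - best_f)
            n_seeds = int(round(seeds[0] + ratio * (seeds[1] - seeds[0])))
            for _ in range(n_seeds):
                offspring.append([x + random.gauss(0.0, sd) for x in p])
        # competitive exclusion: only the fittest weeds survive
        plants = sorted(scored + offspring, key=objective)[:max_pop]
    return plants[0]

random.seed(2)
best = iwo(lambda x: sum(xi * xi for xi in x), dim=2)
```

For path planning, `objective` would combine the obstacle-avoidance and target-seeking terms that the abstract describes.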
Kim, Yoon Jae; Kim, Yoon Young
2010-10-01
This paper presents a numerical method for optimizing the sequencing of solid panels, perforated panels, and air gaps, and their respective thicknesses, for maximizing sound transmission loss and/or absorption. For the optimization, a method based on the topology optimization formulation is proposed. It is difficult to employ only the commonly used material interpolation technique because the involved layers exhibit fundamentally different acoustic behavior. Thus, an optimization formulation using a so-called unified transfer matrix is newly proposed. The key idea is to form the elements of the transfer matrix such that elements interpolated by the layer design variables can be those of air, perforated, and solid panel layers. The problem related to the interpolation is addressed, and benchmark-type problems such as sound transmission or absorption maximization problems are solved to check the efficiency of the developed method.
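The unified transfer matrix builds on the standard plane-wave transfer matrices of the individual layers, chained by matrix multiplication. A sketch of that conventional (non-interpolated) machinery, assuming normal incidence, a limp mass-law model for the solid panels, and air at room conditions:

```python
import cmath
import math

RHO, C = 1.21, 343.0  # air density (kg/m^3) and sound speed (m/s)

def air_layer(freq, thickness):
    """Plane-wave transfer matrix of an air gap at normal incidence."""
    k = 2.0 * math.pi * freq / C
    Z = RHO * C
    kd = k * thickness
    return [[cmath.cos(kd), 1j * Z * cmath.sin(kd)],
            [1j * cmath.sin(kd) / Z, cmath.cos(kd)]]

def solid_panel(freq, mass_per_area):
    """Limp (mass-law) panel: pressure jump j*omega*m times velocity."""
    return [[1.0, 1j * 2.0 * math.pi * freq * mass_per_area],
            [0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transmission_loss(layers, freq):
    """Chain the layer matrices, then convert to transmission loss in dB."""
    T = [[1.0, 0.0], [0.0, 1.0]]
    for layer in layers:
        T = matmul(T, layer)
    Z0 = RHO * C
    t = 2.0 / (T[0][0] + T[0][1] / Z0 + Z0 * T[1][0] + T[1][1])
    return -20.0 * math.log10(abs(t))

f = 1000.0  # Hz
tl_single = transmission_loss([solid_panel(f, 10.0)], f)
tl_double = transmission_loss(
    [solid_panel(f, 10.0), air_layer(f, 0.05), solid_panel(f, 10.0)], f)
```

Well above the mass-air-mass resonance, the double wall with an air gap transmits far less than a single panel of the same surface mass, which is the kind of sequencing effect the optimization exploits.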
SU-E-T-436: Fluence-Based Trajectory Optimization for Non-Coplanar VMAT
Smyth, G; Bamber, JC; Bedford, JL; Evans, PM; Saran, FH; Mandeville, HC
2015-06-15
Purpose: To investigate a fluence-based trajectory optimization technique for non-coplanar VMAT for brain cancer. Methods: Single-arc non-coplanar VMAT trajectories were determined using a heuristic technique for five patients. Organ at risk (OAR) volume intersected during raytracing was minimized for two cases: absolute volume and the sum of relative volumes weighted by OAR importance. These trajectories and coplanar VMAT formed starting points for the fluence-based optimization method. Iterative least squares optimization was performed on control points 24° apart in gantry rotation. Optimization minimized the root-mean-square (RMS) deviation of PTV dose from the prescription (relative importance 100), maximum dose to the brainstem (10), optic chiasm (5), globes (5) and optic nerves (5), plus mean dose to the lenses (5), hippocampi (3), temporal lobes (2), cochleae (1) and brain excluding other regions of interest (1). Control point couch rotations were varied in steps of up to 10° and accepted if the cost function improved. Final treatment plans were optimized with the same objectives in an in-house planning system and evaluated using a composite metric - the sum of optimization metrics weighted by importance. Results: The composite metric decreased with fluence-based optimization in 14 of the 15 plans. In the remaining case its overall value, and the PTV and OAR components, were unchanged but the balance of OAR sparing differed. PTV RMS deviation was improved in 13 cases and unchanged in two. The OAR component was reduced in 13 plans. In one case the OAR component increased but the composite metric decreased - a 4 Gy increase in OAR metrics was balanced by a reduction in PTV RMS deviation from 2.8% to 2.6%. Conclusion: Fluence-based trajectory optimization improved plan quality as defined by the composite metric. While dose differences were case specific, fluence-based optimization improved both PTV and OAR dosimetry in 80% of cases.
Jacob, Dayee; Raben, Adam; Sarkar, Abhirup; Grimm, Jimm; Simpson, Larry
2008-11-01
Purpose: To perform an independent validation of an anatomy-based inverse planning simulated annealing (IPSA) algorithm in obtaining superior target coverage and reducing the dose to the organs at risk. Method and Materials: In a recent prostate high-dose-rate brachytherapy protocol study by the Radiation Therapy Oncology Group (0321), our institution treated 20 patients between June 1, 2005 and November 30, 2006. These patients had received a high-dose-rate boost dose of 19 Gy to the prostate, in addition to an external beam radiotherapy dose of 45 Gy with intensity-modulated radiotherapy. Three-dimensional dosimetry was obtained for the following optimization schemes in the Plato Brachytherapy Planning System, version 14.3.2, using the same dose constraints for all the patients treated during this period: anatomy-based IPSA optimization, geometric optimization, and dose point optimization. Dose-volume histograms were generated for the planning target volume and organs at risk for each optimization method, from which the volume receiving at least 75% of the dose (V{sub 75%}) for the rectum and bladder, volume receiving at least 125% of the dose (V{sub 125%}) for the urethra, and total volume receiving the reference dose (V{sub 100%}) and volume receiving 150% of the dose (V{sub 150%}) for the planning target volume were determined. The dose homogeneity index and conformal index for the planning target volume for each optimization technique were compared. Results: Despite suboptimal needle position in some implants, the IPSA algorithm was able to comply with the tight Radiation Therapy Oncology Group dose constraints for 90% of the patients in this study. In contrast, the compliance was only 30% for dose point optimization and only 5% for geometric optimization. Conclusions: Anatomy-based IPSA optimization proved to be the superior technique and also the fastest for reducing the dose to the organs at risk without compromising the target coverage.
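The dose-volume metrics compared here (V75%, V100%, V125%, V150%) are simple threshold counts over the structure's dose grid. A sketch with invented voxel doses, assuming equal voxel volumes and the 19 Gy boost prescription quoted above:

```python
def v_at(doses, fraction_of_rx, rx):
    """Fraction of a structure's volume receiving at least
    fraction_of_rx * rx (voxels assumed to have equal volume)."""
    cutoff = fraction_of_rx * rx
    return sum(1 for d in doses if d >= cutoff) / len(doses)

rx = 19.0  # Gy, the HDR boost prescription quoted in the abstract
ptv_doses = [20.1, 19.5, 18.8, 21.0, 19.2, 30.1, 17.9, 19.0]  # invented voxels
rectum_doses = [10.2, 14.5, 8.1, 15.0]                        # invented voxels

v100_ptv = v_at(ptv_doses, 1.00, rx)       # target coverage
v150_ptv = v_at(ptv_doses, 1.50, rx)       # hot-spot volume
v75_rectum = v_at(rectum_doses, 0.75, rx)  # OAR sparing metric
```

A full DVH is just this count evaluated over a sweep of thresholds; the study's comparison reduces to how each optimizer shifts these few summary points.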
NASA Astrophysics Data System (ADS)
Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.
2017-01-01
Investors in stocks also face risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. A portfolio consisting of several stocks is established in order to obtain the optimal composition of the investment. This paper discusses Mean-Variance optimization of a stock portfolio using a non-constant mean and volatility based on a logarithmic utility function. The non-constant mean is analysed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analysed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyse some Islamic stocks in Indonesia. The expected result is to obtain the proportion of investment in each Islamic stock analysed.
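The Lagrangian multiplier step can be made concrete: minimizing portfolio variance subject to a target return and full investment yields a linear KKT system in the weights and the two multipliers. A sketch with invented return and covariance figures (the ARMA/GARCH estimation stage that would produce them is omitted):

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting, for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b_ for a, b_ in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def mean_variance_weights(mu, cov, target):
    """Minimize w'Cov w subject to w'mu = target and sum(w) = 1 by
    solving the Lagrangian first-order (KKT) conditions as one system."""
    n = len(mu)
    # unknowns: w_1..w_n, then the two Lagrange multipliers
    A = [[2.0 * cov[i][j] for j in range(n)] + [-mu[i], -1.0]
         for i in range(n)]
    A.append(list(mu) + [0.0, 0.0])   # return constraint
    A.append([1.0] * n + [0.0, 0.0])  # budget constraint
    b = [0.0] * n + [target, 1.0]
    return solve(A, b)[:n]

# invented monthly return means and covariance for three stocks
mu = [0.10, 0.05, 0.07]
cov = [[0.040, 0.006, 0.004],
       [0.006, 0.020, 0.002],
       [0.004, 0.002, 0.030]]
w = mean_variance_weights(mu, cov, target=0.07)
```

In the paper's setting, `mu` and `cov` would come from the ARMA mean forecasts and GARCH volatility forecasts rather than being constants.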
Genetic-evolution-based optimization methods for engineering design
NASA Technical Reports Server (NTRS)
Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.
1990-01-01
This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
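The reproduction/crossover/mutation loop described above can be sketched as a small real-coded genetic algorithm. Tournament selection, uniform crossover, Gaussian mutation, and one-elite survival are common concrete choices, not necessarily the operators used in this work, and the structural objective is replaced by a test function:

```python
import random

def ga(objective, dim, pop_size=40, generations=150,
       crossover_rate=0.9, mutation_rate=0.1, bounds=(-5.0, 5.0)):
    """Real-coded genetic algorithm (minimization)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        fits = [objective(ind) for ind in pop]

        def tournament():
            i, j = random.randrange(pop_size), random.randrange(pop_size)
            return pop[i] if fits[i] < fits[j] else pop[j]

        children = [min(pop, key=objective)[:]]    # elitism
        while len(children) < pop_size:
            a, b = tournament(), tournament()
            if random.random() < crossover_rate:   # uniform crossover
                child = [x if random.random() < 0.5 else y
                         for x, y in zip(a, b)]
            else:
                child = a[:]
            child = [x + random.gauss(0.0, 0.1)    # Gaussian mutation
                     if random.random() < mutation_rate else x
                     for x in child]
            children.append(child)
        pop = children
    return min(pop, key=objective)

random.seed(3)
best = ga(lambda x: sum(xi * xi for xi in x), dim=4)
```

For the discrete actuator-placement case in the abstract, the genome would be a bit string and the Gaussian mutation would become bit flips; the loop structure is otherwise unchanged.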
Shah, Chirag; Vicini, Frank A.
2011-11-15
As more women survive breast cancer, long-term toxicities affecting their quality of life, such as lymphedema (LE) of the arm, gain importance. Although numerous studies have attempted to determine incidence rates, identify optimal diagnostic tests, enumerate efficacious treatment strategies and outline risk reduction guidelines for breast cancer-related lymphedema (BCRL), few groups have consistently agreed on any of these issues. As a result, standardized recommendations are still lacking. This review will summarize the latest data addressing all of these concerns in order to provide patients and health care providers with optimal, contemporary recommendations. Published incidence rates for BCRL vary substantially with a range of 2-65% based on surgical technique, axillary sampling method, radiation therapy fields treated, and the use of chemotherapy. Newer clinical assessment tools can potentially identify BCRL in patients with subclinical disease with prospective data suggesting that early diagnosis and management with noninvasive therapy can lead to excellent outcomes. Multiple therapies exist with treatments defined by the severity of BCRL present. Currently, the standard of care for BCRL in patients with significant LE is complex decongestive physiotherapy (CDP). Contemporary data also suggest that a multidisciplinary approach to the management of BCRL should begin prior to definitive treatment for breast cancer employing patient-specific surgical, radiation therapy, and chemotherapy paradigms that limit risks. Further, prospective clinical assessments before and after treatment should be employed to diagnose subclinical disease. In those patients who require aggressive locoregional management, prophylactic therapies and the use of CDP can help reduce the long-term sequelae of BCRL.
Risk and Resilience in Pediatric Chronic Pain: Exploring the Protective Role of Optimism.
Cousins, Laura A; Cohen, Lindsey L; Venable, Claudia
2015-10-01
Fear of pain and pain catastrophizing are prominent risk factors for pediatric chronic pain-related maladjustment. Although resilience has largely been ignored in the pediatric pain literature, prior research suggests that optimism might benefit youth and can be learned. We applied an adult chronic pain risk-resilience model to examine the interplay of risk factors and optimism on functioning outcomes in youth with chronic pain. Participants included 58 children and adolescents (8-17 years) attending a chronic pain clinic and their parents. Participants completed measures of fear of pain, pain catastrophizing, optimism, disability, and quality of life. Consistent with the literature, pain intensity, fear of pain, and catastrophizing predicted functioning. Optimism was a unique predictor of quality of life, and optimism contributed to better functioning by minimizing pain-related fear and catastrophizing. Optimism might be protective and offset the negative influence of fear of pain and catastrophizing on pain-related functioning. © The Author 2014. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Perspective texture synthesis based on improved energy optimization.
Bashir, Syed Muhammad Arsalan; Ghouri, Farhan Ali Khan
2014-01-01
Perspective texture synthesis has great significance in many fields, such as video editing and scene capture, due to its ability to read and control global feature information. In this paper, we present a novel example-based, specifically energy-optimization-based, algorithm to synthesize perspective textures. The energy optimization technique is a pixel-based approach, so it is time-consuming. We improve it in two respects with the purpose of achieving faster synthesis and high quality. Firstly, we change this pixel-based technique by replacing the pixel computation with a small patch. Secondly, we present a novel technique to accelerate the search for nearest neighborhoods in energy optimization: we use the k-means clustering technique to build a search tree that accelerates the search, and we make use of principal component analysis (PCA) to reduce the dimensionality of the input vectors. The high-quality results prove that our approach is feasible. Besides, our proposed algorithm needs less time than other similar methods.
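The acceleration described here — clustering the exemplar vectors and searching only the most promising cluster — can be sketched directly. The PCA stage is omitted, and random vectors stand in for texture patches:

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Plain k-means; returns (centroids, cluster index per point)."""
    centroids = [p[:] for p in random.sample(points, k)]
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: dist2(p, centroids[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    assign = [min(range(k), key=lambda c: dist2(p, centroids[c]))
              for p in points]
    return centroids, assign

def cluster_nn(query, points, centroids, assign):
    """Approximate nearest neighbor: only the closest cluster is searched."""
    c = min(range(len(centroids)), key=lambda i: dist2(query, centroids[i]))
    candidates = [p for p, a in zip(points, assign) if a == c] or points
    return min(candidates, key=lambda p: dist2(query, p))

random.seed(4)
patches = [[random.random() for _ in range(8)] for _ in range(500)]
cents, asg = kmeans(patches, k=10)
query = [0.5] * 8
approx = cluster_nn(query, patches, cents, asg)
exact = min(patches, key=lambda p: dist2(query, p))
```

The pruned search examines roughly 1/k of the exemplars per query; it may return a slightly worse neighbor than the exhaustive scan, which is the usual speed/quality trade-off in example-based synthesis.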
Optimized PCR-based detection of mycoplasma.
Dobrovolny, Paige L; Bess, Dan
2011-06-20
The maintenance of contamination-free cell lines is essential to cell-based research. Among the biggest contaminant concerns is mycoplasma contamination. Although mycoplasma do not usually kill contaminated cells, they are difficult to detect and can cause a variety of effects on cultured cells, including altered metabolism, slowed proliferation, and chromosomal aberrations. In short, mycoplasma contamination compromises the value of those cell lines in providing accurate data for life science research. The sources of mycoplasma contamination in the laboratory are very challenging to completely control. As certain mycoplasma species are found on human skin, they can be introduced through poor aseptic technique. Additionally, they can come from contaminated supplements such as fetal bovine serum, and most importantly from other contaminated cell cultures. Once mycoplasma contaminates a culture, it can quickly spread to contaminate other areas of the lab. Strict adherence to good laboratory practices such as good aseptic technique is key, and routine testing for mycoplasma is highly recommended for successful control of mycoplasma contamination. PCR-based detection of mycoplasma has become a very popular method for routine cell line maintenance. PCR-based detection methods are highly sensitive and can provide rapid results, which allows researchers to respond quickly to isolate and eliminate contamination once it is detected, in comparison to the time required using microbiological techniques. The LookOut Mycoplasma PCR Detection Kit is highly sensitive, with a detection limit of only 2 genomes per μl. Taking advantage of the highly specific JumpStart Taq DNA Polymerase and a proprietary primer design, false positives are greatly reduced. The convenient 8-tube format, strips pre-coated with dNTPs, and associated primers help increase the throughput to meet the needs of customers with larger collections of cell lines. Given the extreme sensitivity of the kit, great
Optimization of wireless sensor networks based on chicken swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Wang, Qingxi; Zhu, Lihua
2017-05-01
In order to reduce the energy consumption of a wireless sensor network and improve the survival time of the network, a clustering routing protocol for wireless sensor networks based on the chicken swarm optimization algorithm is proposed. Building on the LEACH protocol, cluster formation and cluster-head selection are improved using the chicken swarm optimization algorithm, and the locations of chickens that fall into local optima are updated by Levy flight, which enhances population diversity and ensures the global search capability of the algorithm. The new protocol avoids the premature death of intensively used nodes by making balanced use of the network nodes, improving the survival time of the wireless sensor network. Simulation experiments proved that the protocol is better than the LEACH protocol in energy consumption, and is also better than a clustering routing protocol based on the particle swarm optimization algorithm.
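The Levy-flight update relies on heavy-tailed steps: mostly small moves, with occasional very large jumps that can free a trapped individual. A sketch using Mantegna's algorithm, a common way to draw such steps (the exponent beta = 1.5 is a conventional choice, not a value from this abstract):

```python
import math
import random

def levy_step(beta=1.5):
    """One Levy-flight step via Mantegna's algorithm: u / |v|^(1/beta)
    with u ~ N(0, sigma_u^2) and v ~ N(0, 1) gives a heavy-tailed jump."""
    num = math.gamma(1.0 + beta) * math.sin(math.pi * beta / 2.0)
    den = math.gamma((1.0 + beta) / 2.0) * beta * 2.0 ** ((beta - 1.0) / 2.0)
    sigma_u = (num / den) ** (1.0 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1.0 / beta)

random.seed(6)
steps = [levy_step() for _ in range(2000)]
# the largest jumps dwarf the typical step, unlike Gaussian perturbations
```

In the protocol above, such a step would be added (per coordinate) to the position of a chicken stuck in a local optimum, while the rest of the flock keeps exploiting its neighborhood.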
Sequential ensemble-based optimal design for parameter estimation
NASA Astrophysics Data System (ADS)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao; Wu, Laosheng
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to other hydrological problems.
Hybrid optimization schemes for simulation-based problems.
Fowler, Katie; Gray, Genetha Anne; Griffin, Joshua D.
2010-05-01
The inclusion of computer simulations in the study and design of complex engineering systems has created a need for efficient approaches to simulation-based optimization. For example, in water resources management problems, optimization problems regularly consist of objective functions and constraints that rely on output from a PDE-based simulator. Various assumptions can be made to simplify either the objective function or the physical system so that gradient-based methods apply; however, the incorporation of realistic objective functions can be accomplished given the availability of derivative-free optimization methods. A wide variety of derivative-free methods exist, and each method has both advantages and disadvantages. Therefore, to address such problems, we propose a hybrid approach, which allows the combining of beneficial elements of multiple methods in order to search the design space more efficiently. Specifically, in this paper, we illustrate the capabilities of two novel algorithms: one which hybridizes pattern search optimization with Gaussian process emulation, and the other which hybridizes pattern search and a genetic algorithm. We describe the hybrid methods and give some numerical results for a hydrological application which illustrate that the hybrids find an optimal solution under conditions for which traditional optimal search methods fail.
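Pattern search, the ingredient common to both hybrids, is easy to sketch in its simplest compass-search form. This is a generic textbook variant, not the authors' implementation, and a cheap analytic function stands in for the PDE-based simulator:

```python
def compass_search(objective, x0, step=1.0, tol=1e-6, shrink=0.5):
    """Derivative-free compass search: poll the +/- coordinate directions,
    move on any improvement, otherwise shrink the step length."""
    x = list(x0)
    fx = objective(x)
    while step > tol:
        improved = False
        for d in range(len(x)):
            for sign in (1.0, -1.0):
                trial = x[:]
                trial[d] += sign * step
                ft = objective(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
    return x, fx

best, best_val = compass_search(
    lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2, [0.0, 0.0])
```

No gradients are required, only function values, which is why the method tolerates the noisy, expensive simulator outputs described above; the hybrids then use an emulator or a genetic algorithm to propose better poll points.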
Optimizing treatment outcomes in patients at risk for chemotherapy-induced nausea and vomiting.
Thompson, Nancy
2012-06-01
Prevention of chemotherapy-induced nausea and vomiting (CINV) is crucial in maximizing patients' quality of life and optimizing outcomes of cancer therapy, and can be done more effectively than ever before. Appropriate antiemetic therapy combined with targeted patient education, clear communication, and management of patient expectations results in optimal emetogenic control. Oncology nurses play a critical role in the prevention and management of CINV. This column reviews the history and pathophysiology of treatments for CINV, as well as patient- and chemotherapy-specific risk factors that should be considered to optimize treatment outcomes in patients with CINV.
Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.
Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming
2016-08-01
In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, hybridizing the well-known NSGA-II algorithm with an adaptive population-based simulated annealing (APBSA) method, is developed to solve systems reliability optimization problems. In the first step, to create a good algorithm, we use a coevolutionary strategy. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate the appropriate parameters of the algorithm. Moreover, to examine the performance of our proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies have shown that the proposed algorithm is an effective approach for systems reliability and risk management.
PLS-optimal: a stepwise D-optimal design based on latent variables.
Brandmaier, Stefan; Sahlin, Ullrika; Tetko, Igor V; Öberg, Tomas
2012-04-23
Several applications, such as risk assessment within REACH or drug discovery, require reliable methods for the design of experiments and efficient testing strategies. Keeping the number of experiments as low as possible is important from both a financial and an ethical point of view, as exhaustive testing of compounds requires significant financial resources and animal lives. With a large initial set of compounds, experimental design techniques can be used to select a representative subset for testing. Once measured, these compounds can be used to develop quantitative structure-activity relationship models to predict properties of the remaining compounds. This reduces the required resources and time. D-Optimal design is frequently used to select an optimal set of compounds by analyzing data variance. We developed a new sequential approach to apply a D-Optimal design to latent variables derived from a partial least squares (PLS) model instead of principal components. The stepwise procedure selects a new set of molecules to be measured after each previous measurement cycle. We show that application of the D-Optimal selection generates models with a significantly improved performance on four different data sets with end points relevant for REACH. Compared to those derived from principal components, PLS models derived from the selection on latent variables had a lower root-mean-square error and a higher Q2 and R2. This improvement is statistically significant, especially for the small number of compounds selected.
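The core of a D-optimal selection is maximizing det(X^T X), the information content of the chosen design. A minimal greedy sketch in plain Python (two raw features stand in for the PLS latent variables the paper uses, and the candidate values are made up):

```python
def det_gram(rows):
    # det(X^T X) for a two-column design matrix X given as (x, y) rows.
    a = sum(x * x for x, _ in rows)
    b = sum(x * y for x, y in rows)
    d = sum(y * y for _, y in rows)
    return a * d - b * b

def greedy_d_optimal(candidates, k):
    # Stepwise selection: at each cycle, add the candidate that most
    # increases det(X^T X), i.e. the information content of the design.
    chosen, remaining = [], list(candidates)
    for _ in range(k):
        best = max(remaining, key=lambda r: det_gram(chosen + [r]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Widely spread points carry more information than near-duplicates:
candidates = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.1, 0.9), (0.5, 0.5)]
print(greedy_d_optimal(candidates, k=2))
```

The greedy pick favors the two most dissimilar candidates, which is exactly the variance-maximizing behavior that makes D-optimal subsets good training sets for the remaining compounds.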
Segment-Based Predominant Learning Swarm Optimizer for Large-Scale Optimization.
Yang, Qiang; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Deng, Jeremiah D; Li, Yun; Zhang, Jun
2016-10-24
Large-scale optimization has become a significant yet challenging area in evolutionary computation. To solve this problem, this paper proposes a novel segment-based predominant learning swarm optimizer (SPLSO) in which several predominant particles guide the learning of each particle. First, a segment-based learning strategy is proposed to randomly divide the whole set of dimensions into segments. During an update, variables in different segments are evolved by learning from different exemplars, while variables in the same segment are evolved by the same exemplar. Second, to accelerate search speed and enhance search diversity, a predominant learning strategy is also proposed, which lets several predominant particles guide the update of a particle, with each predominant particle responsible for one segment of dimensions. By combining these two learning strategies, SPLSO evolves all dimensions simultaneously and possesses competitive exploration and exploitation abilities. Extensive experiments are conducted on two large-scale benchmark function sets to investigate the influence of each algorithmic component, and comparisons with several state-of-the-art meta-heuristic algorithms for large-scale problems demonstrate the competitive efficiency and effectiveness of the proposed optimizer. Further, the scalability of the optimizer to problems with dimensionality up to 2000 is also verified.
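The segment-based idea can be illustrated with a stripped-down toy optimizer: dimensions are split into segments, and each segment of a weaker particle learns from a (possibly different) exemplar drawn from the better half of the swarm. This is a hypothetical simplification with made-up parameters, not the authors' SPLSO implementation:

```python
import random

def sphere(x):
    # Classic separable benchmark: f(x) = sum(x_d^2), minimum 0 at origin.
    return sum(v * v for v in x)

def splso_sketch(dim=8, swarm=20, segments=4, iters=200, seed=1):
    rng = random.Random(seed)
    seg_len = dim // segments
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    init_best = min(sphere(x) for x in X)
    for _ in range(iters):
        order = sorted(range(swarm), key=lambda i: sphere(X[i]))
        predominant = order[:swarm // 2]        # better half of the swarm
        for i in order[swarm // 2:]:            # the worse half learns
            for s in range(segments):
                # One exemplar per segment: different segments of the same
                # particle may follow different predominant particles.
                exemplar = X[rng.choice(predominant)]
                for d in range(s * seg_len, (s + 1) * seg_len):
                    V[i][d] = 0.7 * V[i][d] + rng.random() * (exemplar[d] - X[i][d])
                    X[i][d] += V[i][d]
    return init_best, min(sphere(x) for x in X)

init_best, final_best = splso_sketch()
print(init_best, final_best)
```

Because the current best particle is never perturbed, the swarm's best value cannot get worse; the segment-wise exemplars are what inject diversity across dimensions.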
Zhang, Yong-Feng; Chiang, Hsiao-Dong
2016-06-20
A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
Trust regions in Kriging-based optimization with expected improvement
NASA Astrophysics Data System (ADS)
Regis, Rommel G.
2016-06-01
The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
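The trust-region adjustment TRIKE describes, shrinking or growing the region based on the ratio of actual improvement to Expected Improvement, can be sketched as follows. The thresholds and scaling factors here are hypothetical, borrowed from classical trust-region practice; the paper's actual rules may differ:

```python
def update_trust_region(radius, actual_improvement, expected_improvement,
                        shrink=0.5, grow=2.0, low=0.25, high=0.75,
                        min_radius=1e-3, max_radius=10.0):
    """Adjust a trust-region radius from the ratio of actual to expected
    improvement, in the spirit of classical trust-region methods.
    Hypothetical thresholds; TRIKE's published rules may differ."""
    if expected_improvement <= 0:
        return max(radius * shrink, min_radius)
    rho = actual_improvement / expected_improvement
    if rho < low:          # surrogate was over-optimistic: shrink
        radius *= shrink
    elif rho > high:       # surrogate prediction held up: expand
        radius *= grow
    return min(max(radius, min_radius), max_radius)

print(update_trust_region(1.0, 0.9, 1.0))  # rho = 0.9 -> grows to 2.0
print(update_trust_region(1.0, 0.1, 1.0))  # rho = 0.1 -> shrinks to 0.5
```

A small radius concentrates the EI maximization near the incumbent (helpful in steep, narrow basins); a large one restores EGO-like global search when the surrogate is trustworthy.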
Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.
Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao
2015-04-01
Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnosis and classification. However, these data are usually redundant and noisy, and only a subset of them presents distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography based optimization is proposed to select a good subset of informative genes relevant to classification. In the proposed algorithm, the Fisher-Markov selector is first used to choose a fixed number of genes. Second, to make biogeography based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance the exploration and exploitation abilities. The resulting algorithm, called DBBO, integrates these two models. Finally, DBBO is used for feature selection, with three classifiers evaluated under 10-fold cross-validation. To show the effectiveness and efficiency of the algorithm, it is tested on four breast cancer benchmark datasets. Compared with a genetic algorithm, particle swarm optimization, a differential evolution algorithm, and hybrid biogeography based optimization, experimental results demonstrate that the proposed method is better than, or at least comparable with, previous methods from the literature in terms of the quality of the solutions obtained.
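Discrete migration and mutation on binary gene masks can be sketched in a few lines. Everything here is a toy stand-in: synthetic relevance scores replace real expression data, the objective is a made-up wrapper score, and the linear immigration rates are a simplification of BBO's migration curves:

```python
import random

def fitness(mask, relevance):
    # Toy objective: reward selected informative genes, penalize subset size.
    return sum(r for m, r in zip(mask, relevance) if m) - 0.3 * sum(mask)

def dbbo_sketch(relevance, pop=12, iters=60, pmut=0.02, seed=7):
    rng = random.Random(seed)
    n = len(relevance)
    habitats = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(iters):
        habitats.sort(key=lambda h: fitness(h, relevance), reverse=True)
        for rank in range(1, pop):              # the best habitat is elitist
            immigration = rank / (pop - 1)      # worse habitats immigrate more
            for j in range(n):
                if rng.random() < immigration:  # discrete migration: copy a
                    src = rng.randrange(rank)   # bit from a better habitat
                    habitats[rank][j] = habitats[src][j]
                if rng.random() < pmut:         # discrete mutation: flip a bit
                    habitats[rank][j] ^= 1
    return max(habitats, key=lambda h: fitness(h, relevance))

relevance = [0.9, 0.8, 0.05, 0.02, 0.7, 0.01]  # synthetic gene relevances
print(dbbo_sketch(relevance))
```

Migration shares bits from good habitats (exploitation) while mutation flips bits at random (exploration); balancing the two is the point of the discrete models the abstract describes.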
Fatigue reliability based optimal design of planar compliant micropositioning stages
NASA Astrophysics Data System (ADS)
Wang, Qiliang; Zhang, Xianmin
2015-10-01
Conventional compliant micropositioning stages are usually developed based on static strength and deterministic methods, which may lead to either unsafe or excessive designs. This paper presents a fatigue reliability analysis and optimal design of a three-degree-of-freedom (3 DOF) flexure-based micropositioning stage. Kinematic, modal, static, and fatigue stress modelling of the stage were conducted using the finite element method. The maximum equivalent fatigue stress in the hinges was derived using sequential quadratic programming. The fatigue strength of the hinges was obtained by considering various influencing factors. On this basis, the fatigue reliability of the hinges was analysed using the stress-strength interference method. Fatigue-reliability-based optimal design of the stage was then conducted using the genetic algorithm and MATLAB. To make fatigue life testing easier, a 1 DOF stage was then optimized and manufactured. Experimental results demonstrate the validity of the approach.
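The stress-strength interference step has a closed form when both stress and strength are normally distributed: reliability = Phi(beta) with beta = (mu_strength - mu_stress) / sqrt(sd_strength^2 + sd_stress^2). A minimal sketch with made-up fatigue statistics (not the paper's hinge values):

```python
import math

def stress_strength_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    # Stress-strength interference for normal stress and strength:
    # reliability = P(strength > stress) = Phi(beta), where beta is the
    # classic safety (reliability) index.
    beta = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    reliability = 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))
    return reliability, beta

# Illustrative fatigue strength/stress statistics (MPa), not measured data:
rel, beta = stress_strength_reliability(600.0, 40.0, 450.0, 30.0)
print(f"beta = {beta:.3f}, reliability = {rel:.6f}")  # beta = 3.000
```

A deterministic static-strength check would only compare the means (600 > 450 and declare the design safe); the interference method exposes how much scatter in both distributions erodes that margin, which is what makes the reliability-based optimization in the paper meaningful.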
A seismic risk for the lunar base
NASA Technical Reports Server (NTRS)
Oberst, Juergen; Nakamura, Yosio
1992-01-01
Shallow moonquakes, which were discovered during observations following the Apollo lunar landing missions, may pose a threat to lunar surface operations. The nature of these moonquakes is similar to that of intraplate earthquakes, which include infrequent but destructive events. Therefore, there is a need for detailed study to assess the possible seismic risk before establishing a lunar base.
Optimizing medical data quality based on multiagent web service framework.
Wu, Ching-Seh; Khoury, Ibrahim; Shah, Hemant
2012-07-01
One of the most important issues in e-healthcare information systems is optimizing the quality of medical data extracted from distributed and heterogeneous environments, which can greatly improve diagnostic and treatment decision making. This paper proposes a multiagent web service framework based on service-oriented architecture for the optimization of medical data quality in e-healthcare information systems. Based on this framework, an evolutionary algorithm (EA) for the dynamic optimization of medical data quality is proposed. The framework consists of two main components: first, an EA is used to dynamically compose medical processes into an optimal task sequence according to specific quality attributes; second, a multiagent framework discovers, monitors, and reports any inconsistency between the optimized task sequence and the actual medical records. To demonstrate the proposed framework, experimental results for a breast cancer case study are provided. Furthermore, to show the performance of the algorithm, a comparison with other works from the literature is presented.
Efficiency Improvements to the Displacement Based Multilevel Structural Optimization Algorithm
NASA Technical Reports Server (NTRS)
Plunkett, C. L.; Striz, A. G.; Sobieszczanski-Sobieski, J.
2001-01-01
Multilevel Structural Optimization (MSO) continues to be an area of research interest in engineering optimization. In the present project, the weight optimization of beams and trusses using Displacement based Multilevel Structural Optimization (DMSO), a member of the MSO set of methodologies, is investigated. In the DMSO approach, the optimization task is subdivided into a single system and multiple subsystems level optimizations. The system level optimization minimizes the load unbalance resulting from the use of displacement functions to approximate the structural displacements. The function coefficients are then the design variables. Alternately, the system level optimization can be solved using the displacements themselves as design variables, as was shown in previous research. Both approaches ensure that the calculated loads match the applied loads. In the subsystems level, the weight of the structure is minimized using the element dimensions as design variables. The approach is expected to be very efficient for large structures, since parallel computing can be utilized in the different levels of the problem. In this paper, the method is applied to a one-dimensional beam and a large three-dimensional truss. The beam was tested to study possible simplifications to the system level optimization. In previous research, polynomials were used to approximate the global nodal displacements. The number of coefficients of the polynomials equally matched the number of degrees of freedom of the problem. Here it was desired to see if it is possible to only match a subset of the degrees of freedom in the system level. This would lead to a simplification of the system level, with a resulting increase in overall efficiency. However, the methods tested for this type of system level simplification did not yield positive results. The large truss was utilized to test further improvements in the efficiency of DMSO. In previous work, parallel processing was applied to the
An Optimization-based Atomistic-to-Continuum Coupling Method
Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; ...
2014-08-21
In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.
Optimal weight based on energy imbalance and utility maximization
NASA Astrophysics Data System (ADS)
Sun, Ruoyan
2016-01-01
This paper investigates the optimal weight for both males and females using energy imbalance and utility maximization. Based on the difference between energy intake and expenditure, we develop a state equation that reveals the weight gain arising from this energy gap. We construct an objective function that considers food consumption, eating habits, and survival rate to measure utility. Applying mathematical tools from optimal control and the qualitative theory of differential equations, we obtain the following results. For both males and females, the optimal weight is larger than the physiologically optimal weight calculated from the Body Mass Index (BMI). We also study the corresponding trajectories to the steady-state weight. Depending on the values of a few parameters, the steady state can be either a saddle point with a monotonic trajectory or a focus with dampened oscillations.
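The two notions of weight the abstract compares can be made concrete with a minimal energy-balance sketch. The constants below (roughly 30 kcal/kg/day maintenance expenditure and 7700 kcal per kg of tissue) are rough textbook figures, not the paper's calibrated model, and the simulation shows only the state equation, not the utility-maximizing control:

```python
def bmi_optimal_weight(height_m, bmi=22.0):
    # Physiologically optimal weight from a target BMI: weight = BMI * height^2.
    return bmi * height_m ** 2

def steady_state_weight(w0, intake_kcal_per_day, days=7300):
    # Minimal energy-balance state equation: the daily weight change is the
    # gap between intake and weight-dependent expenditure, divided by the
    # energy density of body tissue (rough textbook constants).
    w = w0
    for _ in range(days):
        w += (intake_kcal_per_day - 30.0 * w) / 7700.0
    return w

print(round(bmi_optimal_weight(1.75), 1))           # 67.4 kg at BMI 22
print(round(steady_state_weight(70.0, 2400.0), 1))  # settles at 80.0 kg
```

With these illustrative numbers the energy-balance steady state (80 kg) sits above the BMI-derived weight (67.4 kg), mirroring the paper's qualitative finding that the utility-optimal weight exceeds the physiological one.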
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
Aerodynamic Shape Optimization Based on Free-form Deformation
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2004-01-01
This paper presents a free-form deformation technique suitable for aerodynamic shape optimization. Because the proposed technique is independent of grid topology, we can treat structured and unstructured computational fluid dynamics grids in the same manner. The proposed technique is an alternative shape parameterization technique to a trivariate volume technique. It retains the flexibility and freedom of trivariate volumes for CFD shape optimization, but it uses a bivariate surface representation. This reduces the number of design variables by an order of magnitude, and it provides much better control for surface shape changes. The proposed technique is simple, compact, and efficient. The analytical sensitivity derivatives are independent of the design variables and are easily computed for use in a gradient-based optimization. The paper includes the complete formulation and aerodynamic shape optimization results.
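The bivariate idea can be sketched with a Bernstein-weighted control lattice: a surface point's displacement is a blend of control-point perturbations, independent of the underlying grid topology. This is a generic textbook FFD sketch with a toy 3x3 lattice, not the paper's actual parameterization:

```python
from math import comb

def bernstein(n, i, t):
    # Bernstein basis polynomial B_{i,n}(t).
    return comb(n, i) * t ** i * (1 - t) ** (n - i)

def ffd_displace(u, v, control_dz):
    # Bivariate free-form deformation: the displacement at parametric
    # surface location (u, v) is a Bernstein-weighted blend of the
    # control-lattice perturbations, independent of the CFD grid topology.
    m = len(control_dz) - 1
    n = len(control_dz[0]) - 1
    return sum(bernstein(m, i, u) * bernstein(n, j, v) * control_dz[i][j]
               for i in range(m + 1) for j in range(n + 1))

# A 3x3 control lattice with only the centre control point perturbed:
dz = [[0.0, 0.0, 0.0],
      [0.0, 1.0, 0.0],
      [0.0, 0.0, 0.0]]
print(ffd_displace(0.5, 0.5, dz))  # 0.25: mid-patch feels the bump most
print(ffd_displace(0.0, 0.0, dz))  # 0.0: patch corners stay fixed
```

Because every grid point is evaluated through the same (u, v) parameterization, moving a handful of control points smoothly deforms all grid points at once, which is exactly what keeps the design-variable count an order of magnitude smaller than perturbing grid points directly.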
A systematic optimization for graphene-based supercapacitors
NASA Astrophysics Data System (ADS)
Deuk Lee, Sung; Lee, Han Sung; Kim, Jin Young; Jeong, Jaesik; Kahng, Yung Ho
2017-08-01
Increasing the energy-storage density for supercapacitors is critical for their applications. Many researchers have attempted to identify optimal candidate component materials to achieve this goal, but investigations into systematically optimizing their mixing rate for maximizing the performance of each candidate material have been insufficient, which hinders the progress in their technology. In this study, we employ a statistically systematic method to determine the optimum mixing ratio of three components that constitute graphene-based supercapacitor electrodes: reduced graphene oxide (rGO), acetylene black (AB), and polyvinylidene fluoride (PVDF). By using the extreme-vertices design, the optimized proportion is determined to be (rGO: AB: PVDF = 0.95: 0.00: 0.05). The corresponding energy-storage density increases by a factor of 2 compared with that of non-optimized electrodes. Electrochemical and microscopic analyses are performed to determine the reason for the performance improvements.
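Choosing a constrained mixture can be sketched by enumerating candidate proportions on the simplex and scoring each. Everything below is illustrative: the response surface is a made-up stand-in for measured capacitance, the constraints are hypothetical, and simple grid enumeration stands in for the statistical extreme-vertices design:

```python
def constrained_mixtures(step=0.05, min_pvdf=0.05, max_ab=0.30):
    # Candidate (rGO, AB, PVDF) mass fractions on a simplex grid, under
    # illustrative constraints: a minimum of PVDF binder, a cap on AB.
    n = round(1 / step)
    for i in range(n + 1):
        for j in range(n + 1 - i):
            rgo = round(i * step, 10)
            ab = round(j * step, 10)
            pvdf = round(1.0 - rgo - ab, 10)
            if pvdf >= min_pvdf and ab <= max_ab:
                yield rgo, ab, pvdf

def mock_capacitance(mix):
    # Made-up response surface (NOT measured data): storage rises with rGO
    # and falls with inert binder beyond the minimum needed for adhesion.
    rgo, ab, pvdf = mix
    return 100.0 * rgo + 20.0 * ab - 150.0 * max(0.0, pvdf - 0.05)

best = max(constrained_mixtures(), key=mock_capacitance)
print(best)  # the mixture with the highest (mock) response
```

The mixture constraint (the three fractions must sum to 1) is what makes this a simplex design problem rather than an ordinary factorial one; the real study fits a response model to measured electrodes instead of using a mock function.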
Adjoint-based airfoil shape optimization in transonic flow
NASA Astrophysics Data System (ADS)
Gramanzini, Joe-Ray
The primary focus of this work is efficient aerodynamic shape optimization in transonic flow. Adjoint-based optimization techniques are employed on airfoil sections and evaluated in terms of computational accuracy as well as efficiency. This study examines two test cases proposed by the AIAA Aerodynamic Design Optimization Discussion Group. The first is a two-dimensional, transonic, inviscid, non-lifting optimization of a Modified-NACA 0012 airfoil. The second is a two-dimensional, transonic, viscous optimization problem using a RAE 2822 airfoil. The FUN3D CFD code of NASA Langley Research Center is used as the flow solver for the gradient-based optimization cases. Two shape parameterization techniques are employed to study the effect of the parameterization and of the number of design variables on the final optimized shape: Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD) and the BandAids free-form deformation technique. For the two airfoil cases, angle of attack is treated as a global design variable. The thickness and camber distributions are the local design variables for MASSOUD, and selected airfoil surface grid points are the local design variables for BandAids. Using the MASSOUD technique, a drag reduction of 72.14% is achieved for the NACA 0012 case, reducing the total number of drag counts from 473.91 to 130.59. Employing the BandAids technique yields a 78.67% drag reduction, from 473.91 to 99.98. The RAE 2822 case exhibited a drag reduction from 217.79 to 132.79 counts, a 39.05% decrease using BandAids.
Optimization of Designs for Nanotube-based Scanning Probes
NASA Technical Reports Server (NTRS)
Harik, V. M.; Gates, T. S.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Optimization of designs for nanotube-based scanning probes, which may be used for high-resolution characterization of nanostructured materials, is examined. Continuum models to analyze the nanotube deformations are proposed to help guide selection of the optimum probe. The limitations on the use of these models that must be accounted for before applying to any design problem are presented. These limitations stem from the underlying assumptions and the expected range of nanotube loading, end conditions, and geometry. Once the limitations are accounted for, the key model parameters along with the appropriate classification of nanotube structures may serve as a basis for the design optimization of nanotube-based probe tips.
An image morphing technique based on optimal mass preserving mapping.
Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen
2007-06-01
Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L(2) mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods.
NASA Astrophysics Data System (ADS)
Heydari, Ali
This dissertation investigates neural network (NN)-based optimal solutions within an approximate dynamic programming (ADP) framework for new classes of engineering and non-engineering problems, along with the associated difficulties and challenges. In the enclosed eight papers, the ADP framework is utilized for solving fixed-final-time problems (also called terminal control problems) and problems of a switching nature. An ADP-based algorithm is proposed in Paper 1 for solving fixed-final-time problems with a soft terminal constraint, in which a single neural network with a single set of weights is utilized. Paper 2 investigates fixed-final-time problems with hard terminal constraints. The optimality analysis of the ADP-based algorithm for fixed-final-time problems is the subject of Paper 3, in which it is shown that the proposed algorithm leads to the global optimal solution provided certain conditions hold. Afterwards, the developments in Papers 1 to 3 are used to tackle a more challenging class of problems, namely, optimal control of switching systems. This class of problems is divided into problems with fixed mode sequence (Papers 4 and 5) and problems with free mode sequence (Papers 6 and 7). Each of these two classes is further divided into problems with autonomous subsystems (Papers 4 and 6) and problems with controlled subsystems (Papers 5 and 7). Different ADP-based algorithms are developed, and proofs of convergence of the proposed iterative algorithms are presented. Moreover, in Paper 8, an extension of these developments is provided for online learning of the optimal switching solution for problems with modeling uncertainty. Each of the theoretical developments is numerically analyzed using different real-world or benchmark problems.
Trading risk and performance for engineering design optimization using multifidelity analyses
NASA Astrophysics Data System (ADS)
Rajnarayan, Dev Gorur
Computers pervade our lives today: from communication to calculation, their influence percolates many spheres of our existence. With continuing advances in computing, simulations are becoming increasingly complex and accurate. Powerful high-fidelity simulations mimic and predict a variety of real-life scenarios, with applications ranging from entertainment to engineering. The most accurate of such engineering simulations come at a high cost in terms of computing resources and time. Engineers use such simulations to predict the real-world performance of products they design; that is, they use them for analysis. Needless to say, the emphasis is on accuracy of the prediction. For such analysis, one would like to use the most accurate simulation available, and such a simulation is likely to be at the limits of available computing power, quite independently of advances in computing. In engineering design, however, the goal is somewhat different. Engineering design is generally posed as an optimization problem, where the goal is to tweak a set of available inputs or parameters, called design variables, to create a design that is optimal in some way, and meets some preset requirements. In other words, we would like to modify the design variables in order to optimize some figure of merit, called an objective function, subject to a set of constraints, typically formulated as equations or inequalities to be satisfied. Typically, a complex engineering system such as an aircraft is described by thousands of design variables, all of which are optimized during the design process. Nevertheless, do we always need to use the highest-fidelity simulations as the objective function and constraints for engineering design? Or can we afford to use lower-fidelity simulations with appropriate corrections? In this thesis, we present a new methodology for surrogate-based optimization. Existing methods combine the possibly erroneous predictions of the low-fidelity surrogate with estimates of
Economically optimal risk reduction strategies in the face of uncertain climate thresholds
NASA Astrophysics Data System (ADS)
McInerney, D.; Keller, K.
2006-12-01
Anthropogenic greenhouse gas emissions may trigger climate threshold responses, such as a collapse of the North Atlantic meridional overturning circulation (MOC). Climate threshold responses have been interpreted as an example of "dangerous anthropogenic interference with the climate system" in the sense of the United Nations Framework Convention on Climate Change (UNFCCC). One UNFCCC objective is to "prevent" such dangerous anthropogenic interference. The current uncertainty about important parameters of the coupled natural-human system implies, however, that this UNFCCC objective can only be achieved in a probabilistic sense. In other words, climate management can only reduce - but not entirely eliminate - the risk of crossing climate thresholds. Here we use an integrated assessment model of climate change to derive economically optimal risk-reduction strategies. We implement a stochastic version of the DICE model and account for uncertainty about four parameters that have been previously identified as dominant drivers of the uncertain system response. The resulting model is, of course, just a crude approximation as it neglects, for example, some structural uncertainty and focuses on a single threshold, out of many potential climate responses. Subject to this and other caveats, our analysis suggests five main conclusions. First, reducing the numerical artifacts due to sub-sampling the parameter probability density functions to reasonable levels requires thousands of samples. Conclusions of previous studies that are based on much smaller sample sizes may hence need to be revisited. Second, following a business-as-usual (BAU) scenario results in odds for an MOC collapse in the next 150 years exceeding 1 in 3 in this model. Third, an economically "optimal" strategy (that maximizes the expected utility of the decision-maker) reduces carbon dioxide emissions by approximately 25 percent at the end of this century, compared with BAU emissions. Perhaps surprisingly, this
Risk-based regulation: A utility's perspective
Chapman, J.R. )
1993-01-01
Yankee Atomic Electric Company (YAEC) has supported the operation of several plants under the premise that regulations and corresponding implementation strategies are intended to be "risk based." During the past 15 yr, these efforts have changed from essentially qualitative to a blend of qualitative and quantitative. Our observation is that implementation of regulatory requirements has often not addressed the risk significance of the underlying intent of regulations on a proportionate basis. It has caused our resource allocation to be skewed, to the point that our cost-competitiveness has eroded, but more importantly we have missed opportunities for increases in safety.
Demonstrating the benefits of template-based design-technology co-optimization
NASA Astrophysics Data System (ADS)
Liebmann, Lars; Hibbeler, Jason; Hieter, Nathaniel; Pileggi, Larry; Jhaveri, Tejas; Moe, Matthew; Rovner, Vyacheslav
2010-03-01
The concept of template-based design-technology co-optimization as a means of curbing escalating design complexity and increasing technology qualification risk is described. Data is presented highlighting the design efficacy of this proposal in terms of power, performance, and area benefits, quantifying the specific contributions of complex logic gates in this design optimization. Experimental results from 32nm technology node bulk CMOS wafers are presented to quantify the variability and design-margin reductions as well as yield and manufacturability improvements achievable with the proposed template-based design-technology co-optimization technique. The paper closes with data showing the predictable composability of individual templates, demonstrating a fundamental requirement of this proposal.
ODVBA: Optimally-Discriminative Voxel-Based Analysis
Davatzikos, Christos
2012-01-01
Gaussian smoothing of images prior to applying voxel-based statistics is an important step in Voxel-Based Analysis and Statistical Parametric Mapping (VBA-SPM), and is used to account for registration errors, to Gaussianize the data, and to integrate imaging signals from a region around each voxel. However, it has also become a limitation of VBA-SPM based methods, since it is often chosen empirically and lacks spatial adaptivity to the shape and spatial extent of the region of interest, such as a region of atrophy or functional activity. In this paper, we propose a new framework, named Optimally-Discriminative Voxel-Based Analysis (ODVBA), for determining the optimal spatially adaptive smoothing of images, followed by applying voxel-based group analysis. In ODVBA, Nonnegative Discriminative Projection is applied regionally to get the direction that best discriminates between two groups, e.g., patients and controls; this direction is equivalent to local filtering by an optimal kernel whose coefficients define the optimally discriminative direction. By considering all the neighborhoods that contain a given voxel, we then compose this information to produce the statistic for each voxel. Finally, permutation tests are used to obtain a statistical parametric map of group differences. ODVBA has been evaluated using simulated data in which the ground truth is known and with data from an Alzheimer’s disease (AD) study. The experimental results have shown that the proposed ODVBA can precisely describe the shape and location of structural abnormality. PMID:21324774
A Modeling Framework for Optimizing F-35A Strategic Basing Decisions to Meet Training Requirements
2016-01-01
Chuck Stelzner, William W. Taylor, Joseph V. Vesely A Modeling Framework for Optimizing F-35A Strategic Basing Decisions to Meet Training Requirements...decision (Samaras et al., 2016). To begin to address this gap, PAF developed a methodology in FY 2014 to assess the cost, effectiveness, and risk
Risk assessment of soybean-based phytoestrogens.
Kwack, Seung Jun; Kim, Kyu-Bong; Kim, Hyung Sik; Yoon, Kyung Sil; Lee, Byung Mu
2009-01-01
Koreans generally consume high quantities of soybean-based foods that contain a variety of phytoestrogens, such as daidzein, genistein, and biochanin A. However, phytoestrogens are considered to be potential endocrine-disrupting chemicals (EDC), which interfere with the normal function of the hormonal and reproductive systems. Therefore, dietary exposure to soybean-based phytoestrogens is of concern for Koreans, and comparative dietary risk assessments are required between Japanese (high consumers) and Americans (low consumers). In this study, a relative risk assessment was conducted based upon daily intake levels of soybean-based foods and phytoestrogens in a Korean cohort, and the risks of phytoestrogens were compared with those posed by estradiol and other EDC. Koreans approximately 30-49 yr of age consume on average a total of 135.2 g/d of soy-based foods including soybean, soybean sauce, soybean paste, and soybean oil, and 0.51 mg/kg body weight (bw)/d of phytoestrogens such as daidzein and genistein. Using estimated daily intakes (EDI) and estrogenic potencies (EP), margins of safety (MOS) were calculated; the MOS for estradiol is 0.05 (an MOS value <1 is considered to indicate a positive estrogenic effect). The MOS values of 1.89 for Japanese, 1.96 for Koreans, and 5.55 for Americans indicate that consumption of soybean-based foods exerted no apparent estrogenic effects, as all values were higher than 1. For other synthetic EDC used as reference values, MOS values were dieldrin 27, nonylphenol 250, butyl benzyl phthalate 321, bisphenol A 1000, biochanin A 2203, and coumesterol 2898. These results suggest that dietary exposure to phytoestrogens, such as daidzein and genistein, poses a relatively higher health risk for humans than synthetic EDC, although MOS values were all greater than 1.
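The MOS comparison above is simple arithmetic. As an illustrative sketch (not the study's actual formula, which is not given here), one can assume an MOS of the form reference intake divided by potency-weighted exposure; all numbers below are made up for illustration:

```python
# Hedged sketch: margin-of-safety (MOS) from an estimated daily intake (EDI)
# and an estrogenic potency (EP). Assumed model, not the cited study's:
#   MOS = reference / (EDI * EP), with MOS > 1 read as "no apparent
# estrogenic effect" under this model.

def margin_of_safety(edi_mg_per_kg, estrogenic_potency, reference=1.0):
    """MOS > 1 suggests no apparent estrogenic effect under this model."""
    weighted_exposure = edi_mg_per_kg * estrogenic_potency
    return reference / weighted_exposure

# Illustrative inputs only (potency and reference values are hypothetical).
mos = margin_of_safety(edi_mg_per_kg=0.51, estrogenic_potency=1.0)
print(round(mos, 2))
```

With these made-up inputs the MOS is simply 1/0.51; the study's reported values would require its actual EP and reference figures.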
Neural network based decomposition in optimal structural synthesis
NASA Technical Reports Server (NTRS)
Hajela, P.; Berke, L.
1992-01-01
The present paper describes potential applications of neural networks in the multilevel decomposition based optimal design of structural systems. The generic structural optimization problem of interest, if handled as a single problem, results in a problem of large dimensionality. Decomposition strategies allow this problem to be represented by a set of smaller, decoupled problems, for which solutions may either be obtained with greater ease or be obtained in parallel. Neural network models derived through supervised training are used in two distinct modes in this work. The first uses neural networks to make available efficient analysis models for use in repetitive function evaluations as required by the optimization algorithm. In the second mode, neural networks are used to represent the coupling that exists between the decomposed subproblems. The approach is illustrated by application to the multilevel decomposition-based synthesis of representative truss and frame structures.
Jiang, Fan; Wu, Hao; Yue, Haizhen; Jia, Fei; Zhang, Yibao
2017-03-01
The enhanced dosimetric performance of knowledge-based volumetric modulated arc therapy (VMAT) planning might be jointly contributed by the patient-specific optimization objectives, as estimated by the RapidPlan model, and by the potentially improved Photon Optimizer (PO) algorithm relative to the previous Progressive Resolution Optimizer (PRO) engine. As PO is mandatory for RapidPlan estimation but optional for conventional manual planning, comparing the two optimizers may provide practical guidelines for algorithm selection, because knowledge-based planning may not replace the current method completely in the short run. Using a previously validated dose-volume histogram (DVH) estimation model that can produce clinically acceptable plans automatically for rectal cancer patients without interactive manual adjustment, this study reoptimized 30 historically approved plans (referred to as clinical plans, created manually with PRO) with the RapidPlan solution (PO plans). The PRO algorithm was then used to optimize the plans again using the same dose-volume constraints as the PO plans, with the line objectives converted automatically into a series of point objectives (PRO plans). On the basis of comparable target dose coverage, the combined application of the new objectives and the PO algorithm significantly reduced organs-at-risk (OAR) exposure by 23.49-32.72% relative to the clinical plans. These discrepancies were largely preserved after substituting PRO for PO, indicating that the dosimetric improvements were mostly attributable to the refined objectives. Therefore, Eclipse users of earlier versions may instantly benefit from adopting the model-generated objectives from other RapidPlan-equipped centers, even with the PRO algorithm. However, the additional contribution made by PO relative to PRO accounted for 1.54-3.74%, suggesting PO should be selected with priority whenever available, with or without the RapidPlan solution as a purchasable package. Significantly
Rutter, D R; Quine, L; Albery, I P
1998-11-01
In the first phase of a prospective investigation, a national sample of motorcyclists completed a postal questionnaire about their perceptions of risk, their behaviour on the roads and their history of accidents and spills. In the second phase a year later, they reported on their accident history and behaviour over the preceding 12 months. A total of 723 respondents completed both questionnaires. Four sets of findings are reported. First, the group as a whole showed unrealistic optimism: on average, respondents believed themselves to be less at risk than other motorcyclists of an accident needing hospital treatment in the next year. Second, optimism was tempered by 'relative realism', in that respondents who were young and inexperienced saw themselves as more at risk than other motorcyclists, as did riders who reported risky behaviours on the road. Third, there was some evidence of debiasing by personal history, in that having a friend or a relative who had been killed or injured on the roads was associated with perceptions of absolute risk of injury or death--though there were no effects on comparative risk, and a history of one's own accidents had no effect on any of the judgments. Finally, there was good evidence that perceptions of risk predicted subsequent behaviour, though generally in the direction not of precaution adoption but of precaution abandonment: the greater the perceived risk at time 1, the more frequent the risky behaviour at time 2. The implications of the findings are discussed, and possible interpretations are suggested.
Higginson, Andrew D; Fawcett, Tim W; Trimmer, Pete C; McNamara, John M; Houston, Alasdair I
2012-11-01
Animals live in complex environments in which predation risk and food availability change over time. To deal with this variability and maximize their survival, animals should take into account how long current conditions may persist and the possible future conditions they may encounter. This should affect their foraging activity, and with it their vulnerability to predation across periods of good and bad conditions. Here we develop a comprehensive theory of optimal risk allocation that allows for environmental persistence and for fluctuations in food availability as well as predation risk. We show that it is the duration of good and bad periods, independent of each other, rather than the overall proportion of time exposed to each that is the most important factor affecting behavior. Risk allocation is most pronounced when conditions change frequently, and optimal foraging activity can either increase or decrease with increasing exposure to bad conditions. When food availability fluctuates rapidly, animals should forage more when food is abundant, whereas when food availability fluctuates slowly, they should forage more when food is scarce. We also show that survival can increase as variability in predation risk increases. Our work reveals that environmental persistence should profoundly influence behavior. Empirical studies of risk allocation should therefore carefully control the duration of both good and bad periods and consider manipulating food availability as well as predation risk.
Credit risk evaluation based on social media.
Yang, Yang; Gu, Jing; Zhou, Zongfang
2016-07-01
Social media has been playing an increasingly important role in the sharing of individuals' opinions on many financial issues, including credit risk in investment decisions. This paper analyzes whether these opinions, which are transmitted through social media, can accurately predict enterprises' future credit risk. We consider financial-statement-oriented evaluation results based on logit and probit approaches as the benchmarks. We then conduct textual analysis to retrieve both posts and their corresponding commentaries published on two of the most popular social media platforms for financial investors in China. Professional advice from financial analysts is also investigated in this paper. Surprisingly, we find that the opinions extracted from both posts and commentaries surpass those of analysts in terms of credit risk prediction. Copyright © 2015 Elsevier Inc. All rights reserved.
TRUST-TECH based Methods for Optimization and Learning
NASA Astrophysics Data System (ADS)
Reddy, Chandan K.
2007-12-01
Many problems that arise in the machine learning domain deal with nonlinearity and quite often demand users to obtain global optimal solutions rather than local optimal ones. Optimization problems are inherent in machine learning algorithms, and hence many methods in machine learning were inherited from the optimization literature. In what is popularly known as the initialization problem, the quality of the solution obtained depends significantly on the given initialization values. The recently developed TRUST-TECH (TRansformation Under STability-reTaining Equilibria CHaracterization) methodology systematically explores the subspace of the parameters to obtain a complete set of local optimal solutions. In this thesis work, we propose TRUST-TECH based methods for solving several optimization and machine learning problems. Two stages, namely the local stage and the neighborhood-search stage, are repeated alternately in the solution space to achieve improvements in the quality of the solutions. Our methods were tested on both synthetic and real datasets, and the advantages of using this novel framework are clearly manifested. This framework not only reduces the sensitivity to initialization, but also allows the flexibility for practitioners to use various global and local methods that work well for a particular problem of interest. Other hierarchical stochastic algorithms, such as evolutionary algorithms and smoothing algorithms, are also studied, and frameworks for combining these methods with TRUST-TECH have been proposed and evaluated on several test systems.
Application of a simulated annealing optimization to a physically based erosion model.
Santos, C A G; Freire, P K M M; Arruda, P M
2012-01-01
A major difficulty in calibrating physically based erosion models has been the lack of robust optimization tools. This paper presents the essential concepts and an application in which the parameters of an erosion model are optimized using data collected in an experimental basin, with a global optimization method known as simulated annealing (SA), which is suitable for solving large-scale optimization problems. The physically based erosion model chosen to be optimized here is the Watershed Erosion Simulation Program (WESP), which was developed for small basins to generate the hydrograph and the respective sedigraph. The field data were collected in an experimental basin located in a semiarid region of Brazil. On the basis of these results, the following erosion parameters were optimized: the soil moisture-tension parameter (N(s)), which also depends on the initial moisture content; the channel erosion parameter (a); the soil detachability factor (K(R)); and the sediment entrainment parameter for rainfall impact (K(I)), whose values could serve as initial estimates for semiarid regions within northeastern Brazil.
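As a minimal sketch of the simulated-annealing idea used above (this is generic SA, not WESP or the paper's objective; the "misfit" function and all constants are invented stand-ins):

```python
import math
import random

# Generic simulated annealing in the spirit of calibrating erosion-model
# parameters: minimize a misfit between simulated and observed output.
# Here the misfit is a toy quadratic around an invented "true" parameter set.

def objective(params):
    true = [2.0, 0.5, 1.5, 0.8]  # hypothetical target parameters
    return sum((p - t) ** 2 for p, t in zip(params, true))

def simulated_annealing(obj, x0, t0=1.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    x, fx = list(x0), obj(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(steps):
        cand = [xi + rng.gauss(0, 0.1) for xi in x]  # random perturbation
        fc = obj(cand)
        # Accept downhill moves always; uphill with Boltzmann probability.
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

best, fbest = simulated_annealing(objective, [0.0, 0.0, 0.0, 0.0])
```

In a real calibration, `objective` would run the erosion model and compare the simulated sedigraph against field observations.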
Game theory and risk-based leveed river system planning with noncooperation
NASA Astrophysics Data System (ADS)
Hui, Rui; Lund, Jay R.; Madani, Kaveh
2016-01-01
Optimal risk-based levee designs are usually developed for economic efficiency. However, in river systems with multiple levees, the planning and maintenance of different levees are controlled by different agencies or groups. For example, along many rivers, levees on opposite riverbanks constitute a simple leveed river system with each levee designed and controlled separately. Collaborative planning of the two levees can be economically optimal for the whole system. Independent and self-interested landholders on opposite riversides often are willing to separately determine their individual optimal levee plans, resulting in a less efficient leveed river system from an overall society-wide perspective (the tragedy of the commons). We apply game theory to simple leveed river system planning where landholders on each riverside independently determine their optimal risk-based levee plans. Outcomes from noncooperative games are analyzed and compared with the overall economically optimal outcome, which minimizes net flood cost system-wide. The system-wide economically optimal solution generally transfers residual flood risk to the lower-valued side of the river, but is often impractical without compensating for flood risk transfer to improve outcomes for all individuals involved. Such compensation can be determined and implemented with landholders' agreements on collaboration to develop an economically optimal plan. By examining iterative multiple-shot noncooperative games with reversible and irreversible decisions, the costs of myopic levee-planning decisions show the significance of considering the externalities and evolution path of dynamic water resource problems to improve decision-making.
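A toy sketch of the two-landholder game described above (not the paper's model; the cost function, flood probabilities, and land values are all invented to illustrate how the Nash equilibrium can diverge from the system optimum):

```python
from itertools import product

HEIGHTS = (0, 1, 2)  # discrete levee-height options for each landholder

def cost(own, other, land_value):
    construction = 1.0 * own
    # Hypothetical hydraulics: raising your levee lowers your flood
    # probability but pushes risk across the river (the externality).
    flood_prob = max(0.0, 0.4 - 0.15 * own + 0.2 * other)
    return construction + flood_prob * land_value

def nash_equilibria(value_a, value_b):
    eq = []
    for a, b in product(HEIGHTS, repeat=2):
        best_a = all(cost(a, b, value_a) <= cost(a2, b, value_a)
                     for a2 in HEIGHTS)
        best_b = all(cost(b, a, value_b) <= cost(b2, a, value_b)
                     for b2 in HEIGHTS)
        if best_a and best_b:
            eq.append((a, b))
    return eq

def system_optimum(value_a, value_b):
    # Minimize total (both-sides) net flood cost.
    return min(product(HEIGHTS, repeat=2),
               key=lambda ab: cost(ab[0], ab[1], value_a)
                            + cost(ab[1], ab[0], value_b))

eq = nash_equilibria(10.0, 4.0)   # higher-valued side A, lower-valued side B
opt = system_optimum(10.0, 4.0)
```

With these invented numbers the unique Nash outcome has the high-valued side build a tall levee, while the system optimum builds nothing at all, illustrating the inefficiency (and why side payments might be needed).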
Electrochemical model based charge optimization for lithium-ion batteries
NASA Astrophysics Data System (ADS)
Pramanik, Sourav; Anwar, Sohel
2016-05-01
In this paper, we propose the design of a novel optimal strategy for charging the lithium-ion battery based on an electrochemical battery model that is aimed at improved performance. A performance index that aims at minimizing the charging effort along with a minimum deviation from the rated maximum thresholds for cell temperature and charging current has been defined. The method proposed in this paper aims at achieving a faster charging rate while maintaining safe limits for various battery parameters. Safe operation of the battery is achieved by including the battery bulk temperature as a control component in the performance index, which is of critical importance for electric vehicles. Another important aspect of the proposed performance objective is the efficiency of the algorithm, which would allow higher charging rates without compromising the internal electrochemical kinetics of the battery, preventing abusive conditions and thereby improving long-term durability. A more realistic model, based on battery electrochemistry, has been used for the design of the optimal algorithm as opposed to the conventional equivalent circuit models. To solve the optimization problem, Pontryagin's principle has been used, which is very effective for constrained optimization problems with both state and input constraints. Simulation results show that the proposed optimal charging algorithm is capable of shortening the charging time of a lithium-ion cell while maintaining the temperature constraint when compared with standard constant current charging. The designed method also maintains the internal states within limits that can avoid abusive operating conditions.
Surrogate based wind farm layout optimization using manifold mapping
NASA Astrophysics Data System (ADS)
Kaja Kamaludeen, Shaafi M.; van Zuijle, Alexander; Bijl, Hester
2016-09-01
The high computational cost associated with high fidelity wake models such as RANS or LES is the primary bottleneck to performing direct high fidelity wind farm layout optimization (WFLO) using accurate CFD based wake models. Therefore, a surrogate based multi-fidelity WFLO methodology (SWFLO) is proposed. The surrogate model is built using an SBO method referred to as manifold mapping (MM). As a verification, optimization of the spacing between two staggered wind turbines was performed using the proposed surrogate based methodology, and the performance was compared with that of direct optimization using the high fidelity model. Significant reduction in computational cost was achieved using MM: a maximum computational cost reduction of 65%, while arriving at the same optimum as direct high fidelity optimization. The similarity between the responses of the models and the number and position of the mapping points strongly influence the computational efficiency of the proposed method. As a proof of concept, realistic WFLO of a small 7-turbine wind farm is performed using the proposed surrogate based methodology. Two variants of the Jensen wake model with different decay coefficients were used as the fine and coarse models. The proposed SWFLO method arrived at the same optimum as the fine model with far fewer fine model simulations.
Optimal Test Design with Rule-Based Item Generation
ERIC Educational Resources Information Center
Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.
2013-01-01
Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…
Optimized Color Filter Arrays for Sparse Representation Based Demosaicking.
Li, Jia; Bai, Chenyan; Lin, Zhouchen; Yu, Jian
2017-03-08
Demosaicking is the problem of reconstructing a color image from the raw image captured by a digital color camera that covers its only imaging sensor with a color filter array (CFA). Sparse representation based demosaicking has been shown to produce superior reconstruction quality. However, almost all existing algorithms in this category use the CFAs which are not specifically optimized for the algorithms. In this paper, we consider optimally designing CFAs for sparse representation based demosaicking, where the dictionary is well-chosen. The fact that CFAs correspond to the projection matrices used in compressed sensing inspires us to optimize CFAs via minimizing the mutual coherence. This is more challenging than that for traditional projection matrices because CFAs have physical realizability constraints. However, most of the existing methods for minimizing the mutual coherence require that the projection matrices should be unconstrained, making them inapplicable for designing CFAs. We consider directly minimizing the mutual coherence with the CFA's physical realizability constraints as a generalized fractional programming problem, which needs to find sufficiently accurate solutions to a sequence of nonconvex nonsmooth minimization problems. We adapt the redistributed proximal bundle method to address this issue. Experiments on benchmark images testify to the superiority of the proposed method. In particular, we show that a simple sparse representation based demosaicking algorithm with our specifically optimized CFA can outperform LSSC [1]. To the best of our knowledge, it is the first sparse representation based demosaicking algorithm that beats LSSC in terms of CPSNR.
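The design criterion above is the mutual coherence of the projection matrix. As a small illustration (evaluation only; the CFA physical-realizability constraints and the paper's minimization method are not reproduced here):

```python
import math

# Mutual coherence of a matrix: the largest absolute inner product between
# distinct unit-normalized columns. Lower coherence is better for
# compressed-sensing-style recovery; minimizing it under CFA constraints is
# the harder problem the abstract addresses.

def mutual_coherence(matrix):
    cols = list(zip(*matrix))
    normed = []
    for c in cols:
        n = math.sqrt(sum(v * v for v in c))
        normed.append([v / n for v in c])  # unit-normalize each column
    mu = 0.0
    for i in range(len(normed)):
        for j in range(i + 1, len(normed)):
            dot = abs(sum(a * b for a, b in zip(normed[i], normed[j])))
            mu = max(mu, dot)
    return mu

identity = [[1, 0], [0, 1]]   # orthonormal columns: coherence 0
skewed = [[1, 1], [0, 1]]     # correlated columns: coherence 1/sqrt(2)
print(mutual_coherence(identity))  # 0.0
```

A CFA-design procedure would search over physically realizable (nonnegative, bounded) filter matrices to drive this quantity down.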
Reliability-Based Design Optimization Using Buffered Failure Probability
2010-06-01
missile. One component of the missile's launcher is an optical system. Suppose that two different optical systems, 1 and 2, are available for...
Optimized ferrozine-based assay for dissolved iron.
Jeitner, Thomas M
2014-06-01
The following report describes a simple and optimized assay for the detection of iron in solution based on the binding of this metal by ferrozine. This assay accurately measures between 1 and 200 μM sample iron concentrations within 2½ hours. Copyright © 2014 Elsevier Inc. All rights reserved.
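Quantitation in such an assay reduces to a linear standard curve in the Beer-Lambert regime. As a hedged sketch (the slope, intercept, and standards below are illustrative, not the cited protocol's values):

```python
# Hedged sketch: converting ferrozine-assay absorbance to dissolved-iron
# concentration via a least-squares linear standard curve A = m*C + b.
# A real assay fits m and b to iron standards run alongside the samples.

def fit_standard_curve(concentrations, absorbances):
    """Least-squares line A = m*C + b through the standards."""
    n = len(concentrations)
    mx = sum(concentrations) / n
    my = sum(absorbances) / n
    m = (sum((c - mx) * (a - my) for c, a in zip(concentrations, absorbances))
         / sum((c - mx) ** 2 for c in concentrations))
    b = my - m * mx
    return m, b

def iron_concentration(absorbance, m, b):
    return (absorbance - b) / m

# Illustrative standards over the assay's stated 1-200 uM range (made-up,
# perfectly linear absorbances).
stds_c = [0, 50, 100, 150, 200]
stds_a = [0.01, 0.51, 1.01, 1.51, 2.01]
m, b = fit_standard_curve(stds_c, stds_a)
print(round(iron_concentration(0.76, m, b), 1))  # 75.0 (uM)
```
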
A Risk-Based Sensor Placement Methodology
Lee, Ronald W; Kulesz, James J
2006-08-01
A sensor placement methodology is proposed to solve the problem of optimal location of sensors or detectors to protect populations against the exposure to and effects of known and/or postulated chemical, biological, and/or radiological threats. Historical meteorological data are used to characterize weather conditions as wind speed and direction pairs with the percentage of occurrence of the pairs over the historical period. The meteorological data drive atmospheric transport and dispersion modeling of the threats, the results of which are used to calculate population at risk against standard exposure levels. Sensor locations are determined via a dynamic programming algorithm where threats captured or detected by sensors placed in prior stages are removed from consideration in subsequent stages. Moreover, the proposed methodology provides a quantification of the marginal utility of each additional sensor or detector. Thus, the criterion for halting the iterative process can be the number of detectors available, a threshold marginal utility value, or the cumulative detection of a minimum factor of the total risk value represented by all threats.
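The staged idea above (place a sensor, remove the threats it captures, repeat, and report each stage's marginal utility) can be sketched greedily; this is an illustration, not the report's algorithm, and the coverage sets and risk weights are invented:

```python
# Staged sensor placement sketch: at each stage, place the candidate that
# captures the most remaining risk, then drop the threats it covers.
# "coverage" maps candidate locations to the threat scenarios they detect.

def place_sensors(coverage, risk, max_sensors=None, min_marginal=0.0):
    remaining = dict(risk)            # threat -> residual risk value
    chosen, marginals = [], []
    candidates = set(coverage)
    while candidates and (max_sensors is None or len(chosen) < max_sensors):
        # Marginal utility of a candidate = residual risk it would capture.
        def utility(loc):
            return sum(remaining.get(t, 0.0) for t in coverage[loc])
        best = max(candidates, key=utility)
        u = utility(best)
        if u <= min_marginal:         # halting criterion: marginal utility
            break
        chosen.append(best)
        marginals.append(u)
        for t in coverage[best]:
            remaining.pop(t, None)    # captured threats leave consideration
        candidates.discard(best)
    return chosen, marginals

coverage = {"A": {"t1", "t2"}, "B": {"t2", "t3"}, "C": {"t3"}}
risk = {"t1": 5.0, "t2": 3.0, "t3": 4.0}
chosen, marginals = place_sensors(coverage, risk)
```

Here location A captures 8.0 units of risk first, and the second sensor adds 4.0 more; a third adds nothing, so the loop halts on the marginal-utility threshold.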
Risk-based Classification of Incidents
NASA Technical Reports Server (NTRS)
Greenwell, William S.; Knight, John C.; Strunk, Elisabeth A.
2003-01-01
As the penetration of software into safety-critical systems progresses, accidents and incidents involving software will inevitably become more frequent. Identifying lessons from these occurrences and applying them to existing and future systems is essential if recurrences are to be prevented. Unfortunately, investigative agencies do not have the resources to fully investigate every incident under their jurisdictions and domains of expertise and thus must prioritize certain occurrences when allocating investigative resources. In the aviation community, most investigative agencies prioritize occurrences based on the severity of their associated losses, allocating more resources to accidents resulting in injury to passengers or extensive aircraft damage. We argue that this scheme is inappropriate because it undervalues incidents whose recurrence could have a high potential for loss while overvaluing fairly straightforward accidents involving accepted risks. We then suggest a new strategy for prioritizing occurrences based on the risk arising from incident recurrence.
Parallel Harmony Search Based Distributed Energy Resource Optimization
Ceylan, Oguzhan; Liu, Guodong; Tomsovic, Kevin
2015-01-01
This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three phase unbalanced electrical distribution systems and to maximize active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile as photovoltaic (PV) output or electric vehicle (EV) charging changes throughout a day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution systems operation.
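A minimal (serial) harmony-search sketch of the metaheuristic named above; the distribution-system objective is replaced by a toy voltage-deviation stand-in, and all parameter values are conventional defaults rather than the paper's:

```python
import random

# Harmony search: improvise candidates from a memory of good solutions,
# with pitch adjustment and random consideration, replacing the worst
# stored harmony whenever the new one is better.

def harmony_search(obj, dim, bounds, rng, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000):
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                 # memory consideration
                val = rng.choice(memory)[d]
                if rng.random() < par:              # pitch adjustment
                    val += bw * (2 * rng.random() - 1)
            else:                                   # random consideration
                val = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, val)))
        worst = max(range(hms), key=lambda i: obj(memory[i]))
        if obj(new) < obj(memory[worst]):
            memory[worst] = new                     # replace worst harmony
    return min(memory, key=obj)

# Toy stand-in objective: squared deviation from a 1.0 p.u. voltage target.
obj = lambda v: sum((x - 1.0) ** 2 for x in v)
best = harmony_search(obj, dim=3, bounds=(0.9, 1.1), rng=random.Random(0))
```

The paper's parallel variant distributes these objective evaluations (unbalanced three-phase power flows) across workers; the improvisation logic is unchanged.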
Optimal high speed CMOS inverter design using craziness based Particle Swarm Optimization Algorithm
NASA Astrophysics Data System (ADS)
De, Bishnu P.; Kar, Rajib; Mandal, Durbadal; Ghoshal, Sakti P.
2015-07-01
The inverter is the most fundamental logic gate that performs a Boolean operation on a single input variable. In this paper, an optimal design of a CMOS inverter using an improved version of the particle swarm optimization technique called Craziness based Particle Swarm Optimization (CRPSO) is proposed. CRPSO is very simple in concept, easy to implement, and computationally efficient, with two main advantages: it has fast, near-global convergence, and it uses nearly robust control parameters. The performance of PSO depends on its control parameters and may be influenced by premature convergence and stagnation problems. To overcome these problems the PSO algorithm has been modified to CRPSO in this paper and is used for CMOS inverter design. In birds' flocking or fish schooling, a bird or a fish often changes direction suddenly. In the proposed technique, the sudden change of velocity is modelled by a direction reversal factor associated with the previous velocity and a "craziness" velocity factor associated with another direction reversal factor. The second condition is introduced depending on a predefined craziness probability to maintain the diversity of particles. The performance of CRPSO is compared with that of a real coded genetic algorithm (RGA) and conventional PSO reported in the recent literature. CRPSO based design results are also compared with PSPICE based results. The simulation results show that CRPSO is superior to the other algorithms for the examples considered and can be efficiently used for CMOS inverter design.
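A hedged sketch of a craziness-style velocity update in one dimension: the standard PSO terms plus an occasional direction reversal and a small "craziness" velocity injected with a fixed probability, as the abstract describes. Parameter names and values here are assumptions, not the paper's:

```python
import random

def crpso_step(x, v, pbest, gbest, rng, w=0.7, c1=1.5, c2=1.5,
               p_reverse=0.3, p_craziness=0.1, v_craziness=0.05, v_max=2.0):
    r1, r2 = rng.random(), rng.random()
    # Standard PSO velocity: inertia + cognitive + social terms.
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    if rng.random() < p_reverse:      # direction reversal factor
        v_new = -v_new
    if rng.random() < p_craziness:    # craziness velocity for diversity
        v_new += v_craziness * (2 * rng.random() - 1)
    v_new = max(-v_max, min(v_max, v_new))  # conventional velocity clamp
    return x + v_new, v_new

rng = random.Random(1)
x, v = 0.0, 0.0
for _ in range(200):
    x, v = crpso_step(x, v, pbest=1.0, gbest=1.0, rng=rng)
```

In a real design loop, `pbest`/`gbest` would track the best transistor sizings found so far under the delay/power objective, updated after each fitness evaluation.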
Optimal diabatic dynamics of Majorana-based quantum gates
NASA Astrophysics Data System (ADS)
Rahmani, Armin; Seradjeh, Babak; Franz, Marcel
2017-08-01
In topological quantum computing, unitary operations on qubits are performed by adiabatic braiding of non-Abelian quasiparticles, such as Majorana zero modes, and are protected from local environmental perturbations. In the adiabatic regime, with timescales set by the inverse gap of the system, the errors can be made arbitrarily small by performing the process more slowly. To enhance the performance of quantum information processing with Majorana zero modes, we apply the theory of optimal control to the diabatic dynamics of Majorana-based qubits. While we sacrifice complete topological protection, we impose constraints on the optimal protocol to take advantage of the nonlocal nature of topological information and increase the robustness of our gates. By using Pontryagin's maximum principle, we show that gates robustly equivalent to perfect adiabatic braiding can be implemented in finite time through optimal pulses. In our implementation, modifications to the device Hamiltonian are avoided. Focusing on thermally isolated systems, we study the effects of calibration errors and external white and 1/f (pink) noise on Majorana-based gates. While a noise-induced antiadiabatic behavior, where a slower process creates more diabatic excitations, prohibits indefinite enhancement of the robustness of the adiabatic scheme, our fast optimal protocols exhibit remarkable stability to noise and have the potential to significantly enhance the practical performance of Majorana-based information processing.
Model-based optimal planning of hepatic radiofrequency ablation.
Chen, Qiyong; Müftü, Sinan; Meral, Faik Can; Tuncali, Kemal; Akçakaya, Murat
2016-07-19
This article presents a model-based pre-treatment optimal planning framework for hepatic tumour radiofrequency (RF) ablation. Conventional hepatic RF ablation methods rely on a pre-specified input voltage and treatment length based on the tumour size. Using these experimentally obtained pre-specified treatment parameters in RF ablation is not optimal for achieving the expected level of cell death and usually results in more healthy tissue damage than desired. In this study we present a pre-treatment planning framework that provides tools to control the levels of both healthy tissue preservation and tumour cell death. Over the geometry of the tumour and surrounding tissue, we formulate RF ablation planning as a constrained optimization problem. With specific constraints over the temperature profile (TP) in pre-determined areas of the target geometry, we consider two different cost functions based on the history of the TP and the Arrhenius index (AI) of the target location, respectively. We optimally compute the input voltage variation to minimize the damage to the healthy tissue while ensuring complete cell death in the tumour and the immediate area covering the tumour. As an example, we use a simulation of a 1D symmetric target geometry mimicking the application of a single electrode RF probe. Results demonstrate that compared to the conventional methods both cost functions improve healthy tissue preservation.
Bare-bones teaching-learning-based optimization.
Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye
2014-01-01
Teaching-learning-based optimization (TLBO), which simulates the teaching-learning process of a classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, a hybridization of the teacher-phase learning strategy of the standard TLBO with Gaussian sampling learning based on neighborhood search, while each learner in the learner phase employs either the learner-phase strategy of the standard TLBO or a new neighborhood search strategy. To verify the performance of our approaches, 20 benchmark functions and two real-world problems are used. The experiments show that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms.
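For readers unfamiliar with the standard TLBO that BBTLBO extends, a minimal teacher-phase/learner-phase iteration for minimization can be sketched as follows; the function name, greedy-acceptance details, and clamping are illustrative assumptions, not the paper's code:

```python
import random

def tlbo_step(population, fitness, bounds):
    """One teacher-phase + learner-phase iteration of basic TLBO (minimization)."""
    dim = len(bounds)
    clamp = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    pop = [p[:] for p in population]
    # Teacher phase: pull each learner toward the best solution, away from the mean
    teacher = min(pop, key=fitness)
    mean = [sum(p[d] for p in pop) / len(pop) for d in range(dim)]
    for i, p in enumerate(pop):
        tf = random.choice((1, 2))  # teaching factor
        cand = clamp([p[d] + random.random() * (teacher[d] - tf * mean[d])
                      for d in range(dim)])
        if fitness(cand) < fitness(p):  # greedy acceptance
            pop[i] = cand
    # Learner phase: move toward a better random peer, away from a worse one
    for i, p in enumerate(pop):
        j = random.randrange(len(pop))
        if j == i:
            continue
        sign = 1.0 if fitness(pop[j]) < fitness(p) else -1.0
        cand = clamp([p[d] + sign * random.random() * (pop[j][d] - p[d])
                      for d in range(dim)])
        if fitness(cand) < fitness(p):
            pop[i] = cand
    return pop
```

BBTLBO replaces parts of both phases with Gaussian sampling around neighborhood means, which is what makes it "bare-bones" in the sense of bare-bones PSO.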
Reliability-based analysis and design optimization for durability
NASA Astrophysics Data System (ADS)
Choi, Kyung K.; Youn, Byeng D.; Tang, Jun; Hardee, Edward
2005-05-01
In Army mechanical systems, fatigue caused by external and inertial transient loads over the service life often leads to structural failure due to accumulated damage. Structural durability analysis, which predicts the fatigue life of mechanical components subject to dynamic stresses and strains, is a compute-intensive multidisciplinary simulation process, since it requires the integration of several computer-aided engineering tools and considerable data communication and computation. Uncertainties in geometric dimensions due to manufacturing tolerances make the fatigue life of a mechanical component nondeterministic. Because uncertainty propagation to structural fatigue under transient dynamic loading is not only numerically complicated but also extremely computationally expensive, it is a challenging task to develop a structural durability-based design optimization process and a reliability analysis to ascertain whether the optimal design is reliable. The objective of this paper is the demonstration of an integrated CAD-based computer-aided engineering process to effectively carry out design optimization for structural durability, yielding a durable and cost-effectively manufacturable product. The paper shows preliminary results of reliability-based durability design optimization for the Army Stryker A-Arm.
Risk-Based Sampling: I Don't Want to Weight in Vain.
Powell, Mark R
2015-12-01
Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
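The mean-variance intuition behind this analogy can be made concrete with the two-asset minimum-variance portfolio, which has a closed form. The toy sketch below (function names are illustrative) also shows why equal allocation is a defensible heuristic: the optimized portfolio beats the naive one only modestly, an edge that estimation error can erase out of sample.

```python
def min_variance_weights(var1, var2, cov12):
    """Closed-form weights minimizing the variance of w1*A1 + (1 - w1)*A2."""
    w1 = (var2 - cov12) / (var1 + var2 - 2.0 * cov12)
    return w1, 1.0 - w1

def portfolio_variance(w1, w2, var1, var2, cov12):
    """Variance of the two-asset portfolio with weights (w1, w2)."""
    return w1 ** 2 * var1 + w2 ** 2 * var2 + 2.0 * w1 * w2 * cov12
```

With variances 0.04 and 0.02 and zero covariance, the optimal weight on the riskier asset is 1/3 and the minimum variance is 0.04/3 ≈ 0.0133, versus 0.015 for the 50/50 split — a small gap relative to typical sampling error in the variance estimates themselves.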
Vision-based stereo ranging as an optimal control problem
NASA Technical Reports Server (NTRS)
Menon, P. K. A.; Sridhar, B.; Chatterji, G. B.
1992-01-01
The recent interest in the use of machine vision for flight vehicle guidance is motivated by the need to automate the nap-of-the-earth flight regime of helicopters. Vision-based stereo ranging problem is cast as an optimal control problem in this paper. A quadratic performance index consisting of the integral of the error between observed image irradiances and those predicted by a Pade approximation of the correspondence hypothesis is then used to define an optimization problem. The necessary conditions for optimality yield a set of linear two-point boundary-value problems. These two-point boundary-value problems are solved in feedback form using a version of the backward sweep method. Application of the ranging algorithm is illustrated using a laboratory image pair.
Reentry trajectory optimization based on a multistage pseudospectral method.
Zhao, Jiang; Zhou, Rui; Jin, Xuelian
2014-01-01
Of the many direct numerical methods, the pseudospectral method serves as an effective tool for solving the reentry trajectory optimization problem for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, this work presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy comprises two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of the trajectory as the flight state transitions. The full glide trajectory consists of several optimal trajectory sequences. Geographic constraints newly encountered in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight show the feasibility of the multistage pseudospectral method in reentry trajectory optimization.
Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources
Huang, Weihong; Sun, Kai; Qi, Junjian; Xu, Yan
2015-01-01
Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimization of the sizes of dynamic var sources at candidate locations by a Voronoi diagram based algorithm. It first disperses sample points of potential solutions in a searching space, evaluates a cost function at each point by barycentric interpolation for the subspaces around the point, and then constructs a Voronoi diagram about cost function values over the entire space. Accordingly, the final optimal solution can be obtained. Case studies on the WSCC 9-bus system and NPCC 140-bus system have validated that the new approach can quickly identify the boundary of feasible solutions in searching space and converge to the global optimal solution.
PCNN document segmentation method based on bacterial foraging optimization algorithm
NASA Astrophysics Data System (ADS)
Liao, Yanping; Zhang, Peng; Guo, Qiang; Wan, Jian
2014-04-01
The Pulse Coupled Neural Network (PCNN) is widely used in the field of image processing, but defining its parameters properly is a difficult task in applied PCNN research; so far, determining the parameters of the model has required many experiments. To deal with this problem, a document segmentation method based on an improved PCNN is proposed. It uses the maximum entropy function as the fitness function of the bacterial foraging optimization algorithm, adopts that algorithm to search for the optimal parameters, and eliminates the trouble of manually setting the experimental parameters. Experimental results show that the proposed algorithm can effectively perform document segmentation, and its segmentation results are better than those of the comparison algorithms.
A correlation consistency based multivariate alarm thresholds optimization approach.
Gao, Huihui; Liu, Feifei; Zhu, Qunxiong
2016-11-01
Different alarm thresholds could generate different alarm data, resulting in different correlations. A new multivariate alarm thresholds optimization methodology based on the correlation consistency between process data and alarm data is proposed in this paper. The interpretative structural modeling is adopted to select the key variables. For the key variables, the correlation coefficients of process data are calculated by the Pearson correlation analysis, while the correlation coefficients of alarm data are calculated by kernel density estimation. To ensure the correlation consistency, the objective function is established as the sum of the absolute differences between these two types of correlations. The optimal thresholds are obtained using particle swarm optimization algorithm. Case study of Tennessee Eastman process is given to demonstrate the effectiveness of proposed method.
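A toy sketch of the correlation-consistency objective for a single variable pair follows. The paper uses kernel density estimation for the alarm-data correlation and particle swarm optimization for the search; here plain Pearson correlation on binarized alarm series and an exhaustive grid search stand in, so the names and simplifications are assumptions:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0.0 or sy == 0.0:
        return 0.0  # degenerate series (e.g. no alarms raised at all)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def consistency_cost(x, y, tx, ty):
    """|corr(process data) - corr(alarm data)| for thresholds (tx, ty)."""
    ax = [1.0 if v > tx else 0.0 for v in x]  # binary alarm series
    ay = [1.0 if v > ty else 0.0 for v in y]
    return abs(pearson(x, y) - pearson(ax, ay))

def best_thresholds(x, y, grid):
    """Exhaustive search over a threshold grid (the paper uses PSO instead)."""
    return min(((tx, ty) for tx in grid for ty in grid),
               key=lambda t: consistency_cost(x, y, *t))
```

In the full method this cost is summed over all key variable pairs selected by interpretative structural modeling, and the threshold vector minimizing the sum is taken as optimal.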
A Danger-Theory-Based Immune Network Optimization Algorithm
Li, Tao; Xiao, Xin; Shi, Yuanquan
2013-01-01
Existing artificial immune optimization algorithms reflect a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated from changes of environments will guide different levels of immune responses, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through its own danger signals and then triggers immune responses of self-regulation. So the population diversity can be maintained. Experimental results show that the algorithm has more advantages in the solution quality and diversity of the population. Compared with influential optimization algorithms, CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions to meet the accuracies within the specified function evaluation times. PMID:23483853
Reliability-Based Design Optimization of a Composite Airframe Component
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2009-01-01
A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution to stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas it can be reduced to a small extent for a raked wingtip. The SDO capability is obtained by combining three codes: (1) The MSC/Nastran code was the deterministic analysis tool, (2) The fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.
CACONET: Ant Colony Optimization (ACO) Based Clustering Algorithm for VANET
Bajwa, Khalid Bashir; Khan, Salabat; Chaudary, Nadeem Majeed; Akram, Adeel
2016-01-01
A vehicular ad hoc network (VANET) is a wirelessly connected network of vehicular nodes. A number of techniques, such as message ferrying, data aggregation, and vehicular node clustering aim to improve communication efficiency in VANETs. Cluster heads (CHs), selected in the process of clustering, manage inter-cluster and intra-cluster communication. The lifetime of clusters and number of CHs determines the efficiency of network. In this paper a Clustering algorithm based on Ant Colony Optimization (ACO) for VANETs (CACONET) is proposed. CACONET forms optimized clusters for robust communication. CACONET is compared empirically with state-of-the-art baseline techniques like Multi-Objective Particle Swarm Optimization (MOPSO) and Comprehensive Learning Particle Swarm Optimization (CLPSO). Experiments varying the grid size of the network, the transmission range of nodes, and number of nodes in the network were performed to evaluate the comparative effectiveness of these algorithms. For optimized clustering, the parameters considered are the transmission range, direction and speed of the nodes. The results indicate that CACONET significantly outperforms MOPSO and CLPSO. PMID:27149517
Optimization Model for Web Based Multimodal Interactive Simulations.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2015-07-15
This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
Component-based integration of chemistry and optimization software.
Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L
2004-11-15
Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.
Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis
NASA Astrophysics Data System (ADS)
Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao
2016-08-01
Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. The desired coating quality depends on factors such as balanced robot performance, a uniform scanning trajectory, and stable parameters (e.g., nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, kinematic optimization of all these aspects plays a key role in obtaining optimal coating quality. In this study, robot performance was optimized with respect to how the nozzle is mounted on the robot. An optimized mounting for a type F4 nozzle was designed, starting from the conventional mounting method, and validated from the point of view of robot kinematics on a virtual robot. Robot kinematic parameters were obtained from simulation in offline programming software and analyzed by statistical methods. The energy consumption of the different nozzle mounting methods was also compared. The results showed that the amount of robot motion can be reasonably assigned to each axis during the process, achieving a constant nozzle speed. It is thus possible to optimize robot performance and to economize robot energy consumption.
Modification of species-based differential evolution for multimodal optimization
NASA Astrophysics Data System (ADS)
Idrus, Said Iskandar Al; Syahputra, Hermawan; Firdaus, Muliawan
2015-12-01
Optimization now plays an important role in many fields, among them operational research, industry, finance, and management. An optimization problem is the problem of maximizing or minimizing a function of one or many variables, and such functions may be unimodal or multimodal. Differential Evolution (DE) is a stochastic search technique that uses vectors of candidate solutions in the search for the optimum. To locate all local maxima and minima of a multimodal function, the function can be divided into several fitness domains using a niching method. The species-based niching method is one approach that builds sub-populations, or species, within these domains. This paper describes a modification of the earlier species-based DE that reduces computational complexity and runs more efficiently. Results on test functions show that the modified species-based DE is able to locate all the local optima in a single run of the program.
Hybrid Biogeography-Based Optimization for Integer Programming
Wang, Zhi-Cheng
2014-01-01
Biogeography-based optimization (BBO) is a relatively new bioinspired heuristic for global optimization based on the mathematical models of biogeography. By investigating the applicability and performance of BBO for integer programming, we find that the original BBO algorithm does not perform well on a set of benchmark integer programming problems. Thus we modify the mutation operator and/or the neighborhood structure of the algorithm, resulting in three new BBO-based methods, named BlendBBO, BBO_DE, and LBBO_LDE, respectively. Computational experiments show that these methods are competitive approaches to solve integer programming problems, and the LBBO_LDE shows the best performance on the benchmark problems. PMID:25003142
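The migration operator that defines BBO, here combined with the rounding-free approach of working directly on integer variables, can be sketched as follows. This is a generic baseline BBO with rank-based migration rates, elitism, and random integer mutation — an illustrative assumption, not the BlendBBO/BBO_DE/LBBO_LDE variants the paper introduces:

```python
import random

def bbo_minimize(fitness, bounds, pop_size=20, iters=100, pmut=0.05):
    """Basic BBO for integer decision variables: migration plus mutation."""
    dim = len(bounds)
    pop = [[random.randint(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        pop.sort(key=fitness)  # rank habitats; pop[0] is the fittest
        lam = [(i + 1) / pop_size for i in range(pop_size)]  # immigration rates
        mu = [1.0 - l for l in lam]                          # emigration rates
        newpop = [pop[0][:]]  # elitism: keep the best habitat unchanged
        for i in range(1, pop_size):
            hab = pop[i][:]
            for d in range(dim):
                if random.random() < lam[i]:
                    # roulette-wheel choice of an emigrating habitat by mu
                    r = random.random() * sum(mu)
                    acc = 0.0
                    for j in range(pop_size):
                        acc += mu[j]
                        if acc >= r:
                            hab[d] = pop[j][d]
                            break
                if random.random() < pmut:  # random integer mutation
                    hab[d] = random.randint(*bounds[d])
            newpop.append(hab)
        pop = newpop
    return min(pop, key=fitness)
```

The paper's finding is that exactly this kind of plain mutation and neighborhood structure underperforms on integer benchmarks, which motivates the blended-crossover and DE-style mutation replacements.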
Constraint Web Service Composition Based on Discrete Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Fang, Xianwen; Fan, Xiaoqin; Yin, Zhixiang
Web service composition provides an open, standards-based approach for connecting web services together to create higher-level business processes. The standards are designed to reduce the complexity required to compose web services, hence reducing time and costs and increasing overall efficiency in businesses. This paper presents a web service composition optimization method with independent global constraints, based on Discrete Particle Swarm Optimization (DPSO) and an associate Petri net (APN). Exploiting the properties of the APN, an efficient DPSO algorithm is presented that searches for a legal firing sequence in the APN model. Restricting the search to legal firing sequences of the Petri net greatly shrinks the DPSO search space for service composition. Finally, simulation experiments compare the method with approximation methods. Theoretical analysis and experimental results indicate that the method has both lower computational cost and a higher success ratio of service composition.
12 CFR 652.70 - Risk-based capital level.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Title 12, Banks and Banking ... CORPORATION FUNDING AND FISCAL AFFAIRS, Risk-Based Capital Requirements. § 652.70 Risk-based capital level. The risk-based capital level is the sum of the following amounts: (a) Credit and interest rate risk. The...
Gasik, Michael; Van Mellaert, Lieve; Pierron, Dorothée; Braem, Annabel; Hofmans, Dorien; De Waelheyns, Evelien; Anné, Jozef; Harmand, Marie-Françoise; Vleugels, Jozef
2012-01-11
Titanium-based implants are widely used in modern clinical practice; however, complications associated with implants due to bacterial-induced infections arise frequently, caused mainly by staphylococci, streptococci, Pseudomonas spp. and coliform bacteria. Although increased hydrophilicity of the biomaterial surface is known to be beneficial in minimizing the biofilm, quantitative analyses between the actual implant parameters and bacterial development are scarce. Here, the results of in vitro studies of Staphylococcus aureus and Staphylococcus epidermidis proliferation on uncoated and coated titanium materials with different roughness, porosity, topology, and hydrophilicity are shown. The same materials have been tested in parallel with respect to human osteogenic and endothelial cell adhesion, proliferation, and differentiation. The experimental data processed by meta-analysis are indicating the possibility of decreasing the biofilm formation by 80-90% for flat substrates versus untreated plasma-sprayed porous titanium and by 65-95% for other porous titanium coatings. It is also shown that optimized surfaces would lead to 10-50% enhanced cell proliferation and differentiation versus reference porous titanium coatings. This presents an opportunity to manufacture implants with intrinsic reduced infection risk, yet without the additional use of antibacterial substances. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Arsenic speciation driving risk based corrective action.
Marlborough, Sidney J; Wilson, Vincent L
2015-07-01
The toxicity of arsenic depends on a number of factors, including its valence state. The more potent trivalent arsenic [arsenite (As3+)] inhibits a large number of cellular enzymatic pathways involved in energy production, while the less toxic pentavalent arsenic [arsenate (As5+)] interferes with phosphate metabolism, phosphoproteins, and ATP formation (uncoupling of oxidative phosphorylation). To be conservative, environmental risk-based corrective action for arsenic contamination uses toxicity data derived from arsenite studies. However, depending on environmental conditions, the arsenate species may predominate substantially, especially in well-aerated surface soils. Analyses of soil concentrations of arsenic species at two sites in northeastern Texas historically contaminated with arsenical pesticides yielded mean arsenate concentrations above 90% of total arsenic, with the majority of the remainder being the trivalent arsenite species. Ecological risk assessments based on the concentration of the trivalent arsenite species will lead to restrictive remediation requirements that do not adequately reflect the level of risk associated with the predominant species of arsenic found in the soil. The pentavalent arsenate species, present at much greater concentrations in such soils, would be the more appropriate species to monitor during remediation at sites with high arsenate-to-arsenite ratios. Copyright © 2015 Elsevier B.V. All rights reserved.
The optimal community detection of software based on complex networks
NASA Astrophysics Data System (ADS)
Huang, Guoyan; Zhang, Peng; Zhang, Bing; Yin, Tengteng; Ren, Jiadong
2016-02-01
The community structure is important for software in terms of understanding design patterns and controlling the development and maintenance process. In order to detect the optimal community structure in a software network, a method called Optimal Partition Software Network (OPSN) is proposed based on the dependency relationships among software functions. First, by analyzing multiple execution traces of one software system, we construct its Software Execution Dependency Network (SEDN). Second, based on the relationships among the function nodes in the network, we define Fault Accumulation (FA) to measure the importance of each function node and sort the nodes by this measure. Third, we select the top K (K=1,2,…) nodes as the cores of the primal communities (each primal community has exactly one core node). By comparing the dependency relationships between each remaining node and the K communities, we put the node into the existing community with which it has the closest relationship. Finally, we calculate the modularity for different initial K to obtain the optimal division. Experiments verify that OPSN efficiently detects the optimal community structure in various software systems.
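The modularity that the method compares across different K is Newman's Q, which can be computed directly from an edge list; a pure-Python sketch (names are illustrative, not the OPSN code):

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity Q = sum_c (e_c / m - (k_c / 2m)^2).

    edges: list of undirected (u, v) pairs; community: dict node -> label.
    """
    m = float(len(edges))
    deg = defaultdict(int)
    intra = 0
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        if community[u] == community[v]:
            intra += 1
    q = intra / m  # fraction of intra-community edges
    ksum = defaultdict(int)
    for node, c in community.items():
        ksum[c] += deg[node]
    for k in ksum.values():
        q -= (k / (2.0 * m)) ** 2  # expected intra fraction under a random graph
    return q
```

For two triangles joined by a single bridge edge, splitting at the bridge gives Q = 6/7 − 1/2 ≈ 0.357, while lumping everything into one community gives Q = 0, which is the kind of comparison the method performs while varying K.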
Computer Based Porosity Design by Multi Phase Topology Optimization
NASA Astrophysics Data System (ADS)
Burblies, Andreas; Busse, Matthias
2008-02-01
A numerical simulation technique called Multi Phase Topology Optimization (MPTO), based on the finite element method, has been developed and refined by Fraunhofer IFAM during the last five years. MPTO is able to determine the optimum distribution of two or more different materials in components under thermal and mechanical loads. The objective of the optimization is to minimize the component's elastic energy. Conventional topology optimization methods that simulate adaptive bone mineralization have the disadvantage of a continuous change of mass through growth processes. MPTO keeps all initial material concentrations and uses methods adapted from molecular dynamics to find the energy minimum. Applying MPTO to mechanically loaded components with a high number of different material densities, the optimization results show graded and sometimes anisotropic porosity distributions that are very similar to natural bone structures. It is now possible to design the macro- and microstructure of a mechanical component in one step. Computer-based porosity design structures can be manufactured by new rapid prototyping technologies. Fraunhofer IFAM has successfully applied 3D printing and selective laser sintering methods to produce very stiff, lightweight components with graded porosities calculated by MPTO.
Reliability-based design optimization using efficient global reliability analysis.
Bichon, Barron J.; Mahadevan, Sankaran; Eldred, Michael Scott
2010-05-01
Finding the optimal (lightest, least expensive, etc.) design for an engineered component that meets or exceeds a specified level of reliability is a problem of obvious interest across a wide spectrum of engineering fields. Various methods for this reliability-based design optimization problem have been proposed. Unfortunately, this problem is rarely solved in practice because, regardless of the method used, solving the problem is too expensive or the final solution is too inaccurate to ensure that the reliability constraint is actually satisfied. This is especially true for engineering applications involving expensive, implicit, and possibly nonlinear performance functions (such as large finite element models). The Efficient Global Reliability Analysis method was recently introduced to improve both the accuracy and efficiency of reliability analysis for this type of performance function. This paper explores how this new reliability analysis method can be used in a design optimization context to create a method of sufficient accuracy and efficiency to enable the use of reliability-based design optimization as a practical design tool.
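As a minimal illustration of the reliability-based design optimization problem stated above (not of the Efficient Global Reliability Analysis method itself), the sketch below sizes a toy bar for minimum weight subject to a Monte Carlo estimate of a failure-probability constraint. The load distribution, capacity model, and function names are invented for the example; a real study would wrap an expensive finite element model inside `failure_prob`:

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_prob(thickness, n=200_000):
    """Monte Carlo estimate of P(load > capacity) for a toy bar."""
    load = rng.normal(10.0, 2.0, n)   # uncertain load (arbitrary units)
    capacity = 5.0 * thickness        # capacity grows linearly with thickness
    return float(np.mean(load > capacity))

def rbdo(target_pf=1e-3, candidates=np.arange(1.0, 6.0, 0.05)):
    # Weight grows with thickness, so scanning candidates in ascending
    # order returns the lightest design meeting the reliability target.
    for t in candidates:
        if failure_prob(t) <= target_pf:
            return float(t)
    return None
```

The brute-force sampling here is exactly the expense the paper's surrogate-based reliability analysis is designed to avoid.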
Chaos Time Series Prediction Based on Membrane Optimization Algorithms
Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng
2015-01-01
This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the change trend of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt an optimal action. The model presented in this paper is therefore used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares it with conventional similar models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249
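For reference, the phase space reconstruction step that the parameters (τ, m) control is the standard delay embedding: each scalar observation is expanded into an m-dimensional vector of samples spaced τ steps apart. A minimal sketch (the function name is ours):

```python
import numpy as np

def delay_embed(x, m, tau):
    """Phase space reconstruction of a scalar series x: rows are the
    delay vectors [x[i], x[i+tau], ..., x[i+(m-1)*tau]] (Takens embedding).
    (tau, m) are the parameters the membrane algorithm tunes."""
    n = len(x) - (m - 1) * tau        # number of complete delay vectors
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
```

The resulting rows are the feature vectors fed to the LS-SVM regressor, with the next sample of the series as the regression target.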
An evolutionary based Bayesian design optimization approach under incomplete information
NASA Astrophysics Data System (ADS)
Srivastava, Rupesh; Deb, Kalyanmoy
2013-02-01
Design optimization in the absence of complete information about uncertain quantities has recently been gaining consideration, as expensive repetitive computation tasks are becoming tractable due to the advent of faster and parallel computers. This work uses Bayesian inference to quantify design reliability when only sample measurements of the uncertain quantities are available. A generalized Bayesian reliability based design optimization algorithm has been proposed and implemented for numerical as well as engineering design problems. The approach uses an evolutionary algorithm (EA) to obtain a trade-off front between design objectives and reliability. The Bayesian approach provides a well-defined link between the amount of available information and the reliability through a confidence measure, and the EA acts as an efficient optimizer for a discrete and multi-dimensional objective space. Additionally, a GPU-based parallelization study shows computational speed-up of close to 100 times in a simulated scenario wherein the constraint qualification checks may be time consuming and could render a sequential implementation impractical for large sample sets. These results show promise for the use of a parallel implementation of EAs in handling design optimization problems under uncertainties.
Surrogate-Based Optimization of Biogeochemical Transport Models
NASA Astrophysics Data System (ADS)
Prieß, Malte; Slawig, Thomas
2010-09-01
First approaches towards a surrogate-based optimization method for a one-dimensional marine biogeochemical model of NPZD type are presented. The model, developed by Oschlies and Garcon [1], simulates the distribution of nitrogen, phytoplankton, zooplankton and detritus in a water column and is driven by ocean circulation data. A key issue is to minimize the misfit between the model output and given observational data. Our aim is to reduce the overall optimization cost, avoiding expensive function and derivative evaluations, by using a surrogate model in place of the high-fidelity model in focus. This becomes particularly important for more complex three-dimensional models. We analyse a coarsening in the discretization of the model equations as one way to create such a surrogate. Here the numerical stability crucially depends upon the discrete stepsize in time and space and the biochemical terms. We show that for given model parameters the level of grid coarsening can be chosen accordingly, yielding a stable and satisfactory surrogate. As one example of a surrogate-based optimization method we present results of the Aggressive Space Mapping technique (developed by John W. Bandler [2, 3]) applied to the optimization of this one-dimensional biogeochemical transport model.
Optimization Based Tumor Classification from Microarray Gene Expression Data
Dagliyan, Onur; Uney-Yuksektepe, Fadime; Kavakli, I. Halil; Turkay, Metin
2011-01-01
Background An important use of data obtained from microarray measurements is the classification of tumor types with respect to genes that are either up or down regulated in specific cancer types. A number of algorithms have been proposed to obtain such classifications. These algorithms usually require parameter optimization to obtain accurate results depending on the type of data. Additionally, it is highly critical to find an optimal set of markers among those up or down regulated genes that can be clinically utilized to build assays for the diagnosis or to follow progression of specific cancer types. In this paper, we employ a mixed integer programming based classification algorithm named hyper-box enclosure method (HBE) for the classification of some cancer types with a minimal set of predictor genes. This optimization based method which is a user friendly and efficient classifier may allow the clinicians to diagnose and follow progression of certain cancer types. Methodology/Principal Findings We apply HBE algorithm to some well known data sets such as leukemia, prostate cancer, diffuse large B-cell lymphoma (DLBCL), small round blue cell tumors (SRBCT) to find some predictor genes that can be utilized for diagnosis and prognosis in a robust manner with a high accuracy. Our approach does not require any modification or parameter optimization for each data set. Additionally, information gain attribute evaluator, relief attribute evaluator and correlation-based feature selection methods are employed for the gene selection. The results are compared with those from other studies and biological roles of selected genes in corresponding cancer type are described. Conclusions/Significance The performance of our algorithm overall was better than the other algorithms reported in the literature and classifiers found in WEKA data-mining package. Since it does not require a parameter optimization and it performs consistently very high prediction rate on different type of
Guo, Lei; Wu, Youxi; Liu, Xuena; Li, Ying; Xu, Guizhi; Yan, Weili
2006-01-01
Intelligent Optimization Algorithms (IOA) mainly include the Immune Algorithm (IA) and the Genetic Algorithm (GA). One of the most important characteristics of MRI is the complicated variation of gray levels, so traditional filtering algorithms are not well suited to MRI. The Adaptive Template Filtering Method (ATFM) is an appropriate denoising method for MRI. However, selecting the threshold for ATFM is a complicated problem which directly affects the denoising result. Threshold selection has been based on experience and thus lacked a solid theoretical foundation. In this paper, two kinds of IOA are proposed for threshold optimization. As our experiments demonstrate, they can effectively solve the threshold selection problem and thereby perfect ATFM. Algorithm analysis shows that the performance of IA surpasses that of GA. As a new kind of IOA, IA exhibits great potential in image processing.
Afshar, Puya; Brown, Martin; Maciejowski, Jan; Wang, Hong
2011-12-01
Reducing energy consumption is a major challenge for "energy-intensive" industries such as papermaking. A commercially viable energy saving solution is to employ data-based optimization techniques to obtain a set of "optimized" operational settings that satisfy certain performance indices. The difficulties of this are: 1) the problems of this type are inherently multicriteria in the sense that improving one performance index might result in compromising the other important measures; 2) practical systems often exhibit unknown complex dynamics and several interconnections which make the modeling task difficult; and 3) as the models are acquired from the existing historical data, they are valid only locally and extrapolations incorporate risk of increasing process variability. To overcome these difficulties, this paper presents a new decision support system for robust multiobjective optimization of interconnected processes. The plant is first divided into serially connected units to model the process, product quality, energy consumption, and corresponding uncertainty measures. Then multiobjective gradient descent algorithm is used to solve the problem in line with user's preference information. Finally, the optimization results are visualized for analysis and decision making. In practice, if further iterations of the optimization algorithm are considered, validity of the local models must be checked prior to proceeding to further iterations. The method is implemented by a MATLAB-based interactive tool DataExplorer supporting a range of data analysis, modeling, and multiobjective optimization techniques. The proposed approach was tested in two U.K.-based commercial paper mills where the aim was reducing steam consumption and increasing productivity while maintaining the product quality by optimization of vacuum pressures in forming and press sections. The experimental results demonstrate the effectiveness of the method.
Adaptive Flood Risk Management Under Climate Change Uncertainty Using Real Options and Optimization.
Woodward, Michelle; Kapelan, Zoran; Gouldby, Ben
2014-01-01
It is well recognized that adaptive and flexible flood risk strategies are required to account for future uncertainties. Development of such strategies is, however, a challenge. Climate change alone is a significant complication, but, in addition, complexities exist in trying to identify the most appropriate set of mitigation measures, or interventions. There is a range of economic and environmental performance measures that require consideration, and the spatial and temporal aspects of evaluating their performance are complex. All these elements pose severe difficulties to decision makers. This article describes a decision support methodology that can assess the most appropriate set of interventions to make in a flood system, and the opportune time to make them, given the future uncertainties. The flood risk strategies have been explicitly designed to allow for flexible adaptive measures by capturing the concepts of real options and multiobjective optimization to evaluate potential flood risk management opportunities. A state-of-the-art flood risk analysis tool is employed to evaluate the risk associated with each strategy over future points in time, and a multiobjective genetic algorithm is utilized to search for the optimal adaptive strategies. The modeling system has been applied to a reach on the Thames Estuary (London, England), and initial results show the inclusion of flexibility is advantageous, while the outputs provide decision makers with supplementary knowledge that previously has not been considered.
Block-based mask optimization for optical lithography.
Ma, Xu; Song, Zhiyang; Li, Yanqiu; Arce, Gonzalo R
2013-05-10
Pixel-based optical proximity correction (PBOPC) methods have been developed as a leading-edge resolution enhancement technique (RET) for integrated circuit fabrication. PBOPC independently modulates each pixel on the reticle, which tremendously increases the mask's complexity and, at the same time, deteriorates its manufacturability. Most current PBOPC algorithms resort to regularization methods or a mask manufacturing rule check (MRC) to improve the mask manufacturability. Typically, these approaches either fail to satisfy manufacturing constraints on the practical product line, or lead to suboptimal mask patterns that may degrade the lithographic performance. This paper develops a block-based optical proximity correction (BBOPC) algorithm to pursue the optimal masks with manufacturability compliance, where the mask is shaped by a set of overlapped basis blocks rather than pixels. BBOPC optimization is formulated based on a vector imaging model, which is adequate for both dry lithography with lower numerical aperture (NA), and immersion lithography with hyper-NA. The BBOPC algorithm successively optimizes the main features (MF) and subresolution assist features (SRAF) based on a modified conjugate gradient method. It is effective at smoothing any unmanufacturable jogs along edges. A weight matrix is introduced in the cost function to preserve the edge fidelity of the printed images. Simulations show that the BBOPC algorithm can improve lithographic imaging performance while maintaining mask manufacturing constraints.
Griffith, Nathan M; Smith, Kristen M; Schefft, Bruce K; Szaflarski, Jerzy P; Privitera, Michael D
2008-10-01
Past research has suggested that pessimistic attributional style may be a risk factor for psychopathology among patients with seizure disorders. In addition, classifying psychogenic nonepileptic seizures (PNES) into subtypes has been found to be clinically relevant. However, very few studies have addressed differences in optimism, pessimism, or neuropsychological performance among PNES subtypes. We previously classified adults with PNES into semiology-based subtypes (catatonic, minor motor, major motor). In the study described here, we compared subtypes on optimism, pessimism, depressive symptoms, and neuropsychological performance. We found that patients with PNES with low optimism had significantly greater depressive symptoms than patients with high optimism, F(2, 39) = 36.49, P < 0.01. Moreover, patients with high pessimism had significantly greater depressive symptoms than patients with low pessimism, F(2, 39) = 13.66, P < 0.01. We also found that the catatonic subtype was associated with fewer depressive symptoms and better verbal memory than the other PNES subtypes. Our results support relationships between optimism, pessimism, and depressive symptoms and extend these findings to a PNES sample. Overall, the results of the present study suggest that classification into semiology-based subtypes and study of normal personality traits among patients with PNES may have clinical significance.
Automated fluence map optimization based on fuzzy inference systems.
Dias, Joana; Rocha, Humberto; Ventura, Tiago; Ferreira, Brígida; Lopes, Maria do Carmo
2016-03-01
The planning of an intensity modulated radiation therapy treatment requires the optimization of the fluence intensities. The fluence map optimization (FMO) is often based on a nonlinear continuous programming problem, making it necessary for the planner to define a priori weights and/or lower bounds that are iteratively changed within a trial-and-error procedure until an acceptable plan is reached. In this work, the authors describe an alternative approach for FMO that releases the human planner from trial-and-error procedures, contributing to the automation of the planning process. The FMO is represented by a voxel-based convex penalty continuous nonlinear model. This model makes use of both weights and lower/upper bounds to guide the optimization process toward interesting solutions that are able to satisfy all the constraints defined for the treatment. All the model's parameters are iteratively changed by resorting to a fuzzy inference system. This system analyzes how far the current solution is from a desirable solution, changing both weights and lower/upper bounds in a completely automated way. The fuzzy inference system is based on fuzzy reasoning that enables the use of common-sense rules within an iterative optimization process. The method is built in two stages: in the first stage, an admissible solution is calculated, trying to guarantee that all the treatment planning constraints are satisfied; in this stage, the algorithm tries to improve as much as possible the irradiation of the planning target volumes. In the second stage, the algorithm tries to improve organ sparing without jeopardizing tumor coverage. The proposed methodology was applied to ten head-and-neck cancer cases already treated in the Portuguese Oncology Institute of Coimbra (IPOCFG) and signalized as complex cases. IMRT treatment was considered, with 7, 9, and 11 equidistant beam angles. It was possible to obtain admissible solutions for all the patients considered and with no
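A toy illustration of the kind of fuzzy rule such a system applies when retuning the model's weights between optimization rounds. The memberships, rule multipliers, and function name are all invented for the example; the paper's rule base is far richer:

```python
def fuzzy_weight_update(weight, violation, v_max=10.0):
    """One fuzzy-style update step: the violation's membership in the
    'small' and 'large' fuzzy sets decides how strongly the structure's
    penalty weight is increased for the next optimization round."""
    x = min(max(violation / v_max, 0.0), 1.0)  # normalized violation
    small, large = 1.0 - x, x                  # complementary memberships
    factor = small * 1.05 + large * 1.5        # defuzzified multiplier
    return weight * factor
```

A structure far from satisfying its constraint thus gets a much stronger weight increase than one that is nearly compliant, which is the common-sense rule the abstract alludes to.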
[The elderly driver's perception of risk: do older drivers still express comparative optimism?].
Spitzenstetter, Florence; Moessinger, Michelle
2008-01-01
People frequently express comparative optimism; that is, they believe they are less likely than average to experience negative events. The aim of the present study was, first, to observe whether people over 65 years of age are still optimistic when they evaluate driving-related risks, and second, to test the assumption that older drivers show less optimism when they compare themselves with average-age drivers than when they compare themselves with same-age drivers. Our results reveal that drivers over 65 do, indeed, express comparative optimism, but, contrary to our expectation, the age of the comparison target appears to have an effect in only a limited number of cases. These results are discussed in particular in terms of self-image enhancement.
[Optimized Spectral Indices Based Estimation of Forage Grass Biomass].
An, Hai-bo; Li, Fei; Zhao, Meng-li; Liu, Ya-jun
2015-11-01
As an important indicator of forage production, aboveground biomass directly illustrates the growth of forage grass. Real-time monitoring of forage grass biomass therefore plays a crucial role in suitable grazing and management of artificial and natural grassland. However, traditional sampling and measuring are time-consuming and labor-intensive. Recently, the development of hyperspectral remote sensing has made it feasible to derive forage grass biomass in a timely and nondestructive way. In the present study, the main objective was to explore the robustness of published and optimized spectral indices in estimating the biomass of forage grass in natural and artificial pasture. A natural pasture experiment with four grazing densities (control, light grazing, moderate grazing, and high grazing) was designed in desert steppe, and different forage cultivars with different N rates were grown in artificial forage fields in Inner Mongolia. The canopy reflectance and biomass in each plot were measured during critical stages. The results showed that, owing to differences in canopy structure and biomass, canopy reflectance differed greatly among forage grass types. The best performing spectral index varied among species of forage grass under different treatments (R² = 0.00-0.69). The predictive ability of spectral indices decreased under the low biomass of desert steppe, while red-band-based spectral indices lost sensitivity under the moderate-to-high biomass of forage maize. When the band combinations of simple ratio and normalized difference spectral indices were optimized on the combined datasets of natural and artificial grassland, the optimized spectral indices significantly increased predictive ability, and the model between biomass and optimized spectral indices had the highest R² (R² = 0.72) compared to published spectral indices. Sensitivity analysis further confirmed that the optimized index had the lowest noise equivalent and were the best performing index in
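The band-combination optimization described above amounts to an exhaustive search over band pairs for the normalized-difference index most predictive of biomass. A minimal sketch, scoring each pair by simple linear R² against biomass (the data layout of samples × bands and the function name are assumptions for the example):

```python
import numpy as np

def best_ndsi(reflectance, biomass):
    """Search all band pairs (i, j) for the normalized-difference index
    ND(i, j) = (R_i - R_j) / (R_i + R_j) best correlated with biomass.
    reflectance: array of shape (samples, bands); biomass: (samples,)."""
    n_bands = reflectance.shape[1]
    best = (-1.0, None)
    for i in range(n_bands):
        for j in range(i + 1, n_bands):
            nd = ((reflectance[:, i] - reflectance[:, j])
                  / (reflectance[:, i] + reflectance[:, j]))
            r2 = np.corrcoef(nd, biomass)[0, 1] ** 2
            if r2 > best[0]:
                best = (r2, (i, j))
    return best
```

With hyperspectral data the same double loop simply runs over hundreds of bands, which is why such searches are usually visualized as R² contour maps over all (i, j) combinations.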
Bell-Curve Based Evolutionary Strategies for Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.
2001-01-01
Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity. However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPs) have gained a significant foothold. One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general, and ES or EP in particular, see Back and Dasgupta and Michalewicz. We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.
Bell-Curve Based Evolutionary Strategies for Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.
2000-01-01
Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity (Reeves 1997). However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPs) have gained a significant foothold (Glover 1998). One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general, and ES or EP in particular, see Back (1996) and Dasgupta and Michalewicz (1997). We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.
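The "two normal distributions" idea can be sketched as a small real-valued ES in which each child combines two parents with one normal draw along the parent-to-parent direction and one isotropic normal draw. This only illustrates the flavor of BCB; the operator details, parameter names, and the sphere test function are our assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    return float(np.dot(x, x))  # classic continuous test problem

def bcb_es(dim=5, pop=20, gens=200, s_line=0.5, s_iso=0.1):
    """Toy bell-curve-style ES: children are built from two parents using
    two normal draws, then an elitist truncation forms the next generation."""
    X = rng.uniform(-5.0, 5.0, (pop, dim))
    for _ in range(gens):
        X = X[np.argsort([sphere(x) for x in X])]   # rank by fitness
        kids = []
        for _ in range(pop // 2):
            a, b = X[rng.integers(0, pop // 2, 2)]  # two parents, top half
            d = b - a
            kids.append((a + b) / 2
                        + rng.normal(0.0, s_line) * d       # draw 1: along parents
                        + rng.normal(0.0, s_iso, dim))      # draw 2: isotropic
        X = np.vstack([X[: pop // 2], kids])        # keep elites, add children
    return min(sphere(x) for x in X)
```

Because the variables stay real-valued throughout, no binary encoding or decoding step is needed, which is the advantage over the typical GA that the abstract points out.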
Optimization of positrons generation based on laser wakefield electron acceleration
NASA Astrophysics Data System (ADS)
Wu, Yuchi; Han, Dan; Zhang, Tiankui; Dong, Kegong; Zhu, Bin; Yan, Yonghong; Gu, Yuqiu
2016-08-01
Laser-based positron sources represent a new type of particle source with short pulse duration and high charge density. Positron production based on laser wakefield electron acceleration (LWFA) has been investigated theoretically in this paper. Analytical expressions for positron spectra and yield have been obtained through a combination of LWFA and cascade shower theories. The maximum positron yield and the corresponding converter thickness have been optimized as a function of driving laser power. Under the optimal condition, a high-energy (>100 MeV) positron yield of up to 5 × 10^11 can be produced by high power femtosecond lasers at ELI-NP. The percentage of positrons shows that a quasineutral electron-positron jet can be generated by setting the converter thickness greater than 5 radiation lengths.
Finite Element Based HWB Centerbody Structural Optimization and Weight Prediction
NASA Technical Reports Server (NTRS)
Gern, Frank H.
2012-01-01
This paper describes a scalable structural model suitable for Hybrid Wing Body (HWB) centerbody analysis and optimization. The geometry of the centerbody and primary wing structure is based on a Vehicle Sketch Pad (VSP) surface model of the aircraft and a FLOPS compatible parameterization of the centerbody. Structural analysis, optimization, and weight calculation are based on a Nastran finite element model of the primary HWB structural components, featuring centerbody, mid section, and outboard wing. Different centerbody designs like single bay or multi-bay options are analyzed and weight calculations are compared to current FLOPS results. For proper structural sizing and weight estimation, internal pressure and maneuver flight loads are applied. Results are presented for aerodynamic loads, deformations, and centerbody weight.
Multiresolution subspace-based optimization method for inverse scattering problems.
Oliveri, Giacomo; Zhong, Yu; Chen, Xudong; Massa, Andrea
2011-10-01
This paper investigates an approach to inverse scattering problems based on the integration of the subspace-based optimization method (SOM) within a multifocusing scheme in the framework of the contrast source formulation. The scattering equations are solved by a nested three-step procedure composed of (a) an outer multiresolution loop dealing with the identification of the regions of interest within the investigation domain through an iterative information-acquisition process, (b) a spectrum analysis step devoted to the reconstruction of the deterministic components of the contrast sources, and (c) an inner optimization loop aimed at retrieving the ambiguous components of the contrast sources through a conjugate gradient minimization of a suitable objective function. A set of representative reconstruction results is discussed to provide numerical evidence of the effectiveness of the proposed algorithmic approach as well as to assess the features and potentialities of the multifocusing integration in comparison with the state-of-the-art SOM implementation.
Model updating based on an affine scaling interior optimization algorithm
NASA Astrophysics Data System (ADS)
Zhang, Y. X.; Jia, C. X.; Li, Jian; Spencer, B. F.
2013-11-01
Finite element model updating is usually considered as an optimization process. Affine scaling interior algorithms are powerful optimization algorithms that have been developed over the past few years. A new finite element model updating method based on an affine scaling interior algorithm and a minimization of modal residuals is proposed in this article, and a general finite element model updating program is developed based on the proposed method. The performance of the proposed method is studied through numerical simulation and experimental investigation using the developed program. The results of the numerical simulation verified the validity of the method. Subsequently, the natural frequencies obtained experimentally from a three-dimensional truss model were used to update a finite element model using the developed program. After updating, the natural frequencies of the truss and finite element model matched well.
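A minimal numerical illustration of updating-as-optimization: a 2-DOF spring-mass chain whose stiffnesses are tuned until the model's natural frequencies match "measured" ones, by minimizing the modal residual. A plain coordinate search stands in for the paper's affine scaling interior algorithm, and all names and values are ours:

```python
import numpy as np

def nat_freqs(k1, k2, m=1.0):
    """Natural frequencies (Hz) of a 2-DOF spring-mass chain."""
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    lam = np.linalg.eigvalsh(K / m)          # generalized eigenproblem, M = m*I
    return np.sqrt(lam) / (2.0 * np.pi)

def update_model(f_meas, k0=(100.0, 100.0), iters=60):
    """Minimize the modal residual sum((f_model - f_meas)^2) over the
    stiffnesses (k1, k2) with a crude coordinate search and step halving."""
    k = np.array(k0, dtype=float)
    cost = lambda kk: float(np.sum((nat_freqs(*kk) - f_meas) ** 2))
    step = 10.0
    for _ in range(iters):
        improved = False
        for i in (0, 1):
            for s in (step, -step):
                trial = k.copy()
                trial[i] += s
                if trial[i] > 0 and cost(trial) < cost(k):
                    k, improved = trial, True
        if not improved:
            step /= 2.0
    return k
```

Note that different stiffness pairs can share the same frequency spectrum, so success is judged on the frequency match rather than on recovering one particular parameter set; this non-uniqueness is a familiar issue in model updating.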
Designing optimal number of receiving traces based on simulation model
NASA Astrophysics Data System (ADS)
Zhao, Hu; Wu, Si-Hai; Yang, Jing; Ren, Da; Xu, Wei-Xiu; Liu, Di-Ou; Zhu, Peng-Yu
2017-03-01
Currently, the selection of receiving traces in geometry design is mostly based on the horizontally layered medium hypothesis, which cannot meet survey requirements in complex areas. This paper estimates the optimal number of receiving traces in a field geometry using numerical simulation, based on a field test conducted in previous research (Zhu et al., 2011). A mathematical model is established for total energy and average effective energy using fixed trace spacing, and the optimal number of receiving traces is estimated. Seismic data acquired in a complex work area are used to verify the correctness of the proposed method. Calculations on model data and processing of actual data yield results that are in agreement, indicating that the proposed method is reasonable and correct and can be regarded as a novel approach to seismic geometry design in complex geological regions.
Parameter optimization in differential geometry based solvation models
Wang, Bao; Wei, G. W.
2015-01-01
Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules. PMID:26450304
Pragmatic fluid optimization in high-risk surgery patients: when pragmatism dilutes the benefits.
Reuter, Daniel A
2012-01-31
There is increasing evidence that hemodynamic optimization by fluid loading, particularly when performed in the early phase of surgery, is beneficial in high-risk surgery patients: it leads to a reduction in postoperative complications and even to improved long-term outcome. However, it is also true that goal-directed strategies of fluid optimization focusing on cardiac output optimization have not been applied in the clinical routine of many institutions. The reasons are manifold: disbelief in the level of evidence and in the accuracy and practicability of the required monitoring systems, and economics. The FOCCUS trial examined perioperative fluid optimization with a very basic approach: a standardized volume load of 25 ml/kg crystalloids over 6 hours immediately prior to scheduled surgery in high-risk patients. The hypothesis was that this intervention would compensate for the preoperative fluid deficit caused by overnight fasting and would result in improved perioperative fluid homeostasis with fewer postoperative complications and earlier hospital discharge. However, the primary study endpoints did not improve significantly. This observation points towards the facts that: firstly, treatment strategies must distinguish between the interstitial fluid deficit caused by fasting and intravascular volume loss due to acute blood loss; secondly, the type of fluid replacement may play an important role; and thirdly, protocolized treatment strategies should always be tailored to suit the patient's individual needs in each clinical situation.
Esophagus sparing with IMRT in lung tumor irradiation: An EUD-based optimization technique
Chapet, Olivier; Thomas, Emma; Kessler, Marc L.; Fraass, Benedick A.; Ten Haken, Randall K. (E-mail: rth@umich.edu)
2005-09-01
Purpose: The aim of this study was to evaluate (1) the use of generalized equivalent uniform dose (gEUD) to optimize dose escalation of lung tumors when the esophagus overlaps the planning target volume (PTV) and (2) the potential benefit of further dose escalation in only the part of the PTV that does not overlap the esophagus. Methods and Materials: The treatment-planning computed tomography (CT) scans of patients with primary lung tumors located in different regions of the left and right lung were used for the optimization of beamlet intensity modulated radiation therapy (IMRT) plans. In all cases, the PTV overlapped part of the esophagus. The dose in the PTV was maximized according to 7 different primary cost functions: 2 plans that made use of mean dose (MD) (the reference plan, in which the 95% isodose surface covered the PTV and a second plan that had no constraint on the minimum isodose), 3 plans based on maximizing gEUD for the whole PTV with ever increasing assumptions for tumor aggressiveness, and 2 plans that used different gEUD values in 2 simultaneous, overlapping target volumes (the whole PTV and the PTV minus esophagus). Beam arrangements and NTCP-based costlets for the organs at risk (OARs) were kept identical to the original conformal plan for each case. Regardless of optimization method, the relative ranking of the resulting plans was evaluated in terms of the absence of cold spots within the PTV and the final gEUD computed for the whole PTV. Results: Because the MD-optimized plans lacked a constraint on minimum PTV coverage, they resulted in cold spots that affected approximately 5% of the PTV volume. When optimizing over the whole PTV volume, gEUD-optimized plans resulted in higher equivalent uniform PTV doses than did the reference plan while still maintaining normal-tissue constraints. However, only under the assumption of extremely aggressive tumors could cold spots in the PTV be avoided. Generally, high-level overall results are obtained
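For reference, the gEUD objective maximized in the plans above is presumably Niemierko's standard form (the abstract does not restate it):

\[ \mathrm{gEUD} = \left( \frac{1}{N} \sum_{i=1}^{N} d_i^{\,a} \right)^{1/a} \]

where the sum runs over the N voxels of the structure, d_i is the dose to voxel i, and a is a structure-specific parameter. For targets a is negative, and increasingly negative values of a penalize cold spots more strongly, which corresponds to the "ever increasing assumptions for tumor aggressiveness" in the plan series described above.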
[Physical process based risk assessment of groundwater pollution in the mining area].
Sun, Fa-Sheng; Cheng, Pin; Zhang, Bo
2014-04-01
Case studies of groundwater pollution risk assessment at home and abroad generally start from groundwater vulnerability, without giving much consideration to the influence of characteristic pollutants on the consequences of pollution. Vulnerability is the natural sensitivity of the environment to pollutants, whereas risk assessment of groundwater pollution should reflect the movement and distribution of pollutants in groundwater. In order to improve the theory and methods of groundwater pollution risk assessment, a physical-process-based risk assessment methodology for groundwater pollution was proposed for a mining area. According to the sensitivity of the economic and social conditions and the possible future distribution of pollutants, the spatial distribution of risk levels in the aquifer was delineated beforehand, and the pollutant source intensity corresponding to each risk level was deduced accordingly. Taking this as the criterion for classifying groundwater pollution risk, the groundwater pollution risk in the mining area was evaluated by simulating the migration of pollutants through the vadose zone and aquifer. The results show that a physical-process-based risk assessment of groundwater pollution can give the concentration distribution of pollutants and the risk level in both space and time. For a single point-source polluted area, it gives a detailed risk characterization, which is better than risk assessment based on an aquifer's intrinsic vulnerability index, and it is applicable to assessing the risk of existing polluted sites, to optimizing future sites, and to providing design parameters for site construction.
Self-enhancement, crash-risk optimism and the impact of safety advertisements on young drivers.
Harré, Niki; Foster, Susan; O'Neill, Maree
2005-05-01
In Study 1, young drivers (aged between 16 and 29 years, N = 314) rated their driving attributes relative to their peers. They also rated their likelihood of being involved in a crash relative to their peers (crash-risk optimism), their crash history, stereotype of the young driver, and concern over another health issue. A self-enhancement bias was found for all items in which self/other comparisons were made. These items formed two major factors, perceived relative driving ability and perceived relative driving caution. These factors and perceived luck relative to peers in avoiding crashes significantly predicted crash-risk optimism. In Study 2, an experimental group of young drivers (N = 173) watched safety advertisements that showed drinking and dangerous driving resulting in a crash, and a control group (N = 193) watched advertisements showing people choosing not to drive after drinking. Each group then completed the self/other comparisons used in Study 1. The same factors were found, but only driving caution significantly predicted crash-risk optimism. The experimental group showed more self-enhancement on driving ability than the control group. In both studies, men showed substantially more self-enhancement than women about their driving ability. Implications for safety interventions are discussed.
An Optimality-Based Fully-Distributed Watershed Ecohydrological Model
NASA Astrophysics Data System (ADS)
Chen, L., Jr.
2015-12-01
Watershed ecohydrological models are essential tools for assessing the impact of climate change and human activities on hydrological and ecological processes in watershed management. Existing models can be classified as empirically based, quasi-mechanistic, and mechanistic models. The empirically based and quasi-mechanistic models usually adopt empirical or quasi-empirical equations, which may be incapable of capturing the non-stationary dynamics of target processes. Mechanistic models designed to represent process feedbacks may capture vegetation dynamics, but often have more demanding spatial and temporal parameterization requirements to represent vegetation physiological variables. In recent years, optimality-based ecohydrological models have been proposed, which have the advantage of reducing the need for model calibration by assuming critical aspects of system behavior. However, this work has to date been limited to the plot scale, considering only one-dimensional exchange of soil moisture, carbon, and nutrients in vegetation parameterization, without lateral hydrological transport. Conceptually isolating individual ecosystem patches from upslope and downslope flow paths compromises the ability to represent and test the relationships between hydrology and vegetation in mountainous and hilly terrain. This work presents an optimality-based watershed ecohydrological model that incorporates the influence of lateral hydrological processes on the flow-path patterns that emerge from the optimality assumption. The model has been tested in the Walnut Gulch watershed and shows good agreement with observed temporal and spatial patterns of evapotranspiration (ET) and gross primary productivity (GPP). The spatial variability of ET and GPP produced by the model matches the spatial distributions of TWI, SCA, and slope well over the area. Compared with the one-dimensional vegetation optimality model (VOM), we find that the distributed VOM (DisVOM) produces more reasonable spatial
Process optimization electrospinning fibrous material based on polyhydroxybutyrate
NASA Astrophysics Data System (ADS)
Olkhov, A. A.; Tyubaeva, P. M.; Staroverova, O. V.; Mastalygina, E. E.; Popov, A. A.; Ischenko, A. A.; Iordanskii, A. L.
2016-05-01
The article analyzes the influence of the main technological parameters of electrostatic spinning on the morphology and properties of ultrathin fibers based on polyhydroxybutyrate (PHB). It is found that the electrical conductivity and viscosity of the spinning solution affect the formation of the fiber macrostructure. Adjusting the viscosity and conductivity of the spinning solution therefore makes it possible to control the geometry of the PHB-based fiber materials. The resulting fibers have found use in medicine, particularly in structural elements for the musculoskeletal system.
Information fusion based optimal control for large civil aircraft system.
Zhen, Ziyang; Jiang, Ju; Wang, Xinhua; Gao, Chen
2015-03-01
Wind disturbance has a great influence on the landing security of Large Civil Aircraft. Simulation research and engineering experience show that PID control is not sufficient to suppress wind disturbance. This paper focuses on anti-wind attitude control for Large Civil Aircraft in the landing phase. In order to improve riding comfort and flight security, an information fusion based optimal control strategy is presented to suppress wind disturbance in the landing phase while maintaining attitudes and airspeed. Data from the Boeing 707 are used to establish a nonlinear model of a Large Civil Aircraft in full variables, from which two linear models are obtained, divided into longitudinal and lateral equations. Based on engineering experience, the longitudinal channel adopts PID control and C inner control to keep the longitudinal attitude constant, and applies an autothrottle system to keep airspeed constant, while an information fusion based optimal regulator is designed in the lateral control channel to achieve lateral attitude holding. Following information fusion estimation, the optimal estimate of the control sequence is derived by fusing the hard constraint information of the system dynamic equations with the soft constraint information of the performance index function. On this basis, an information fusion state regulator is deduced for discrete-time linear systems with disturbance. Simulation results on the nonlinear aircraft model indicate that information fusion optimal control outperforms traditional PID control, LQR control, and LQR control with integral action in anti-wind disturbance performance in the landing phase. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Optimal design of SAW-based gyroscope to improve sensitivity
NASA Astrophysics Data System (ADS)
Oh, Haekwan; Yang, Sangsik; Lee, Keekeun
2010-02-01
A surface acoustic wave (SAW)-based gyroscope was developed on a piezoelectric substrate. The developed gyroscope consists of two SAW oscillators, metallic dots, and absorber. Coupling of mode (COM) modeling was conducted to determine the optimal device parameters prior to fabrication. Depending on the angular velocity, the difference of the oscillation frequency was modulated. The obtained sensitivity was approximately 52.35 Hz/deg.s at an angular rate range of 0~1000 deg/s.
NASA Astrophysics Data System (ADS)
Bulgakov, V. K.; Strigunov, V. V.
2009-05-01
The Pontryagin maximum principle is used to prove a theorem concerning optimal control in regional macroeconomics. A boundary value problem for optimal trajectories of the state and adjoint variables is formulated, and optimal curves are analyzed. An algorithm is proposed for solving the boundary value problem of optimal control. The performance of the algorithm is demonstrated by computing an optimal control and the corresponding optimal trajectories.
Optimal sleep duration in the subarctic with respect to obesity risk is 8-9 hours.
Johnsen, May Trude; Wynn, Rolf; Bratlid, Trond
2013-01-01
Sleep duration, chronotype and social jetlag have been associated with body mass index (BMI) and abdominal obesity. The optimal sleep duration with respect to BMI has previously been found to be 7-8 hours, but these studies have not been carried out in the subarctic or have lacked some central variables. The aims of our study were to examine the associations between sleep variables and body composition in people living in the subarctic, taking a range of variables into consideration, including lifestyle variables, health variables and biological factors. The cross-sectional, population-based Tromsø Study was conducted in northern Norway, above the Arctic Circle. 6413 persons aged 30-65 years completed questionnaires covering self-reported sleep times, lifestyle and health. Height, weight, waist and hip circumference, and biological factors (non-fasting serum levels of cholesterol, HDL-cholesterol, LDL-cholesterol, triglycerides and glucose) were also measured. The study period was from 1 October 2007 to 19 December 2008. The optimal sleep length with respect to BMI and waist circumference was found to be 8-9 hours. Short sleepers (<6 h) had about 80% increased risk of being in the BMI≥25 kg/m2 group, and male short sleepers had double the risk of having a waist circumference ≥102 cm compared to 8-9 hour sleepers. We found no impact of chronotype or social jetlag on BMI or abdominal obesity after controlling for health, lifestyle, and biological parameters. In our subarctic population, the optimal sleep duration with respect to risk of overweight and abdominal obesity was 8-9 hours, which is one hour longer than in findings from other studies. Short sleepers had an 80% increased risk of being overweight, and male short sleepers a doubled risk of abdominal obesity. We found no associations between chronotype or social jetlag and BMI or abdominal obesity when we took a range of lifestyle, health and biological variables into consideration.
Efficient variational Bayesian approximation method based on subspace optimization.
Zheng, Yuling; Fraysse, Aurélia; Rodet, Thomas
2015-02-01
Variational Bayesian approximations have been widely used in fully Bayesian inference for approximating an intractable posterior distribution by a separable one. Nevertheless, the classical variational Bayesian approximation (VBA) method suffers from slow convergence to the approximate solution when tackling large-dimensional problems. To address this problem, we propose in this paper a more efficient VBA method. The variational Bayesian problem can in fact be seen as a functional optimization problem. The proposed method is based on adapting subspace optimization methods in Hilbert spaces to the function space involved, in order to solve this optimization problem iteratively. The aim is to determine an optimal direction at each iteration so as to obtain a more efficient method. We highlight the efficiency of our new VBA method and demonstrate its application to image processing by considering an ill-posed linear inverse problem with a total variation prior. Comparisons with state-of-the-art variational Bayesian methods on a numerical example show a notable improvement in computation time.
Genetic Algorithm (GA)-Based Inclinometer Layout Optimization
Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo
2015-01-01
This paper presents numerical simulation results of an airflow inclinometer with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout for an airflow inclinometer based on a genetic algorithm (GA). Due to the working principle of the gas sensor, the changes of the ambient temperature may cause dramatic voltage drifts of sensors. Therefore, eliminating the influence of the external environment for the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on the sensitivity of the inclinometer will be examined by the ANSYS-FLOTRAN CFD program. The results show that with changes of the ambient temperature on the sensing element, the sensitivity of the airflow inclinometer is inversely proportional to the ambient temperature and decreases when the ambient temperature increases. GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the results of our optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts that are related to sensitivity improvement of gas sensors. PMID:25897500
Optimal network topology for structural robustness based on natural connectivity
NASA Astrophysics Data System (ADS)
Peng, Guan-sheng; Wu, Jun
2016-02-01
The structural robustness of the infrastructure of various real-life systems, which can be represented by networks, is of great importance. We therefore propose a tabu search algorithm to optimize the structural robustness of a given network by rewiring the links while fixing the node degrees. The objective of our algorithm is to maximize a new structural robustness measure, natural connectivity, which provides a sensitive and reliable measure of the structural robustness of complex networks at lower computational complexity. We initially applied this method to several networks with different degree distributions for contrastive analysis and investigated the basic properties of the optimal network. We discovered that the optimal network based on the power-law degree distribution exhibits a roughly "eggplant-like" topology, with a cluster of high-degree nodes at the head and the other, low-degree nodes scattered across the body of the "eggplant". Additionally, considering the cost of rewiring links in practical applications, we refined the method by employing an assortative rewiring strategy and validated its efficiency.
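Natural connectivity, the quantity being maximized above, has a closed form: λ̄ = ln((1/N) Σ_i exp(λ_i)), where the λ_i are the eigenvalues of the adjacency matrix. It is a scaled logarithm of the weighted count of closed walks, so it grows with the redundancy of alternative routes. A small sketch (illustrative function name, not the authors' code):

```python
import numpy as np

def natural_connectivity(adj):
    """Natural connectivity of an undirected graph.

    Defined as ln((1/N) * sum_i exp(lambda_i)), where lambda_i are the
    eigenvalues of the (symmetric) adjacency matrix. Larger values mean
    more redundant closed walks, i.e. more structural robustness.
    """
    lam = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
    # log-sum-exp for numerical stability on larger graphs
    m = lam.max()
    return m + np.log(np.exp(lam - m).sum() / len(lam))

# A triangle has a redundant route between every pair of nodes,
# so it scores higher than a 3-node path.
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
path     = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(natural_connectivity(triangle) > natural_connectivity(path))  # True
```

A degree-preserving rewiring search, as in the paper's tabu algorithm, would then accept rewirings that raise this value.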
Nanodosimetry-Based Plan Optimization for Particle Therapy
Casiraghi, Margherita; Schulte, Reinhard W.
2015-01-01
Treatment planning for particle therapy is currently an active field of research due to uncertainty in how to modify physical dose in order to create a uniform biological dose response in the target. A novel treatment plan optimization strategy based on measurable nanodosimetric quantities rather than biophysical models is proposed in this work. Simplified proton and carbon treatment plans were simulated in a water phantom to investigate the optimization feasibility. Track structures of the mixed radiation field produced at different depths in the target volume were simulated with Geant4-DNA and nanodosimetric descriptors were calculated. The fluences of the treatment field pencil beams were optimized in order to create a mixed field with equal nanodosimetric descriptors at each of the multiple positions in spread-out particle Bragg peaks. For both proton and carbon ion plans, a uniform spatial distribution of nanodosimetric descriptors could be obtained by optimizing opposing-field but not single-field plans. The results obtained indicate that uniform nanodosimetrically weighted plans, which may also be radiobiologically uniform, can be obtained with this approach. Future investigations need to demonstrate that this approach is also feasible for more complicated beam arrangements and that it leads to biologically uniform response in tumor cells and tissues. PMID:26167202
A global optimization paradigm based on change of measures
Sarkar, Saikat; Roy, Debasish; Vasu, Ram Mohan
2015-01-01
A global optimization framework, COMBEO (Change Of Measure Based Evolutionary Optimization), is proposed. An important aspect in the development is a set of derivative-free additive directional terms, obtainable through a change of measures en route to the imposition of any stipulated conditions aimed at driving the realized design variables (particles) to the global optimum. The generalized setting offered by the new approach also enables several basic ideas, used with other global search methods such as the particle swarm or the differential evolution, to be rationally incorporated in the proposed set-up via a change of measures. The global search may be further aided by imparting to the directional update terms additional layers of random perturbations such as ‘scrambling’ and ‘selection’. Depending on the precise choice of the optimality conditions and the extent of random perturbation, the search can be readily rendered either greedy or more exploratory. As numerically demonstrated, the new proposal appears to provide for a more rational, more accurate and, in some cases, a faster alternative to many available evolutionary optimization schemes. PMID:26587268
Cat swarm optimization based evolutionary framework for multi document summarization
NASA Astrophysics Data System (ADS)
Rautray, Rasmita; Balabantaray, Rakesh Chandra
2017-07-01
Today, the World Wide Web has brought us an enormous quantity of online information. As a result, extracting relevant information from massive data has become a challenging issue. In the recent past, text summarization has been recognized as one solution for extracting useful information from vast numbers of documents. Based on the number of documents considered, summarization is categorized as single-document or multi-document summarization. Multi-document summarization is the more challenging task for researchers, since an accurate summary must be found across multiple documents. Hence, in this study, a novel Cat Swarm Optimization (CSO) based multi-document summarizer is proposed to address the problem of multi-document summarization. The proposed CSO-based model is also compared with two other nature-inspired summarizers: a Harmony Search (HS) based summarizer and a Particle Swarm Optimization (PSO) based summarizer. On the benchmark Document Understanding Conference (DUC) datasets, the performance of all algorithms is compared in terms of different evaluation metrics, such as ROUGE score, F-score, sensitivity, positive predictive value, summary accuracy, inter-sentence similarity, and a readability metric, to validate the non-redundancy, cohesiveness, and readability of the summaries, respectively. The experimental analysis clearly reveals that the proposed approach outperforms the other summarizers included in the study.
Chaotic Teaching-Learning-Based Optimization with Lévy Flight for Global Numerical Optimization.
He, Xiangzhu; Huang, Jida; Rao, Yunqing; Gao, Liang
2016-01-01
Recently, teaching-learning-based optimization (TLBO), as one of the emerging nature-inspired heuristic algorithms, has attracted increasing attention. In order to enhance its convergence rate and prevent it from getting stuck in local optima, a novel metaheuristic has been developed in this paper, where particular characteristics of the chaos mechanism and Lévy flight are introduced to the basic framework of TLBO. The new algorithm is tested on several large-scale nonlinear benchmark functions with different characteristics and compared with other methods. Experimental results show that the proposed algorithm outperforms other algorithms and achieves a satisfactory improvement over TLBO.
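The basic TLBO update alternates a teacher phase (move each learner toward the best solution and away from the class mean, scaled by a teaching factor of 1 or 2) with a learner phase (pairwise learning from a random classmate), accepting a candidate only when it improves fitness. The sketch below adds a small Lévy-flight perturbation (Mantegna's algorithm) in the teacher phase to illustrate the hybridization idea; the paper's actual chaotic-map components and hybridization details may differ:

```python
import numpy as np
from math import gamma, sin, pi

def levy(shape, beta=1.5, rng=None):
    """Heavy-tailed Levy-flight step via Mantegna's algorithm."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, shape)
    v = rng.normal(0.0, 1.0, shape)
    return u / np.abs(v) ** (1 / beta)

def tlbo_step(pop, f, rng):
    """One TLBO iteration (teacher + learner phase) with a small Levy
    perturbation in the teacher phase; greedy acceptance keeps a
    candidate only when it improves that learner's fitness."""
    fit = np.apply_along_axis(f, 1, pop)
    teacher = pop[np.argmin(fit)]
    tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
    cand = (pop + rng.random(pop.shape) * (teacher - tf * pop.mean(axis=0))
            + 0.01 * levy(pop.shape, rng=rng))   # Levy kick against stagnation
    better = np.apply_along_axis(f, 1, cand) < fit
    pop = np.where(better[:, None], cand, pop)
    fit = np.apply_along_axis(f, 1, pop)
    j = rng.permutation(len(pop))                # random classmate per learner
    step = np.where((fit < fit[j])[:, None], pop - pop[j], pop[j] - pop)
    cand = pop + rng.random(pop.shape) * step
    better = np.apply_along_axis(f, 1, cand) < fit
    return np.where(better[:, None], cand, pop)

sphere = lambda x: float(np.sum(x ** 2))         # classic benchmark function
rng = np.random.default_rng(1)
pop = rng.uniform(-5.0, 5.0, (20, 10))
start = np.apply_along_axis(sphere, 1, pop).min()
for _ in range(50):
    pop = tlbo_step(pop, sphere, rng)
print(np.apply_along_axis(sphere, 1, pop).min() <= start)  # True
```

Because acceptance is greedy, the best fitness is non-increasing; the heavy-tailed Lévy steps supply occasional long jumps that help escape local optima.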
Basin structure of optimization based state and parameter estimation.
Schumann-Bischoff, Jan; Parlitz, Ulrich; Abarbanel, Henry D I; Kostuk, Mark; Rey, Daniel; Eldridge, Michael; Luther, Stefan
2015-05-01
Most data based state and parameter estimation methods require suitable initial values or guesses to achieve convergence to the desired solution, which typically is a global minimum of some cost function. Unfortunately, however, other stable solutions (e.g., local minima) may exist and provide suboptimal or even wrong estimates. Here, we demonstrate for a 9-dimensional Lorenz-96 model how to characterize the basin size of the global minimum when applying some particular optimization based estimation algorithm. We compare three different strategies for generating suitable initial guesses, and we investigate the dependence of the solution on the given trajectory segment (underlying the measured time series). To address the question of how many state variables have to be measured for optimal performance, different types of multivariate time series are considered consisting of 1, 2, or 3 variables. Based on these time series, the local observability of state variables and parameters of the Lorenz-96 model is investigated and confirmed using delay coordinates. This result is in good agreement with the observation that correct state and parameter estimation results are obtained if the optimization algorithm is initialized with initial guesses close to the true solution. In contrast, initialization with other exact solutions of the model equations (different from the true solution used to generate the time series) typically fails, i.e., the optimization procedure ends up in local minima different from the true solution. Initialization using random values in a box around the attractor exhibits success rates depending on the number of observables and the available time series (trajectory segment).
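The 9-dimensional Lorenz-96 system used as the test bed above is the standard model dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F with cyclic indices. A minimal integration sketch (step size and forcing F = 8 are illustrative choices, not necessarily the paper's settings):

```python
import numpy as np

def lorenz96(x, forcing=8.0):
    """Right-hand side of the Lorenz-96 model,
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F,
    with cyclic indices handled by np.roll."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt=0.01, forcing=8.0):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz96(x, forcing)
    k2 = lorenz96(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# The uniform state x_i = F is a fixed point of the dynamics.
x = np.full(9, 8.0)
print(np.allclose(lorenz96(x), 0.0))  # True
```

Trajectory segments generated this way (plus noise) are the kind of "measured time series" against which the basin of the global minimum of the estimation cost function can be mapped.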
Zhou, Yanju; Chen, Qian; Chen, Xiaohong; Wang, Zongrun
2014-01-01
This paper considers a decentralized supply chain in which a single supplier sells a perishable product to a single retailer facing uncertain demand. We assume that the supplier and the retailer are both risk averse and use Conditional Value at Risk (CVaR), a risk measure popularized in financial risk management, to capture their risk attitudes. We establish a buyback policy model based on Stackelberg game theory that takes the supply chain members' risk preferences into account, and derive expressions for the supplier's optimal repurchase price and the retailer's optimal order quantity, which are compared with those of the risk-neutral case. Finally, a numerical example is used to simulate the model and verify the related conclusions. PMID:25247605
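CVaR at level α is the expected loss in the worst (1 − α) tail of the loss distribution, so it always lies at or above the expected loss. A Monte Carlo sketch of evaluating it for a newsvendor-style perishable-product order (all price, cost, salvage and demand parameters are hypothetical, not taken from the paper):

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Monte Carlo Conditional Value at Risk: mean of the worst
    (1 - alpha) fraction of sampled losses."""
    losses = np.sort(np.asarray(losses, dtype=float))
    return losses[int(np.ceil(alpha * len(losses))):].mean()

def newsvendor_loss(q, demand, price=10.0, cost=6.0, salvage=2.0):
    """Negative profit for ordering q units of a perishable product;
    unsold units are salvaged (parameters are illustrative)."""
    sold = np.minimum(q, demand)
    return -(price * sold + salvage * np.maximum(q - demand, 0) - cost * q)

rng = np.random.default_rng(42)
demand = rng.poisson(100, size=50_000)
losses = newsvendor_loss(100, demand)
# CVaR never falls below the expected loss; a CVaR-minimizing retailer
# therefore weighs tail outcomes more heavily than a risk-neutral one.
print(cvar(losses) >= losses.mean())  # True
```

Sweeping q and minimizing `cvar(newsvendor_loss(q, demand))` instead of the mean loss is the Monte Carlo analogue of the retailer's risk-averse order-quantity choice analyzed in the paper.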
Kim, Hye Kyung; Lwin, May O
2016-09-09
Although culture is acknowledged as an important factor that influences health, little is known about cultural differences pertaining to cancer-related beliefs and prevention behaviors. This study examines two culturally influenced beliefs, fatalistic beliefs about cancer prevention and optimistic beliefs about cancer risk, to identify reasons for the cultural disparity in engagement in cancer prevention behaviors. We utilized data from national surveys of European Americans in the United States (Health Information National Trends Survey 4, Cycle 3; N = 1,139) and Asians in Singapore (N = 1,200) to make cultural comparisons. The odds of an Asian adhering to prevention recommendations were less than half the odds of a European American, with the exception of smoking avoidance. Compared to European Americans, Asians were more optimistic about their cancer risk in both an absolute and a comparative sense, and held stronger fatalistic beliefs about cancer prevention. Mediation analyses revealed that fatalistic beliefs and absolute risk optimism among Asians partially explain their lower engagement in prevention behaviors, whereas comparative risk optimism increases their likelihood of adhering to prevention behaviors. Our findings underscore the need to develop culturally targeted interventions for communicating cancer causes and prevention.
On the Integration of Risk Aversion and Average-Performance Optimization in Reservoir Control
NASA Astrophysics Data System (ADS)
Nardini, Andrea; Piccardi, Carlo; Soncini-Sessa, Rodolfo
1992-02-01
The real-time operation of a reservoir is a matter of trade-off between the two criteria of risk aversion (to avoid dramatic failures) and average-performance optimization (to yield the best long-term average performance). A methodology taking both criteria into account is presented in this paper to derive "off-line" infinite-horizon control policies for a single multipurpose reservoir whose management goals are water supply and flood control. According to this methodology, the reservoir control policy is derived in two steps. First, a (min-max) risk aversion problem is formulated, whose solution is not unique, but rather a whole set of policies, all equivalent from the point of view of the risk-aversion objectives. Second, a stochastic average-performance optimization problem is solved to select from that set the best policy from the point of view of the average-performance objectives. The methodology has several interesting features: the min-max (or "guaranteed performance") approach, which is particularly suited whenever "weak" users are affected by the consequences of the decision-making process; the flexible definition of a "risk aversion degree" through the selection of the inflow sequences that are particularly feared; and the two-objective analysis, which provides the manager with a whole set of alternatives from which to select the one that yields the desired trade-off between the management goals.
NASA Technical Reports Server (NTRS)
Kerstman, Eric; Walton, Marlei; Minard, Charles; Saile, Lynn; Myers, Jerry; Butler, Doug; Lyengar, Sriram; Fitts, Mary; Johnson-Throop, Kathy
2009-01-01
The Integrated Medical Model (IMM) is a decision support tool used by medical system planners and designers as they prepare for exploration planning activities of the Constellation program (CxP). IMM provides an evidence-based approach to help optimize the allocation of in-flight medical resources for a specified level of risk within spacecraft operational constraints. Eighty medical conditions and associated resources are represented in IMM; nine conditions are due to Space Adaptation Syndrome. The IMM helps answer fundamental medical mission planning questions such as "What medical conditions can be expected?", "What type and quantity of medical resources are most likely to be used?", and "What is the probability of crew death or evacuation due to medical events?" For a specified mission and crew profile, the IMM effectively characterizes the sequence of events that could potentially occur should a medical condition happen. The mathematical relationships among mission and crew attributes, medical conditions and incidence data, in-flight medical resources, and potential clinical and crew health end states are established to generate end state probabilities. A Monte Carlo computational method is used to determine the probable outcomes and requires up to 25,000 mission trials to reach convergence. For each mission trial, the pharmaceuticals and supplies required to diagnose and treat prevalent medical conditions are tracked and decremented. The uncertainty of patient response to treatment is bounded via a best-case, worst-case, untreated-case algorithm. A Crew Health Index (CHI) metric, developed to account for functional impairment due to a medical condition, provides a quantified measure of risk and enables risk comparisons across mission scenarios. The use of historical in-flight medical data, terrestrial surrogate data as appropriate, and space medicine subject matter expertise has enabled the development of a probabilistic, stochastic decision support tool capable of
NASA Technical Reports Server (NTRS)
Kerstman, Eric; Walton, Marlei; Minard, Charles; Saile, Lynn; Myers, Jerry; Butler, Doug; Iyengar, Sriram; Fitts, Mary; Johnson-Throop, Kathy
2009-01-01
The Integrated Medical Model (IMM) is a decision support tool used by medical system planners and designers as they prepare for exploration planning activities of the Constellation program (CxP). IMM provides an evidence-based approach to help optimize the allocation of in-flight medical resources for a specified level of risk within spacecraft operational constraints. Eighty medical conditions and associated resources are represented in IMM. Nine conditions are due to Space Adaptation Syndrome. The IMM helps answer fundamental medical mission planning questions such as "What medical conditions can be expected?", "What type and quantity of medical resources are most likely to be used?", and "What is the probability of crew death or evacuation due to medical events?" For a specified mission and crew profile, the IMM effectively characterizes the sequence of events that could potentially occur should a medical condition happen. The mathematical relationships among mission and crew attributes, medical conditions and incidence data, in-flight medical resources, and potential clinical and crew health end states are established to generate end state probabilities. A Monte Carlo computational method is used to determine the probable outcomes and requires up to 25,000 mission trials to reach convergence. For each mission trial, the pharmaceuticals and supplies required to diagnose and treat prevalent medical conditions are tracked and decremented. The uncertainty of patient response to treatment is bounded via a best-case, worst-case, untreated case algorithm. A Crew Health Index (CHI) metric, developed to account for functional impairment due to a medical condition, provides a quantified measure of risk and enables risk comparisons across mission scenarios. The use of historical in-flight medical data, terrestrial surrogate data as appropriate, and space medicine subject matter expertise has enabled the development of a probabilistic, stochastic decision support tool capable of
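The per-trial loop the abstract describes (sample which conditions occur, decrement the supplies used to treat them, and count shortfalls over many mission trials) can be sketched as a toy Monte Carlo. This is only an illustration of that scheme under stated assumptions: the condition names, incidence values, and supply quantities below are hypothetical, not IMM data.

```python
import random

def run_mission_trial(conditions, inventory, rng):
    """One Monte Carlo mission trial: sample which medical conditions
    occur, then decrement the supplies needed to treat each one.
    Returns the list of conditions left untreated due to shortfalls."""
    untreated = []
    for name, (incidence, supplies_needed) in conditions.items():
        if rng.random() < incidence:            # condition occurs this trial?
            for item, qty in supplies_needed.items():
                if inventory.get(item, 0) >= qty:
                    inventory[item] -= qty      # treat: consume supplies
                else:
                    untreated.append(name)      # resource exhausted
                    break
    return untreated

def estimate_untreated_probability(conditions, initial_inventory,
                                   trials=25000, seed=0):
    """Fraction of trials with at least one untreated condition."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        # each trial starts from a fresh copy of the packed inventory
        if run_mission_trial(conditions, dict(initial_inventory), rng):
            failures += 1
    return failures / trials

# Hypothetical condition table: {name: (per-mission incidence, supplies)}
conditions = {
    "space motion sickness": (0.70, {"antiemetic": 2}),
    "minor trauma": (0.20, {"bandage": 2}),
}
p = estimate_untreated_probability(conditions,
                                   {"antiemetic": 1, "bandage": 4})
```

With only one antiemetic packed against a two-dose treatment, the shortfall probability tracks the 0.7 incidence of the (hypothetical) condition.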
Equipment management risk rating system based on engineering endpoints.
James, P J
1999-01-01
The equipment management risk rating system outlined here offers two significant departures from current practice: risk classifications are based on intrinsic device risks, and the risk rating system is based on engineering endpoints. Intrinsic device risks are categorized as physical, clinical and technical, and these flow from the incoming equipment assessment process. Engineering risk management is based on verification of engineering endpoints such as clinical measurements or energy delivery. This practice eliminates the ambiguity associated with ranking risk in terms of physiologic and higher-level outcome endpoints such as no significant hazards, low significance, injury, or mortality.
Optimization bias in energy-based structure prediction.
Petrella, Robert J
2013-12-01
Physics-based computational approaches to predicting the structure of macromolecules such as proteins are gaining increased use, but there are remaining challenges. In the current work, it is demonstrated that in energy-based prediction methods, the degree of optimization of the sampled structures can influence the prediction results. In particular, discrepancies in the degree of local sampling can bias the predictions in favor of the oversampled structures by shifting the local probability distributions of the minimum sampled energies. In simple systems, it is shown that the magnitude of the errors can be calculated from the energy surface, and for certain model systems, derived analytically. Further, it is shown that for energy wells whose forms differ only by a randomly assigned energy shift, the optimal accuracy of prediction is achieved when the sampling around each structure is equal. Energy correction terms can be used in cases of unequal sampling to reproduce the total probabilities that would occur under equal sampling, but optimal corrections only partially restore the prediction accuracy lost to unequal sampling. For multiwell systems, the determination of the correction terms is a multibody problem; it is shown that the involved cross-correlation multiple integrals can be reduced to simpler integrals. The possible implications of the current analysis for macromolecular structure prediction are discussed.
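The central statistical effect in this abstract (more local sampling shifts the distribution of the minimum sampled energy downward, biasing selection toward the oversampled structure) can be demonstrated with a toy model. The uniform "energy well" below is an illustration of the argument only, not the paper's molecular calculations.

```python
import random

def min_sampled_energy(n_samples, rng):
    """Minimum of n energy evaluations drawn from the same unit-uniform
    well; more local sampling pushes the sampled minimum lower."""
    return min(rng.random() for _ in range(n_samples))

def oversampled_win_rate(n_a, n_b, trials=20000, seed=1):
    """Fraction of trials in which structure A (n_a samples) beats
    structure B (n_b samples) despite identical energy surfaces."""
    rng = random.Random(seed)
    wins = sum(min_sampled_energy(n_a, rng) < min_sampled_energy(n_b, rng)
               for _ in range(trials))
    return wins / trials

# Equal sampling is fair (~0.5); oversampling A biases selection toward A,
# even though both wells have exactly the same energy distribution.
fair = oversampled_win_rate(10, 10)
biased = oversampled_win_rate(100, 10)
```

For uniform draws the biased rate approaches n_a/(n_a + n_b), which is the kind of shifted minimum-energy distribution the paper's correction terms are meant to undo.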
Constrained Multiobjective Optimization Algorithm Based on Immune System Model.
Qian, Shuqu; Ye, Yongqiang; Jiang, Bin; Wang, Jianhong
2016-09-01
An immune optimization algorithm, based on a model of the biological immune system, is proposed to solve multiobjective optimization problems with multimodal nonlinear constraints. First, the initial population is divided into a feasible nondominated population and an infeasible/dominated population. The feasible nondominated individuals focus on exploring the nondominated front through cloning and hypermutation based on a proposed affinity design approach, while the infeasible/dominated individuals are exploited and improved via simulated binary crossover and polynomial mutation operations. Then, to accelerate the convergence of the proposed algorithm, a transformation technique is applied to the combination of the two offspring populations. Finally, a crowded-comparison strategy is used to create the next-generation population. In numerical experiments, a series of benchmark constrained multiobjective optimization problems is considered to evaluate the performance of the proposed algorithm, which is also compared to several state-of-the-art algorithms in terms of the inverted generational distance and hypervolume indicators. The results indicate that the new method achieves competitive performance and even statistically significantly better results than previous algorithms on most of the benchmark suite.
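The first step of the algorithm described above, splitting the population into feasible nondominated individuals and the infeasible/dominated remainder, can be sketched as follows. The bi-objective toy problem and the constraint are hypothetical, chosen only to make the partition visible.

```python
def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def partition_population(pop, objectives, violation):
    """Split a population as the abstract describes: feasible
    nondominated individuals versus the infeasible/dominated rest."""
    feasible = [p for p in pop if violation(p) == 0.0]
    fvals = [objectives(p) for p in feasible]
    elite = [p for i, p in enumerate(feasible)
             if not any(dominates(fvals[j], fvals[i])
                        for j in range(len(feasible)) if j != i)]
    rest = [p for p in pop if p not in elite]
    return elite, rest

# Toy bi-objective problem on integers: f(x) = (x, 10 - x), constraint x <= 6.
pop = [1, 3, 5, 8]
elite, rest = partition_population(
    pop,
    objectives=lambda x: (x, 10 - x),
    violation=lambda x: max(0.0, x - 6),
)
```

In the full algorithm the elite set would then undergo clone/hypermutation and the rest crossover and polynomial mutation; only the partition itself is shown here.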
Optimal alignment of mirror based pentaprisms for scanning deflectometric devices
Barber, Samuel K.; Geckeler, Ralf D.; Yashchuk, Valeriy V.; Gubarev, Mikhail V.; Buchheim, Jana; Siewert, Frank; Zeschke, Thomas
2011-03-04
In the recent work [Proc. of SPIE 7801, 7801-2/1-12 (2010), Opt. Eng. 50(5) (2011), in press], we have reported on improvement of the Developmental Long Trace Profiler (DLTP), a slope measuring profiler available at the Advanced Light Source Optical Metrology Laboratory, achieved by replacing the bulk pentaprism with a mirror based pentaprism (MBPP). An original experimental procedure for optimal mutual alignment of the MBPP mirrors has been suggested and verified with numerical ray tracing simulations. It has been experimentally shown that the optimally aligned MBPP allows the elimination of systematic errors introduced by inhomogeneity of the optical material and fabrication imperfections of the bulk pentaprism. In the present article, we provide the analytical derivation and verification of easily executed optimal alignment algorithms for two different designs of mirror based pentaprisms. We also provide an analytical description for the mechanism for reduction of the systematic errors introduced by a typical high quality bulk pentaprism. It is also shown that residual misalignments of an MBPP introduce entirely negligible systematic errors in surface slope measurements with scanning deflectometric devices.
Optimizing legacy molecular dynamics software with directive-based offload
NASA Astrophysics Data System (ADS)
Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; Thakkar, Foram M.; Plimpton, Steven J.
2015-10-01
Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In this paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel® Xeon Phi™ coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS.
Divergent nematic susceptibility of optimally doped Fe-based superconductors
NASA Astrophysics Data System (ADS)
Chu, Jiun-Haw; Kuo, Hsueh-Hui; Fisher, Ian
2015-03-01
By performing differential elastoresistivity measurements on a wider range of iron-based superconductors, including electron-doped (Ba(Fe1-xCox)2As2, Ba(Fe1-xNix)2As2), hole-doped (Ba1-xKxFe2As2), isovalent-substituted pnictides (BaFe2(As1-xPx)2), and chalcogenides (FeTe1-xSex), we show that a divergent nematic susceptibility in the B2g symmetry channel appears to be a generic feature of optimally doped compositions. For the specific case of optimally "doped" BaFe2(As1-xPx)2, the nematic susceptibility can be well fitted by a Curie-Weiss temperature dependence with a critical temperature close to zero, consistent with expectations of quantum critical behavior in the absence of disorder. However, for all the other optimally doped iron-based superconductors, the nematic susceptibility exhibits a downward deviation from Curie-Weiss behavior, suggestive of an important role played by disorder.
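A Curie-Weiss fit of the kind invoked above can be sketched by linearizing chi(T) = C / (T - theta), so that 1/chi is a straight line in T. The synthetic susceptibility data below are illustrative only, not the measured elastoresistivity values from the abstract.

```python
import numpy as np

def fit_curie_weiss(T, chi):
    """Fit chi(T) = C / (T - theta) by linearizing:
    1/chi = T/C - theta/C, a straight line in T.
    A degree-1 polyfit of 1/chi vs T then yields C and theta."""
    slope, intercept = np.polyfit(T, 1.0 / np.asarray(chi), 1)
    C = 1.0 / slope
    theta = -intercept * C
    return C, theta

# Synthetic nematic susceptibility with theta near zero, as reported for
# optimally doped BaFe2(As1-xPx)2 (numbers here are made up for the demo).
T = np.linspace(150.0, 300.0, 30)
chi = 80.0 / (T - 2.0)
C, theta = fit_curie_weiss(T, chi)
```

A downward deviation from this line at low temperature, as seen in the other compounds, would show up as curvature in the 1/chi-vs-T plot rather than a clean straight-line fit.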
Model-based optimization of tapered free-electron lasers
NASA Astrophysics Data System (ADS)
Mak, Alan; Curbis, Francesca; Werin, Sverker
2015-04-01
The energy extraction efficiency is a figure of merit for a free-electron laser (FEL). It can be enhanced by the technique of undulator tapering, which enables the sustained growth of radiation power beyond the initial saturation point. In the development of a single-pass x-ray FEL, it is important to exploit the full potential of this technique and optimize the taper profile aw(z). Our approach to the optimization is based on the theoretical model by Kroll, Morton, and Rosenbluth, whereby the taper profile aw(z) is not a predetermined function (such as linear or exponential) but is determined by the physics of a resonant particle. For further enhancement of the energy extraction efficiency, we propose a modification to the model, which involves manipulations of the resonant particle's phase. Using the numerical simulation code GENESIS, we apply our model-based optimization methods to a case of the future FEL at the MAX IV Laboratory (Lund, Sweden), as well as a case of the LCLS-II facility (Stanford, USA).
Optimization of waverider-based hypersonic vehicle designs
NASA Astrophysics Data System (ADS)
Takashima, Naruhisa
1997-09-01
A hypersonic vehicle model based on an osculating cones waverider is developed. The osculating cones waverider is designed to produce uniform flow for a two-dimensional multiple-inlet ramp design. A quasi-one-dimensional combustor model with partial equilibrium combustion is used to calculate the flow through a scramjet engine. The method of osculating cones, the off-design performance calculation of the osculating cones waverider, and the vehicle model have been validated with CFD calculations. The vehicle model is used to optimize the vehicle design for a Mach 10 cruise mission using the sequential quadratic programming method. The design is optimized for maximum L/D, range coefficient, and cruise range at a fixed Mach 10 cruise condition. The L/D of the maximum-L/D design is 4.26, which is 9.79% greater than the L/D of the maximum-cruise-range design; however, the cruise range of the maximum-L/D design is 5.70% less than that of the maximum-cruise-range design. The maximum-cruise-range design and the maximum-range-coefficient design are nearly identical for the present vehicle model. The design is also optimized for maximum range along a q∞ = 1000 psf trajectory from M∞ = 6 to M∞ = 10 to assess the effects of off-design performance on the Mach 10 designed shape. The difference in range along the hypersonic trajectory between the trajectory-optimized design and the cruise-range-optimized design is only 0.49%. The effects of varying the waverider design Mach number on the optimized Mach 10 cruise-range design are studied. The waverider design Mach number is varied from Mach 6 to Mach 12 in increments of Mach 2. The difference in the optimized cruise-range performances is less than one percent; however, the anhedral angle decreases by 28% as the design Mach number is increased, which makes the Mach 12 design the most favorable in terms of structural considerations.
Neural network based optimal control of HVAC&R systems
NASA Astrophysics Data System (ADS)
Ning, Min
Heating, Ventilation, Air-Conditioning and Refrigeration (HVAC&R) systems have wide applications in providing a desired indoor environment for different types of buildings. It is well acknowledged that 30%-40% of the total energy generated is consumed by buildings, and HVAC&R systems alone account for more than 50% of building energy consumption. Low operational efficiency, especially under partial-load conditions, and poor control are part of the reasons for such high energy consumption. To improve energy efficiency, HVAC&R systems should be properly operated to maintain a comfortable and healthy indoor environment under dynamic ambient and indoor conditions with the least energy consumption. This research focuses on the optimal operation of HVAC&R systems. The optimization problem is formulated and solved to find the optimal set points for the chilled water supply temperature, discharge air temperature and AHU (air handling unit) fan static pressure such that the indoor environment is maintained with the least chiller and fan energy consumption. To achieve this objective, a dynamic system model is developed first to simulate the system behavior under different control schemes and operating conditions. The system model is modular in structure, which includes a water-cooled vapor compression chiller model and a two-zone VAV system model. A fuzzy-set based extended transformation approach is then applied to investigate the uncertainties of this model caused by uncertain parameters and the sensitivities of the control inputs with respect to the model outputs of interest. A multi-layer feed-forward neural network is constructed and trained in unsupervised mode to minimize the cost function, which comprises the overall energy cost and a penalty cost when one or more constraints are violated. After training, the network is implemented as a supervisory controller to compute the optimal settings for the system. In order to implement the optimal set points predicted by the
Optimization-based design of a heat flux concentrator.
Peralta, Ignacio; Fachinotti, Víctor D; Ciarbonetti, Ángel A
2017-01-13
To gain control over the diffusive heat flux in a given domain, one needs to engineer a thermal metamaterial with a specific distribution of the generally anisotropic thermal conductivity throughout the domain. Until now, the appropriate conductivity distribution was usually determined using transformation thermodynamics; in this way, only a few particular cases of heat flux control in simple domains with simple boundary conditions have been studied. Thermal metamaterials based on optimization algorithms provide superior properties compared with those obtained using the previous methods. As a more general approach, we propose to define the heat control problem as an optimization problem in which we minimize the error in guiding the heat flux in a given way, taking as design variables the parameters that define the variable microstructure of the metamaterial. In the present study we numerically demonstrate the ability to manipulate heat flux by designing a device that concentrates thermal energy at its center without disturbing the temperature profile outside it.
Optimization of integer wavelet transforms based on difference correlation structures.
Li, Hongliang; Liu, Guizhong; Zhang, Zhongwei
2005-11-01
In this paper, a novel lifting integer wavelet transform based on difference correlation structure (DCCS-LIWT) is proposed. First, we establish a relationship between the performance of a linear predictor and the difference correlations of an image. The obtained results provide a theoretical foundation for the subsequent construction of the optimal lifting filters. Then, the optimal prediction lifting coefficients in the sense of least-square prediction error are derived. DCCS-LIWT puts heavy emphasis on the image's inherent dependence. A distinct feature of this method is the use of the variance-normalized autocorrelation function of the difference image to construct a linear predictor and adapt the predictor to varying image sources. The proposed scheme also allows respective calculations of the lifting filters for the horizontal and vertical orientations. Experimental evaluation shows that the proposed method produces better results than other well-known integer transforms for lossless image compression.
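The idea of a prediction lifting step whose coefficients minimize the squared prediction error can be sketched directly with a least-squares solve. Note this is a generic illustration, not DCCS-LIWT itself: the paper derives its coefficients from the variance-normalized autocorrelation of the difference image, whereas the sketch below solves the equivalent least-squares problem on the raw signal, and the two-tap neighbor predictor is an assumed filter shape.

```python
import numpy as np

def optimal_lifting_predictor(x):
    """Least-squares prediction lifting step: predict each odd sample
    from its two even neighbors, choosing the coefficients that
    minimize the squared prediction error over the whole signal."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    n = min(len(odd), len(even) - 1)
    A = np.column_stack([even[:n], even[1:n + 1]])   # left/right even neighbors
    coeffs, *_ = np.linalg.lstsq(A, odd[:n], rcond=None)
    detail = odd[:n] - A @ coeffs                    # high-pass residual
    return coeffs, detail

# On a linear ramp the optimal predictor is the average of the two
# neighbors, and the detail (prediction-error) signal vanishes.
coeffs, detail = optimal_lifting_predictor(np.arange(16.0))
```

A smaller detail signal means fewer bits after entropy coding, which is why adapting the predictor to the source statistics helps lossless compression.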
Adaptive Estimation of Intravascular Shear Rate Based on Parameter Optimization
NASA Astrophysics Data System (ADS)
Nitta, Naotaka; Takeda, Naoto
2008-05-01
The relationships between the intravascular wall shear stress, controlled by flow dynamics, and the progression of arteriosclerotic plaque have been clarified by various studies. Since the shear stress is determined by the viscosity coefficient and shear rate, both factors must be estimated accurately. In this paper, an adaptive method for improving the accuracy of quantitative shear rate estimation was investigated. First, the parameter dependence of the estimated shear rate was investigated in terms of the differential window width and the number of averaged velocity profiles, based on simulation and experimental data, and then the shear rate calculation was optimized. The optimized result revealed that the proposed adaptive method of shear rate estimation was effective for improving the accuracy of shear rate calculation.
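The two parameters the abstract tunes, the differential window width and the number of averaged velocity profiles, can be made concrete with a simple estimator: average the profiles, then take a central difference across the window. This is an illustrative sketch of that estimator, not the authors' exact formulation, and the Couette-like test profile is hypothetical.

```python
import numpy as np

def estimate_shear_rate(profiles, dy, window=3):
    """Shear rate du/dy from velocity profiles: average several profiles
    to suppress noise, then central-difference across a differential
    window of `window` samples (wider windows smooth more but blur
    sharp gradients)."""
    mean_u = np.mean(profiles, axis=0)      # average the velocity profiles
    half = window // 2
    # central difference spanning the differential window
    return (mean_u[2 * half:] - mean_u[:-2 * half]) / (2 * half * dy)

# Couette-like linear profile u = 10*y: the true shear rate is 10 (1/s)
# everywhere, so the estimator should return a flat profile of 10.
y = np.linspace(0.0, 1e-2, 51)              # 0-10 mm depth axis
profiles = np.array([10.0 * y for _ in range(8)])
gamma = estimate_shear_rate(profiles, dy=y[1] - y[0])
```

Multiplying the estimated shear rate by a viscosity coefficient then gives the wall shear stress the abstract is ultimately after.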
R2-Based Multi/Many-Objective Particle Swarm Optimization
Toscano, Gregorio; Barron-Zambrano, Jose Hugo; Tello-Leal, Edgar
2016-01-01
We propose to couple the R2 performance measure and Particle Swarm Optimization in order to handle multi/many-objective problems. Our proposal shows that, through a well-designed interaction process, we could keep the metaheuristic almost unaltered, and through the R2 performance measure we used neither an external archive nor Pareto dominance to guide the search. The proposed approach is validated using several test problems and performance measures commonly adopted in the specialized literature. Results indicate that the proposed algorithm produces results that are competitive with respect to those obtained by four well-known MOEAs. Additionally, we validate our proposal on many-objective optimization problems. In these problems, our approach showed its main strength, since it could outperform another well-known indicator-based MOEA. PMID:27656200
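The unary R2 indicator used to guide such a search is commonly defined with the weighted Tchebycheff utility: for each weight vector take the best scalarized value any solution achieves, then average over the weight vectors (lower is better for minimization). The weight vectors and the two toy fronts below are illustrative, not taken from the paper.

```python
def r2_indicator(archive, weights, ideal):
    """Unary R2 indicator with the weighted Tchebycheff utility:
    R2 = (1/|W|) * sum over w of min over a of max_i w_i*|z*_i - a_i|.
    Smaller values mean the set sits closer to the ideal point z*."""
    total = 0.0
    for w in weights:
        total += min(
            max(wi * abs(zi - ai) for wi, zi, ai in zip(w, ideal, a))
            for a in archive
        )
    return total / len(weights)

# Two candidate fronts for a bi-objective minimization, ideal at the origin.
weights = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]
close = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]
far = [(0.6, 1.4), (1.0, 1.0), (1.4, 0.6)]
better = r2_indicator(close, weights, ideal=(0.0, 0.0))
worse = r2_indicator(far, weights, ideal=(0.0, 0.0))
```

Because each weight vector rewards a different region of the front, ranking particles by their R2 contribution pushes the swarm toward both convergence and spread without any Pareto-dominance bookkeeping.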
Physics-Based Prognostics for Optimizing Plant Operation
Leonard J. Bond; Don B. Jarrell
2005-03-01
Scientists at the Pacific Northwest National Laboratory (PNNL) have examined the necessity for optimization of energy plant operation using DSOM® (Decision Support Operation and Maintenance), and this has been deployed at several sites. This approach has been expanded to include a prognostics component and tested on a pilot-scale service water system, modeled on the design employed in a nuclear power plant. A key element in plant optimization is understanding and controlling the aging process of safety-specific nuclear plant components. This paper reports the development and demonstration of a physics-based approach to prognostic analysis that combines distributed computing, RF data links, the measurement of aging precursor metrics, and their correlation with degradation rate and projected machine failure.
Credibility theory based dynamic control bound optimization for reservoir flood limited water level
NASA Astrophysics Data System (ADS)
Jiang, Zhiqiang; Sun, Ping; Ji, Changming; Zhou, Jianzhong
2015-10-01
Dynamic control operation of the reservoir flood limited water level (FLWL) can well resolve the contradiction between reservoir flood control and beneficial operation, and it is an important measure for ensuring flood-control safety and realizing flood utilization. The dynamic control bound of the FLWL is a fundamental element in implementing reservoir dynamic control operation. In order to optimize the dynamic control bound of the FLWL while considering flood forecasting error, this paper took the forecasting error as a fuzzy variable and described it with credibility theory, which has emerged in recent years. By combining it with a flood forecasting error quantification model, a credibility-based fuzzy chance-constrained model for optimizing the dynamic control bound is proposed, and fuzzy simulation technology is used to solve the model. The FENGTAN reservoir in China was selected as a case study, and the results show that, compared with the original operation water level, the initial operation water level (IOWL) of FENGTAN reservoir can be raised by 4 m, 2 m, and 5.5 m, respectively, in the three division stages of the flood season, without increasing flood-control risk. In addition, the rationality and feasibility of the proposed forecasting error quantification model and the credibility-based dynamic control bound optimization model are verified by the calculation results of extreme risk theory.
Ecker, Brett L; McMillan, Matthew T; Asbun, Horacio J; Ball, Chad G; Bassi, Claudio; Beane, Joal D; Behrman, Stephen W; Berger, Adam C; Dickson, Euan J; Bloomston, Mark; Callery, Mark P; Christein, John D; Dixon, Elijah; Drebin, Jeffrey A; Castillo, Carlos Fernandez-Del; Fisher, William E; Fong, Zhi Ven; Haverick, Ericka; Hollis, Robert H; House, Michael G; Hughes, Steven J; Jamieson, Nigel B; Javed, Ammar A; Kent, Tara S; Kowalsky, Stacy J; Kunstman, John W; Malleo, Giuseppe; Poruk, Katherine E; Salem, Ronald R; Schmidt, Carl R; Soares, Kevin; Stauffer, John A; Valero, Vicente; Velu, Lavanniya K P; Watkins, Amarra A; Wolfgang, Christopher L; Zureikat, Amer H; Vollmer, Charles M
2017-06-07
The aim of this study was to identify the optimal fistula mitigation strategy following pancreaticoduodenectomy. The utility of technical strategies to prevent clinically relevant postoperative pancreatic fistula (CR-POPF) following pancreatoduodenectomy (PD) may vary by the circumstances of the anastomosis. The Fistula Risk Score (FRS) identifies a distinct high-risk cohort (FRS 7 to 10) that demonstrates substantially worse clinical outcomes. The value of various fistula mitigation strategies in these particular high-stakes cases has not been previously explored. This multinational study included 5323 PDs performed by 62 surgeons at 17 institutions. Mitigation strategies, including both technique related (ie, pancreatogastrostomy reconstruction; dunking; tissue patches) and the use of adjuvant strategies (ie, intraperitoneal drains; anastomotic stents; prophylactic octreotide; tissue sealants), were evaluated using multivariable regression analysis and propensity score matching. A total of 522 (9.8%) PDs met high-risk FRS criteria, with an observed CR-POPF rate of 29.1%. Pancreatogastrostomy, prophylactic octreotide, and omission of externalized stents were each associated with an increased rate of CR-POPF (all P < 0.001). In a multivariable model accounting for patient, surgeon, and institutional characteristics, the use of external stents [odds ratio (OR) 0.45, 95% confidence interval (95% CI) 0.25-0.81] and the omission of prophylactic octreotide (OR 0.49, 95% CI 0.30-0.78) were independently associated with decreased CR-POPF occurrence. In the propensity score matched cohort, an "optimal" mitigation strategy (ie, externalized stent and no prophylactic octreotide) was associated with a reduced rate of CR-POPF (13.2% vs 33.5%, P < 0.001). The scenarios identified by the high-risk FRS zone represent challenging anastomoses associated with markedly elevated rates of fistula. Externalized stents and omission of prophylactic octreotide, in the setting of
Optimizing Dendritic Cell-Based Approaches for Cancer Immunotherapy
Datta, Jashodeep; Terhune, Julia H.; Lowenfeld, Lea; Cintolo, Jessica A.; Xu, Shuwen; Roses, Robert E.; Czerniecki, Brian J.
2014-01-01
Dendritic cells (DC) are professional antigen-presenting cells uniquely suited for cancer immunotherapy. They induce primary immune responses, potentiate the effector functions of previously primed T-lymphocytes, and orchestrate communication between innate and adaptive immunity. The remarkable diversity of cytokine activation regimens, DC maturation states, and antigen-loading strategies employed in current DC-based vaccine design reflects an evolving, but incomplete, understanding of optimal DC immunobiology. In the clinical realm, existing DC-based cancer immunotherapy efforts have yielded encouraging but inconsistent results. Despite recent U.S. Food and Drug Administration (FDA) approval of DC-based sipuleucel-T for metastatic castration-resistant prostate cancer, clinically effective DC immunotherapy as monotherapy for a majority of tumors remains a distant goal. Recent work has identified strategies that may allow for more potent “next-generation” DC vaccines. Additionally, multimodality approaches incorporating DC-based immunotherapy may improve clinical outcomes. PMID:25506283
Yen, Amy Ming-Fang; Auvinen, Anssi; Schleutker, Johanna; Wu, Yi-Ying; Fann, Jean Ching-Yuan; Tammela, Teuvo; Chen, Sam Li-Sheng; Chiu, Sherry Yueh-Hsia; Chen, Hsiu-Hsi
2015-06-01
Risk-stratified screening for prostate cancer (PCa) with prostate-specific antigen (PSA) testing incorporating genetic variants has received some attention but has been scarcely investigated. We developed a model to stratify the Finnish population by different risk profiles related to genetic variants in order to optimize the screening policy. Data from the Finnish randomized controlled trial on screening for PCa with PSA testing were used to estimate a six-state Markov model of disease progression. Blood samples from Finnish men were used to assess the risk of PCa related to three genetic variants (rs4242382, rs138213197, and rs200331695). A risk score-based approach combined with a series of computer simulation models was applied to optimize individual screening policies. The 10-year risk of having progressive prostate cancer detected ranged from 43% in the top 5% risk group to approximately 11% in the bottom half of the population. Using the median group, screened every four years beginning at age 55, as the reference, the recommended age to begin screening was approximately 47 years for the top 5% risk group and 55 years for those in the lower 60% risk group; the recommended inter-screening interval was shortened for individuals in the high-risk group. The increased availability of genomic information allows the proposed multistate model to be more discriminating with respect to risk stratification and the suggested screening policy, particularly for the lowest risk groups. A multi-state genetic variant-based model was developed for further application to population risk stratification, optimizing the inter-screening interval and the age at which to begin PSA screening. A small subgroup of the population is likely to benefit from more intensive screening with an early start and a short interval, while half of the population is unlikely to benefit from such a protocol (compared with a four-year interval after age 55 years). © 2015 Wiley
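The abstract's risk-stratified policy can be illustrated as a simple mapping from risk percentile to starting age and inter-screening interval. This is only a sketch of the reported thresholds (start age ~47 for the top 5%, age 55 with a 4-year interval for the lower-risk majority); the 95/60 percentile cut points for the intermediate group and its interval values are assumptions for illustration, not outputs of the study's Markov model.

```python
def screening_policy(risk_percentile: float) -> dict:
    """Map a PCa risk percentile (0 = lowest, 100 = highest) to a
    suggested starting age and inter-screening interval (illustrative)."""
    if not 0.0 <= risk_percentile <= 100.0:
        raise ValueError("percentile must be in [0, 100]")
    if risk_percentile >= 95.0:        # top 5% risk group (reported: start ~47)
        return {"start_age": 47, "interval_years": 2}   # shortened interval (assumed value)
    if risk_percentile >= 60.0:        # intermediate group (assumed cut point)
        return {"start_age": 51, "interval_years": 3}   # interpolated for illustration
    return {"start_age": 55, "interval_years": 4}       # reference policy, lower 60%
```

A lower-risk man thus stays on the reference four-year schedule from age 55, while a top-5% man would begin around age 47 with a tighter interval.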
Mouton, S.; Ledoux, Y.; Teissandier, D.; Sebastian, P.
2010-06-15
A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautics field, this challenge translates into optimizing the design of the aircraft to decrease its global mass, which leads to the optimization of every constitutive part of the plane. This operation is even more delicate when the material used is a composite. In that case, it is necessary to find a compromise between the strength, the mass, and the manufacturing cost of the component. Because of these different kinds of design constraints, it is necessary to assist the engineer with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the different key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). The use of a genetic algorithm makes it possible to estimate the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used to compare possible industrialization alternatives. It is proposed to apply this method to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.
Comparative Pessimism or Optimism: Depressed Mood, Risk-Taking, Social Utility and Desirability.
Milhabet, Isabelle; Le Barbenchon, Emmanuelle; Cambon, Laurent; Molina, Guylaine
2015-03-05
Comparative optimism can be defined as a self-serving, asymmetric judgment of the future. It is often thought to be beneficial and socially accepted, whereas comparative pessimism is correlated with depression and socially rejected. Our goal was to examine the social acceptance of comparative optimism and the social rejection of comparative pessimism in two dimensions of social judgment, social desirability and social utility, considering attributions of dysphoria and risk-taking potential to outlooks on the future (studies 2 and 3). In three experiments, the participants assessed either one (study 1) or several (studies 2 and 3) fictional targets in two dimensions, social utility and social desirability. Targets exhibiting comparatively optimistic or pessimistic outlooks on the future were presented as non-depressed, depressed, or neither (control condition) (study 1); non-depressed or depressed (study 2); and non-depressed or in a control condition (study 3). Two significant results were obtained: (1) comparative pessimism was socially rejected on the social desirability dimension, which can be explained by its depressive feature; and (2) comparative optimism was socially accepted on the social utility dimension, which can be explained by the perception that comparatively optimistic individuals are potential risk-takers.
Analysis of a pharmaceutical risk sharing agreement based on the purchaser's total budget.
Zaric, Gregory S; O'Brien, Bernie J
2005-08-01
Many public and private healthcare payers use formularies as a tool for controlling drug costs and quality. Although the price per dose is often negotiated as part of the formulary listing, payers may still face unlimited financial risk if demand is much greater than expected at the time of listing. The requirement for drug manufacturers to submit a budget impact analysis as part of the drug approval process suggests that payers are concerned not only with the cost effectiveness of a proposed drug but also with the potential increase in total expenditures that may result from new formulary listings. In this paper we define and analyze a model for financial risk sharing based on the total budget. Our analysis focuses on optimal decision making by manufacturers in the presence of a specific risk sharing agreement. We derive a manufacturer's optimal statement of budget impact and discuss several properties of the optimal solution. (c) 2005 John Wiley & Sons, Ltd.
Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.
2010-01-01
Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998
Managing simulation-based training: A framework for optimizing learning, cost, and time
NASA Astrophysics Data System (ADS)
Richmond, Noah Joseph
This study provides a management framework for optimizing training programs for learning, cost, and time when using simulation based training (SBT) and reality based training (RBT) as resources. Simulation is shown to be an effective means for implementing activity substitution as a way to reduce risk. The risk profiles of 22 US Air Force vehicles are calculated, and the potential risk reduction is calculated under the assumption of perfect substitutability of RBT and SBT. Methods are subsequently developed to relax the assumption of perfect substitutability. The transfer effectiveness ratio (TER) concept is defined and modeled as a function of the quality of the simulator used, and the requirements of the activity trained. The Navy F/A-18 is then analyzed in a case study illustrating how learning can be maximized subject to constraints in cost and time, and also subject to the decision maker's preferences for the proportional and absolute use of simulation. Solution methods for optimizing multiple activities across shared resources are next provided. Finally, a simulation strategy including an operations planning program (OPP), an implementation program (IP), an acquisition program (AP), and a pedagogical research program (PRP) is detailed. The study provides the theoretical tools to understand how to leverage SBT, a case study demonstrating these tools' efficacy, and a set of policy recommendations to enable the US military to better utilize SBT in the future.
Weather forecast-based optimization of integrated energy systems.
Zavala, V. M.; Constantinescu, E. M.; Krause, T.; Anitescu, M.
2009-03-01
In this work, we establish an on-line optimization framework to exploit detailed weather forecast information in the operation of integrated energy systems, such as buildings and photovoltaic/wind hybrid systems. We first discuss how the use of traditional reactive operation strategies that neglect the future evolution of the ambient conditions can translate into high operating costs. To overcome this problem, we propose the use of a supervisory dynamic optimization strategy that can lead to more proactive and cost-effective operations. The strategy is based on the solution of a receding-horizon stochastic dynamic optimization problem. This permits the direct incorporation of economic objectives, statistical forecast information, and operational constraints. To obtain the weather forecast information, we employ a state-of-the-art forecasting model initialized with real meteorological data. The statistical ambient information is obtained from a set of realizations generated by the weather model executed in an operational setting. We present proof-of-concept simulation studies to demonstrate that the proposed framework can lead to significant savings (more than 18% reduction) in operating costs.
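The receding-horizon idea in this abstract can be sketched with a toy controller: at each step, a short-horizon optimization is solved against a forecast, only the first control move is applied, and the horizon rolls forward. The first-order thermal model, action set, and cost weights below are all invented for illustration and use a deterministic forecast rather than the paper's stochastic realizations.

```python
import itertools

def receding_horizon(T0, ambient_forecast, horizon=3, actions=(0, 1, 2),
                     a=0.3, b=2.0, price=0.2, setpoint=20.0):
    """Toy receding-horizon dispatch: enumerate all control sequences over a
    short horizon, apply only the first move of the best one, roll forward."""
    def step(T, T_amb, u):
        return T + a * (T_amb - T) + b * u          # simple thermal dynamics

    T, history, total_cost = T0, [], 0.0
    for t in range(len(ambient_forecast) - horizon):
        best_u, best_cost = actions[0], float("inf")
        for seq in itertools.product(actions, repeat=horizon):
            Tc, cost = T, 0.0
            for k, u in enumerate(seq):
                Tc = step(Tc, ambient_forecast[t + k], u)
                cost += price * u + (Tc - setpoint) ** 2   # energy + comfort penalty
            if cost < best_cost:
                best_cost, best_u = cost, seq[0]
        T = step(T, ambient_forecast[t], best_u)    # apply first move only
        total_cost += price * best_u + (T - setpoint) ** 2
        history.append(T)
    return history, total_cost
```

With a cold-ambient forecast, the controller proactively heats toward the setpoint instead of reacting after the temperature has already drifted, which is the behavior the abstract contrasts with reactive strategies.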
Optimal sensor placement using FRFs-based clustering method
NASA Astrophysics Data System (ADS)
Li, Shiqi; Zhang, Heng; Liu, Shiping; Zhang, Zhe
2016-12-01
The purpose of this work is to develop an optimal sensor placement method by selecting the most relevant degrees of freedom as actual measurement positions. Based on the observation matrix of a structure's frequency response, two optimality criteria are used to avoid information redundancy among the candidate degrees of freedom. Using principal component analysis, the frequency response matrix can be decomposed into principal directions and their corresponding singular values. A relatively small number of principal directions will retain a system's dominant response information. According to the dynamic similarity of each degree of freedom, the k-means clustering algorithm is used to classify the degrees of freedom, and the effective independence method deletes the redundant sensors within each cluster. Finally, two numerical examples and a modal test are included to demonstrate the efficiency of the derived method. It is shown that the proposed method provides a way to extract sub-optimal sensor sets, and the selected sensors are well distributed over the whole structure.
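The selection pipeline this abstract outlines can be sketched in a few steps: reduce a frequency-response-style matrix (rows = candidate DOFs) with an SVD/PCA step, cluster the DOFs by dynamic similarity with k-means, and keep one representative per cluster. The data below are synthetic, and the paper's effective-independence deletion is only approximated here by "keep the DOF closest to its cluster centroid".

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_freq, n_keep, n_sensors = 40, 120, 5, 6

H = rng.standard_normal((n_dof, n_freq))   # stand-in for the FRF observation matrix
U, s, Vt = np.linalg.svd(H, full_matrices=False)
features = U[:, :n_keep] * s[:n_keep]      # project DOFs onto dominant principal directions

# plain Lloyd's k-means with deterministic initialization on the first k rows
centroids = features[:n_sensors].copy()
for _ in range(50):
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    for k in range(n_sensors):
        if np.any(labels == k):            # keep old centroid if a cluster empties
            centroids[k] = features[labels == k].mean(axis=0)

# one representative sensor per cluster: the DOF closest to its centroid
selected = []
for k in range(n_sensors):
    members = np.where(labels == k)[0]
    if members.size:
        d = np.linalg.norm(features[members] - centroids[k], axis=1)
        selected.append(int(members[d.argmin()]))
```

Because each sensor comes from a different cluster of dynamically similar DOFs, the selected set tends to spread across the structure, mirroring the distribution property the abstract reports.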
Optimization-based mesh correction with volume and convexity constraints
D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; Bochev, Pavel; Shashkov, Mikhail
2016-02-24
In this study, we consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.
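A one-dimensional analogue of the volume-correction problem the abstract formulates can be solved directly: move the interior nodes of a mesh as little as possible (least squares) while forcing each cell length to match a prescribed "volume". SciPy's SLSQP solver stands in for the paper's SQP method; the mesh, target volumes, and fixed endpoints are made up for the illustration.

```python
import numpy as np
from scipy.optimize import minimize

src = np.linspace(0.0, 1.0, 11)                 # source mesh: 10 equal cells on [0, 1]
vols = np.full(10, 0.1)
vols[:5] += 0.02
vols[5:] -= 0.02                                # prescribed cell volumes (still sum to 1)

def objective(xi):
    """Squared distance of the interior nodes to the source mesh."""
    return float(np.sum((xi - src[1:-1]) ** 2))

def cell_defect(xi):
    """First 9 cell-length constraints; the last cell follows automatically
    because the endpoints are held fixed and the volumes sum to the domain."""
    x = np.concatenate(([0.0], xi, [1.0]))
    return np.diff(x)[:-1] - vols[:-1]

res = minimize(objective, src[1:-1], method="SLSQP",
               constraints=[{"type": "eq", "fun": cell_defect}])
x_new = np.concatenate(([0.0], res.x, [1.0]))   # corrected mesh
```

The equality constraints here are linear and the objective quadratic, so SLSQP converges to the unique corrected mesh; the paper's setting adds mesh-validity and convexity constraints and a multigrid preconditioner that this sketch omits.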
Robustness-Based Design Optimization Under Data Uncertainty
NASA Technical Reports Server (NTRS)
Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence
2010-01-01
This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to the solutions of the design problem that are least sensitive to variations in the input random variables.
The effect of target group size on risk judgments and comparative optimism: the more, the riskier.
Price, Paul C; Smith, Andrew R; Lench, Heather C
2006-03-01
In 5 experiments, college students exhibited a group size effect on risk judgments. As the number of individuals in a target group increased, so did participants' judgments of the risk of the average member of the group for a variety of negative life events. This happened regardless of whether the stimuli consisted of photographs of real peers or stick-figure representations of peers. As a result, the degree to which participants exhibited comparative optimism (i.e., judged themselves to be at lower risk than their peers) also increased as the size of the comparison group increased. These results suggest that the typical comparative optimism effect reported so often in the literature might be, at least in part, a group size effect. Additional results include a group size effect on judgments of the likelihood that the average group member will experience positive and neutral events and a group size effect on perceptual judgments of the heights of stick figures. These latter results, in particular, support the existence of a simple, general cognitive mechanism that integrates stimulus numerosity into quantitative judgments about that stimulus.
Muteki, Koji; Swaminathan, Vidya; Sekulic, Sonja S; Reid, George L
2011-12-01
In pharmaceutical tablet manufacturing processes, a major source of disturbance affecting drug product quality is the (lot-to-lot) variability of the incoming raw materials. A novel modeling and process optimization strategy that compensates for raw material variability is presented. The approach involves building partial least squares models that combine raw material attributes and tablet process parameters and relate these to final tablet attributes. The resulting models are used in an optimization framework to then find optimal process parameters which can satisfy all the desired requirements for the final tablet attributes, subject to the incoming raw material lots. In order to de-risk the effect of potential (lot-to-lot) raw material variability on drug product quality, the effect of raw material lot variability on the final tablet attributes was investigated using a raw material database containing a large number of lots. In this way, the raw material variability, optimal process parameter space and tablet attributes are correlated with each other and offer the opportunity of simulating a variety of changes in silico without actually performing experiments. The connectivity obtained between the three sources of variability (materials, parameters, attributes) can be considered a design space consistent with Quality by Design principles, which is defined by the ICH-Q8 guidance (FDA 2006). The effectiveness of the methodologies is illustrated through a common industrial tablet manufacturing case study.
The role of hope and optimism in suicide risk for American Indians/Alaska Natives.
O'Keefe, Victoria M; Wingate, LaRicka R
2013-12-01
There are some American Indian/Alaska Native communities that exhibit high rates of suicide. The interpersonal theory of suicide (Joiner, 2005) posits that lethal suicidal behavior is likely preceded by the simultaneous presence of thwarted belongingness, perceived burdensomeness, and acquired capability. Past research has shown that hope and optimism are negatively related to suicidal ideation, some of the constructs in the interpersonal theory of suicide, and suicide risk for the general population. This is the first study to investigate hope and optimism in relation to suicidal ideation, thwarted belongingness, perceived burdensomeness, and acquired capability for American Indians/Alaska Natives. Results showed that hope and optimism negatively predicted thwarted belongingness, perceived burdensomeness, and suicidal ideation. However, these results were not found for acquired capability. Overall, this study suggests that higher levels of hope and optimism are associated with lower levels of suicidal ideation, thwarted belongingness, and perceived burdensomeness in this American Indian/Alaska Native sample. © 2013 The American Association of Suicidology.
Beard, Mary K
2012-01-01
To review current use of bisphosphonates as first-line therapy for osteoporosis, with an emphasis on the importance of patient compliance and persistence. The US National Library of Medicine was used to obtain the relevant information on current bisphosphonate treatment for osteoporosis management, and patient compliance and persistence with treatment. Bisphosphonates have demonstrated efficacy in fracture risk reduction, although differences may exist with respect to both onset of action and the site of fracture risk reduction. Good compliance and persistence with osteoporosis therapy is needed to reduce fracture risk, but currently the willingness of patients to conform to their prescribed course of treatment is suboptimal. Intermittent dosing schedules have been developed to facilitate ease of medication-taking in order to help improve rates of compliance and persistence. When primary care physicians provide patients with information about the established efficacy and safety of medications, as well as clarifying the crucial link between continued, consistent treatment and fracture risk reduction, patients are more likely to understand the importance of taking their medications consistently in order to maximize the effectiveness of the therapy. A therapy that provides vertebral and nonvertebral efficacy, is well-tolerated, and offers a flexible dosing regimen is likely to enhance patient compliance and persistence, and provide optimal fracture protection. Numerous studies have consistently demonstrated that medication compliance and persistence are well-correlated with fracture risk reduction.
Research on Topology Optimization of Truss Structures Based on the Improved Group Search Optimizer
NASA Astrophysics Data System (ADS)
Haobin, Xie; Feng, Liu; Lijuan, Li; Chun, Wang
2010-05-01
In this paper, a novel optimization algorithm, named the group search optimizer (GSO), is applied to truss structure topology optimization. The group search optimizer is improved in two aspects: using harmony memory and adhering to the boundary. Two topology methods, heuristic topology and discretization of topology variables, are incorporated with GSO to make sure that the topology optimization works well. At the end of the paper, two numerical examples are used to test the improved GSO. Calculation results show that the improved GSO is feasible and robust for truss topology optimization.
Shieh, Yiwey; Eklund, Martin; Madlensky, Lisa; Sawyer, Sarah D; Thompson, Carlie K; Stover Fiscalini, Allison; Ziv, Elad; Van't Veer, Laura J; Esserman, Laura J; Tice, Jeffrey A
2017-01-01
Ongoing controversy over the optimal approach to breast cancer screening has led to discordant professional society recommendations, particularly in women age 40 to 49 years. One potential solution is risk-based screening, where decisions around the starting age, stopping age, frequency, and modality of screening are based on individual risk to maximize the early detection of aggressive cancers and minimize the harms of screening through optimal resource utilization. We present a novel approach to risk-based screening that integrates clinical risk factors, breast density, a polygenic risk score representing the cumulative effects of genetic variants, and sequencing for moderate- and high-penetrance germline mutations. We demonstrate how thresholds of absolute risk estimates generated by our prediction tools can be used to stratify women into different screening strategies (biennial mammography, annual mammography, annual mammography with adjunctive magnetic resonance imaging, defer screening at this time) while informing the starting age of screening for women age 40 to 49 years. Our risk thresholds and corresponding screening strategies are based on current evidence but need to be tested in clinical trials. The Women Informed to Screen Depending On Measures of risk (WISDOM) Study, a pragmatic, preference-tolerant randomized controlled trial of annual vs personalized screening, will study our proposed approach. WISDOM will evaluate the efficacy, safety, and acceptability of risk-based screening beginning in the fall of 2016. The adaptive design of this trial allows continued refinement of our risk thresholds as the trial progresses, and we discuss areas where we anticipate emerging evidence will impact our approach.
Optimization and determination of polycyclic aromatic hydrocarbons in biochar-based fertilizers.
Chen, Ping; Zhou, Hui; Gan, Jay; Sun, Mingxing; Shang, Guofeng; Liu, Liang; Shen, Guoqing
2015-03-01
The agronomic benefit of biochar has attracted widespread attention to biochar-based fertilizers. However, the inevitable presence of polycyclic aromatic hydrocarbons in biochar is a matter of concern because of the health and ecological risks of these compounds. The strong adsorption of polycyclic aromatic hydrocarbons to biochar complicates their analysis and extraction from biochar-based fertilizers. In this study, we optimized and validated a method for determining the 16 priority polycyclic aromatic hydrocarbons in biochar-based fertilizers. Results showed that accelerated solvent extraction exhibited high extraction efficiency. Based on a Box-Behnken design with a triplicate central point, accelerated solvent extraction was used under the following optimal operational conditions: extraction temperature of 78°C, extraction time of 17 min, and two static cycles. The optimized method was validated by assessing the linearity of analysis, limit of detection, limit of quantification, recovery, and application to real samples. The results showed that the 16 polycyclic aromatic hydrocarbons exhibited good linearity, with a correlation coefficient of 0.996. The limits of detection varied between 0.001 (phenanthrene) and 0.021 mg/g (benzo[ghi]perylene), and the limits of quantification varied between 0.004 (phenanthrene) and 0.069 mg/g (benzo[ghi]perylene). The relative recoveries of the 16 polycyclic aromatic hydrocarbons were 70.26-102.99%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Optimal Constellation Design for Satellite Based Augmentation System
NASA Astrophysics Data System (ADS)
Kawano, Isao
The Global Positioning System (GPS) is widely used in daily life, for instance in car navigation. The Wide Area Augmentation System (WAAS) and Local Area Augmentation System (LAAS) have been proposed to provide GPS with better navigation accuracy and integrity capability. A Satellite Based Augmentation System (SBAS) is a kind of WAAS, and the Multi-functional Transport Satellite (MTSAT) has been developed in Japan. To improve navigation accuracy most efficiently, augmentation satellites should be placed so as to minimize the Geometric Dilution of Precision (GDOP) of the constellation. In this paper the result of an optimal constellation design for SBAS is shown.
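The GDOP figure of merit that this abstract minimizes is directly computable: with unit line-of-sight vectors u_i from the receiver to each satellite, the geometry matrix G has rows [u_i, 1] and GDOP = sqrt(trace((GᵀG)⁻¹)). The sample azimuth/elevation constellation below is invented for illustration, not taken from the paper's design.

```python
import numpy as np

def gdop(az_deg, el_deg):
    """GDOP from satellite azimuths/elevations (degrees), ENU frame."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    los = np.column_stack((np.cos(el) * np.sin(az),     # east component
                           np.cos(el) * np.cos(az),     # north component
                           np.sin(el)))                 # up component
    G = np.column_stack((los, np.ones(len(az))))        # rows [u_i, 1]
    return float(np.sqrt(np.trace(np.linalg.inv(G.T @ G))))

# four satellites at varied elevations, then the same set plus one at zenith
base = gdop([0, 90, 180, 270], [10, 30, 50, 70])
plus_zenith = gdop([0, 90, 180, 270, 0], [10, 30, 50, 70, 90])
# adding a satellite adds a rank-one term to G^T G, so GDOP cannot increase
```

This is why augmentation-satellite placement is posed as a GDOP minimization: each added or repositioned satellite changes GᵀG, and well-spread geometry shrinks the trace of its inverse.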
Morphing-Based Shape Optimization in Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Rousseau, Yannick; Men'Shov, Igor; Nakamura, Yoshiaki
In this paper, a Morphing-based Shape Optimization (MbSO) technique is presented for solving Optimum-Shape Design (OSD) problems in Computational Fluid Dynamics (CFD). The proposed method couples Free-Form Deformation (FFD) and Evolutionary Computation, and, as its name suggests, relies on the morphing of shape and computational domain, rather than direct shape parameterization. Advantages of the FFD approach compared to traditional parameterization are first discussed. Then, examples of shape and grid deformations by FFD are presented. Finally, the MbSO approach is illustrated and applied through an example: the design of an airfoil for a future Mars exploration airplane.
An internet graph model based on trade-off optimization
NASA Astrophysics Data System (ADS)
Alvarez-Hamelin, J. I.; Schabanel, N.
2004-03-01
This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou in [CITE] to grow a random tree with a heavy-tailed degree distribution. We propose here a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.
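The underlying trade-off growth rule (the tree-growing version credited to Fabrikant, Koutsoupias and Papadimitriou) can be sketched directly: each new node, placed uniformly at random, attaches to the existing node j that minimizes alpha * dist(i, j) + hops(j), where hops is the hop distance to the root. The value of alpha and the node count below are illustrative; the paper's generalization to non-tree graphs is not reproduced here.

```python
import random

def fkp_tree(n, alpha=10.0, seed=1):
    """Grow a trade-off tree: node i joins the j minimizing
    alpha * euclidean_dist(i, j) + hop_distance(j, root)."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    hops = [0]                        # node 0 is the root
    edges = []
    for i in range(1, n):
        xi, yi = pts[i]
        j = min(range(i),
                key=lambda j: alpha * ((xi - pts[j][0]) ** 2 +
                                       (yi - pts[j][1]) ** 2) ** 0.5 + hops[j])
        edges.append((i, j))
        hops.append(hops[j] + 1)
    return edges, hops
```

Small alpha makes centrality (low hop count) dominate and degrees concentrate on hub nodes, while large alpha makes geometry dominate; the heavy-tailed degree distributions appear in the intermediate regime.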
A filter-based evolutionary algorithm for constrained optimization.
Clevenger, Lauren M.; Hart, William Eugene; Ferguson, Lauren Ann
2004-02-01
We introduce a filter-based evolutionary algorithm (FEA) for constrained optimization. The filter used by an FEA explicitly imposes the concept of dominance on a partially ordered solution set. We show that the algorithm is provably robust for both linear and nonlinear problems and constraints. FEAs use a finite pattern of mutation offsets, and our analysis is closely related to recent convergence results for pattern search methods. We discuss how properties of this pattern impact the ability of an FEA to converge to a constrained local optimum.
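The filter mechanism this abstract builds on is easy to state concretely: each solution is a pair (objective value f, constraint violation h); the filter keeps only mutually non-dominated pairs, and a candidate is accepted iff no filter entry dominates it. This is a sketch of the generic filter concept with made-up points, not the paper's full evolutionary algorithm or its mutation pattern.

```python
def dominates(a, b):
    """a dominates b if it is no worse in both f and h and differs from b."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def filter_accept(filt, cand):
    """Return the updated filter: cand enters only if nothing dominates it,
    and any entries that cand dominates are removed."""
    if any(dominates(p, cand) for p in filt):
        return filt                              # candidate rejected
    return [p for p in filt if not dominates(cand, p)] + [cand]

filt = []
for point in [(5.0, 0.3), (4.0, 0.5), (6.0, 0.0), (4.5, 0.4), (3.0, 0.0)]:
    filt = filter_accept(filt, point)
# the last point is feasible (h = 0) and best in f, so it sweeps the filter
```

Imposing this dominance order on (f, h) pairs is what lets a filter method accept either an objective improvement or a feasibility improvement, which is the property the convergence analysis relies on.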
Zou, Feng; Chen, Debao; Wang, Jiangtao
2016-01-01
An improved teaching-learning-based optimization that incorporates the social character of PSO (TLBO-PSO), considering the influence of the teacher's behavior on the students and the mean grade of the class, is proposed in this paper to find global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified: the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO might stop when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the process of removing duplicate individuals in the original TLBO is not adopted in the improved algorithm. Moreover, the probability of local convergence of the improved method is decreased by a mutation operator. The effectiveness of the proposed method is tested on some benchmark functions, and the results are competitive with respect to some other methods.
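The teacher phase the abstract describes can be sketched as follows: each learner moves from its old position toward the teacher (current best), pulled away from a multiple of the class mean, with greedy replacement. The sphere test function, population size, and teaching-factor handling are illustrative stand-ins, not the paper's benchmark setup, and the PSO-style modification and mutation operator are omitted.

```python
import random

def sphere(x):
    """Simple benchmark: f(x) = sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def tlbo_teacher_phase(pop, iters=50, seed=3):
    """Run only the (classic) teacher phase of TLBO with greedy acceptance."""
    rng = random.Random(seed)
    dim = len(pop[0])
    for _ in range(iters):
        teacher = min(pop, key=sphere)                    # best of the class
        mean = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]
        for i, x in enumerate(pop):
            tf = rng.choice((1, 2))                       # teaching factor
            cand = [x[d] + rng.random() * (teacher[d] - tf * mean[d])
                    for d in range(dim)]
            if sphere(cand) < sphere(x):                  # greedy replacement
                pop[i] = cand
    return min(sphere(x) for x in pop)

rng = random.Random(0)
pop = [[rng.uniform(-5, 5) for _ in range(5)] for _ in range(20)]
start_best = min(sphere(x) for x in pop)
end_best = tlbo_teacher_phase(pop)
```

Note the stagnation case the paper targets: once the mean equals the teacher (with tf = 1), the step teacher - tf * mean vanishes, which is exactly why the improved method mixes the old, mean, and best positions differently.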
LEE, Chang Jun
2015-01-01
In research on plant layout optimization, the main goal is to minimize the costs of pipelines and pumping between connected equipment under various constraints. A shortcoming of previous studies, however, is that various heuristics and safety regulations have not been transformed into mathematical equations. For example, proper safety distances between pieces of equipment must be maintained to prevent dangerous accidents in a complex plant. Moreover, most studies have handled single-floor plants, although many multi-floor plants have been constructed over the last decade. An algorithm handling various regulations and multi-floor plants is therefore needed. In this study, a Mixed Integer Non-Linear Programming (MINLP) problem including safety distances, maintenance spaces, etc. is formulated from mathematical equations. The objective function is the sum of pipeline and pumping costs, and various safety and maintenance issues are transformed into inequality or equality constraints. This problem is very hard to solve because of its complex nonlinear constraints, which rule out conventional MINLP solvers that rely on derivatives of the equations. In this study, the Particle Swarm Optimization (PSO) technique is therefore employed. An ethylene oxide plant is used to verify the efficacy of the approach. PMID:26027708
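The derivative-free search the abstract motivates can be sketched as follows; the swarm parameters (inertia `w`, acceleration coefficients `c1`, `c2`) and the idea of folding safety-distance constraints into a penalized objective are illustrative assumptions, not values from the paper.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization sketch. PSO needs no derivatives,
    so it can handle nonsmooth penalty terms (e.g. safety-distance constraints
    added to the cost) that defeat derivative-based MINLP solvers."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]              # personal best positions
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # global best
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d] + c1 * r1 * (pbest[i][d] - x[d])
                            + c2 * r2 * (gbest[d] - x[d]))
                # clamp to the search box
                x[d] = min(max(x[d] + vs[i][d], bounds[d][0]), bounds[d][1])
            fx = f(x)
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[:], fx
    return gbest, gbest_f
```

For a layout problem, `f` would be the pipeline-plus-pumping cost with penalty terms for violated safety or maintenance constraints.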
Zou, Feng; Chen, Debao; Wang, Jiangtao
2016-01-01
An improved teaching-learning-based optimization combined with the social character of PSO (TLBO-PSO), which considers the teacher's behavioral influence on the students and the mean grade of the class, is proposed in this paper to find global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified: the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO might stop when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the process of removing duplicate individuals in the original TLBO is not adopted in the improved algorithm. Moreover, the probability of local convergence of the improved method is decreased by a mutation operator. The effectiveness of the proposed method is tested on benchmark functions, and the results are competitive with those of other methods. PMID:27057157
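The modified teacher-phase update described above (new position from the old position, the class mean, and the teacher) can be sketched as below; the exact combination coefficients are illustrative assumptions, with only the standard TLBO teaching factor kept.

```python
import random

def teacher_phase_step(x, mean_pos, teacher_pos, tf=None, rng=random):
    """One teacher-phase update per the abstract's description: the new
    position depends on the old position, the class mean, and the best
    (teacher) position. Coefficients here follow standard TLBO and are
    illustrative, not taken from the paper."""
    if tf is None:
        tf = rng.choice([1, 2])   # teaching factor, 1 or 2 as in standard TLBO
    r = rng.random()
    return [xi + r * (ti - tf * mi)
            for xi, mi, ti in zip(x, mean_pos, teacher_pos)]
```

Note that when the mean equals the teacher and `tf == 1`, this update leaves the individual unchanged, which is exactly the stagnation the improved method addresses via its extra terms and mutation operator.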
Head, Linden; Nessim, Carolyn; Boyd, Kirsty Usher
2017-01-01
Background Bilateral prophylactic mastectomy (BPM) has shown breast cancer risk reduction in high-risk/BRCA+ patients. However, priority of active cancers coupled with inefficient use of operating room (OR) resources presents challenges in offering BPM in a timely manner. To address these challenges, a rapid access prophylactic mastectomy and immediate reconstruction (RAPMIR) program was developed. The purpose of this study was to evaluate RAPMIR with regard to access to care and efficiency. Methods We retrospectively reviewed the cases of all high-risk/BRCA+ patients having had BPM between September 2012 and August 2014. Patients were divided into 2 groups: those managed through the traditional model and those managed through the RAPMIR model. RAPMIR leverages 2 concurrently running ORs with surgical oncology and plastic surgery moving between rooms to complete 3 combined BPMs with immediate reconstruction in addition to 1–2 independent cases each operative day. RAPMIR eligibility criteria included high-risk/BRCA+ status; BPM with immediate, implant-based reconstruction; and day surgery candidacy. Wait times, case volumes and patient throughput were measured and compared. Results There were 16 traditional patients and 13 RAPMIR patients. Mean wait time (days from referral to surgery) for RAPMIR was significantly shorter than for the traditional model (165.4 v. 309.2 d, p = 0.027). Daily patient throughput (4.3 v. 2.8), plastic surgery case volume (3.7 v. 1.6) and surgical oncology case volume (3.0 v. 2.2) were significantly greater in the RAPMIR model than the traditional model (p = 0.003, p < 0.001 and p = 0.015, respectively). Conclusion A multidisciplinary model with optimized scheduling has the potential to improve access to care and optimize resource utilization. PMID:28234588
Estimation of the effects of normal tissue sparing using equivalent uniform dose-based optimization
Senthilkumar, K.; Maria Das, K. J.; Balasubramanian, K.; Deka, A. C.; Patil, B. R.
2016-01-01
In this study, we estimate the effects of normal tissue sparing between intensity modulated radiotherapy (IMRT) treatment plans generated with and without a dose-volume (DV)-based physical cost function using equivalent uniform dose (EUD). Twenty prostate cancer patients were retrospectively selected for this study. For each patient, two IMRT plans were generated: (i) EUD-based optimization with a DV-based physical cost function to control inhomogeneity (EUD-WithDV) and (ii) EUD-based optimization without a DV-based physical cost function to allow inhomogeneity (EUD-WithoutDV). The plans were prescribed a dose of 72 Gy in 36 fractions to the planning target volume (PTV). Mean dose, D30%, and D5% were evaluated for all organs at risk (OARs). Normal tissue complication probability was also calculated for all OARs using BioSuite software. The average PTV volume for all patients was 103.02 ± 27 cm³. The PTV mean dose was 73.67 ± 1.7 Gy for EUD-WithDV plans and 80.42 ± 2.7 Gy for EUD-WithoutDV plans. The PTV volume receiving more than 115% of the prescription dose was negligible in EUD-WithDV plans, whereas it was 28% in EUD-WithoutDV plans. In almost all dosimetric parameters evaluated, dose to OARs was higher in EUD-WithDV plans than in EUD-WithoutDV plans. Allowing an inhomogeneous dose (EUD-WithoutDV) inside the target thus achieves better normal tissue sparing than a homogeneous dose distribution (EUD-WithDV), and this inhomogeneous dose could be intentionally placed on the high-risk volume to achieve high local control. It was therefore concluded that EUD-optimized plans offer the added advantage of lower OAR dose as well as selectively boosting dose to the gross tumor volume. PMID:27217624
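EUD-based optimization scores a dose distribution with the generalized equivalent uniform dose, commonly written EUD = (Σ_i v_i d_i^a)^(1/a). A minimal sketch, assuming the standard Niemierko form (the abstract does not spell out its exact formula):

```python
def eud(doses, volumes, a):
    """Generalized equivalent uniform dose:
    EUD = (sum_i v_i * d_i**a) ** (1/a),
    where v_i are fractional volumes summing to 1 and a is the
    tissue-specific parameter (a < 0 for targets, a >= 1 for serial OARs)."""
    assert abs(sum(volumes) - 1.0) < 1e-9, "fractional volumes must sum to 1"
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

# A uniform 72 Gy distribution has EUD = 72 Gy for any a, by construction.
print(eud([72.0, 72.0], [0.5, 0.5], a=-10))
```

An optimizer built on this score penalizes cold spots in the target strongly (large negative `a`) while remaining tolerant of hot spots, which is why a DV-based cost function is needed when homogeneity is desired.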
Yau, Christina; Sninsky, John; Kwok, Shirley; Wang, Alice; Degnim, Amy; Ingle, James N; Gillett, Cheryl; Tutt, Andrew; Waldman, Fred; Moore, Dan; Esserman, Laura; Benz, Christopher C
2013-01-01
Outcome predictors in use today are prognostic only for hormone receptor-positive (HRpos) breast cancer. Although microarray-derived multigene predictors of hormone receptor-negative (HRneg) and/or triple negative (Tneg) breast cancer recurrence risk are emerging, to date none have been transferred to clinically suitable assay platforms (for example, RT-PCR) or validated against formalin-fixed paraffin-embedded (FFPE) HRneg/Tneg samples. Multiplexed RT-PCR was used to assay two microarray-derived HRneg/Tneg prognostic signatures (IR-7 and Buck-4) in a pooled FFPE collection of 139 chemotherapy-naïve HRneg breast cancers. The prognostic value of the RT-PCR measured gene signatures was evaluated as continuous and dichotomous variables, and in conditional risk models incorporating clinical parameters. An optimized five-gene index was derived by evaluating gene combinations from both signatures. The RT-PCR measured IR-7 and Buck-4 signatures proved prognostic as continuous variables, and conditional risk modeling chose nodal status, the IR-7 signature, and tumor grade as significant predictors of distant recurrence (DR). From the Buck-4 and IR-7 signatures, an optimized five-gene (TNFRSF17, CLIC5, HLA-F, CXCL13, XCL2) predictor was generated, referred to as the Integrated Cytokine Score (ICS) based on its functional pathway linkage through interferon-γ and IL-10. Across all FFPE cases, the ICS was prognostic as either a continuous or dichotomous variable, and conditional risk modeling selected nodal status and ICS as DR predictors. Further dichotomization of node-negative/ICS-low FFPE cases identified a subset of low-grade HRneg tumors with <10% 5-year DR risk. The prognostic value of the ICS was reaffirmed in two previously studied microarray-assayed cohorts containing 274 node-negative and chemotherapy-naïve HRneg breast cancers, including 95 Tneg cases, where it proved prognostically independent of Tneg molecular subtyping. In additional HRneg/Tneg microarray assayed
Model based optimization of wind erosion control by tree shelterbelt for suitable land management
NASA Astrophysics Data System (ADS)
Bartus, M.; Farsang, A.; Szatmári, J.; Barta, K.
2012-04-01
The degradation of soil by wind erosion causes huge problems in many parts of the world. The wind erodes the upper, nutrient-rich part of the soil, so erosion causes a loss of soil productivity. In Hungary, the length of tree shelterbelts was significantly reduced by collectivisation (1960-1989), and the areas affected by wind erosion expanded. The tree shelterbelt is more than just a tool of wind erosion control; with good planning it can increase the yield: it reduces the wind speed and changes the microclimate, providing better conditions for plant growth. The aim of our work is to estimate wind erosion risk and to find ways to reduce it with tree shelterbelts. A GIS-based model was created to calculate the risk, and the optimal windbreak position was determined to reduce the wind erosion risk to a minimum. The model is based on the German standard DIN 19706 (Ermittlung der Erosionsgefährdung von Böden durch Wind; Estimation of Wind Erosion Risk) and uses five inputs: the structure and carbon content of the soil, the average yearly wind speed at 10 m height, the cultivated plants, and the height and position of the windbreaks. The study field (16 km²) was chosen near Szeged (SE Hungary). In our investigation, the cultivated plant species and the position and height of the windbreaks were varied. Different scenarios were constructed using land management data from the last few years, including a best-case scenario (zero wind erosion) and a worst-case scenario (no tree shelterbelts and the worst land use), to find the optimal windbreak position. The research proved that tree shelterbelts can provide proper protection against wind erosion, but for optimal land management the cultivated plant types should also be controlled. As a result, a land management plan was defined to reduce the wind erosion risk on the study field, containing the positions of new tree shelterbelt plantings and the optimal cultivation.
12 CFR 932.3 - Risk-based capital requirement.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 7 2011-01-01 2011-01-01 false Risk-based capital requirement. 932.3 Section 932.3 Banks and Banking FEDERAL HOUSING FINANCE BOARD FEDERAL HOME LOAN BANK RISK MANAGEMENT AND CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL REQUIREMENTS § 932.3 Risk-based capital requirement. (a...
12 CFR 932.3 - Risk-based capital requirement.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Risk-based capital requirement. 932.3 Section 932.3 Banks and Banking FEDERAL HOUSING FINANCE BOARD FEDERAL HOME LOAN BANK RISK MANAGEMENT AND CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL REQUIREMENTS § 932.3 Risk-based capital requirement. (a...
Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao
2014-09-01
Ant colony optimization (ACO) algorithms often fall into local optima and have low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new strategy takes advantage of a unique feature of the Physarum-inspired mathematical model (PMM): critical paths are preserved as its adaptive networks evolve. The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than traditional ACO algorithms and are adaptable to the TSP with single or multiple objectives. We further analyse the influence of parameters on the performance of the PMACO algorithms and, based on these analyses, work out the best values of these parameters for the TSP.
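The core PMACO idea, extra pheromone reinforcement on Physarum-model critical paths applied after the standard ACO deposit, can be sketched in a few lines; the `boost` multiplier is an illustrative assumption, not a value from the paper.

```python
def reinforce_critical_paths(pheromone, critical_edges, boost=1.5):
    """Sketch of the PMACO update step: after the usual ACO evaporation and
    deposit, edges lying on the Physarum model's critical paths receive
    additional pheromone, biasing ants toward promising tour segments.
    `pheromone` is a square matrix; `critical_edges` is a list of (i, j)."""
    for (i, j) in critical_edges:
        pheromone[i][j] *= boost
        pheromone[j][i] *= boost   # symmetric TSP: keep the matrix symmetric
    return pheromone
```

In a full implementation the critical-edge set would come from running the Physarum conductivity dynamics on the same city graph each iteration.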
Lucero, V.; Meale, B.M.; Purser, F.E.
1990-01-01
The analysis discussed in this paper was performed as part of the buried waste remediation efforts at the Idaho National Engineering Laboratory (INEL). The specific type of remediation discussed herein involves a thermal treatment process for converting contaminated soil and waste into a stable, chemically-inert form. Models of the proposed process were developed using probabilistic risk assessment (PRA) fault tree and event tree modeling techniques. The models were used to determine the appropriateness of the conceptual design by identifying potential hazards of system operations. Additional models were developed to represent the reliability aspects of the system components. By performing various sensitivities with the models, optimal design modifications are being identified to substantiate an integrated, cost-effective design representing minimal risk to the environment and/or public with maximum component reliability. 4 figs.
Optimal Scaling of HIV-Related Sexual Risk Behaviors in Ethnically Diverse Homosexually Active Men
Cochran, Susan D.; de Leeuw, Jan; Mays, Vickie M.
2014-01-01
As HIV-related behavioral research moves increasingly in the direction of seeking to determine predictors of high-risk sexual behavior, more efficient methods of specifying patterns are needed. Two statistical techniques, homogeneity analysis and latent class analysis, useful in scaling binary multivariate data profiles are presented. Both were used to analyze reported sexual behavior patterns in two samples of homosexually active men, one sample of 343 primarily White gay men attending an HIV workshop and one sample of 837 African American gay men recruited nationally. Results support the existence of a single, nonlinear, latent dimension underlying male homosexual behaviors consistent with HIV-related risk taking. Both statistical methods provide an efficient means to optimally scale sexual behavior patterns, a critical outcome variable in HIV-related research. PMID:7751488
A Localization Method for Multistatic SAR Based on Convex Optimization.
Zhong, Xuqi; Wu, Junjie; Yang, Jianyu; Sun, Zhichao; Huang, Yuling; Li, Zhongyu
2015-01-01
In traditional localization methods for Synthetic Aperture Radar (SAR), bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed to calculate the target localization. However, DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated, and the influence of BRS estimation error on localization accuracy is analysed. First, using the information of each transmitter/receiver (T/R) pair and the target in the SAR image, a model function is constructed for each T/R pair; its maximum lies on the ellipse that is the iso-range contour for that pair. Second, the target function, whose maximum is located at the position of the target, is obtained by adding all model functions. Third, the target function is optimized by gradient descent to obtain the position of the target. During the iteration process, principal component analysis is applied to guarantee the accuracy of the method and improve the computational efficiency. The proposed method uses only the BRSs of a target in several focused images from multistatic SAR. Compared with traditional localization methods for SAR, it therefore greatly improves the localization accuracy. The effectiveness of the approach is validated by a simulation experiment.
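A minimal 2-D sketch of BRS-only localization follows. The paper maximizes a sum of per-pair model functions; minimizing the squared range-sum residuals by gradient descent, as below, is a closely related least-squares variant, and the geometry, step size, and iteration count are assumptions for illustration.

```python
import math

def locate(brs_list, tx_rx_pairs, x0, step=0.01, iters=5000):
    """Gradient descent on the sum of squared bistatic-range-sum residuals:
    find the point x whose distances to each transmitter and receiver sum
    to the measured BRS for every T/R pair (each BRS defines an ellipse;
    the target sits at their common intersection)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    x = list(x0)
    for _ in range(iters):
        gx = gy = 0.0
        for brs, (t, r) in zip(brs_list, tx_rx_pairs):
            dt, dr = dist(x, t), dist(x, r)
            e = (dt + dr) - brs            # range-sum residual for this pair
            gx += 2 * e * ((x[0] - t[0]) / dt + (x[0] - r[0]) / dr)
            gy += 2 * e * ((x[1] - t[1]) / dt + (x[1] - r[1]) / dr)
        x[0] -= step * gx
        x[1] -= step * gy
    return x
```

With three or more well-placed T/R pairs the residual surface has a single zero at the target, so plain gradient descent from a reasonable start converges to it.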
Optimization-based multiple-point geostatistics: A sparse way
NASA Astrophysics Data System (ADS)
Kalantari, Sadegh; Abdollahifard, Mohammad Javad
2016-10-01
In multiple-point simulation the image should be synthesized consistent with the given training image and hard conditioning data. Existing sequential simulation methods usually lead to error accumulation that is hardly manageable in later steps. Optimization-based methods are capable of handling inconsistencies by iteratively refining the simulation grid. In this paper, the multiple-point stochastic simulation problem is formulated in an optimization-based framework using a sparse model, which allows each patch to be constructed as a superposition of a few atoms of a dictionary formed from training patterns, leading to a significant increase in the variability of the patches. To control the creativity of the model, a local histogram matching method is proposed. Furthermore, effective solutions are proposed for different issues arising in multiple-point simulation: a weighted matching pursuit method is developed to handle hard conditioning data, and a simple, efficient thresholding method allows working with categorical variables. The experiments show that the proposed method produces acceptable realizations in terms of pattern reproduction, increases the variability of the realizations, and properly handles numerous conditioning data.
Multi-objective reliability-based optimization with stochastic metamodels.
Coelho, Rajan Filomeno; Bouillard, Philippe
2011-01-01
This paper addresses continuous optimization problems with multiple objectives and parameter uncertainty defined by probability distributions. First, a reliability-based formulation is proposed, defining the nondeterministic Pareto set as the minimal solutions such that user-defined probabilities of nondominance and constraint satisfaction are guaranteed. The formulation can be incorporated with minor modifications in a multiobjective evolutionary algorithm (here: the nondominated sorting genetic algorithm-II). Then, in the perspective of applying the method to large-scale structural engineering problems--for which the computational effort devoted to the optimization algorithm itself is negligible in comparison with the simulation--the second part of the study is concerned with the need to reduce the number of function evaluations while avoiding modification of the simulation code. Therefore, nonintrusive stochastic metamodels are developed in two steps. First, for a given sampling of the deterministic variables, a preliminary decomposition of the random responses (objectives and constraints) is performed through polynomial chaos expansion (PCE), allowing a representation of the responses by a limited set of coefficients. Then, a metamodel is carried out by kriging interpolation of the PCE coefficients with respect to the deterministic variables. The method has been tested successfully on seven analytical test cases and on the 10-bar truss benchmark, demonstrating the potential of the proposed approach to provide reliability-based Pareto solutions at a reasonable computational cost.
Discrete-Time ARMAv Model-Based Optimal Sensor Placement
Song Wei; Dyke, Shirley J.
2008-07-08
This paper concentrates on the optimal sensor placement problem in ambient vibration based structural health monitoring. More specifically, the paper examines the covariance of estimated parameters during system identification using auto-regressive and moving average vector (ARMAv) model. By utilizing the discrete-time steady state Kalman filter, this paper realizes the structure's finite element (FE) model under broad-band white noise excitations using an ARMAv model. Based on the asymptotic distribution of the parameter estimates of the ARMAv model, both a theoretical closed form and a numerical estimate form of the covariance of the estimates are obtained. Introducing the information entropy (differential entropy) measure, as well as various matrix norms, this paper attempts to find a reasonable measure to the uncertainties embedded in the ARMAv model estimates. Thus, it is possible to select the optimal sensor placement that would lead to the smallest uncertainties during the ARMAv identification process. Two numerical examples are provided to demonstrate the methodology and compare the sensor placement results upon various measures.
[Study on the land use optimization based on PPI].
Wu, Xiao-Feng; Li, Ting
2012-03-01
Land use type and management method, which are greatly influenced by human activities, are among the most important factors in non-point pollution. Based on the collection and analysis of non-point pollution control methods and the concept of the three ecological fronts, 9 optimized land use scenarios were designed according to a rationality analysis of the current land use situation in 3 typical small watersheds in the Miyun reservoir basin. The Caojialu watershed was taken as an example to analyze and compare the environmental influence of the different scenarios based on the potential pollution index (PPI) and the river section potential pollution index (R-PPI), and the best combined scenario was found. Designing and comparing land use scenarios on the basis of PPI and R-PPI can help to find the best combination of land use type and management method, to optimize the spatial distribution and management of land use in a basin, to reduce soil erosion, and to provide powerful support for land use planning and pollution control projects.
Optimal pattern distributions in Rete-based production systems
NASA Technical Reports Server (NTRS)
Scott, Stephen L.
1994-01-01
Since its introduction to the AI community in the early 1980s, the Rete algorithm has been widely used and has formed the basis for many AI tools, including NASA's CLIPS. One drawback of Rete-based implementations, however, is that the network structures used internally by the Rete algorithm make them sensitive to the arrangement of individual patterns within rules. Thus, while rules may be more or less arbitrarily placed within source files, the distribution of individual patterns within these rules can significantly affect overall system performance. Some heuristics have been proposed to optimize pattern placement; however, these suggestions can be conflicting. This paper describes a systematic effort to measure the effect of pattern distribution on production system performance. An overview of the Rete algorithm is presented to provide context, followed by a description of the methods used to explore the pattern ordering problem, using internal production system metrics such as the number of partial matches, and coarse-grained operating system data such as memory usage and time. The results of this study should be of interest to those developing and optimizing software for Rete-based production systems.
Audio coding based on rate distortion and perceptual optimization
NASA Astrophysics Data System (ADS)
Erne, Markus; Moschytz, George
2000-04-01
The time-frequency tiling, bit allocation, and quantizer of most perceptual coding algorithms are either fixed or controlled by a perceptual model. The large variety of existing audio signals, each exhibiting different coding requirements due to its different temporal and spectral fine structure, suggests using a signal-adaptive algorithm. The framework described in this paper makes use of a signal-adaptive wavelet filterbank that allows any node of the wavelet-packet tree to be switched individually. Each subband can therefore have an individual time segmentation, and the overall time-frequency tiling can be adapted to the signal using optimization techniques. A rate-distortion optimality can be defined which minimizes the distortion for a given rate in every subband, based on a perceptual model. Because the rate and distortion measures are additive over disjoint covers of the input signal, an overall cost function including the cost of filterbank switching can be defined. Using dynamic programming techniques, the wavelet-packet tree can be pruned based on a top-down or bottom-up 'split-merge' decision in every node of the wavelet tree. Additionally, the coder can profit from temporal masking, because each subband can have an individual segmentation in time without introducing time-domain artifacts such as pre-echo distortion.
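The bottom-up 'split-merge' pruning reduces to a simple recursion: a node is split only if its children's total cost plus the switching cost beats coding the node whole. The tree representation and cost fields below are illustrative assumptions, not the paper's data structures.

```python
def prune_tree(node, switch_cost=0.0):
    """Bottom-up 'split-merge' pruning sketch for a wavelet-packet tree.
    Each node is a dict {'cost': rate-distortion cost of coding this band
    as a whole, 'children': [subbands] or []}. Returns (best total cost,
    list of leaf nodes to actually code)."""
    if not node['children']:
        return node['cost'], [node]
    child_cost, child_leaves = 0.0, []
    for c in node['children']:
        cc, leaves = prune_tree(c, switch_cost)
        child_cost += cc
        child_leaves += leaves
    if child_cost + switch_cost < node['cost']:
        return child_cost + switch_cost, child_leaves   # split: keep children
    return node['cost'], [node]                          # merge: code node whole
```

Because the rate-distortion measure is additive over disjoint covers of the signal, this greedy bottom-up recursion yields the globally optimal pruning, which is exactly why the dynamic programming formulation works.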
Structure Learning for Deep Neural Networks Based on Multiobjective Optimization.
Liu, Jia; Gong, Maoguo; Miao, Qiguang; Wang, Xiaogang; Li, Hao
2017-05-05
This paper focuses on the connecting structure of deep neural networks and proposes a layerwise structure learning method based on multiobjective optimization. A model with better generalization can be obtained by reducing the connecting parameters in deep networks. The aim is to find, for each layer, the optimal structure with high representation ability and better generalization. The visible data are then modeled with respect to structure based on products of experts (PoE). To mitigate the difficulty of estimating the denominator in PoE, the denominator is simplified and taken as another objective, i.e., the connecting sparsity. Moreover, given the contradictory nature of representation ability and network connecting sparsity, a multiobjective model is established and solved with an improved multiobjective evolutionary algorithm. Two tricks are designed to decrease the computational cost according to the properties of the input data. Experiments at the single-layer, hierarchical, and application levels demonstrate the effectiveness of the proposed algorithm, and the learned structures can improve the performance of deep neural networks.
Research on Taxiway Path Optimization Based on Conflict Detection
Zhou, Hang; Jiang, Xinxin
2015-01-01
Taxiway path planning is one of the effective measures for making full use of airport resources, and optimized paths ensure the safety of aircraft during taxiing. In this paper, taxiway path planning based on conflict detection is considered. The specific steps are as follows: first, the A* algorithm is improved by adding a conflict detection strategy to search for the shortest safe path in the static taxiway network. Then, according to the taxiing speed of the aircraft, a timetable for each node is determined, and the safety interval is treated as a constraint to judge whether there is a conflict. An intelligent initial path planning model is established based on these results. Finally, an example in an airport simulation environment is used to detect and resolve conflicts to ensure safety. The results indicate that the model established in this paper is effective and feasible. Comparison with other intelligent algorithms shows that the improved A* algorithm has great advantages: it not only optimizes the taxiway path but also ensures the safety of the taxiing process and improves operational efficiency. PMID:26226485
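The conflict-aware A* search can be sketched as follows, with unit edge traversal times as a simplifying assumption: a neighbor is expanded only if its (node, arrival time) slot is not already claimed by another aircraft's timetable.

```python
import heapq

def a_star_conflict(graph, start, goal, h, occupied):
    """A* sketch with a conflict-detection strategy. `graph` maps a node to
    its neighbors, `h` is an admissible heuristic, and `occupied` is a set
    of (node, time) slots reserved by other aircraft. Edge traversal takes
    one time unit, so a node's arrival time equals its path cost g."""
    open_heap = [(h(start), 0, start, [start])]
    seen = set()
    while open_heap:
        f, g_cost, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if (node, g_cost) in seen:
            continue
        seen.add((node, g_cost))
        for nxt in graph[node]:
            t = g_cost + 1
            if (nxt, t) in occupied:      # conflict: slot taken, prune this move
                continue
            heapq.heappush(open_heap, (t + h(nxt), t, nxt, path + [nxt]))
    return None   # no conflict-free path
```

Reserving the returned path's (node, time) pairs into `occupied` before planning the next aircraft gives the sequential, timetable-based planning the abstract describes.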
Design optimization of PVDF-based piezoelectric energy harvesters.
Song, Jundong; Zhao, Guanxing; Li, Bo; Wang, Jin
2017-09-01
Energy harvesting is a promising technology that powers electronic devices by scavenging ambient energy. Piezoelectric energy harvesters have attracted considerable interest for their high conversion efficiency and easy fabrication in miniaturized sensors and transducers. To improve the output capability of energy harvesters, the properties of the piezoelectric material are an influential factor, but the potential of the material is unlikely to be fully exploited without an optimized configuration. In this paper, an optimization strategy for PVDF-based cantilever-type energy harvesters is proposed to achieve the highest output power density for a given frequency and acceleration of the vibration source. It is shown that the maximum output power density depends only on the maximum allowable stress of the beam and the working frequency of the device, and that these two factors can be tuned by adjusting the geometry of the piezoelectric layers. The strategy is validated by coupled finite-element-circuit simulation and a practical device. The fabricated device, within a volume of 13.1 mm³, shows an output power of 112.8 μW, which is comparable to that of the best-performing piezoceramic-based energy harvesters of similar volume reported so far.
CFD-Based Design Optimization Tool Developed for Subsonic Inlet
NASA Technical Reports Server (NTRS)
1995-01-01
The traditional approach to the design of engine inlets for commercial transport aircraft is a tedious process that ends with a less-than-optimum design. With the advent of high-speed computers and the availability of more accurate and reliable computational fluid dynamics (CFD) solvers, numerical optimization processes can effectively be used to design an aerodynamic inlet lip that enhances engine performance. The designers' experience at Boeing Corporation showed that for a peak Mach number on the inlet surface beyond some upper limit, the performance of the engine degrades excessively. Thus, our objective was to optimize efficiency (minimize the peak Mach number) at maximum cruise without compromising performance at other operating conditions. The NASA Lewis Research Center, in collaboration with Boeing, developed an integrated procedure to find the optimum shape of a subsonic inlet lip, coupling the CFD code NPARC with the numerical optimization code ADS. We used a GRAPE-based three-dimensional grid generator to help automate the optimization procedure. The inlet lip shape at the crown and the keel was described as a superellipse, and the superellipse exponents and radii ratios were considered as design variables. Three operating conditions (cruise, takeoff, and rolling takeoff) were considered in this study. Three-dimensional Euler computations were carried out to obtain the flow field. At the initial design, the peak Mach numbers for the maximum cruise, takeoff, and rolling takeoff conditions were 0.88, 1.772, and 1.61, respectively. The acceptable upper limits on the takeoff and rolling takeoff Mach numbers were 1.55 and 1.45. Since the initial design provided by Boeing was found to be optimum with respect to the maximum cruise condition, the sum of the peak Mach numbers at takeoff and rolling takeoff was minimized in the current study while the maximum cruise Mach number was constrained to be close to that of the existing design. With this objective, the
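The superellipse parameterization of the lip shape mentioned in this abstract is easy to sketch: the profile satisfies (x/a)^n + (y/b)^n = 1, and the exponent n and the ratio b/a are the kind of design variables an optimizer would adjust. The values below are illustrative, not from the study.

```python
import math

def superellipse_lip(a, b, n, num=50):
    """Sample one quadrant of a superellipse inlet-lip profile.

    a, b: semi-axes (radii); n: superellipse exponent.
    Returns (x, y) points satisfying (x/a)**n + (y/b)**n = 1.
    """
    pts = []
    for i in range(num + 1):
        theta = 0.5 * math.pi * i / num
        # parametric form of the superellipse
        x = a * math.cos(theta) ** (2.0 / n)
        y = b * math.sin(theta) ** (2.0 / n)
        pts.append((x, y))
    return pts
```

For n = 2 this reduces to an ordinary ellipse; larger n gives a blunter, more rectangular lip.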
Vision-based coaching: optimizing resources for leader development
Passarelli, Angela M.
2015-01-01
Leaders develop in the direction of their dreams, not in the direction of their deficits. Yet many coaching interactions intended to promote a leader’s development fail to leverage the benefits of the individual’s personal vision. Drawing on intentional change theory, this article postulates that coaching interactions that emphasize a leader’s personal vision (future aspirations and core identity) evoke a psychophysiological state characterized by positive emotions, cognitive openness, and optimal neurobiological functioning for complex goal pursuit. Vision-based coaching, via this psychophysiological state, generates a host of relational and motivational resources critical to the developmental process. These resources include: formation of a positive coaching relationship, expansion of the leader’s identity, increased vitality, activation of learning goals, and a promotion–orientation. Organizational outcomes as well as limitations to vision-based coaching are discussed. PMID:25926803
A Triangle Mesh Standardization Method Based on Particle Swarm Optimization
Duan, Liming; Bai, Yang; Wang, Haoyu; Shao, Hui; Zhong, Siyang
2016-01-01
To enhance the triangle quality of a reconstructed triangle mesh, a novel triangle mesh standardization method based on particle swarm optimization (PSO) is proposed. First, each vertex of the mesh and its first-order neighboring vertices are fitted to a cubic surface using the least-squares method. Then, taking the locally fitted surface as the PSO search region and the best average quality of the local triangles as the objective, the position of each mesh vertex is adjusted. Finally, a threshold on the normal angle between the original and the regulated vertex is used to decide whether the vertex should actually be moved, so that the detailed features of the mesh are preserved. Experimental results show that, compared with existing methods, the proposed method can effectively improve the triangle quality of the mesh while preserving the geometric features and details of the original mesh. PMID:27509129
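The PSO search at the heart of this method can be sketched generically. The mesh-quality objective and surface-fitting steps are not reproduced here; the swarm parameters below are common textbook defaults, not the paper's settings.

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (minimization).

    bounds: list of (lo, hi) per dimension, standing in for the local
    search region; objective stands in for the triangle-quality measure.
    """
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + attraction to personal and global bests
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting each vertex would run such a search over its fitted local surface, with the average quality of the incident triangles as the objective.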
Klinke, Andreas; Renn, Ortwin
2002-12-01
Our concept of nine risk evaluation criteria, six risk classes, a decision tree, and three management categories was developed to improve the effectiveness, efficiency, and political feasibility of risk management procedures. The main task of risk evaluation and management is to develop adequate tools for dealing with the problems of complexity, uncertainty, and ambiguity. Based on the characteristics of different risk types and these three major problems, we distinguished three types of management: risk-based, precaution-based, and discourse-based strategies. The risk-based strategy is the common solution to risk problems. Once the probabilities and their corresponding damage potentials are calculated, risk managers are required to set priorities according to the severity of the risk, which may be operationalized as a linear combination of damage and probability or as a weighted combination thereof. Within our new risk classification, the two central components have been augmented with other physical and social criteria that still demand risk-based strategies as long as uncertainty is low and ambiguity absent. Risk-based strategies are the best solutions to problems of complexity and some components of uncertainty, for example, variation among individuals. If the two most important risk criteria, probability of occurrence and extent of damage, are relatively well known and little uncertainty is left, the traditional risk-based approach seems reasonable. If uncertainty plays a large role, in particular indeterminacy or lack of knowledge, the risk-based approach becomes counterproductive. Judging the relative severity of risks on the basis of uncertain parameters does not make much sense. Under these circumstances, management strategies belonging to the precautionary management style are required. The precautionary approach has been the basis for much of the European environmental and health protection legislation and regulation. Our own approach to risk management
Genetics algorithm optimization of DWT-DCT based image Watermarking
NASA Astrophysics Data System (ADS)
Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan
2017-01-01
Data hiding in image content is mandatory for establishing ownership of an image. The two-dimensional discrete wavelet transform (DWT) and the discrete cosine transform (DCT) are the transform methods proposed in this paper. First, the host image in RGB color space is converted to a selected color space, and the layer in which the watermark is embedded can also be selected. Next, a 2D DWT transforms the selected layer, yielding four subbands, of which one is selected. A block-based 2D DCT then transforms the selected subband. A binary watermark is embedded in the AC coefficients of each block after zigzag scanning and range-based coefficient selection: a delta parameter replacing the coefficients in each range represents the embedded bit, with +delta representing bit "1" and -delta representing bit "0". The parameters to be optimized by the genetic algorithm (GA) are the selected color space, the layer, the selected subband of the DWT decomposition, the block size, the embedding range, and delta. Simulation shows that the GA is able to determine parameters that achieve optimum imperceptibility and robustness for any watermarked image condition, whether attacked or not. Adding the DWT step to DCT-based image watermarking, optimized by the GA, has improved the performance of the watermarking. Under five attacks (JPEG 50%, 50% resize, histogram equalization, salt-and-pepper noise, and additive noise with variance 0.01), robustness in the proposed method reaches perfect watermark quality with BER = 0, and the watermarked image quality by the PSNR metric is also about 5 dB higher than with the previous method.
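The delta embedding rule described in this abstract can be shown in isolation: each selected AC coefficient is replaced by +delta for bit 1 and -delta for bit 0, and extraction reads the sign. The DWT/DCT pipeline, zigzag scan, and GA-tuned parameters are omitted; the delta value here is illustrative.

```python
def embed_bits(ac_coeffs, bits, delta=8.0):
    """Replace the first len(bits) selected AC coefficients with +/-delta."""
    out = list(ac_coeffs)
    for i, bit in enumerate(bits):
        out[i] = delta if bit == 1 else -delta
    return out

def extract_bits(ac_coeffs, n_bits):
    """Recover the watermark bits from the sign of the embedded coefficients."""
    return [1 if c > 0 else 0 for c in ac_coeffs[:n_bits]]
```

Because extraction depends only on the sign, the bits survive moderate distortions of the coefficients, which is why the GA can trade delta off against imperceptibility.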
Development of optimization-based probabilistic earthquake scenarios for the city of Tehran
NASA Astrophysics Data System (ADS)
Zolfaghari, M. R.; Peyghaleh, E.
2016-01-01
This paper presents the methodology and a practical example for the application of an optimization process to select earthquake scenarios that best represent the probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes representing the long-term seismotectonic characteristics of the region. The simulation process uses Monte Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth, and fault characteristics. Such a catalogue provides full distributions of events in time, space, and size; however, it demands large computational power when used for risk assessment, particularly when other sources of uncertainty are involved. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach yields a reduced set of optimization-based probabilistic earthquake scenarios while maintaining the shape of the hazard curves and the full probabilistic picture, by minimizing the error between hazard curves driven by the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate a 10,000-year set of events consisting of some 84,000 earthquakes. The optimization model is then run multiple times with various input data, taking the probabilistic seismic hazard for Tehran city as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error weight is also assessed. The methodology could reduce the run time of full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes; however, it requires far less
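The scenario-reduction idea can be sketched as a small mixed-integer linear program: pick at most K scenarios (binary indicators z) and nonnegative weights w so that the weighted hazard contributions match the full-catalogue hazard values, minimizing the L1 error. This is an illustrative formulation under a big-M bound on the weights, not the paper's exact model.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def reduce_scenarios(h, H, K, M=10.0):
    """Select <= K weighted scenarios matching target hazard values H.

    h[i, j]: scenario i's contribution at hazard-curve point j.
    Variable layout: [w (n weights), z (n binaries), e (m abs-error slacks)].
    """
    n, m = h.shape
    c = np.concatenate([np.zeros(2 * n), np.ones(m)])  # minimize sum of errors
    rows, ubs = [], []
    for j in range(m):
        ej = np.zeros(m)
        ej[j] = -1.0
        # |sum_i w_i h_ij - H_j| <= e_j, written as two inequalities
        rows.append(np.concatenate([h[:, j], np.zeros(n), ej])); ubs.append(H[j])
        rows.append(np.concatenate([-h[:, j], np.zeros(n), ej])); ubs.append(-H[j])
    for i in range(n):
        r = np.zeros(2 * n + m)
        r[i], r[n + i] = 1.0, -M          # w_i <= M * z_i (big-M linking)
        rows.append(r); ubs.append(0.0)
    r = np.zeros(2 * n + m)
    r[n:2 * n] = 1.0                      # sum z_i <= K (cardinality)
    rows.append(r); ubs.append(float(K))
    cons = LinearConstraint(np.vstack(rows), -np.inf, np.array(ubs))
    integrality = np.concatenate([np.zeros(n), np.ones(n), np.zeros(m)])
    lb = np.zeros(2 * n + m)
    ub = np.concatenate([np.full(n, M), np.ones(n), np.full(m, np.inf)])
    return milp(c, constraints=cons, integrality=integrality, bounds=Bounds(lb, ub))
```

With redundant scenarios in the candidate set, two weighted scenarios can reproduce the full-set hazard exactly, which is the effect the paper exploits at much larger scale.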
SADA: Ecological Risk Based Decision Support System for Selective Remediation
Spatial Analysis and Decision Assistance (SADA) is freeware that implements terrestrial ecological risk assessment and yields a selective remediation design using its integral geographical information system, based on ecological and risk assessment inputs. Selective remediation ...
Optimizing bulk milk dioxin monitoring based on costs and effectiveness.
Lascano-Alcoser, V H; Velthuis, A G J; van der Fels-Klerx, H J; Hoogenboom, L A P; Oude Lansink, A G J M
2013-07-01
concentration equal to the EC maximum level. This study shows that the effectiveness of finding an incident depends not only on the ratio at which collected truck samples are mixed into a pooled sample for testing (aimed at detecting a certain concentration), but also on the number of collected truck samples. In conclusion, the optimal cost-effective monitoring depends on the number of contaminated farms and on the concentration to be detected. The models and study results offer quantitative support to risk managers of food industries and food safety authorities. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
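The pooling trade-off behind this abstract can be illustrated simply: mixing equal-volume truck samples makes the pooled concentration the mean, so pooling more trucks lowers testing cost per truck but dilutes a single contaminated load toward the assay's detection limit. The function and parameter names are illustrative, not from the study's models.

```python
def pooled_detectable(truck_concs, lod):
    """Return (detectable, pooled_concentration) for an equal-volume pool.

    truck_concs: dioxin concentrations of the individual truck samples.
    lod: detection threshold of the assay (illustrative parameter).
    """
    pooled = sum(truck_concs) / len(truck_concs)  # equal-volume mixing = mean
    return pooled >= lod, pooled
```

One contaminated truck among four may still be detectable in the pool, while the same truck diluted among sixteen can fall below the threshold, which is the cost-effectiveness tension the paper optimizes.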
Risk-assessment-based approach to patients exposed to endoscopes contaminated with Pseudomonas spp.
Robertson, P; Smith, A; Mead, A; Smith, I; Khanna, N; Wright, P; Joannidis, P; Boyd, S; Anderson, M; Hamilton, A; Shaw, D; Stewart, A
2015-05-01
Patients exposed to bronchoscopes contaminated with Pseudomonas aeruginosa are at increased risk of pseudomonal infection. The optimal methods for management and mitigation of risk following exposure are controversial. This article describes a two-phase risk assessment following pseudomonal contamination of a family of 75 endoscopes, detected through routine surveillance and attributed to one endoscope washer-disinfector. An initial risk assessment identified 18 endoscopes as high risk, based on the presence of lumens used for irrigation or biopsy. Exposure was communicated to the patients' clinical teams and a further clinical risk assessment of the exposed patients was performed. No patients developed complications due to pseudomonal infection. Copyright © 2015 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
Nonlinear model predictive control based on collective neurodynamic optimization.
Yan, Zheng; Wang, Jun
2015-04-01
In general, nonlinear model predictive control (NMPC) entails solving a sequential global optimization problem with a nonconvex cost function or constraints. This paper presents a novel collective neurodynamic optimization approach to NMPC without linearization. Utilizing a group of recurrent neural networks (RNNs), the proposed collective neurodynamic optimization approach searches for optimal solutions to global optimization problems by emulating brainstorming. Each RNN is guaranteed to converge to a candidate solution by performing constrained local search. By exchanging information and iteratively improving the starting and restarting points of each RNN using the information of local and global best known solutions in a framework of particle swarm optimization, the group of RNNs is able to reach global optimal solutions to global optimization problems. The essence of the proposed collective neurodynamic optimization approach lies in the integration of capabilities of global search and precise local search. The simulation results of many cases are discussed to substantiate the effectiveness and the characteristics of the proposed approach.
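The collective idea in this abstract, many agents doing precise local search while exchanging best-known solutions PSO-style, can be sketched with standard tools. Each "agent" below stands in for one RNN and uses an off-the-shelf local optimizer; the RNN dynamics themselves are not reproduced, and the constants are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def collective_search(objective, bounds, n_agents=6, rounds=5, seed=0):
    """Multi-agent global search: local optimization plus PSO-style restarts."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pts = rng.uniform(lo, hi, size=(n_agents, len(bounds)))
    pbest = pts.copy()
    pbest_val = np.array([objective(p) for p in pts])
    g = int(np.argmin(pbest_val))
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for _ in range(rounds):
        for i in range(n_agents):
            # precise constrained local search from the current start point
            res = minimize(objective, pts[i], bounds=bounds)
            if res.fun < pbest_val[i]:
                pbest[i], pbest_val[i] = res.x, res.fun
                if res.fun < gbest_val:
                    gbest, gbest_val = res.x.copy(), res.fun
            # PSO-style restart: nudge toward personal and global bests
            r1, r2 = rng.random(), rng.random()
            pts[i] = np.clip(pts[i] + r1 * (pbest[i] - pts[i])
                             + r2 * (gbest - pts[i]), lo, hi)
    return gbest, gbest_val
```

The integration of global exploration (restart exchange) with precise local search mirrors the "brainstorming" framing in the abstract.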
ERIC Educational Resources Information Center
Spitzenstetter, Florence; Schimchowitsch, Sarah
2012-01-01
By introducing a response-time measure in the field of comparative optimism, this study was designed to explore how people estimate risk to self and others depending on the evaluation order (self/other or other/self). Our results show the interdependency between self and other answers. Indeed, while response time for risk assessment for the self…
White, Ryan J.; Phares, Noelle; Lubin, Arica A.; Xiao, Yi; Plaxco, Kevin W.
2009-01-01
Electrochemical, aptamer-based (E-AB) sensors, which are comprised of an electrode modified with surface immobilized, redox-tagged DNA aptamers, have emerged as a promising new biosensor platform. In order to further improve this technology we have systematically studied the effects of probe (aptamer) packing density, the AC frequency used to interrogate the sensor, and the nature of the self-assembled monolayer (SAM) used to passivate the electrode on the performance of representative E-AB sensors directed against the small molecule cocaine and the protein thrombin. We find that, by controlling the concentration of aptamer employed during sensor fabrication, we can control the density of probe DNA molecules on the electrode surface over an order of magnitude range. Over this range, the gain of the cocaine sensor varies from 60% to 200%, with maximum gain observed near the lowest probe densities. In contrast, over a similar range, the signal change of the thrombin sensor varies from 16% to 42% and optimal signaling is observed at intermediate densities. Above cut-offs at low hertz frequencies, neither sensor displays any significant dependence on the frequency of the alternating potential employed in their interrogation. Finally, we find that E-AB signal gain is sensitive to the nature of the alkanethiol SAM employed to passivate the interrogating electrode; while thinner SAMs lead to higher absolute sensor currents, reducing the length of the SAM from 6-carbons to 2-carbons reduces the observed signal gain of our cocaine sensor 10-fold. We demonstrate that fabrication and operational parameters can be varied to achieve optimal sensor performance and that these can serve as a basic outline for future sensor fabrication. PMID:18690727
The biopharmaceutics risk assessment roadmap for optimizing clinical drug product performance.
Selen, Arzu; Dickinson, Paul A; Müllertz, Anette; Crison, John R; Mistry, Hitesh B; Cruañes, Maria T; Martinez, Marilyn N; Lennernäs, Hans; Wigal, Tim L; Swinney, David C; Polli, James E; Serajuddin, Abu T M; Cook, Jack A; Dressman, Jennifer B
2014-11-01
The biopharmaceutics risk assessment roadmap (BioRAM) optimizes drug product development and performance by using therapy-driven target drug delivery profiles as a framework to achieve the desired therapeutic outcome. Hence, clinical relevance is directly built into early formulation development. Biopharmaceutics tools are used to identify and address potential challenges to optimize the drug product for patient benefit. For illustration, BioRAM is applied to four relatively common therapy-driven drug delivery scenarios: rapid therapeutic onset, multiphasic delivery, delayed therapeutic onset, and maintenance of target exposure. BioRAM considers the therapeutic target with the drug substance characteristics and enables collection of critical knowledge for development of a dosage form that can perform consistently for meeting the patient's needs. Accordingly, the key factors are identified and in vitro, in vivo, and in silico modeling and simulation techniques are used to elucidate the optimal drug delivery rate and pattern. BioRAM enables (1) feasibility assessment for the dosage form, (2) development and conduct of appropriate "learning and confirming" studies, (3) transparency in decision-making, (4) assurance of drug product quality during lifecycle management, and (5) development of robust linkages between the desired clinical outcome and the necessary product quality attributes for inclusion in the quality target product profile. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Parameter identifiability-based optimal observation remedy for biological networks.
Wang, Yulin; Miao, Hongyu
2017-05-04
To systematically understand the interactions between numerous biological components, a variety of biological networks on different levels and scales have been constructed and made available in public databases or knowledge repositories. Graphical models such as structural equation models (SEMs) have long been used to describe biological networks for various quantitative analysis tasks, especially key biological parameter estimation. However, limited by resources or technical capacities, partial observation is a common problem in experimental observations of biological networks, and it thus becomes important to select unobserved nodes for additional measurement such that all unknown model parameters become identifiable. To the best of the authors' knowledge, no solution to this problem existed before this study. The identifiability-based observation problem for biological networks is mathematically formulated for the first time based on linear recursive structural equation models, and a dynamic programming strategy is then developed to obtain the optimal observation strategies. The efficiency of the dynamic programming algorithm is achieved by avoiding both the symbolic computation and the matrix operations used in other studies. We also provide the necessary theoretical justification for the proposed method. Finally, we verified the algorithm using synthetic network structures and illustrated its application in practice using a real biological network related to influenza A virus infection. The proposed approach is the first solution to the structural-identifiability-based optimal observation remedy problem. It is applicable to an arbitrary directed acyclic biological network (recursive SEM) without bidirectional edges, and it is fully computerizable. Observation remedy is an important issue in experiment design for biological networks, and we believe that this study provides a solid basis for dealing with more challenging design
Optimization of hyaluronan-based eye drop formulations.
Salzillo, Rosanna; Schiraldi, Chiara; Corsuto, Luisana; D'Agostino, Antonella; Filosa, Rosanna; De Rosa, Mario; La Gatta, Annalisa
2016-11-20
Hyaluronan (HA) is frequently incorporated in eye drops to extend the pre-corneal residence time, due to its viscosifying and mucoadhesive properties. Hydrodynamic and rheological evaluations of commercial products are first accomplished, revealing molecular weights varying from about 360 to about 1200 kDa and viscosity values in the range 3.7-24.2 mPa·s. The latter suggest that most products could be optimized towards resistance to drainage from the ocular surface. Then, a study aiming to maximize the viscosity and mucoadhesiveness of HA-based preparations is performed. The effect of polymer chain length and concentration is investigated. For the whole range of molecular weights encountered in commercial products, the concentration maximizing performance is identified. This concentration varies from 0.3 wt% for an 1100 kDa HA up to 1.0 wt% for a 250 kDa HA, the latter being 3-fold higher than the highest concentration on the market. The viscosity and mucoadhesion profiles of the optimized formulations are superior to those of commercial products, especially under conditions simulating in vivo blinking, so longer retention on the corneal epithelium can be predicted. An enhanced capacity to protect porcine corneal epithelial cells from dehydration is also demonstrated in vitro. Overall, the results predict formulations with improved efficacy. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Source mask optimization study based on latest Nikon immersion scanner
NASA Astrophysics Data System (ADS)
Zhu, Jun; Wei, Fang; Chen, Lijun; Zhang, Chenming; Zhang, Wei; Nishinaga, Hisashi; El-Sewefy, Omar; Gao, Gen-Sheng; Lafferty, Neal; Meiring, Jason; Zhang, Recoo; Zhu, Cynthia
2016-03-01
The 2x nm logic foundry node has many challenges, since critical levels are pushed close to the limits of low-k1 ArF water-immersion lithography. For these levels, improvements in lithographic performance can translate to decreased rework and increased yield. Source Mask Optimization (SMO) is one route to realize these image fidelity improvements. During SMO, critical layout constructs are intensively optimized in both the mask and source domains, resulting in a solution for maximum lithographic entitlement. On the hardware side, advances in source technology have enabled free-form illumination, allowing highly customized sources and making the practical application of SMO solutions feasible. In this paper, we present a study on a critical layer of an advanced foundry logic node using the latest ILT-based SMO software, paired with state-of-the-art scanner hardware and an intelligent illuminator. Performance of the layer's existing POR source is compared with the ideal SMO result and the installed source as realized on the intelligent illuminator of an NSR-S630D scanner. Both simulation and on-silicon measurements are used to confirm that the performance of the studied layer meets established specifications.
Vanpool trip planning based on evolutionary multiple objective optimization
NASA Astrophysics Data System (ADS)
Zhao, Ming; Yang, Disheng; Feng, Shibing; Liu, Hengchang
2017-08-01
Carpool and vanpool services have drawn much research attention, and vanpooling is the emphasis of this paper. A concrete definition of vanpool operation is given, and on that basis this paper tackles vanpool operation optimization using a user experience decline index (UEDI). The focus is on giving each user an identical UEDI while minimizing the sum of all users' UEDIs. Three contributions are made. The first is a vanpool operation scheme diagram, with each component of the scheme explained in detail. The second treats all customers' UEDIs as a set and uses the standard deviation and the sum of this set as the two objectives of a multiple-objective optimization that decides the trip start address, trip start time, and trip destination address. The third is a trip planning algorithm that tries to minimize the sum of all users' UEDIs. The geographical distribution of charging stations and their utilization rates are considered in the trip planning process.
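The two objectives described in this abstract can be written down directly: fairness as the standard deviation of the users' UEDI values and total burden as their sum, compared under Pareto dominance. How UEDI values are derived from a trip assignment is not specified here; the names are illustrative.

```python
import statistics

def uedi_objectives(uedis):
    """Return the two minimization objectives: (spread, total) of the UEDI set."""
    return statistics.pstdev(uedis), sum(uedis)

def dominates(a, b):
    """Pareto dominance for minimization over (spread, total) objective pairs."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```

A plan where every user has an identical UEDI scores zero on the spread objective, which matches the paper's goal of equalizing user experience.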
CFD-Based Design Optimization for Single Element Rocket Injector
NASA Technical Reports Server (NTRS)
Vaidyanathan, Rajkumar; Tucker, Kevin; Papila, Nilay; Shyy, Wei
2003-01-01
To develop future Reusable Launch Vehicle concepts, we have conducted design optimization for a single-element rocket injector, with overall goals of improving reliability and performance while reducing cost. Computational solutions based on the Navier-Stokes equations, finite-rate chemistry, and the k-epsilon turbulence closure are generated with design-of-experiment techniques, and the response surface method is employed as the optimization tool. The design considerations are guided by four design objectives motivated by considerations of both performance and life, namely, the maximum temperature on the oxidizer post tip, the maximum temperature on the injector face, the adiabatic wall temperature, and the length of the combustion zone. Four design variables are selected, namely, the H2 flow angle, the H2 and O2 flow areas with fixed flow rates, and the O2 post tip thickness. In addition to establishing optimum designs by varying the emphasis on the individual objectives, better insight into the interplay between design variables and their impact on the design objectives is gained. The investigation indicates that improvement in performance or life comes at the cost of the other. The best compromise is obtained when improvements in both performance and life are given equal importance.
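The response-surface step in this kind of study can be sketched generically: fit a full quadratic polynomial to the design-of-experiments samples by least squares and use it as a cheap surrogate for the CFD objective. The variable names below are generic, not the injector study's actual design variables.

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Fit a full quadratic response surface to DOE samples.

    X: (n_samples, n_vars) design points; y: simulated objective values.
    Returns a predictor callable for the fitted surrogate.
    """
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]                                  # linear
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]   # quadratic
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict(x):
        x = np.asarray(x, dtype=float)
        feats = [1.0] + list(x) + [x[i] * x[j] for i in range(d) for j in range(i, d)]
        return float(np.dot(coef, feats))

    return predict
```

Multi-objective compromises like those in the abstract are then explored by optimizing weighted combinations of several such surrogates instead of rerunning the CFD solver.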
Optimization-based mesh correction with volume and convexity constraints
D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; ...
2016-02-24
In this study, we consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.
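A one-cell analogue of the volume-correction problem makes the formulation concrete: move the vertices of a single polygon as little as possible (L2 distance to the source positions) subject to an equality constraint on its area, solved with SLSQP, a sequential quadratic programming method as in the paper. A real mesh couples many such cells and adds validity/convexity constraints; this sketch is only illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def correct_volume(verts, target_area):
    """Minimally perturb polygon vertices so the area matches target_area."""
    x0 = np.asarray(verts, dtype=float).ravel()

    def shoelace(flat):
        # signed polygon area via the shoelace formula
        p = flat.reshape(-1, 2)
        x, y = p[:, 0], p[:, 1]
        return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

    res = minimize(lambda v: np.sum((v - x0) ** 2), x0, method="SLSQP",
                   constraints=[{"type": "eq",
                                 "fun": lambda v: shoelace(v) - target_area}])
    return res.x.reshape(-1, 2), shoelace(res.x)
```

The distance-to-source objective plays the role of the paper's mesh-proximity metric, and the area equality stands in for the prescribed cell volumes.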
An optimal transportation approach for nuclear structure-based pathology
Wang, Wei; Ozolek, John A.; Slepčev, Dejan; Lee, Ann B.; Chen, Cheng; Rohde, Gustavo K.
2012-01-01
Nuclear morphology and structure as visualized from histopathology microscopy images can yield important diagnostic clues in some benign and malignant tissue lesions. Precise quantitative information about nuclear structure and morphology, however, is currently not available for many diagnostic challenges. This is due, in part, to the lack of methods to quantify these differences from image data. We describe a method to characterize and contrast the distribution of nuclear structure in different tissue classes (normal, benign, cancer, etc.). The approach is based on quantifying chromatin morphology in different groups of cells using the optimal transportation (Kantorovich-Wasserstein) metric in combination with the Fisher discriminant analysis and multidimensional scaling techniques. We show that the optimal transportation metric is able to measure relevant biological information as it enables automatic determination of the class (e.g. normal vs. cancer) of a set of nuclei. We show that the classification accuracies obtained using this metric are, on average, as good or better than those obtained utilizing a set of previously described numerical features. We apply our methods to two diagnostic challenges for surgical pathology: one in the liver and one in the thyroid. Results automatically computed using this technique show potentially biologically relevant differences in nuclear structure in liver and thyroid cancers. PMID:20977984
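The core classification idea, comparing distributions of nuclear structure with the Kantorovich-Wasserstein metric, can be sketched in one dimension with SciPy. The paper works with full 2-D chromatin images; this scalar-feature version (e.g. per-nucleus chromatin intensity) is only illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def nearest_class(sample, class_refs):
    """Assign a sample distribution to the closest reference distribution.

    sample: 1-D array of a nuclear feature for a set of nuclei.
    class_refs: {class_name: 1-D reference array} per tissue class.
    """
    dists = {name: wasserstein_distance(sample, ref)
             for name, ref in class_refs.items()}
    return min(dists, key=dists.get), dists
```

As in the abstract, the metric lets the class of a set of nuclei be determined automatically from how far its feature distribution must be "transported" to match each reference.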
Tree-Based Visualization and Optimization for Image Collection.
Han, Xintong; Zhang, Chongyang; Lin, Weiyao; Xu, Mingliang; Sheng, Bin; Mei, Tao
2016-06-01
The visualization of an image collection is the process of displaying a collection of images on a screen under some specific layout requirements. This paper focuses on an important problem that is not well addressed by the previous methods: visualizing image collections into arbitrary layout shapes while arranging images according to user-defined semantic or visual correlations (e.g., color or object category). To this end, we first propose a property-based tree construction scheme to organize images of a collection into a tree structure according to user-defined properties. In this way, images can be adaptively placed with the desired semantic or visual correlations in the final visualization layout. Then, we design a two-step visualization optimization scheme to further optimize image layouts. As a result, multiple layout effects including layout shape and image overlap ratio can be effectively controlled to guarantee a satisfactory visualization. Finally, we also propose a tree-transfer scheme such that visualization layouts can be adaptively changed when users select different "images of interest." We demonstrate the effectiveness of our proposed approach through the comparisons with state-of-the-art visualization techniques.
[Optimal allocation of irrigation water resources based on systematical strategy].
Cheng, Shuai; Zhang, Shu-qing
2015-01-01
With the development of society and the economy, as well as the rapid increase in population, more and more water is needed by humans, which has intensified the shortage of water resources. The scarcity of water resources and growing competition for water among different water use sectors reduce water availability for irrigation, so it is important to plan and manage irrigation water resources scientifically and reasonably to improve water use efficiency (WUE) and ensure food security. Many investigations indicate that WUE can be increased by optimization of water use. However, present studies have focused primarily on a particular aspect or scale and lack a systematic analysis of the irrigation water allocation problem. By summarizing previous related studies, especially those based on intelligent algorithms, this article proposes a multi-level, multi-scale framework for allocating irrigation water and illustrates the basic theory of each component of the framework. A systematic strategy for optimal irrigation water allocation can not only control the total volume of irrigation water on the temporal scale, but also reduce water loss on the spatial scale. It could provide a scientific basis and technical support for improving irrigation water management and ensuring food security.
NASA Astrophysics Data System (ADS)
Earle, T. C.; Lindell, M. K.; Rankin, W. L.; Nealey, S. M.
1981-07-01
Public acceptance of radioactive waste management alternatives depends in part on public perception of the associated risks. Three aspects of those perceived risks were explored: (1) synthetic measures of risk perception based on judgments of probability and consequences; (2) acceptability of hypothetical radioactive waste policies; and (3) effects of human values on risk perception. Both the work on synthetic measures of risk perception and on the acceptability of hypothetical policies included investigations of three categories of risk: short term public risk (affecting persons living when the wastes are created), long term public risk (affecting persons living after the time the wastes were created), and occupational risk (affecting persons working with the radioactive wastes). The human values work related to public risk perception in general, across categories of persons affected. Respondents were selected according to a purposive sampling strategy.
Reliability-based robust design optimization of vehicle components, Part I: Theory
NASA Astrophysics Data System (ADS)
Zhang, Yimin
2015-06-01
Reliability-based design optimization, reliability sensitivity analysis, and robust design methods are combined to present a practical and effective approach for reliability-based robust design optimization of vehicle components. A procedure for reliability-based robust design optimization of vehicle components is proposed. Application of the method is illustrated by reliability-based robust design optimization of an axle and a spring. Numerical results show that the proposed method can be trusted to perform reliability-based robust design optimization of vehicle components.
Communication: Optimal parameters for basin-hopping global optimization based on Tsallis statistics
Shang, C. Wales, D. J.
2014-08-21
A fundamental problem associated with global optimization is the large free energy barrier for the corresponding solid-solid phase transitions in systems with multi-funnel energy landscapes. To address this issue we consider the Tsallis weight instead of the Boltzmann weight to define the acceptance ratio for basin-hopping global optimization. Benchmarks for atomic clusters show that using the optimal Tsallis weight can improve the efficiency by roughly a factor of two. We present a theory for the optimal parameters of the Tsallis weighting, and demonstrate that its predictions are verified for each of the test cases.
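The Tsallis acceptance rule described in this abstract can be sketched as follows. This is a plain random-walk Monte Carlo illustration of the weight only (true basin-hopping adds a local "quench" minimization after each move), and the double-well test function, temperature, and q value are illustrative choices, not the paper's cluster benchmarks.

```python
import random

def tsallis_accept(dE, T, q):
    # Generalized (Tsallis) acceptance probability for an uphill step of
    # size dE at temperature T; as q -> 1 it recovers exp(-dE/T).
    if dE <= 0:
        return 1.0
    return (1.0 + (q - 1.0) * dE / T) ** (-1.0 / (q - 1.0))

def monte_carlo_min(f, x0, steps=2000, step=1.0, T=1.0, q=1.5, seed=0):
    # Random-walk minimization using the Tsallis acceptance rule
    # (basin-hopping proper would quench each trial point first).
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for _ in range(steps):
        y = x + rng.uniform(-step, step)
        fy = f(y)
        if rng.random() < tsallis_accept(fy - fx, T, q):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# Double-well test function with its global minimum at x = -2.
f = lambda x: (x * x - 4) ** 2 + 0.5 * (x + 2) ** 2
xmin, fmin = monte_carlo_min(f, x0=-1.0)
print(xmin, fmin)
```

The fat-tailed Tsallis weight (q > 1) accepts large uphill moves more often than the Boltzmann rule, which is what helps the walker cross the free energy barriers between funnels.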
NASA Astrophysics Data System (ADS)
Soeryana, Endang; Halim, Nurfadhlina Bt Abdul; Sukono, Rusyaman, Endang; Supian, Sudradjat
2017-03-01
Investors in stocks also face risk, because daily stock prices fluctuate. To minimize this risk, investors usually form an investment portfolio; a portfolio consisting of several stocks is constructed to obtain the optimal composition of the investment. This paper discusses mean-variance portfolio optimization for stocks with non-constant mean and volatility, based on the negative exponential utility function. The non-constant mean is modeled with an autoregressive moving average (ARMA) model, while the non-constant volatility is modeled with a generalized autoregressive conditional heteroscedasticity (GARCH) model. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is applied to several stocks in Indonesia to obtain the proportion of investment in each stock analyzed.
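The Lagrangian step of a budget-constrained mean-variance problem has a closed form, which can be sketched as below. The expected returns, covariance matrix, and risk-aversion coefficient are made-up numbers, and no ARMA/GARCH estimation is shown; this only illustrates the multiplier technique the abstract mentions.

```python
import numpy as np

mu = np.array([0.10, 0.07, 0.05])          # expected returns (illustrative)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.03, 0.01],
                  [0.00, 0.01, 0.02]])     # return covariance (illustrative)
a = 3.0                                    # risk-aversion coefficient

# Maximize  w'mu - (a/2) w'Sigma w   subject to  sum(w) = 1.
# Stationarity: mu - a*Sigma*w - lam*1 = 0  =>  w = Sigma^{-1}(mu - lam*1)/a,
# with the multiplier lam fixed by the budget constraint.
ones = np.ones(len(mu))
Si_mu = np.linalg.solve(Sigma, mu)
Si_1 = np.linalg.solve(Sigma, ones)
lam = (ones @ Si_mu - a) / (ones @ Si_1)
w = (Si_mu - lam * Si_1) / a

print(w)  # portfolio proportions; they sum to 1 by construction
```

With time-varying ARMA/GARCH estimates, mu and Sigma would simply be replaced by the conditional mean vector and conditional covariance at each rebalancing date.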
Cooperation under predation risk: a data-based ESS analysis
Parker, G. A.; Milinski, M.
1997-01-01
Two fish that jointly approach a predator in order to inspect it share the deadly risk of capture depending on the distance between them. Models are developed that seek ESS inspection distances of both single prey and pairs, based on experimental data of the risk that prey (sticklebacks) incur when they approach a predator (pike) to varying distances. Our analysis suggests that an optimal inspection distance can exist for a single fish, and for two equal fish behaving entirely cooperatively so as to maximize the fitness of the pair. Two equal fish inspecting cooperatively should inspect at an equal distance from the predator. The optimal distance is much closer to the predator for cooperative pairs than for single inspectors. However, optimal inspection for two equal fish behaving cooperatively operates across a rather narrow band of conditions relating to the benefits of cooperation. Evolutionarily stable inspection can also exist for two equal fish behaving non-cooperatively such that each acts to make a best reply (in terms of its personal fitness) to its opponent's strategy. Non-cooperative pairs should also inspect at equal distance from the pike. Unlike the 'single fish' and 'cooperative' optima, which are unique inspection distances, there exists a range of ESS inspection distances. If either fish chooses to move to any point in this zone, the best reply of its opponent is to match it (move exactly alongside). Unilateral forward movement in the 'match zone' may not be possible without some cooperation, but if the pair can 'agree' to move forward synchronously, maintaining equal distance, inspection will occur at the nearest point in this zone to the predator. This 'near threshold' is an ESS and is closer to the predator than the single fish optimum: pairs behaving almost selfishly can thus attain greater benefits from inspection by the protection gained from Hamilton's dilution effect. That pairs should inspect more closely than single fish conforms with
Vehicle Shield Optimization and Risk Assessment for Future Human Space Missions
NASA Technical Reports Server (NTRS)
Nounu, Hatem N.; Kim, Myung-Hee; Cucinotta, Francis A.
2011-01-01
As the focus of future human space missions shifts to destinations beyond low Earth orbit, such as Near Earth Objects (NEOs), the moon, or Mars, risks associated with extended stays in a hostile radiation environment need to be well understood and assessed. Since future spacecraft designs and shapes are evolving, continuous assessments of shielding and radiation risks are needed. In this study, we use a predictive software capability that calculates risks to humans inside a spacecraft prototype that builds on previous designs. The software uses the CAD software Pro/Engineer and the Fishbowl tool kit to quantify the radiation shielding provided by the spacecraft geometry by calculating the areal density seen at a certain point (dose point) inside the spacecraft. Shielding results are used by NASA-developed software, BRYNTRN, to quantify organ doses received by a human body located in the vehicle in case of a solar particle event (SPE) during such prolonged space missions. Organ doses are used to quantify risks to astronauts' health and life using the NASA Space Cancer Model. The software can also locate shielding weak points (hotspots) on the spacecraft's outer surface. This capability is used to reinforce weak areas in the design. Results of shielding optimization and risk calculation for an exploration vehicle design for missions of 6 months and 30 months are provided in this study. The vehicle capsule is made of an aluminum shell that includes the main cabin and airlock. The capsule contains 5 sets of racks that surround working and living areas. A water shelter is provided in the main cabin of the vehicle to enhance shielding in case of an SPE.
Density-Based Penalty Parameter Optimization on C-SVM
Liu, Yun; Lian, Jie; Bartolacci, Michael R.; Zeng, Qing-An
2014-01-01
The support vector machine (SVM) is one of the most widely used approaches for data classification and regression. SVM achieves the largest distance between the positive and negative support vectors, which neglects the remote instances away from the SVM interface. In order to avoid a position change of the SVM interface as the result of an error system outlier, C-SVM was implemented to decrease the influences of the system's outliers. Traditional C-SVM holds a uniform parameter C for both positive and negative instances; however, according to the different number proportions and the data distribution, positive and negative instances should be set with different weights for the penalty parameter of the error terms. Therefore, in this paper, we propose density-based penalty parameter optimization of C-SVM. The experimental results indicated that our proposed algorithm has outstanding performance with respect to both precision and recall. PMID:25114978
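The core idea of different penalty parameters for positive and negative instances is exposed in scikit-learn's SVC via `class_weight`, which scales C per class. The sketch below illustrates that mechanism on imbalanced toy data; it does not implement the paper's density-based weight selection, and the data and the weight value 10.0 are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Imbalanced toy data: 200 negatives, 20 positives, overlapping clusters.
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (20, 2))])
y = np.array([0] * 200 + [1] * 20)

# Uniform C versus class-dependent penalties C_+ != C_-.
uniform = SVC(kernel="rbf", C=1.0).fit(X, y)
weighted = SVC(kernel="rbf", C=1.0, class_weight={0: 1.0, 1: 10.0}).fit(X, y)

recall = lambda m: (m.predict(X)[y == 1] == 1).mean()
print(recall(uniform), recall(weighted))  # compare minority-class recall
```

A density-based scheme like the paper's would choose those per-class weights automatically from the number proportions and data distribution instead of fixing them by hand.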
The overlay performance optimization based on overlay manager system
NASA Astrophysics Data System (ADS)
Sun, G.; Zhu, J.; Li, S. X.; Mao, F. L.; Duan, L. F.
2012-03-01
Based on in-line metrology sampling and modeling, the Advanced Process Control (APC) system has been widely used to control the combined effects of process errors. With the shrinking of overlay budgets, an automated, optimized overlay management system has become necessary. To further improve the overlay performance of the SMEE SSA600/10A exposure system, the overlay manager system (OMS) is introduced. The Unilith software package developed by SMEE and included in the OMS is used for the decomposition and analysis of sampled data. Several kinds of correction methods integrated in the OMS have been designed and have demonstrated effective results in automated overlay control. To balance overlay performance and metrology time, an exponential weighting method for sampling is also considered.
k-Nearest neighbors optimization-based outlier removal.
Yosipof, Abraham; Senderowitz, Hanoch
2015-03-30
Datasets of molecular compounds often contain outliers, that is, compounds which are different from the rest of the dataset. Outliers, while often interesting, may affect data interpretation, model generation, and decision making, and therefore should be removed from the dataset prior to modeling efforts. Here, we describe a new method for the iterative identification and removal of outliers based on a k-nearest neighbors optimization algorithm. We demonstrate for three different datasets that the removal of outliers using the new algorithm provides filtered datasets which are better than those provided by four alternative outlier removal procedures as well as by random compound removal in two important aspects: (1) they better maintain the diversity of the parent datasets; (2) they give rise to quantitative structure-activity relationship (QSAR) models with much better prediction statistics. The new algorithm is, therefore, suitable for the pretreatment of datasets prior to QSAR modeling.
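The iterative k-NN idea can be sketched as follows: repeatedly drop the point with the largest mean distance to its k nearest neighbors. This is a simplified stand-in for the published algorithm (which optimizes its own criterion); the data, k, and stopping rule are illustrative.

```python
import numpy as np

def knn_outlier_removal(X, k=3, n_remove=2):
    """Iteratively remove the point with the largest mean distance to its
    k nearest neighbors (a simplified sketch, not the published method)."""
    X = np.asarray(X, dtype=float)
    keep = list(range(len(X)))
    removed = []
    for _ in range(n_remove):
        # Pairwise distances among the points still kept.
        D = np.linalg.norm(X[keep][:, None] - X[keep][None, :], axis=-1)
        np.fill_diagonal(D, np.inf)
        knn_mean = np.sort(D, axis=1)[:, :k].mean(axis=1)
        worst = int(np.argmax(knn_mean))
        removed.append(keep.pop(worst))
    return keep, removed

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), [[8, 8], [-9, 7]]])  # 2 outliers
keep, removed = knn_outlier_removal(X, k=3, n_remove=2)
print(sorted(removed))  # the two injected outliers, indices 20 and 21
```

In a QSAR setting X would be the compounds' descriptor matrix, and the stopping point would be chosen by the optimization criterion rather than fixed in advance.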
Task scheduling based on ant colony optimization in cloud environment
NASA Astrophysics Data System (ADS)
Guo, Qiang
2017-04-01
In order to optimize the task scheduling strategy in a cloud environment, we propose a cloud computing task scheduling algorithm based on the ant colony algorithm. The main goal of this algorithm is to minimize the makespan and the total cost of the tasks, while making the system load more balanced. In this paper, we establish the objective functions for the makespan and costs of the tasks and define the load balance function. Meanwhile, we also improve the initialization of the pheromone, the heuristic function, and the pheromone update method in the ant colony algorithm. Then, some experiments were carried out on the CloudSim platform, and the results were compared with the ACO and Min-Min algorithms. The results show that the algorithm is more efficient than the other two algorithms in makespan, costs, and system load balancing.
Optimization-based interactive segmentation interface for multiregion problems.
Baxter, John S H; Rajchl, Martin; Peters, Terry M; Chen, Elvis C S
2016-04-01
Interactive segmentation is becoming of increasing interest to the medical imaging community in that it combines the positive aspects of both manual and automated segmentation. However, general-purpose tools have been lacking in terms of segmenting multiple regions simultaneously with a high degree of coupling between groups of labels. Hierarchical max-flow segmentation has taken advantage of this coupling for individual applications, but until recently, these algorithms were constrained to a particular hierarchy and could not be considered general-purpose. In a generalized form, the hierarchy for any given segmentation problem is specified in run-time, allowing different hierarchies to be quickly explored. We present an interactive segmentation interface, which uses generalized hierarchical max-flow for optimization-based multiregion segmentation guided by user-defined seeds. Applications in cardiac and neonatal brain segmentation are given as example applications of its generality.
Plasmonic nanoantenna design and fabrication based on evolutionary optimization.
Feichtner, Thorsten; Selig, Oleg; Hecht, Bert
2017-05-15
Nanoantennas can tailor light-matter interaction for optical communication, sensing, and spectroscopy. Their design is inspired by radio-frequency rules which partly break down at optical frequencies. Here we find unexpected nanoantenna designs exhibiting strong light localization and enhancement by using a general and scalable evolutionary algorithm based on FDTD simulations that also accounts for geometrical fabrication constraints. The resulting nanoantennas are "printed" directly by focused-ion beam milling and their fitness ranking is validated experimentally by two-photon photoluminescence. We find the best antennas' operation principle deviating from that of classical radio wave-inspired designs. Our work sets the stage for a widespread application of evolutionary optimization in nano photonics.
Efficacy of Code Optimization on Cache-Based Processors
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
In this paper a number of techniques for improving the cache performance of a representative piece of numerical software are presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses, but they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.
Patch-based near-optimal image denoising.
Chatterjee, Priyam; Milanfar, Peyman
2012-04-01
In this paper, we propose a denoising method motivated by our previous analysis of the performance bounds for image denoising. Insights from that study are used here to derive a high-performance practical denoising algorithm. We propose a patch-based Wiener filter that exploits patch redundancy for image denoising. Our framework uses both geometrically and photometrically similar patches to estimate the different filter parameters. We describe how these parameters can be accurately estimated directly from the input noisy image. Our denoising approach, designed for near-optimal performance (in the mean-squared error sense), has a sound statistical foundation that is analyzed in detail. The performance of our approach is experimentally verified on a variety of images and noise levels. The results presented here demonstrate that our proposed method is on par or exceeding the current state of the art, both visually and quantitatively.
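The classical Wiener filter that this patch-based method builds on is available directly in SciPy. The sketch below shows only that baseline on synthetic data (not the paper's patch-redundancy filter or its parameter estimation); the test image and noise level are made up.

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(0)
clean = np.outer(np.hanning(64), np.hanning(64))   # smooth test image
noisy = clean + rng.normal(0, 0.05, clean.shape)   # additive Gaussian noise

# Local adaptive Wiener filtering over a 5x5 neighborhood.
denoised = wiener(noisy, mysize=5)

mse = lambda img: np.mean((img - clean) ** 2)
print(mse(noisy), mse(denoised))  # filtering reduces the MSE
```

The paper's contribution is to estimate the Wiener filter parameters from groups of geometrically and photometrically similar patches rather than from a fixed local window, which is what pushes performance toward the theoretical denoising bounds.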
An optimization-based iterative algorithm for recovering fluorophore location
NASA Astrophysics Data System (ADS)
Yi, Huangjian; Peng, Jinye; Jin, Chen; He, Xiaowei
2015-10-01
Fluorescence molecular tomography (FMT) is a non-invasive technique that allows three-dimensional visualization of fluorophores in vivo in small animals. In practical applications of FMT, however, there are challenges in the image reconstruction since it is a highly ill-posed problem due to the diffusive behaviour of light transportation in tissue and the limited measurement data. In this paper, we present an iterative algorithm based on an optimization problem for three-dimensional reconstruction of a fluorescent target. This method alternates the weighted algebraic reconstruction technique (WART) with the steepest descent method (SDM) for image reconstruction. Numerical simulation experiments and a physical phantom experiment were performed to validate our method. Furthermore, compared to the conjugate gradient method, the proposed method provides better three-dimensional (3D) localization of the fluorescent target.
Optimizing timing performance of silicon photomultiplier-based scintillation detectors
Yeom, Jung Yeol; Vinke, Ruud
2013-01-01
Precise timing resolution is crucial for applications requiring photon time-of-flight (ToF) information such as ToF positron emission tomography (PET). Silicon photomultipliers (SiPM) for PET, with their high output capacitance, are known to require custom preamplifiers to optimize timing performance. In this paper, we describe simple alternative front-end electronics based on a commercial low-noise RF preamplifier and methods that have been implemented to achieve excellent timing resolution. Two radiation detectors with L(Y)SO scintillators coupled to Hamamatsu SiPMs (MPPC S10362–33-050C) and front-end electronics based on an RF amplifier (MAR-3SM+), typically used for wireless applications that require minimal additional circuitry, have been fabricated. These detectors were used to detect annihilation photons from a Ge-68 source and the output signals were subsequently digitized by a high speed oscilloscope for offline processing. A coincident resolving time (CRT) of 147 ± 3 ps FWHM and 186 ± 3 ps FWHM with 3 × 3 × 5 mm3 and with 3 × 3 × 20 mm3 LYSO crystal elements were measured, respectively. With smaller 2 × 2 × 3 mm3 LSO crystals, a CRT of 125 ± 2 ps FWHM was achieved with slight improvement to 121 ± 3 ps at a lower temperature (15°C). Finally, with the 20 mm length crystals, a degradation of timing resolution was observed for annihilation photon interactions that occur close to the photosensor compared to shallow depth-of-interaction (DOI). We conclude that commercial RF amplifiers optimized for noise, besides their ease of use, can produce excellent timing resolution comparable to best reported values acquired with custom readout electronics. On the other hand, as timing performance degrades with increasing photon DOI, a head-on detector configuration will produce better CRT than a side-irradiated setup for longer crystals. PMID:23369872
Fayek, H M; Elamvazuthi, I; Perumal, N; Venkatesh, B
2014-09-01
A computationally efficient systematic procedure to design an Optimal Type-2 Fuzzy Logic Controller (OT2FLC) is proposed. The main scheme is to optimize the gains of the controller using Particle Swarm Optimization (PSO), then optimize only two parameters per type-2 membership function using a Genetic Algorithm (GA). The proposed OT2FLC was implemented in real time to control the position of a DC servomotor, which is part of a robotic arm. The performance judgments were carried out based on the Integral Absolute Error (IAE), as well as the computational cost. Various type-2 defuzzification methods were investigated in real time. A comparative analysis with an Optimal Type-1 Fuzzy Logic Controller (OT1FLC) and a PI controller demonstrated OT2FLC's superiority, which is evident in handling uncertainty and imprecision induced in the system by means of noise and disturbances.
A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network
NASA Astrophysics Data System (ADS)
Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.
A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low-Earth-orbit objects relies mainly on ground-based radar; owing to the limited capability of existing radar facilities, a large number of ground-based radars will need to be built in the next few years to meet current space surveillance demands. How to optimize the embattling (station layout) of a ground-based radar surveillance network is therefore a problem that needs to be solved. The traditional method for embattling optimization simulates detection for all possible stations using cataloged data, makes a comprehensive comparative analysis of the simulation results with a combinatorial method, and then selects an optimal result as the station layout scheme. This method is time consuming for a single simulation and computationally complex for the combinatorial analysis; as the number of stations increases, the complexity of the optimization problem grows exponentially and cannot be handled by the traditional method. In this paper, the target detection procedure was simplified. First, the space coverage of ground-based radar was simplified, and a space-coverage projection model of radar facilities at different orbit altitudes was built; then a simplified model of objects crossing the radar coverage was established according to the characteristics of space-object orbital motion. After these two simplifications, the computational complexity of target detection was greatly reduced, and simulation results confirmed the correctness of the simplified models. In addition, the detection areas of the ground-based radar network can be easily computed with the
Pursell, M.J.; Sehnan, C.; Naen, M.F.
1999-11-01
Successful and cost-effective corrosion risk assessment depends on sensible use of prediction methods and a good understanding of process factors. Both are discussed with examples. Practical semi-probabilistic risk-based inspection planning methods that measure risk directly as cost and personnel hazard are compared with traditional methods and discussed.
12 CFR 390.466 - Risk-based capital credit risk-weight categories.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 5 2013-01-01 2013-01-01 false Risk-based capital credit risk-weight categories. 390.466 Section 390.466 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION REGULATIONS AND... that the FDIC determines have the same risk characteristics as foreclosed real estate by the State...
12 CFR 390.466 - Risk-based capital credit risk-weight categories.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 5 2014-01-01 2014-01-01 false Risk-based capital credit risk-weight categories. 390.466 Section 390.466 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION REGULATIONS AND... that the FDIC determines have the same risk characteristics as foreclosed real estate by the State...
12 CFR 167.6 - Risk-based capital credit risk-weight categories.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 1 2014-01-01 2014-01-01 false Risk-based capital credit risk-weight categories. 167.6 Section 167.6 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY...) Equity investments that the OCC determines have the same risk characteristics as foreclosed real estate...
Swarm Optimization-Based Magnetometer Calibration for Personal Handheld Devices
Ali, Abdelrahman; Siddharth, Siddharth; Syed, Zainab; El-Sheimy, Naser
2012-01-01
Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes, and a processor that generates position and orientation solutions by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low-cost sensors are usually corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high-accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO)-based calibration algorithm is presented to estimate the values of the bias and scale factor of low-cost magnetometers. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. Furthermore, the proposed algorithm can help in the development of Pedestrian Navigation Devices (PNDs) when combined with inertial sensors and GPS/Wi-Fi for indoor navigation and Location Based Services (LBS) applications.
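A PSO-based bias/scale calibration can be sketched on synthetic data: calibrated measurements of a constant-magnitude field should lie on the unit circle, so the swarm minimizes the deviation from that constraint. The 2-axis setup, unit-field normalization, and all PSO settings below are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-axis magnetometer data: a unit-magnitude field corrupted
# by a per-axis bias and scale factor (ground truth shown for checking).
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
field = np.c_[np.cos(theta), np.sin(theta)]
bias_true = np.array([0.3, -0.2])
scale_true = np.array([1.2, 0.8])
meas = field * scale_true + bias_true

def cost(p):
    # Calibrated samples should lie on the unit circle.
    b, s = p[:2], p[2:]
    cal = (meas - b) / s
    return np.mean((np.linalg.norm(cal, axis=1) - 1.0) ** 2)

# Minimal global-best PSO over [bias_x, bias_y, scale_x, scale_y].
lo = np.array([-1.0, -1.0, 0.5, 0.5])
hi = np.array([1.0, 1.0, 2.0, 2.0])
n, iters = 40, 150
x = rng.uniform(lo, hi, (n, 4))
v = np.zeros((n, 4))
pbest, pcost = x.copy(), np.array([cost(p) for p in x])
g = pbest[np.argmin(pcost)].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, 4)), rng.random((n, 4))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, lo, hi)
    c = np.array([cost(p) for p in x])
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]
    g = pbest[np.argmin(pcost)].copy()

print(g)  # estimated [bias_x, bias_y, scale_x, scale_y]
```

No error model or gradient is needed, which is the advantage the abstract highlights; a real 3-axis calibration would fit an ellipsoid to the measurements in the same way.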
Parallel performance optimizations on unstructured mesh-based simulations
Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; ...
2015-06-01
This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh, and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
Development and optimization of biofilm based algal cultivation
NASA Astrophysics Data System (ADS)
Gross, Martin Anthony
This dissertation describes research done on biofilm-based algal cultivation systems. The system that was developed in this work is the revolving algal biofilm (RAB) cultivation system. A raceway-retrofit and a trough-based pilot-scale RAB system were developed and investigated. Each of the systems significantly outperformed a control raceway pond in side-by-side tests. Furthermore, the RAB system was found to require significantly less water than the raceway-pond-based cultivation system. Lastly, a TEA/LCA analysis was conducted to evaluate the economics and life cycle of the RAB cultivation system in comparison to a raceway pond. It was found that the RAB system was able to grow algae at a lower cost and was shown to be profitable at a smaller scale than the raceway-pond style of algal cultivation. Additionally, the RAB system was projected to have lower GHG emissions and better energy and water use efficiencies in comparison to a raceway pond system. Furthermore, fundamental research was conducted to identify the optimal material for algae to attach to. A total of 28 materials with a smooth surface were tested for initial cell colonization, and it was found that the tetradecane contact angle of the materials had a good correlation with cell attachment. The effects of surface texture were evaluated using mesh materials (nylon, polypropylene, high-density polyethylene, polyester, aluminum, and stainless steel) with openings ranging from 0.05-6.40 mm. It was found that both surface texture and material composition influence algal attachment.
Biological Based Risk Assessment for Space Exploration
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.
2011-01-01
Exposures from galactic cosmic rays (GCR) - made up of high-energy protons and high-energy and charge (HZE) nuclei, and solar particle events (SPEs) - comprised largely of low- to medium-energy protons are the primary health concern for astronauts for long-term space missions. Experimental studies have shown that HZE nuclei produce both qualitative and quantitative differences in biological effects compared to terrestrial radiation, making risk assessments for cancer and degenerative risks, such as central nervous system effects and heart disease, highly uncertain. The goal for space radiation protection at NASA is to be able to reduce the uncertainties in risk assessments for Mars exploration to be small enough to ensure acceptable levels of risks are not exceeded and to adequately assess the efficacy of mitigation measures such as shielding or biological countermeasures. We review the recent BEIR VII and UNSCEAR-2006 models of cancer risks and their uncertainties. These models are shown to have an inherent 2-fold uncertainty as defined by ratio of the 95% percent confidence level to the mean projection, even before radiation quality is considered. In order to overcome the uncertainties in these models, new approaches to risk assessment are warranted. We consider new computational biology approaches to modeling cancer risks. A basic program of research that includes stochastic descriptions of the physics and chemistry of radiation tracks and biochemistry of metabolic pathways, to emerging biological understanding of cellular and tissue modifications leading to cancer is described.
Zhou, Ping Ping; Liu, Zhao Ping; Zhang, Lei; Liu, Ai Dong; Song, Yan; Yong, Ling; Li, Ning
2014-11-01
The method has been developed to accurately identify the magnitude of health risks and provide scientific evidence for implementation of risk management in food safety. It combines two parameters, the consequence and the likelihood of adverse effects, based on a risk matrix. Score definitions and classifications for the consequence and the likelihood of adverse effects are proposed. The risk score, identified at the intersection of consequence and likelihood in the risk matrix, represents the health risk level with different colors: 'low', 'medium', 'high'. Its use in an actual case is shown.
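A risk-matrix lookup of this kind can be sketched in a few lines of Python. The 1-5 scoring, the multiplicative combination, and the band thresholds below are illustrative assumptions, not the score definitions calibrated in the paper.

```python
# Hypothetical sketch of a risk-matrix lookup: consequence and likelihood
# are each scored 1-5, and their intersection maps to a risk level.
# The score bands below are illustrative, not the paper's calibration.

def risk_level(consequence: int, likelihood: int) -> str:
    """Map a (consequence, likelihood) pair to 'low', 'medium', or 'high'."""
    if not (1 <= consequence <= 5 and 1 <= likelihood <= 5):
        raise ValueError("scores must be in 1..5")
    score = consequence * likelihood      # cell of the 5x5 risk matrix
    if score <= 6:
        return "low"
    if score <= 14:
        return "medium"
    return "high"

print(risk_level(2, 2))   # low
print(risk_level(3, 4))   # medium
print(risk_level(5, 5))   # high
```

In practice the matrix cells would be enumerated explicitly rather than derived from a product, so that individual cells can be re-classified during calibration.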
Dhat, Shalaka; Pund, Swati; Kokare, Chandrakant; Sharma, Pankaj; Shrivastava, Birendra
2017-01-01
Rapidly evolving technical and regulatory landscapes of pharmaceutical product development necessitate risk management with application of multivariate analysis using Process Analytical Technology (PAT) and Quality by Design (QbD). The poorly soluble, high-dose drug satranidazole was optimally nanoprecipitated (SAT-NP) employing principles of Formulation by Design (FbD). The potential risk factors influencing the critical quality attributes (CQAs) of SAT-NP were identified using an Ishikawa diagram. A Plackett-Burman screening design was adopted to screen the eight critical formulation and process parameters influencing the mean particle size, zeta potential, and dissolution efficiency at 30 min in pH 7.4 dissolution medium. Pareto charts (individual and cumulative) revealed the three most critical factors influencing the CQAs of SAT-NP, viz. aqueous stabilizer (polyvinyl alcohol), release modifier (Eudragit® S 100), and volume of aqueous phase. The levels of these three critical formulation attributes were optimized by FbD within the established design space to minimize mean particle size and polydispersity index and maximize encapsulation efficiency of SAT-NP. Lenth's and Bayesian analysis along with mathematical modeling of results allowed identification and quantification of critical formulation attributes significantly active on the selected CQAs. The optimized SAT-NP exhibited a mean particle size of 216 nm, polydispersity index of 0.250, zeta potential of -3.75 mV, and encapsulation efficiency of 78.3%. The product was lyophilized using mannitol to form a readily redispersible powder. X-ray diffraction analysis confirmed the conversion of crystalline SAT to amorphous form. In vitro release of SAT-NP in gradually pH-changing media showed <20% release in pH 1.2 and pH 6.8 over 5 h, while complete release (>95%) occurred in pH 7.4 within the next 3 h, indicative of burst release after a lag time. This investigation demonstrated effective application of risk management and QbD tools in developing site-specific release
Risk-based maintenance--techniques and applications.
Arunraj, N S; Maiti, J
2007-04-11
Plant and equipment, however well designed, will not remain safe or reliable if they are not maintained. The general objective of the maintenance process is to make use of the knowledge of failures and accidents to achieve the highest possible safety at the lowest possible cost. The concept of risk-based maintenance was developed to inspect high-risk components with greater frequency and thoroughness, and to maintain them more rigorously, so as to achieve tolerable risk criteria. Risk-based maintenance methodology provides a tool for maintenance planning and decision making to reduce the probability of equipment failure and the consequences of failure. In this paper, risk analysis and risk-based maintenance methodologies were identified and classified into suitable classes. The factors affecting the quality of risk analysis were identified and analyzed. The applications, input data, and output data were studied to understand their functioning and efficiency. The review showed that there is no unique way to perform risk analysis and risk-based maintenance. The use of suitable techniques and methodologies, careful investigation during the risk analysis phase, and detailed and structured results are necessary to make proper risk-based maintenance decisions.
Liang, Jie; Zhong, Minzhou; Zeng, Guangming; Chen, Gaojie; Hua, Shanshan; Li, Xiaodong; Yuan, Yujie; Wu, Haipeng; Gao, Xiang
2017-02-01
Land-use change has a direct impact on ecosystem services and alters ecosystem services values (ESVs). Ecosystem services analysis is beneficial for land management and decisions. However, the application of ESVs in land use decisions is scarce. In this paper, a method integrating ESVs to balance future ecosystem-service benefit and risk is developed to optimize investment in land for ecological conservation in land use planning. Using ecological conservation in land use planning in Changsha as an example, ESVs are regarded as the expected ecosystem-service benefit, and uncertainty of land use change is regarded as risk. This method can optimize the allocation of investment in land to improve ecological benefit. The results show that investment should favor Liuyang City to obtain higher benefit, but should also be shifted from Liuyang City to other regions to reduce risk. In practice, lower and upper limits for weight distribution, which affect the optimal outcome and the selection of investment allocation, should be set. This method can reveal the optimal spatial allocation of investment to maximize the expected ecosystem-service benefit at a given level of risk, or to minimize risk at a given level of expected ecosystem-service benefit. Our optimal analyses highlight tradeoffs between future ecosystem-service benefit and uncertainty of land use change in land use decisions. Copyright © 2016 Elsevier B.V. All rights reserved.
Air Quality Monitoring: Risk-Based Choices
NASA Technical Reports Server (NTRS)
James, John T.
2009-01-01
Air monitoring is secondary to rigid control of risks to air quality, and must target the credible residual risks. Constraints on monitoring devices are severe: monitoring must transition from archival to real-time, on-board measurement, and must provide data to the crew in a way that lets them interpret the findings. Dust management and monitoring may be a major concern for exploration-class missions.
Modified risk graph method using fuzzy rule-based approach.
Nait-Said, R; Zidani, F; Ouzraoui, N
2009-05-30
The risk graph is one of the most popular methods used to determine the safety integrity level (SIL) for safety instrumented functions. However, the conventional risk graph, as described in the IEC 61508 standard, is subjective and suffers from an interpretation problem of risk parameters. Thus, it can lead to inconsistent outcomes that may result in conservative SILs. To overcome this difficulty, a modified risk graph using a fuzzy rule-based system is proposed. This novel version of the risk graph uses fuzzy scales to assess risk parameters, and calibration may be made by varying risk parameter values. Furthermore, the outcomes, which are numerical values of the risk reduction factor (the inverse of the probability of failure on demand), can be compared directly with those given by quantitative and semi-quantitative methods such as fault tree analysis (FTA), quantitative risk assessment (QRA), and layers of protection analysis (LOPA).
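As an illustration of the fuzzy-scale idea, the sketch below assesses one risk parameter with triangular membership functions and defuzzifies the recommended risk reduction factor (RRF) by a membership-weighted average. The membership breakpoints and the per-rule RRF values are invented for illustration; they are not the calibration from the paper.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# A consequence score of 0.6 is partly 'moderate' and partly 'severe'
x = 0.6
mu = {
    "moderate": tri(x, 0.25, 0.50, 0.75),   # membership 0.6
    "severe":   tri(x, 0.50, 0.75, 1.00),   # membership 0.4
}

# Each fired rule recommends a risk reduction factor; defuzzify by a
# membership-weighted average instead of committing to a single branch.
rrf_by_rule = {"moderate": 100.0, "severe": 1000.0}
rrf = sum(mu[k] * rrf_by_rule[k] for k in mu) / sum(mu.values())
print(round(rrf))   # 460
```

The smooth interpolation between branches is what removes the all-or-nothing jumps of the conventional risk graph.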
Kovács, Kristóf; Antal, István; Stampf, György; Klebovich, Imre; Ludányi, Krisztina
2010-03-01
An intravenous solution is a dosage form intended for administration into the bloodstream. This route is the most rapid and the most bioavailable method of getting drugs into the systemic circulation, and therefore it is also the most liable to cause adverse effects. In order to reduce the possibility of side effects and to ensure an adequate clinical dosage of the formulation, the initially formulated composition should be optimized. It is also important that the composition retain its therapeutic effectiveness and safety throughout the shelf-life of the product. This paper focuses on the optimization and stability testing of a parenteral solution containing miconazole and ketoconazole, solubilized with a ternary solvent system, as model drugs. Optimization of the solvent system was performed by assessing the risk/benefit ratio of the composition and its properties upon dilution. Stability tests were conducted based on the EMEA (European Medicines Agency) "guideline on stability testing: stability testing of existing active substances and related finished products". Experiments show that both the amount of co-solvent and of surface-active agent in the solvent system could be substantially reduced while still maintaining adequate solubilizing power. It is also shown that the choice of container affects the stability of the compositions. It was concluded that by assessing the risk/benefit ratio of solubilizing power versus toxicity, the concentration of excipients could be considerably decreased while still showing a powerful solubilizing effect. It was also shown that a pharmaceutically acceptable shelf-life could be assigned to the composition, indicating good long-term stability.
Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang
2016-01-01
For ensemble learning, how to select and combine the candidate classifiers are two key issues which influence the performance of the ensemble system dramatically. Random vector functional link networks (RVFL) without direct input-to-output links is one of suitable base-classifiers for ensemble systems because of its fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFL. As for using ARPSO to select the optimal base RVFL, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining RVFL, the ensemble weights corresponding to the base RVFL are initialized by the minimum norm least-square method and then further optimized by ARPSO. Finally, a few redundant RVFL is pruned, and thus the more compact ensemble of RVFL is obtained. Moreover, in this paper, theoretical analysis and justification on how to prune the base classifiers on classification problem is presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms that built by some single optimization methods. Experiment results on function approximation and classification problems verify that the proposed method could improve its convergence accuracy as well as reduce the complexity of the ensemble system.
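The minimum-norm least-squares initialization of the ensemble weights can be sketched with NumPy's pseudo-inverse. The base-learner outputs and targets below are synthetic stand-ins; in the paper, the matrix would hold candidate-RVFL predictions on validation data, with ARPSO further refining the weights afterward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for base-learner outputs: 5 candidate RVFLs
# evaluated on 50 validation samples (one column per base learner).
P = rng.normal(size=(50, 5))
w_true = np.array([0.40, 0.10, 0.30, 0.15, 0.05])
t = P @ w_true                      # validation targets (noiseless toy)

# Minimum-norm least-squares ensemble weights via the pseudo-inverse
w0 = np.linalg.pinv(P) @ t

print(np.allclose(w0, w_true))      # True in this noiseless example
```

With noisy targets the recovery is no longer exact, which is precisely why a second optimization pass (ARPSO in the paper) over the weights is worthwhile.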
An Optimization-Based Approach to Injector Element Design
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar; Turner, Jim (Technical Monitor)
2000-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for gaseous oxygen/gaseous hydrogen (GO2/GH2) injector elements. A swirl coaxial element and an unlike impinging element (a fuel-oxidizer-fuel triplet) are used to facilitate the study. The elements are optimized in terms of design variables such as fuel pressure drop, ΔP_f, oxidizer pressure drop, ΔP_o, combustor length, L_comb, and full cone swirl angle, θ (for the swirl element), or impingement half-angle, α (for the impinging element), at a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q_w, injector heat flux, Q_inj, relative combustor weight, W_rel, and relative injector cost, C_rel, are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for both element types. Method i is then used to generate response surfaces for each dependent variable for both types of elements. Desirability functions based on dependent-variable constraints are created and used to facilitate development of composite response surfaces representing the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail for each element type. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after the addition of each variable, and the effect each variable has on the element design is illustrated. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues
Cloud-based large-scale air traffic flow optimization
NASA Astrophysics Data System (ADS)
Cao, Yi
The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. First, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is taken to be the mode of the distribution of historical flight records, estimated by Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Second, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem such that the subproblems are solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to linear runtime increase as the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent execution via multithreaded programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model
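The mode-of-distribution step can be sketched as a Gaussian kernel density estimate evaluated on a grid, taking the argmax. The flight-time samples and bandwidth below are invented for illustration.

```python
import numpy as np

# Hypothetical traversal times (minutes) for one origin-destination route
times = np.array([52.0, 54.0, 55.0, 55.0, 56.0, 57.0, 60.0, 71.0, 90.0])
h = 2.0                                  # Gaussian kernel bandwidth

# Evaluate the (unnormalized) Gaussian KDE on a grid and take its argmax
grid = np.linspace(times.min(), times.max(), 1000)
density = np.exp(-0.5 * ((grid[:, None] - times[None, :]) / h) ** 2).sum(axis=1)
mode = grid[np.argmax(density)]

# The mode tracks the dense cluster near 55 min rather than the 90-min
# outlier, which makes it a robust "typical" traversal time for the LTM.
print(round(float(mode)))
```

Unlike the mean, the KDE mode is insensitive to a few long-delayed flights, which is why the model uses it as the link traversal time.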
Efficacy of Code Optimization on Cache-based Processors
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. It can be argued that although some of the important
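The unit-stride point can be illustrated with NumPy's row-major (C-order) layout: iterating the last index fastest walks memory contiguously, one cache line after another, while the opposite loop order hops by a full row length per access. The array shape and loop bounds are arbitrary, and the toy only demonstrates the two access patterns; the payoff is in cache traffic, not in the computed result.

```python
import numpy as np

a = np.arange(12.0).reshape(3, 4)   # C-order: each row is contiguous in memory

# Unit stride: innermost loop over the last index (contiguous reads)
row_major = sum(a[i, j] for i in range(3) for j in range(4))

# Stride of 4 elements: innermost loop over the first index
col_major = sum(a[i, j] for j in range(4) for i in range(3))

print(row_major == col_major)       # True: same sum, different memory traffic
```

On large arrays the unit-stride traversal reads each cache line once, whereas the strided traversal can force the same line to be fetched repeatedly once the array exceeds cache capacity.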
Optimal control of switched linear systems based on Migrant Particle Swarm Optimization algorithm
NASA Astrophysics Data System (ADS)
Xie, Fuqiang; Wang, Yongji; Zheng, Zongzhun; Li, Chuanfeng
2009-10-01
The optimal control problem for switched linear systems with internally forced switching has more constraints than that with externally forced switching. Heavy computation and slow convergence in solving this problem are major obstacles. In this paper we describe a new approach for solving this problem, called Migrant Particle Swarm Optimization (Migrant PSO). Imitating the behavior of a flock of migrant birds, the Migrant PSO applies naturally to both continuous and discrete spaces, in which a deterministic optimization algorithm and a stochastic search method are combined. The efficacy of the proposed algorithm is illustrated via a numerical example.
Nesbitt, Shawna D
2007-11-01
Treating hypertension reduces the rates of myocardial infarction, stroke, and renal disease; however, clinical trial experience suggests that monotherapy is not likely to be successful for achieving goal blood pressure (BP) levels in many hypertensive patients. In multiple recent clinical trials including various subsets of hypertensive patients, the achievement of BP goal has typically required the combination of 2 or more medications, particularly in patients with BP levels>160/100 mm Hg. When initiating combination therapy for hypertension, careful consideration must be given to the choice of medication. Clinical trial evidence has shown the efficacy of various combinations of angiotensin-converting enzyme inhibitors, angiotensin II receptor blockers, calcium channel blockers, and diuretics in reducing BP and cardiovascular risk. Ongoing trials should provide additional guidance on the optimal choice of combination regimens in specific clinical settings.
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.
1993-01-01
Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high risk system components. We present experimental results obtained by classifying Ada components into two classes: is or is not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error making process.
Particle swarm optimization algorithm based low cost magnetometer calibration
NASA Astrophysics Data System (ADS)
Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.
2011-12-01
Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes, and a microprocessor, and provide inertial digital data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on the Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low-cost sensors are corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high-accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the values of the bias and scale factor of a low-cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The estimated bias and scale factor errors from the proposed algorithm improve the heading accuracy, and the results are statistically significant. The approach can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
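The calibration idea can be sketched with a plain global-best PSO fitting a bias vector so that the calibrated readings have the known local field magnitude. This toy fixes the scale factors at 1 and uses synthetic data; the paper's algorithm estimates both bias and scale from real low-cost sensor measurements, and the swarm parameters below are generic textbook values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
FIELD = 50.0                               # assumed local field magnitude (uT)
true_bias = np.array([10.0, -5.0, 3.0])

# Synthetic magnetometer samples: points on a sphere shifted by the bias
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
raw = FIELD * dirs + true_bias

def cost(b):
    """Mean squared deviation of calibrated norms from the field magnitude."""
    return float(np.mean((np.linalg.norm(raw - b, axis=1) - FIELD) ** 2))

# Plain global-best PSO over the three bias components
n_particles, n_iters = 30, 150
x = rng.uniform(-20.0, 20.0, size=(n_particles, 3))
x[0] = 0.0                                 # include an uncalibrated guess
v = np.zeros_like(x)
pbest = x.copy()
pcost = np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pcost)]
for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    c = np.array([cost(p) for p in x])
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]
    gbest = pbest[np.argmin(pcost)]

print(cost(gbest) < cost(np.zeros(3)))     # swarm improves on no calibration
```

The cost function needs no gradient or explicit error model, which mirrors the advantage claimed for the PSO approach in the abstract.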
Task-based optimization of image reconstruction in breast CT
NASA Astrophysics Data System (ADS)
Sanchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan
2014-03-01
We demonstrate a task-based assessment of image quality in dedicated breast CT in order to optimize the number of projection views acquired. The methodology we employ is based on the Hotelling Observer (HO) and its associated metrics. We consider two tasks: the Rayleigh task of discerning between two resolvable objects and a single larger object, and the signal detection task of classifying an image as belonging to either a signal-present or signal-absent hypothesis. HO SNR values are computed for 50, 100, 200, 500, and 1000 projection view images, with the total imaging radiation dose held constant. We use the conventional fan-beam FBP algorithm and investigate the effect of varying the width of a Hanning window used in the reconstruction, since this affects both the noise properties of the image and the under-sampling artifacts which can arise in the case of sparse-view acquisitions. Our results demonstrate that fewer projection views should be used in order to increase HO performance, which in this case constitutes an upper bound on human observer performance. However, the impact on HO SNR of using fewer projection views, each with a higher dose, is not as significant as the impact of employing regularization in the FBP reconstruction through a Hanning filter.
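The Hotelling observer figure of merit used here has a compact closed form: SNR^2 = ds^T K^{-1} ds, where ds is the mean image difference between the two hypotheses and K the image covariance. A minimal sketch with synthetic data (the image size, signal support, and covariance below are all invented):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 16                                   # tiny 16-pixel "image"

ds = np.zeros(m)
ds[6:10] = 1.0                           # mean signal-present minus signal-absent

A = rng.normal(size=(m, m))
K = A @ A.T / m + 0.5 * np.eye(m)        # symmetric positive-definite covariance

w = np.linalg.solve(K, ds)               # Hotelling template K^{-1} ds
snr2 = float(ds @ w)                     # SNR^2 = ds^T K^{-1} ds
snr = float(np.sqrt(snr2))

print(snr2 > 0.0)                        # True: K is positive definite
```

In a reconstruction study, ds and K would be estimated from ensembles of signal-present and signal-absent images at each projection-view count, and the SNR compared across acquisition settings.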
ALADDIN: an optimized nulling ground-based demonstrator for DARWIN
NASA Astrophysics Data System (ADS)
Coudé du Foresto, V.; Absil, O.; Swain, M.; Vakili, F.; Barillot, M.
2006-06-01
The ESA Darwin space mission will require a ground-based precursor to i/ demonstrate nulling interferometry in an operational context and ii/ carry out some precursor science, such as the characterization of the level of exozodiacal light around the main Darwin targets. These are the stated objectives of the GENIE nulling instrument that was studied for the VLTI. We argue here that the same objectives can be met in a more efficient way by an Antarctic-based nulling experiment. The ALADDIN mission concept is an integrated L-band nulling breadboard with relatively modest collectors (1 m) and baseline (40 m). Because of its privileged location, this is sufficient to achieve a sensitivity (in terms of detectable zodi levels) which is 1.6 to 3.5 times better than GENIE at the VLTI, bringing it below the 20-zodi threshold value identified to carry out the Darwin precursor science. The integrated design enables top-level optimization and full access to the light collectors for the duration of the experiment, while reducing the complexity of the nulling breadboard.
Traffic Aware Planner for Cockpit-Based Trajectory Optimization
NASA Technical Reports Server (NTRS)
Woods, Sharon E.; Vivona, Robert A.; Henderson, Jeffrey; Wing, David J.; Burke, Kelly A.
2016-01-01
The Traffic Aware Planner (TAP) software application is a cockpit-based advisory tool designed to be hosted on an Electronic Flight Bag and to enable and test the NASA concept of Traffic Aware Strategic Aircrew Requests (TASAR). The TASAR concept provides pilots with optimized route changes (including altitude) that reduce fuel burn and/or flight time, avoid interactions with known traffic, weather and restricted airspace, and may be used by the pilots to request a route and/or altitude change from Air Traffic Control. Developed using an iterative process, TAP's latest improvements include human-machine interface design upgrades and added functionality based on the results of human-in-the-loop simulation experiments and flight trials. Architectural improvements have been implemented to prepare the system for operational-use trials with partner commercial airlines. Future iterations will enhance coordination with airline dispatch and add functionality to improve the acceptability of TAP-generated route-change requests to pilots, dispatchers, and air traffic controllers.
MAOA-L carriers are better at making optimal financial decisions under risk
Frydman, Cary; Camerer, Colin; Bossaerts, Peter; Rangel, Antonio
2011-01-01
Genes can affect behaviour towards risks through at least two distinct neurocomputational mechanisms: they may affect the value assigned to different risky options, or they may affect the way in which the brain adjudicates between options based on their value. We combined methods from neuroeconomics and behavioural genetics to investigate the impact that the genes encoding for monoamine oxidase-A (MAOA), the serotonin transporter (5-HTT) and the dopamine D4 receptor (DRD4) have on these two computations. Consistent with previous literature, we found that carriers of the MAOA-L polymorphism were more likely to take financial risks. Our computational choice model, rooted in established decision theory, showed that MAOA-L carriers exhibited such behaviour because they are able to make better financial decisions under risk, and not because they are more impulsive. In contrast, we found no behavioural or computational differences among the 5-HTT and DRD4 polymorphisms. PMID:21147794
Arnold, Scott M; Lynn, Tracey V; Verbrugge, Lori A; Middaugh, John P
2005-03-01
National fish consumption advisories that are based solely on assessment of risk of exposure to contaminants without consideration of consumption benefits result in overly restrictive advice that discourages eating fish even in areas where such advice is unwarranted. In fact, generic fish advisories may have adverse public health consequences because of decreased fish consumption and substitution of foods that are less healthy. Public health is on the threshold of a new era for determining actual exposures to environmental contaminants, owing to technological advances in analytical chemistry. It is now possible to target fish consumption advice to specific at-risk populations by evaluating individual contaminant exposures and health risk factors. Because of the current epidemic of nutritionally linked disease, such as obesity, diabetes, and cardiovascular disease, general recommendations for limiting fish consumption are ill conceived and potentially dangerous.
Cost-Effectiveness and Harm-Benefit Analyses of Risk-Based Screening Strategies for Breast Cancer
Carles, Misericordia; Sala, Maria; Pla, Roger; Castells, Xavier; Domingo, Laia; Rue, Montserrat
2014-01-01
The one-size-fits-all paradigm in organized screening of breast cancer is shifti