Science.gov

Sample records for risk based optimization

  1. PTV-based IMPT optimization incorporating planning risk volumes vs robust optimization

    SciTech Connect

    Liu Wei; Li Xiaoqiang; Zhu, Ron. X.; Mohan, Radhe; Frank, Steven J.; Li Yupeng

    2013-02-15

    Purpose: Robust optimization leads to intensity-modulated proton therapy (IMPT) plans that are less sensitive to uncertainties and superior in terms of organs-at-risk (OARs) sparing, target dose coverage, and homogeneity compared to planning target volume (PTV)-based optimized plans. Robust optimization incorporates setup and range uncertainties, which implicitly adds margins to both targets and OARs and is also able to compensate for perturbations in dose distributions within targets and OARs caused by uncertainties. In contrast, the traditional PTV-based optimization considers only setup uncertainties and adds a margin only to targets but no margins to the OARs. It also ignores range uncertainty. The purpose of this work is to determine if robustly optimized plans are superior to PTV-based plans simply because the latter do not assign margins to OARs during optimization. Methods: The authors retrospectively selected from their institutional database five patients with head and neck (H and N) cancer and one with prostate cancer for this analysis. Using their original images and prescriptions, the authors created new IMPT plans using three methods: PTV-based optimization, optimization based on the PTV and planning risk volumes (PRVs) (i.e., 'PTV+PRV-based optimization'), and robust optimization using the 'worst-case' dose distribution. The PRVs were generated by uniformly expanding OARs by 3 mm for the H and N cases and 5 mm for the prostate case. The dose-volume histograms (DVHs) from the worst-case dose distributions were used to assess and compare plan quality. Families of DVHs for each uncertainty for all structures of interest were plotted along with the nominal DVHs. The width of the 'bands' of DVHs was used to quantify the plan sensitivity to uncertainty. Results: Compared with conventional PTV-based and PTV+PRV-based planning, robust optimization led to a smaller bandwidth for the targets in the face of uncertainties {clinical target volume [CTV
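
    A compact way to see how the 'worst-case' robust objective differs from PTV-based optimization is to write it as a voxel-wise worst-case composite. The expression below is a schematic sketch with placeholder weights, prescription dose, and OAR limit, not the authors' exact objective function:

      F_{\mathrm{worst}} = \sum_{i \in \mathrm{CTV}} w_T \,\bigl(D_i^{\min} - D_p\bigr)^2 \;+\; \sum_{j \in \mathrm{OAR}} w_O \,\max\bigl(0,\; D_j^{\max} - D_c\bigr)^2,
      \qquad D_i^{\min} = \min_{s \in S} D_i(s), \quad D_j^{\max} = \max_{s \in S} D_j(s),

    where S is the set of setup and range uncertainty scenarios. Because each voxel is scored with its worst dose over the scenarios (lowest in the target, highest in the OAR), robustness is built into the objective itself rather than delegated to PTV or PRV margins.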

  2. Rational risk-based decision support for drinking water well managers by optimized monitoring designs

    NASA Astrophysics Data System (ADS)

    Enzenhöfer, R.; Geiges, A.; Nowak, W.

    2011-12-01

    Advection-based well-head protection zones are commonly used to manage the contamination risk of drinking water wells. Considering the insufficient knowledge about hazards and transport properties within the catchment, current Water Safety Plans recommend that catchment managers and stakeholders know, control and monitor all possible hazards within the catchments and make rational risk-based decisions. Our goal is to supply catchment managers with the required probabilistic risk information, and to generate tools that allow for optimal and rational allocation of resources between improved monitoring versus extended safety margins and risk mitigation measures. To support risk managers with the indispensable information, we address the epistemic uncertainty of advective-dispersive solute transport and well vulnerability (Enzenhoefer et al., 2011) within a stochastic simulation framework. Our framework can separate the uncertainty of contaminant location from the actual dilution of peak concentrations by resolving heterogeneity with high-resolution Monte-Carlo simulation. To keep computational costs low, we solve the reverse temporal moment transport equation. Only in post-processing do we recover the time-dependent solute breakthrough curves and the deduced well vulnerability criteria from temporal moments by non-linear optimization. Our first step towards optimal risk management is optimal positioning of sampling locations and optimal choice of data types to best reduce the epistemic prediction uncertainty for well-head delineation, using the cross-bred Likelihood Uncertainty Estimator (CLUE, Leube et al., 2011) for optimal sampling design. Better monitoring leads to more reliable and realistic protection zones and thus helps catchment managers to better justify smaller, yet conservative safety margins. In order to allow an optimal choice in sampling strategies, we compare the trade-off in monitoring versus the delineation costs by accounting for ill

  3. A Risk-Based Multi-Objective Optimization Concept for Early-Warning Monitoring Networks

    NASA Astrophysics Data System (ADS)

    Bode, F.; Loschko, M.; Nowak, W.

    2014-12-01

    Groundwater is a resource for drinking water and hence needs to be protected from contamination. However, many well catchments include an inventory of known and unknown risk sources which cannot be eliminated, especially in urban regions. As a matter of risk control, all these risk sources should be monitored. A one-to-one monitoring situation for each risk source would lead to a cost explosion and is even impossible for unknown risk sources. However, smart optimization concepts could help to find promising low-cost monitoring network designs. In this work, we develop a concept to plan monitoring networks using multi-objective optimization. Our considered objectives are to maximize the probability of detecting all contaminations and the early warning time and to minimize the installation and operating costs of the monitoring network. A qualitative risk ranking is used to prioritize the known risk sources for monitoring. The unknown risk sources can neither be located nor ranked. Instead, we represent them by a virtual line of risk sources surrounding the production well. We classify risk sources into four different categories: severe, medium and tolerable for known risk sources and an extra category for the unknown ones. With that, early warning time and detection probability become individual objectives for each risk class. Thus, decision makers can identify monitoring networks which are valid for controlling the top risk sources, and evaluate the capabilities (or search for least-cost upgrade) to also cover moderate, tolerable and unknown risk sources. Monitoring networks which are valid for the remaining risk also cover all other risk sources but the early-warning time suffers. The data provided for the optimization algorithm are calculated in a preprocessing step by a flow and transport model. Uncertainties due to hydro(geo)logical phenomena are taken into account by Monte-Carlo simulations. To avoid numerical dispersion during the transport simulations we use the

  4. A risk explicit interval linear programming model for uncertainty-based environmental economic optimization in the Lake Fuxian watershed, China.

    PubMed

    Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan

    2013-01-01

    The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve the integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of optimization schemes. Decision makers' preferences for risk levels can be expressed through inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. Through balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental-economic optimization scheme in integrated watershed management. PMID:24191144
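
    For readers unfamiliar with REILP, its general structure can be sketched as follows; this is a generic outline using the usual interval-programming notation, not the exact Lake Fuxian model:

      \max \; f^{\pm} = \sum_j c_j^{\pm} x_j \quad \text{s.t.} \quad \sum_j a_{ij}^{\pm} x_j \le b_i^{\pm}, \qquad x_j \ge 0,

    where the superscript ± denotes interval coefficients [lower bound, upper bound]. REILP replaces the interval objective by an explicit risk trade-off: for an aspiration level \lambda \in [0, 1] chosen by the decision maker, an aggregated constraint-violation risk R(x, \lambda) is minimized subject to the system return reaching f^- + \lambda\,(f^+ - f^-). Sweeping \lambda traces the risk-return trade-off curve from which the "low risk and high return efficiency" window mentioned above is read off.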

  5. A Risk Explicit Interval Linear Programming Model for Uncertainty-Based Environmental Economic Optimization in the Lake Fuxian Watershed, China

    PubMed Central

    Zou, Rui; Liu, Yong; Yu, Yajuan

    2013-01-01

    The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve the integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of optimization schemes. Decision makers' preferences for risk levels can be expressed through inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. Through balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of “low risk and high return efficiency” in the trade-off curve. The representative schemes at the turning points of two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental-economic optimization scheme in integrated watershed management. PMID:24191144

  6. Risk based approach for design and optimization of stomach specific delivery of rifampicin.

    PubMed

    Vora, Chintan; Patadia, Riddhish; Mittal, Karan; Mashru, Rajashree

    2013-10-15

    The research envisaged focuses on a risk management approach for better recognizing the risks, identifying ways to mitigate them, and proposing a control strategy for the development of rifampicin gastroretentive tablets. Risk assessment using failure mode and effects analysis (FMEA) was done to depict the effects of specific failure modes related to the respective formulation/process variables. A Box-Behnken design was used to investigate the effect of the amount of sodium bicarbonate (X1), the pore former HPMC (X2), and glyceryl behenate (X3) on percent drug release at the 1st hour (Q1), 4th hour (Q4), 8th hour (Q8) and floating lag time (min). Main effects and interaction plots were generated to study the effects of the variables. Selection of the optimized formulation was done using a desirability function and overlay contour plots. The optimized formulation exhibited Q1 of 20.9%, Q4 of 59.1%, Q8 of 94.8% and a floating lag time of 4.0 min. The Akaike information criterion and model selection criterion revealed that the model was best described by the Korsmeyer-Peppas power law. The residual plots demonstrated no non-normality, skewness, or outliers. The composite desirability for the optimized formulation computed using equations and software was 0.84 and 0.86, respectively. FTIR, DSC and PXRD studies ruled out drug-polymer interaction due to thermal treatment. PMID:23916823
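
    The composite desirability reported above is typically the geometric mean of per-response Derringer-type desirabilities. The sketch below illustrates the computation; the target ranges for Q1, Q4, Q8, and floating lag time are hypothetical values chosen for illustration, not the study's acceptance criteria:

      # Sketch of a Derringer-Suich composite desirability calculation.
      # Target ranges below are hypothetical, for illustration only.

      def d_target(y, low, target, high):
          """Desirability for a 'target-is-best' response."""
          if y <= low or y >= high:
              return 0.0
          if y <= target:
              return (y - low) / (target - low)
          return (high - y) / (high - target)

      def d_smaller(y, target, high):
          """Desirability for a 'smaller-is-better' response."""
          if y <= target:
              return 1.0
          if y >= high:
              return 0.0
          return (high - y) / (high - target)

      def composite(desirabilities):
          """Geometric mean of the individual desirabilities."""
          prod = 1.0
          for d in desirabilities:
              prod *= d
          return prod ** (1.0 / len(desirabilities))

      # Hypothetical responses of a candidate formulation.
      q1, q4, q8, lag = 20.9, 59.1, 94.8, 4.0
      ds = [
          d_target(q1, 10, 20, 30),    # % release at 1 h
          d_target(q4, 45, 60, 75),    # % release at 4 h
          d_target(q8, 85, 95, 105),   # % release at 8 h
          d_smaller(lag, 2, 10),       # floating lag time (min)
      ]
      print(round(composite(ds), 2))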

  7. A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem

    PubMed Central

    Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming

    2015-01-01

    Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereafter reduce airspace congestion and flight delays. The conventional ATFM models assume the capacities of airports or airspace sectors are all predetermined. However, the capacity uncertainties due to the dynamics of convective weather may make the deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computational experiments using real-world air traffic network data associated with simulated weather data show that the presented model has far fewer constraints than a stochastic model with nonanticipative constraints, which means our proposed model reduces the computational complexity. PMID:26180842
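
    The weighted biobjective structure can be sketched in scalarized form; the notation below is illustrative rather than the paper's exact model:

      \min_{x \in \{0,1\}^n} \;\; \alpha \, C_{\mathrm{delay}}(x) \;+\; (1 - \alpha)\, R_{\mathrm{op}}(x) \quad \text{s.t.} \quad A x \le b,

    where x encodes the discrete slot or ground-holding decisions for each flight, C_delay is the total delay cost, R_op is the operational risk obtained from the probabilistic risk assessment of weather-induced capacity shortfalls, \alpha \in [0, 1] weights the two objectives, and Ax <= b collects the flight-connectivity and capacity constraints.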

  8. A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem.

    PubMed

    Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming

    2015-01-01

    Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereafter reduce airspace congestion and flight delays. The conventional ATFM models assume the capacities of airports or airspace sectors are all predetermined. However, the capacity uncertainties due to the dynamics of convective weather may make the deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computational experiments using real-world air traffic network data associated with simulated weather data show that the presented model has far fewer constraints than a stochastic model with nonanticipative constraints, which means our proposed model reduces the computational complexity. PMID:26180842

  9. An optimization-based approach for facility energy management with uncertainties, and, Power portfolio optimization in deregulated electricity markets with risk management

    NASA Astrophysics Data System (ADS)

    Xu, Jun

    Topic 1. An Optimization-Based Approach for Facility Energy Management with Uncertainties. Effective energy management for facilities is becoming increasingly important in view of the rising energy costs, the government mandate on the reduction of energy consumption, and the human comfort requirements. This part of the dissertation presents a daily energy management formulation and the corresponding solution methodology for HVAC systems. The problem is to minimize the energy and demand costs through the control of HVAC units while satisfying human comfort, system dynamics, load limit constraints, and other requirements. The problem is difficult in view of the fact that the system is nonlinear, time-varying, building-dependent, and uncertain; and that the direct control of a large number of HVAC components is difficult. In this work, HVAC setpoints are the control variables developed on top of a Direct Digital Control (DDC) system. A method that combines Lagrangian relaxation, neural networks, stochastic dynamic programming, and heuristics is developed to predict the system dynamics and uncontrollable load, and to optimize the setpoints. Numerical testing and prototype implementation results show that our method can effectively reduce total costs, manage uncertainties, and shed the load, and is computationally efficient. Furthermore, it is significantly better than existing methods. Topic 2. Power Portfolio Optimization in Deregulated Electricity Markets with Risk Management. In a deregulated electric power system, multiple markets of different time scales exist with various power supply instruments. A load serving entity (LSE) has multiple choices from these instruments to meet its load obligations. In view of the large amount of power involved, the complex market structure, risks in such volatile markets, stringent constraints to be satisfied, and the long time horizon, a power portfolio optimization problem is of critical importance but difficult for an LSE to serve the

  10. Risk modelling in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-09-01

    Risk management is very important in portfolio optimization. The mean-variance model has been used in portfolio optimization to minimize the investment risk. The objective of the mean-variance model is to minimize the portfolio risk and achieve the target rate of return. Variance is used as the risk measure in the mean-variance model. The purpose of this study is to compare the portfolio composition as well as performance between the optimal portfolio of the mean-variance model and an equally weighted portfolio. An equally weighted portfolio means that the proportions invested in each asset are equal. The results show that the portfolio compositions of the mean-variance optimal portfolio and the equally weighted portfolio are different. In addition, the mean-variance optimal portfolio gives better performance because it achieves a higher performance ratio than the equally weighted portfolio.
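
    As a concrete illustration of the comparison described above, the sketch below computes a minimum-variance portfolio for a target return and contrasts it with an equally weighted portfolio; the return statistics are hypothetical placeholders, not data from the study:

      import numpy as np
      from scipy.optimize import minimize

      # Hypothetical annual mean returns and covariance matrix for three assets.
      mu = np.array([0.08, 0.12, 0.10])
      cov = np.array([[0.040, 0.006, 0.012],
                      [0.006, 0.090, 0.018],
                      [0.012, 0.018, 0.060]])
      target = 0.10  # required portfolio rate of return

      def variance(w):
          return w @ cov @ w

      cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},    # fully invested
              {"type": "eq", "fun": lambda w: w @ mu - target}]  # hit target return
      bounds = [(0.0, 1.0)] * len(mu)                            # no short selling
      w0 = np.full(len(mu), 1.0 / len(mu))                       # equal weights as start

      res = minimize(variance, w0, bounds=bounds, constraints=cons)
      w_mv = res.x   # mean-variance optimal weights
      w_eq = w0      # equally weighted benchmark

      for name, w in [("mean-variance", w_mv), ("equally weighted", w_eq)]:
          ret, risk = w @ mu, np.sqrt(variance(w))
          print(f"{name:17s} weights={np.round(w, 3)} return={ret:.3f} stdev={risk:.3f}")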

  11. Application of risk-based methods to optimize inspection planning for regulatory activities at nuclear power plants

    SciTech Connect

    Wong, S.M.; Higgins, J.C.; Martinez-Guridi, G.

    1995-07-01

    As part of regulatory oversight requirements, the U.S. Nuclear Regulatory Commission (USNRC) staff conducts inspection activities to assess operational safety performance in nuclear power plants. Currently, guidance in these inspections is provided by procedures in the NRC Inspection Manual and issuance of Temporary Instructions defining the objectives and scope of the inspection effort. In several studies sponsored by the USNRC over the last few years, Brookhaven National Laboratory (BNL) has developed and applied methodologies for providing risk-based inspection guidance for the safety assessments of nuclear power plant systems. One recent methodology integrates insights from existing Probabilistic Risk Assessment (PRA) studies and Individual Plant Evaluations (IPE) with information from operating experience reviews for consideration in inspection planning for either multi-disciplinary team inspections or individual inspections. In recent studies at BNL, a risk-based methodology was developed to optimize inspection planning for regulatory activities at nuclear power plants. This methodology integrates risk-based insights from the plant configuration risk profile and risk information found in existing PRA/IPE studies.

  12. A risk-based coverage model for video surveillance camera control optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhou; Du, Zhiguo; Zhao, Xingtao; Li, Peiyue; Li, Dehua

    2015-12-01

    A visual surveillance system for law enforcement or police case investigation differs from traditional applications, since it is designed to monitor pedestrians, vehicles, or potential accidents. In the present work, visual surveillance risk is defined as the uncertainty of the visual information about the targets and events being monitored, and risk entropy is introduced to model the requirements of police surveillance tasks on the quality and quantity of video information. The proposed coverage model is applied to calculate the preset field-of-view (FoV) positions of PTZ cameras.

  13. Dynamic optimization of ISR sensors using a risk-based reward function applied to ground and space surveillance scenarios

    NASA Astrophysics Data System (ADS)

    DeSena, J. T.; Martin, S. R.; Clarke, J. C.; Dutrow, D. A.; Newman, A. J.

    2012-06-01

    The algorithm to jointly optimize sensor schedules against search, track, and classify is based on recent work by Papageorgiou and Raykin on risk-based sensor management. It uses a risk-based objective function and attempts to minimize and balance the risks of misclassifying and losing track on an object. It supports the requirement to generate tasking for metric and feature data concurrently and synergistically, and to account for both tracking accuracy and object characterization, jointly, in computing reward and cost for optimizing tasking decisions.

  14. Optimization of the fractionated irradiation scheme considering physical doses to tumor and organ at risk based on dose–volume histograms

    SciTech Connect

    Sugano, Yasutaka; Mizuta, Masahiro; Takao, Seishin; Shirato, Hiroki; Sutherland, Kenneth L.; Date, Hiroyuki

    2015-11-15

    Purpose: Radiotherapy of solid tumors has been performed with various fractionation regimens such as multi- and hypofractionations. However, the ability to optimize the fractionation regimen considering the physical dose distribution remains insufficient. This study aims to optimize the fractionation regimen, in which the authors propose a graphical method for selecting the optimal number of fractions (n) and dose per fraction (d) based on dose–volume histograms for tumor and normal tissues of organs around the tumor. Methods: Modified linear-quadratic models were employed to estimate the radiation effects on the tumor and an organ at risk (OAR), where the repopulation of the tumor cells and the linearity of the dose-response curve in the high dose range of the surviving fraction were considered. The minimization problem for the damage effect on the OAR was solved under the constraint that the radiation effect on the tumor is fixed by a graphical method. Here, the damage effect on the OAR was estimated based on the dose–volume histogram. Results: It was found that the optimization of fractionation scheme incorporating the dose–volume histogram is possible by employing appropriate cell surviving models. The graphical method considering the repopulation of tumor cells and a rectilinear response in the high dose range enables them to derive the optimal number of fractions and dose per fraction. For example, in the treatment of prostate cancer, the optimal fractionation was suggested to lie in the range of 8–32 fractions with a daily dose of 2.2–6.3 Gy. Conclusions: It is possible to optimize the number of fractions and dose per fraction based on the physical dose distribution (i.e., dose–volume histogram) by the graphical method considering the effects on tumor and OARs around the tumor. This method may stipulate a new guideline to optimize the fractionation regimen for physics-guided fractionation.
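
    The constrained problem behind the graphical method can be written with a standard (modified) linear-quadratic model; the expressions below are a generic sketch with placeholder symbols, not the authors' exact parameterization:

      E_{\mathrm{tumor}}(n, d) = n d \,(\alpha_T + \beta_T d) - \gamma\,(n - 1)\,\tau,
      \qquad
      E_{\mathrm{OAR}}(n, d) = \sum_k v_k \; n\,(s_k d)\,\bigl(\alpha_O + \beta_O\, s_k d\bigr),

      \min_{n,\, d} \; E_{\mathrm{OAR}}(n, d) \quad \text{s.t.} \quad E_{\mathrm{tumor}}(n, d) = E_0,

    where the repopulation term \gamma\,(n-1)\,\tau grows with the number of fractions and the OAR effect is accumulated over dose-volume histogram bins k with relative dose s_k and volume fraction v_k. Solving the tumor-effect constraint for d as a function of n and substituting gives the one-dimensional curve from which the optimal (n, d) pair is read off graphically.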

  15. Risk Analysis for Resource Planning Optimization

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming

    2008-01-01

    The main purpose of this paper is to introduce a risk management approach that allows planners to quantify the risk and efficiency tradeoff in the presence of uncertainties, and to make forward-looking choices in the development and execution of the plan. It demonstrates a planning and risk analysis framework that tightly integrates mathematical optimization, empirical simulation, and theoretical analysis techniques to solve complex problems.

  16. Risk Assessment: Evidence Base

    NASA Technical Reports Server (NTRS)

    Johnson-Throop, Kathy A.

    2007-01-01

    Human systems PRA (Probabilistic Risk Assessment: a) Provides quantitative measures of probability, consequence, and uncertainty; and b) Communicates risk and informs decision-making. Human health risks rated highest in ISS PRA are based on 1997 assessment of clinical events in analog operational settings. Much work remains to analyze remaining human health risks identified in Bioastronautics Roadmap.

  17. RNA based evolutionary optimization

    NASA Astrophysics Data System (ADS)

    Schuster, Peter

    1993-12-01

    Evolutionary optimization of two-letter sequences is thus more difficult than optimization in the world of natural RNA sequences with four bases. This fact might explain the usage of four bases in the genetic language of nature. Finally, we study the mapping from RNA sequences into secondary structures and explore the topology of RNA shape space. We find that ‘neutral paths’ connecting neighbouring sequences with identical structures go very frequently through entire sequence space. Sequences folding into common structures are found everywhere in sequence space. Hence, evolution can migrate to almost every part of sequence space without ‘hill climbing’ and only small fractions of the entire number of sequences have to be searched in order to find suitable structures.

  18. Potential for dose-escalation and reduction of risk in pancreatic cancer using IMRT optimization with lexicographic ordering and gEUD-based cost functions.

    PubMed

    Spalding, Aaron C; Jee, Kyung-Wook; Vineberg, Karen; Jablonowski, Marla; Fraass, Benedick A; Pan, Charlie C; Lawrence, Theodore S; Haken, Randall K Ten; Ben-Josef, Edgar

    2007-02-01

    Radiotherapy for pancreatic cancer is limited by the tolerance of local organs at risk (OARs) and frequent overlap of the planning target volume (PTV) and OAR volumes. Using lexicographic ordering (LO), a hierarchical optimization technique, with generalized equivalent uniform dose (gEUD) cost functions, we studied the potential of intensity modulated radiation therapy (IMRT) to increase the dose to pancreatic tumors and to areas of vascular involvement that preclude surgical resection [surgical boost volume (SBV)]. We compared 15 forward planned three-dimensional conformal (3DCRT) and IMRT treatment plans for locally advanced unresectable pancreatic cancer. We created IMRT plans optimized using LO with gEUD-based cost functions that account for the contribution of each part of the resulting inhomogeneous dose distribution. LO-IMRT plans allowed substantial PTV dose escalation compared with 3DCRT; median increase from 52 Gy to 66 Gy (a=-5,p<0.005) and median increase from 50 Gy to 59 Gy (a=-15,p<0.005). LO-IMRT also allowed increases to 85 Gy in the SBV, regardless of a value, along with significant dose reductions in OARs. We conclude that LO-IMRT with gEUD cost functions could allow dose escalation in pancreas tumors with concomitant reduction in doses to organs at risk as compared with traditional 3DCRT. PMID:17388169

  19. Potential for dose-escalation and reduction of risk in pancreatic cancer using IMRT optimization with lexicographic ordering and gEUD-based cost functions

    SciTech Connect

    Spalding, Aaron C.; Jee, Kyung-Wook; Vineberg, Karen; Jablonowski, Marla; Fraass, Benedick A.; Pan, Charlie C.; Lawrence, Theodore S.; Ten Haken, Randall K.; Ben-Josef, Edgar

    2007-02-15

    Radiotherapy for pancreatic cancer is limited by the tolerance of local organs at risk (OARs) and frequent overlap of the planning target volume (PTV) and OAR volumes. Using lexicographic ordering (LO), a hierarchical optimization technique, with generalized equivalent uniform dose (gEUD) cost functions, we studied the potential of intensity modulated radiation therapy (IMRT) to increase the dose to pancreatic tumors and to areas of vascular involvement that preclude surgical resection [surgical boost volume (SBV)]. We compared 15 forward planned three-dimensional conformal (3DCRT) and IMRT treatment plans for locally advanced unresectable pancreatic cancer. We created IMRT plans optimized using LO with gEUD-based cost functions that account for the contribution of each part of the resulting inhomogeneous dose distribution. LO-IMRT plans allowed substantial PTV dose escalation compared with 3DCRT; median increase from 52 Gy to 66 Gy (a=-5,p<0.005) and median increase from 50 Gy to 59 Gy (a=-15,p<0.005). LO-IMRT also allowed increases to 85 Gy in the SBV, regardless of a value, along with significant dose reductions in OARs. We conclude that LO-IMRT with gEUD cost functions could allow dose escalation in pancreas tumors with concomitant reduction in doses to organs at risk as compared with traditional 3DCRT.

  20. Medical Device Risk Management For Performance Assurance Optimization and Prioritization.

    PubMed

    Gaamangwe, Tidimogo; Babbar, Vishvek; Krivoy, Agustina; Moore, Michael; Kresta, Petr

    2015-01-01

    Performance assurance (PA) is an integral component of clinical engineering medical device risk management. For that reason, the clinical engineering (CE) community has made concerted efforts to define appropriate risk factors and develop quantitative risk models for efficient data processing and improved PA program operational decision making. However, a common framework that relates the various processes of a quantitative risk system does not exist. This article provides a perspective that focuses on medical device quality and risk-based elements of the PA program, which include device inclusion/exclusion, schedule optimization, and inspection prioritization. A PA risk management framework is provided, and previous quantitative models that have contributed to the advancement of PA risk management are examined. A general model for quantitative risk systems is proposed, and further perspective on possible future directions in the area of PA technology is also provided. PMID:26618842

  1. Search-based optimization

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  2. Search-based optimization.

    PubMed

    Wheeler, Ward C

    2003-08-01

    The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. PMID:14531408

  3. Towards Risk Based Design for NASA's Missions

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Barrientos, Francesca; Meshkat, Leila

    2004-01-01

    This paper describes the concept of Risk Based Design in the context of NASA's low-volume, high-cost missions. The concept of accounting for risk in the design lifecycle has been discussed and proposed under several research topics, including reliability, risk analysis, optimization, uncertainty, decision-based design, and robust design. This work aims to identify and develop methods to enable and automate a means to characterize and optimize risk, and use risk as a tradeable resource to make robust and reliable decisions, in the context of the uncertain and ambiguous stage of early conceptual design. This paper first presents a survey of the related topics explored in the design research community as they relate to risk based design. Then, a summary of the topics from the NASA-led Risk Colloquium is presented, followed by current efforts within NASA to account for risk in early design. Finally, a list of "risk elements", identified for early-phase conceptual design at NASA, is presented. The purpose is to lay the foundation and develop a roadmap for future work and collaborations for research to eliminate and mitigate these risk elements in early phase design.

  4. Risk Analysis for Resource Planning Optimization

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming

    2008-01-01

    This paper describes a systems engineering approach to resource planning by integrating mathematical modeling and constrained optimization, empirical simulation, and theoretical analysis techniques to generate an optimal task plan in the presence of uncertainties.

  5. Differential effects of trait anger on optimism and risk behaviour.

    PubMed

    Pietruska, Karin; Armony, Jorge L

    2013-01-01

    It has been proposed that angry people exhibit optimistic risk estimates about future events and, consequently, are biased towards making risk-seeking choices. The goal of this study was to directly test the hypothesised effect of trait anger on optimism and risk-taking behaviour. One hundred healthy volunteers completed questionnaires about personality traits, optimism and risk behaviour. In addition their risk tendency was assessed with the Balloon Analogue Risk Task (BART), which provides an online measure of risk behaviour. Our results partly confirmed the relation between trait anger and outcome expectations of future life events, but suggest that this optimism does not necessarily translate into actual risk-seeking behaviour. PMID:22780446

  6. Cost tradeoffs in consequence management at nuclear power plants: A risk based approach to setting optimal long-term interdiction limits for regulatory analyses

    SciTech Connect

    Mubayi, V.

    1995-05-01

    The consequences of severe accidents at nuclear power plants can be limited by various protective actions, including emergency responses and long-term measures, to reduce exposures of affected populations. Each of these protective actions involves costs to society. The costs of the long-term protective actions depend on the criterion adopted for the allowable level of long-term exposure. This criterion, called the "long-term interdiction limit," is expressed in terms of the projected dose to an individual over a certain time period from the long-term exposure pathways. The two measures of offsite consequences, latent cancers and costs, are inversely related and the choice of an interdiction limit is, in effect, a trade-off between these two measures. By monetizing the health effects (through ascribing a monetary value to life lost), the costs of the two consequence measures vary with the interdiction limit, the health effect costs increasing as the limit is relaxed and the protective action costs decreasing. The minimum of the total cost curve can be used to calculate an optimal long-term interdiction limit. The calculation of such an optimal limit is presented for each of five US nuclear power plants which were analyzed for severe accident risk in the NUREG-1150 program by the Nuclear Regulatory Commission.
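
    The optimization described above amounts to minimizing a total societal cost whose two components move in opposite directions as the limit is relaxed; schematically, with placeholder symbols:

      C_{\mathrm{total}}(L) = C_{\mathrm{PA}}(L) + V \cdot N_{\mathrm{LCF}}(L), \qquad L^{*} = \arg\min_{L}\; C_{\mathrm{total}}(L),

    where L is the long-term interdiction dose limit, C_PA(L) is the cost of land interdiction, decontamination, and other protective actions (decreasing in L), N_LCF(L) is the expected number of latent cancer fatalities (increasing in L), and V is the monetary value ascribed to a life lost.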

  7. Research on optimization-based design

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Parkinson, A. R.; Free, J. C.

    1989-01-01

    Research on optimization-based design is discussed. Illustrative examples are given for cases involving continuous optimization with discrete variables and optimization with tolerances. Approximation of computationally expensive and noisy functions, electromechanical actuator/control system design using decomposition and application of knowledge-based systems and optimization for the design of a valve anti-cavitation device are among the topics covered.

  8. Optimal trading from minimizing the period of bankruptcy risk

    NASA Astrophysics Data System (ADS)

    Liehr, S.; Pawelzik, K.

    2001-04-01

    Assuming that financial markets behave similarly to random walk processes, we derive a trading strategy with variable investment which is based on the equivalence of the period of bankruptcy risk and the risk-to-profit ratio. We define a state dependent predictability measure which can be attributed to the deterministic and stochastic components of the price dynamics. The influence of predictability variations and especially of short term inefficiency structures on the optimal amount of investment is analyzed in the given context and a method for adaptation of a trading system to the proposed objective function is presented. Finally, we show the performance of our trading strategy on the DAX and S&P 500 as examples of real-world data using different types of prediction models in comparison.

  9. Optimized periodic verification testing blended risk and performance-based MOV inservice test program an application of ASME code case OMN-1

    SciTech Connect

    Sellers, C.; Fleming, K.; Bidwell, D.; Forbes, P.

    1996-12-01

    This paper presents an application of ASME Code Case OMN-1 to the GL 89-10 Program at the South Texas Project Electric Generating Station (STPEGS). Code Case OMN-1 provides guidance for a performance-based MOV inservice test program that can be used for periodic verification testing and allows consideration of risk insights. Blended probabilistic and deterministic evaluation techniques were used to establish inservice test strategies including both test methods and test frequency. Described in the paper are the methods and criteria for establishing MOV safety significance based on the STPEGS probabilistic safety assessment, deterministic considerations of MOV performance characteristics and performance margins, the expert panel evaluation process, and the development of inservice test strategies. Test strategies include a mix of dynamic and static testing as well as MOV exercising.

  10. Risk-optimized proton therapy to minimize radiogenic second cancers

    NASA Astrophysics Data System (ADS)

    Rechner, Laura A.; Eley, John G.; Howell, Rebecca M.; Zhang, Rui; Mirkovic, Dragan; Newhauser, Wayne D.

    2015-05-01

    Proton therapy confers substantially lower predicted risk of second cancer compared with photon therapy. However, no previous studies have used an algorithmic approach to optimize beam angle or fluence-modulation for proton therapy to minimize those risks. The objectives of this study were to demonstrate the feasibility of risk-optimized proton therapy and to determine the combination of beam angles and fluence weights that minimizes the risk of second cancer in the bladder and rectum for a prostate cancer patient. We used 6 risk models to predict excess relative risk of second cancer. Treatment planning utilized a combination of a commercial treatment planning system and an in-house risk-optimization algorithm. When normal-tissue dose constraints were incorporated in treatment planning, the risk model that incorporated the effects of fractionation, initiation, inactivation, repopulation and promotion selected a combination of anterior and lateral beams, which lowered the relative risk by 21% for the bladder and 30% for the rectum compared to the lateral-opposed beam arrangement. Other results were found for other risk models.

  11. Risk-optimized proton therapy to minimize radiogenic second cancers

    PubMed Central

    Rechner, Laura A.; Eley, John G.; Howell, Rebecca M.; Zhang, Rui; Mirkovic, Dragan; Newhauser, Wayne D.

    2015-01-01

    Proton therapy confers substantially lower predicted risk of second cancer compared with photon therapy. However, no previous studies have used an algorithmic approach to optimize beam angle or fluence-modulation for proton therapy to minimize those risks. The objectives of this study were to demonstrate the feasibility of risk-optimized proton therapy and to determine the combination of beam angles and fluence weights that minimize the risk of second cancer in the bladder and rectum for a prostate cancer patient. We used 6 risk models to predict excess relative risk of second cancer. Treatment planning utilized a combination of a commercial treatment planning system and an in-house risk-optimization algorithm. When normal-tissue dose constraints were incorporated in treatment planning, the risk model that incorporated the effects of fractionation, initiation, inactivation, and repopulation selected a combination of anterior and lateral beams, which lowered the relative risk by 21% for the bladder and 30% for the rectum compared to the lateral-opposed beam arrangement. Other results were found for other risk models. PMID:25919133

  12. Development of insurance terms based on risk assessment

    NASA Astrophysics Data System (ADS)

    Kosarev, Alexey; Nepp, Alexander; Nikonov, Oleg; Rushitskaja, Olga

    2013-10-01

    The present work presents a technique for forming insurance terms based on risk assessment for industrial companies. The authors examine the issue of determining optimal insurance coverage based on risk assessment. The assessment is based on the calculation of the value at risk (VaR) of accidents. VaR indicators can help determine optimal levels of deductibles and insurance rates. The paper presents the results of practical testing of the method.
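
    A minimal sketch of the kind of VaR calculation such insurance terms can be built on is shown below; the simulated loss distribution, confidence level, and deductible/cover heuristics are illustrative assumptions, not the authors' method:

      import numpy as np

      rng = np.random.default_rng(0)
      # Hypothetical simulated annual accident losses for an industrial company.
      losses = rng.lognormal(mean=11.0, sigma=1.2, size=10_000)

      confidence = 0.95
      var_95 = np.quantile(losses, confidence)       # value at risk at the 95% level
      cvar_95 = losses[losses >= var_95].mean()      # expected loss beyond the VaR

      # Illustrative use: deductible near typical losses, cover sized to the tail.
      deductible = np.quantile(losses, 0.50)
      cover_limit = var_95
      print(f"VaR(95%)  = {var_95:,.0f}")
      print(f"CVaR(95%) = {cvar_95:,.0f}")
      print(f"suggested deductible ~ {deductible:,.0f}, cover limit ~ {cover_limit:,.0f}")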

  13. Risk based management of piping systems

    SciTech Connect

    Conley, M.J.; Aller, J.E.; Tallin, A.; Weber, B.J.

    1996-07-01

    The API Piping Inspection Code is the first such Code to require classification of piping based on the consequences of failure, and to use this classification to influence inspection activity. Since this Code was published, progress has been made in the development of tools to improve on this approach by determining not only the consequences of failure, but also the likelihood of failure. "Risk" is defined as the product of the consequence and the likelihood. Measuring risk provides the means to formally manage risk by matching the inspection effort (costs) to the benefits of reduced risk. Using such a cost/benefit analysis allows the optimization of inspection budgets while meeting societal demands for reduction of the risk associated with process plant piping. This paper presents an overview of the tools developed to measure risk, and the methods to determine the effects of past and future inspections on the level of risk. The methodology is being developed as an industry-sponsored project under the direction of an API committee. The intent is to develop an API Recommended Practice that will be linked to In-Service Inspection Standards and the emerging Fitness for Service procedures. Actual studies using a similar approach have shown that a very high percentage of the risk due to piping in an operating facility is associated with relatively few pieces of piping. This permits inspection efforts to be focused on those piping systems that will result in the greatest risk reduction.

  14. Risk-based decisionmaking (Panel)

    SciTech Connect

    Smith, T.H.

    1995-12-31

    By means of a panel discussion and extensive audience interaction, this session explores the current challenges and progress to date in applying risk considerations to decisionmaking related to low-level waste. This topic is especially timely because of the proposed legislation pertaining to risk-based decisionmaking and because of the increased emphasis placed on radiological performance assessments of low-level waste disposal.

  15. Cancer risk assessment: Optimizing human health through linear dose-response models.

    PubMed

    Calabrese, Edward J; Shamoun, Dima Yazji; Hanekamp, Jaap C

    2015-07-01

    This paper proposes that generic cancer risk assessments be based on the integration of the Linear Non-Threshold (LNT) and hormetic dose-responses since optimal hormetic beneficial responses are estimated to occur at the dose associated with a 10(-4) risk level based on the use of a LNT model as applied to animal cancer studies. The adoption of the 10(-4) risk estimate provides a theoretical and practical integration of two competing risk assessment models whose predictions cannot be validated in human population studies or with standard chronic animal bioassay data. This model-integration reveals both substantial protection of the population from cancer effects (i.e. functional utility of the LNT model) while offering the possibility of significant reductions in cancer incidence should the hormetic dose-response model predictions be correct. The dose yielding the 10(-4) cancer risk therefore yields the optimized toxicologically based "regulatory sweet spot". PMID:25916915

  16. Optimal linear and nonlinear feature extraction based on the minimization of the increased risk of misclassification. [Bayes theorem - statistical analysis/data processing

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.

    1974-01-01

    General classes of nonlinear and linear transformations were investigated for the reduction of the dimensionality of the classification (feature) space so that, for a prescribed dimension m of this space, the increase of the misclassification risk is minimized.

  17. The Integration of LNT and Hormesis for Cancer Risk Assessment Optimizes Public Health Protection.

    PubMed

    Calabrese, Edward J; Shamoun, Dima Yazji; Hanekamp, Jaap C

    2016-03-01

    This paper proposes a new cancer risk assessment strategy and methodology that optimizes population-based responses by yielding the lowest disease/tumor incidence across the entire dose continuum. The authors argue that the optimization can be achieved by integrating two seemingly conflicting models; i.e., the linear no-threshold (LNT) and hormetic dose-response models. The integration would yield the optimized response at a risk of 10(-4) with the LNT model. The integrative functionality of the LNT and hormetic dose response models provides an improved estimation of tumor incidence through model uncertainty analysis and major reductions in cancer incidence via hormetic model estimates. This novel approach to cancer risk assessment offers significant improvements over current risk assessment approaches by revealing a regulatory sweet spot that maximizes public health benefits while incorporating practical approaches for model validation. PMID:26808876

  18. Requirements based system risk modeling

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Cornford, Steven; Feather, Martin

    2004-01-01

    The problem that we address in this paper is assessing the expected degree of success of the system or mission based on the degree to which each requirement is satisfied and the relative weight of the requirements. We assume a complete list of the requirements, the relevant risk elements and their probability of occurrence and the quantified effect of the risk elements on the requirements. In order to assess the degree to which each requirement is satisfied, we need to determine the effect of the various risk elements on the requirement.

  19. Risk-based Spacecraft Fire Safety Experiments

    NASA Technical Reports Server (NTRS)

    Apostolakis, G.; Catton, I.; Issacci, F.; Paulos, T.; Jones, S.; Paxton, K.; Paul, M.

    1992-01-01

    Viewgraphs on risk-based spacecraft fire safety experiments are presented. Spacecraft fire risk can never be reduced to a zero probability. Probabilistic risk assessment is a tool to reduce risk to an acceptable level.

  20. Game Theory and Risk-Based Levee System Design

    NASA Astrophysics Data System (ADS)

    Hui, R.; Lund, J. R.; Madani, K.

    2014-12-01

    Risk-based analysis has been developed for optimal levee design for economic efficiency. Along many rivers, two levees on opposite riverbanks act as a simple levee system. Being rational and self-interested, land owners on each river bank would tend to independently optimize their levees with risk-based analysis, resulting in a Pareto-inefficient levee system design from the social planner's perspective. Game theory is applied in this study to analyze the decision-making process in a simple levee system in which the land owners on each river bank develop their design strategies using risk-based economic optimization. For each land owner, the annual expected total cost includes expected annual damage cost and annualized construction cost. The non-cooperative Nash equilibrium is identified and compared to the social planner's optimal distribution of flood risk and damage cost throughout the system which results in the minimum total flood cost for the system. The social planner's optimal solution is not feasible without an appropriate level of compensation for the transferred flood risk to guarantee and improve conditions for all parties. Therefore, cooperative game theory is then employed to develop an economically optimal design that can be implemented in practice. By examining the game in the reversible and irreversible decision making modes, the cost of decision making myopia is calculated to underline the significance of considering the externalities and evolution path of dynamic water resource problems for optimal decision making.
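
    The gap between independent (Nash) and socially optimal levee design can be illustrated with a toy model in which raising one levee transfers flood risk to the opposite bank. The cost and flood-probability functions below are hypothetical stand-ins for annualized construction cost plus expected annual damage, chosen only to demonstrate the best-response iteration and the comparison with the system optimum:

      import itertools
      import math

      heights = [h * 0.5 for h in range(13)]   # candidate levee heights, 0 to 6 m
      c_build = 40.0                           # annualized construction cost per metre of levee
      damage = 1000.0                          # damage incurred if the bank floods

      def flood_prob(own, other):
          """Hypothetical annual flood probability: one's own levee protects,
          while the opposite levee pushes flood stages higher (risk transfer)."""
          return min(1.0, 0.2 * math.exp(-1.2 * own + 0.8 * other))

      def cost(own, other):
          """Annual expected total cost for one bank."""
          return c_build * own + damage * flood_prob(own, other)

      # Non-cooperative Nash equilibrium via best-response iteration.
      h1 = h2 = 0.0
      for _ in range(100):
          h1_new = min(heights, key=lambda h: cost(h, h2))
          h2_new = min(heights, key=lambda h: cost(h, h1_new))
          if (h1_new, h2_new) == (h1, h2):
              break
          h1, h2 = h1_new, h2_new

      # Social planner: minimize the total system cost over both levee heights.
      h1s, h2s = min(itertools.product(heights, heights),
                     key=lambda p: cost(p[0], p[1]) + cost(p[1], p[0]))

      print("Nash equilibrium :", (h1, h2), "total cost", round(cost(h1, h2) + cost(h2, h1), 1))
      print("Social optimum   :", (h1s, h2s), "total cost", round(cost(h1s, h2s) + cost(h2s, h1s), 1))

    In this toy setting the equilibrium levees come out higher, and the total system cost larger, than under the social planner's solution, mirroring the Pareto inefficiency discussed above.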

  1. Optimization-Based Models of Muscle Coordination

    PubMed Central

    Prilutsky, Boris I.; Zatsiorsky, Vladimir M.

    2010-01-01

    Optimization-based models may provide reasonably accurate estimates of activation and force patterns of individual muscles in selected well-learned tasks with submaximal efforts. Such optimization criteria as minimum energy expenditure, minimum muscle fatigue, and minimum sense of effort seem most promising. PMID:11800497

  2. Optimization-based models of muscle coordination.

    PubMed

    Prilutsky, Boris I; Zatsiorsky, Vladimir M

    2002-01-01

    Optimization-based models may provide reasonably accurate estimates of activation and force patterns of individual muscles in selected well-learned tasks with submaximal efforts. Such optimization criteria as minimum energy expenditure, minimum muscle fatigue, and minimum sense of effort seem most promising. PMID:11800497

  3. Algorithmic Differentiation for Calculus-based Optimization

    NASA Astrophysics Data System (ADS)

    Walther, Andrea

    2010-10-01

    For numerous applications, the computation and provision of exact derivative information plays an important role for optimizing the considered system but quite often also for its simulation. This presentation introduces the technique of Algorithmic Differentiation (AD), a method to compute derivatives of arbitrary order within working precision. Quite often an additional structure exploitation is indispensable for a successful coupling of these derivatives with state-of-the-art optimization algorithms. The talk will discuss two important situations where the problem-inherent structure allows a calculus-based optimization. Examples from aerodynamics and nano optics illustrate these advanced optimization approaches.
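
    As a minimal illustration of the AD idea, namely that derivatives are propagated through elementary operations to working precision rather than approximated by finite differences, the sketch below implements toy forward-mode AD with dual numbers; it is not one of the tools referred to in the talk:

      import math

      class Dual:
          """Forward-mode AD value: carries f(x) and f'(x) together."""
          def __init__(self, val, dot=0.0):
              self.val, self.dot = val, dot

          def __add__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.val + other.val, self.dot + other.dot)
          __radd__ = __add__

          def __mul__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.val * other.val,
                          self.dot * other.val + self.val * other.dot)
          __rmul__ = __mul__

      def sin(x):
          # Chain rule for sin applied to a dual number.
          return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

      def f(x):
          return x * x + 3 * sin(x)     # f(x) = x^2 + 3 sin x

      x = Dual(1.5, 1.0)                # seed derivative dx/dx = 1
      y = f(x)
      print(y.val, y.dot)               # f(1.5) and f'(1.5) = 2x + 3 cos x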

  4. Optimal Combination Treatment and Vascular Outcomes in Recent Ischemic Stroke Patients by Premorbid Risk Level

    PubMed Central

    Park, Jong-Ho; Ovbiagele, Bruce

    2015-01-01

    Background Optimal combination of secondary stroke prevention treatment including antihypertensives, antithrombotic agents, and lipid modifiers is associated with reduced recurrent vascular risk including stroke. It is unclear whether optimal combination treatment has a differential impact on stroke patients based on level of vascular risk. Methods We analyzed a clinical trial dataset comprising 3680 recent non-cardioembolic stroke patients aged ≥35 years and followed for 2 years. Patients were categorized by appropriateness level 0 to III depending on the number of the drugs prescribed divided by the number of drugs potentially indicated for each patient (0=none of the indicated medications prescribed and III=all indicated medications prescribed [optimal combination treatment]). High-risk was defined as having a history of stroke or coronary heart disease (CHD) prior to the index stroke event. Independent associations of medication appropriateness level with a major vascular event (stroke, CHD, or vascular death), ischemic stroke, and all-cause death were analyzed. Results Compared with level 0, for major vascular events, the HR of level III in the low-risk group was 0.51 (95% CI: 0.20–1.28) and 0.32 (0.14–0.70) in the high-risk group; for stroke, the HR of level III in the low-risk group was 0.54 (0.16–1.77) and 0.25 (0.08–0.85) in the high-risk group; and for all-cause death, the HR of level III in the low-risk group was 0.66 (0.09–5.00) and 0.22 (0.06–0.78) in the high-risk group. Conclusion Optimal combination treatment is related to a significantly lower risk of future vascular events and death among high-risk patients after a recent non-cardioembolic stroke. PMID:26044963

  5. Optimal CO2 mitigation under damage risk valuation

    NASA Astrophysics Data System (ADS)

    Crost, Benjamin; Traeger, Christian P.

    2014-07-01

    The current generation has to set mitigation policy under uncertainty about the economic consequences of climate change. This uncertainty governs both the level of damages for a given level of warming, and the steepness of the increase in damage per warming degree. Our model of climate and the economy is a stochastic version of a model employed in assessing the US Social Cost of Carbon (DICE). We compute the optimal carbon taxes and CO2 abatement levels that maximize welfare from economic consumption over time under different risk states. In accordance with recent developments in finance, we separate preferences about time and risk to improve the model's calibration of welfare to observed market interest. We show that introducing the modern asset pricing framework doubles optimal abatement and carbon taxation. Uncertainty over the level of damages at a given temperature increase can result in a slight increase of optimal emissions as compared to using expected damages. In contrast, uncertainty governing the steepness of the damage increase in temperature results in a substantially higher level of optimal mitigation.

  6. Vehicle Shield Optimization and Risk Assessment of Future NEO Missions

    NASA Technical Reports Server (NTRS)

    Nounu, Hatem, N.; Kim, Myung-Hee; Cucinotta, Francis A.

    2011-01-01

    Future human space missions target far destinations such as Near Earth Objects (NEO) or Mars that require extended stays in hostile radiation environments in deep space. The continuous assessment of exploration vehicles is needed to iteratively optimize the designs for shielding protection and calculating the risks associated with such long missions. We use a predictive software capability that calculates the risks to humans inside a spacecraft. The software uses the CAD software Pro/Engineer and Fishbowl tool kit to quantify the radiation shielding properties of the spacecraft geometry by calculating the areal density seen at a certain point, dose point, inside the spacecraft. The shielding results are used by NASA-developed software, BRYNTRN, to quantify the organ doses received in a human body located in the vehicle in possible solar particle events (SPE) during such prolonged space missions. The organ doses are used to quantify the risks posed to the astronauts' health and life using NASA Space Cancer Model software. An illustration of the shielding optimization and risk calculation on an exploration vehicle design suitable for a NEO mission is provided in this study. The vehicle capsule is made of an aluminum shell and an airlock with hydrogen-rich carbon composite material end caps. The capsule contains sets of racks that surround a working and living area. A water shelter is provided in the middle of the vehicle to enhance the shielding in case of an SPE. The mass distribution is optimized to minimize radiation hotspots and an assessment of the risks associated with a NEO mission is calculated.

  7. Optimization of multi-constrained structures based on optimality criteria

    NASA Technical Reports Server (NTRS)

    Rizzi, P.

    1976-01-01

    A weight-reduction algorithm is developed for the optimal design of structures subject to several multibehavioral inequality constraints. The structural weight is considered to depend linearly on the design variables. The algorithm incorporates a simple recursion formula derived from the Kuhn-Tucker necessary conditions for optimality, associated with a procedure to delete nonactive constraints based on the Gauss-Seidel iterative method for linear systems. A number of example problems are studied, including typical truss structures and simplified wings subject to static loads and with constraints imposed on stresses and displacements. For one of the latter structures, constraints on the fundamental natural frequency and flutter speed are also imposed. The results obtained show that the method is fast, efficient, and general when compared to other competing techniques. Extensions of the method to include equality constraints and nonlinear merit functions are discussed.
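
    The recursion referred to here is, in generic textbook form, the optimality-criteria resizing rule obtained from the Kuhn-Tucker conditions of the weight-minimization problem; the expressions below are a schematic sketch, not the paper's exact formula:

      \min_{x} \; W(x) = \sum_i w_i x_i \quad \text{s.t.} \quad g_j(x) \le 0,
      \qquad
      x_i^{(k+1)} = x_i^{(k)} \left( \frac{\sum_j \lambda_j \,\bigl(-\partial g_j / \partial x_i\bigr)}{\partial W / \partial x_i} \right)^{\eta},

    where the \lambda_j are Lagrange multipliers of the active constraints, updated so that those constraints are driven to equality, and \eta is a damping (step-size) exponent. Constraints found to be nonactive are dropped before each multiplier update, which is the role of the deletion procedure mentioned in the abstract.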

  8. MORT (Management Oversight and Risk Tree) based risk management

    SciTech Connect

    Briscoe, G.J.

    1990-02-01

    Risk management is the optimization of safety programs. This requires a formal systems approach to hazard identification, risk quantification, and resource allocation/risk acceptance, as opposed to case-by-case decisions. The Management Oversight and Risk Tree (MORT) has gained wide acceptance as a comprehensive formal systems approach covering all aspects of risk management. MORT is a comprehensive analytical procedure that provides a disciplined method for determining the causes and contributing factors of major accidents. Alternatively, it serves as a tool to evaluate the quality of an existing safety system. While similar in many respects to fault tree analysis, MORT is more generalized and presents over 1500 specific elements of an ideal "universal" management program for optimizing occupational safety.

  9. Optimal quad-tree-based motion estimator

    NASA Astrophysics Data System (ADS)

    Schuster, Guido M.; Katsaggelos, Aggelos K.

    1996-09-01

    In this paper we propose an optimal quad-tree (QT)-based motion estimator for video compression. It is optimal in the sense that, for a given bit budget for encoding the displacement vector field (DVF) and the QT segmentation, the scheme finds a DVF and a QT segmentation which minimize the energy of the resulting displaced frame difference (DFD). We find the optimal QT decomposition and the optimal DVF jointly using the Lagrangian multiplier method and a multilevel dynamic program. The resulting DVF is spatially inhomogeneous, since large blocks are used in areas with simple motion and small blocks in areas with complex motion. We present results with the proposed QT-based motion estimator which show that, for the same DFD energy, the proposed estimator uses about 30% fewer bits than the commonly used block matching algorithm.
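    A minimal sketch (with synthetic frames and an assumed bit-cost model, not the authors' codec) of the Lagrangian split/merge decision behind such a quad-tree motion estimator: each block is either coded with its best motion vector or split into four children, whichever has the lower cost of DFD energy plus lambda times rate.

```python
# Minimal sketch (not the authors' implementation): a Lagrangian quad-tree
# split/merge decision for block motion estimation. Frame data, the bit-cost
# model, and the search range are illustrative assumptions.
import numpy as np

LAMBDA = 10.0          # Lagrangian multiplier trading DFD energy vs. rate
SEARCH = 4             # +/- search range for motion vectors
BITS_PER_BLOCK = 12    # assumed cost of signalling one vector + block header


def dfd_energy(cur, ref, y, x, size, dy, dx):
    """Energy of the displaced frame difference for one block."""
    h, w = ref.shape
    ys, xs = y + dy, x + dx
    if ys < 0 or xs < 0 or ys + size > h or xs + size > w:
        return np.inf
    diff = cur[y:y + size, x:x + size] - ref[ys:ys + size, xs:xs + size]
    return float(np.sum(diff * diff))


def best_vector(cur, ref, y, x, size):
    """Exhaustive search for the motion vector minimizing DFD energy."""
    best = (np.inf, (0, 0))
    for dy in range(-SEARCH, SEARCH + 1):
        for dx in range(-SEARCH, SEARCH + 1):
            e = dfd_energy(cur, ref, y, x, size, dy, dx)
            if e < best[0]:
                best = (e, (dy, dx))
    return best


def qt_cost(cur, ref, y, x, size, min_size=4):
    """Return (Lagrangian cost, tree) for the optimal quad-tree of one block."""
    energy, mv = best_vector(cur, ref, y, x, size)
    leaf_cost = energy + LAMBDA * BITS_PER_BLOCK
    if size <= min_size:
        return leaf_cost, ("leaf", mv)
    half = size // 2
    children = [qt_cost(cur, ref, y + j, x + i, half, min_size)
                for j in (0, half) for i in (0, half)]
    split_cost = sum(c for c, _ in children) + LAMBDA  # one extra split flag
    if split_cost < leaf_cost:
        return split_cost, ("split", [t for _, t in children])
    return leaf_cost, ("leaf", mv)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.normal(size=(32, 32))
    cur = np.roll(ref, shift=(1, 2), axis=(0, 1)) + 0.05 * rng.normal(size=(32, 32))
    cost, tree = qt_cost(cur, ref, 0, 0, 32)
    print("Lagrangian cost:", round(cost, 2))
```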

  10. DSP code optimization based on cache

    NASA Astrophysics Data System (ADS)

    Xu, Chengfa; Li, Chengcheng; Tang, Bin

    2013-03-01

    A DSP program often runs less efficiently on the target board than in software simulation during development, mainly because of improper use and incomplete understanding of the cache-based memory. Taking the TI TMS320C6455 DSP as an example, this paper analyzes its two-level internal cache and summarizes methods of code optimization. The processor can achieve its best performance when these code optimization methods are used. Finally, a specific algorithm application in radar signal processing is presented. Experimental results show that these optimizations are effective.

  11. Risk analysis of heat recovery steam generator with semi quantitative risk based inspection API 581

    NASA Astrophysics Data System (ADS)

    Prayogo, Galang Sandy; Haryadi, Gunawan Dwi; Ismail, Rifky; Kim, Seon Jin

    2016-04-01

    Corrosion is a major problem that most often occurs in power plants. The heat recovery steam generator (HRSG) is a piece of equipment that poses a high risk to the power plant. Corrosion damage can force the HRSG, and therefore the power plant, to stop operating; furthermore, it could threaten the safety of employees. The Risk Based Inspection (RBI) guidelines of the American Petroleum Institute (API) 581 have been used for risk analysis of HRSG 1. By using this methodology, the risk caused by unexpected failure, as a function of the probability and consequence of failure, can be estimated. This paper presents a case study of risk analysis in the HRSG, starting with a summary of the basic principles and procedures of risk assessment and applying corrosion RBI for process industries. The risk level of each HRSG component was analyzed: the HP superheater has a medium-high risk (4C), the HP evaporator has a medium-high risk (4C), and the HP economizer has a medium risk (3C). The semi-quantitative assessment based on the API 581 standard places the existing equipment at medium risk; in fact, there is no critical problem in the equipment components. The damage mechanism prominent throughout the equipment is thinning. The evaluation of the risk approach was done with the aim of reducing risk by optimizing the risk assessment activities.
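    For illustration, a toy semi-quantitative risk matrix in the spirit of API 581. The category-to-level mapping below is an assumption chosen only to reproduce the rankings quoted above (4C as medium-high, 3C as medium); the standard itself defines the actual cell assignments.

```python
# Illustrative sketch only: a simple 5x5 semi-quantitative risk matrix in the
# spirit of API 581. The scoring rule is an assumption, not the standard's table.

CONSEQUENCE = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}


def risk_category(probability: int, consequence: str) -> str:
    """Map a probability category (1-5) and consequence category (A-E)
    to a qualitative risk level."""
    score = probability + CONSEQUENCE[consequence.upper()]
    if score <= 4:
        return "low"
    if score <= 6:
        return "medium"
    if score <= 8:
        return "medium-high"
    return "high"


if __name__ == "__main__":
    for item, prob, cons in [("HP superheater", 4, "C"),
                             ("HP evaporator", 4, "C"),
                             ("HP economizer", 3, "C")]:
        print(item, "->", f"{prob}{cons}", risk_category(prob, cons))
```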

  12. Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki

    2013-01-01

    A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on the expectation of a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
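    The risk-allocation and dualization steps can be summarized as follows; this is a sketch of the idea in our own notation, not the paper's exact formulation.

```latex
% Sketch of the risk-allocation and dualization steps (notation is ours).
% Joint chance constraint over stages t = 1..T with failure sets F_t and
% total risk bound Delta, conservatively decomposed by Boole's inequality:
\[
  \Pr\Big(\bigcup_{t=1}^{T}\{x_t \in \mathcal{F}_t\}\Big) \le \Delta
  \quad\Longleftarrow\quad
  \Pr(x_t \in \mathcal{F}_t) \le \delta_t \ \ \forall t,
  \qquad \sum_{t=1}^{T} \delta_t \le \Delta .
\]
% Each individual constraint is an expectation of an indicator function, and
% dualizing with a multiplier lambda >= 0 folds it into the stage costs,
% giving an unconstrained problem that standard DP can solve:
\[
  L(\pi,\lambda) \;=\; \mathbb{E}\Big[\sum_{t=1}^{T} c_t(x_t,u_t)\Big]
  \;+\; \lambda\,\Big(\mathbb{E}\Big[\sum_{t=1}^{T}\mathbf{1}\{x_t \in \mathcal{F}_t\}\Big] - \Delta\Big).
\]
```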

  13. Optimal Information-based Classification

    NASA Astrophysics Data System (ADS)

    Hyun, Baro

    Classification is the allocation of an object to an existing category among several based on uncertain measurements. Since information is used to quantify uncertainty, it is natural to consider classification and information as complementary subjects. This dissertation touches upon several topics that relate to the problem of classification, such as information, classification, and team classification. Motivated by the U.S. Air Force Intelligence, Surveillance, and Reconnaissance missions, we investigate the aforementioned topics for classifiers that follow two models: classifiers with workload-independent and workload-dependent performance. We adopt workload-independence and dependence as "first-order" models to capture the features of machines and humans, respectively. We first investigate the relationship between information in the sense of Shannon and classification performance, which is defined as the probability of misclassification. We show that while there is a predominant congruence between them, there are cases when such congruence is violated. We show the phenomenon for both workload-independent and workload-dependent classifiers and investigate the cause of such phenomena analytically. One way of making classification decisions is by setting a threshold on a measured quantity. For instance, if a measurement falls on one side of the threshold, the object that provided the measurement is classified as one type, otherwise, it is of another type. Exploiting thresholding, we formalize a classifier with dichotomous decisions (i.e., with two options, such as true or false) given a single variable measurement. We further extend the formalization to classifiers with trichotomy (i.e., with three options, such as true, false or unknown) and with multivariate measurements. When a team of classifiers is considered, issues on how to exploit redundant numbers of classifiers arise. We analyze these classifiers under different architectures, such as parallel or nested

  14. Estimation of the Optimal Statistical Quality Control Sampling Time Intervals Using a Residual Risk Measure

    PubMed Central

    Hatjimihail, Aristides T.

    2009-01-01

    Background An open problem in clinical chemistry is the estimation of the optimal sampling time intervals for the application of statistical quality control (QC) procedures that are based on the measurement of control materials. This is a probabilistic risk assessment problem that requires reliability analysis of the analytical system, and the estimation of the risk caused by the measurement error. Methodology/Principal Findings Assuming that the states of the analytical system are the reliability state, the maintenance state, the critical-failure modes and their combinations, we can define risk functions based on the mean time of the states, their measurement error and the medically acceptable measurement error. Consequently, a residual risk measure rr can be defined for each sampling time interval. The rr depends on the state probability vectors of the analytical system, the state transition probability matrices before and after each application of the QC procedure and the state mean time matrices. Optimal sampling time intervals can then be defined as those minimizing a QC-related cost measure while the rr remains acceptable. I developed an algorithm that estimates the rr for any QC sampling time interval of a QC procedure applied to analytical systems with an arbitrary number of critical-failure modes, assuming any failure time and measurement error probability density function for each mode. Furthermore, given the acceptable rr, it can estimate the optimal QC sampling time intervals. Conclusions/Significance It is possible to rationally estimate the optimal QC sampling time intervals of an analytical system to sustain an acceptable residual risk with the minimum QC-related cost. For the optimization, the reliability analysis of the analytical system and the risk analysis of the measurement error are needed. PMID:19513124

  15. Inspection-Repair based Availability Optimization of Distribution Systems using Teaching Learning based Optimization

    NASA Astrophysics Data System (ADS)

    Tiwary, Aditya; Arya, L. D.; Arya, Rajesh; Choube, S. C.

    2015-03-01

    This paper describes a technique for optimizing the inspection- and repair-based availability of distribution systems. The optimum duration between two inspections has been obtained for each feeder section with respect to a cost function and subject to satisfaction of availability at each load point. Teaching-learning-based optimization has been used for the availability optimization. The developed algorithm has been implemented on radial and meshed distribution systems. The results obtained have been compared with those obtained using differential evolution.
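    A compact sketch of the teaching-learning-based optimization (TLBO) loop on a placeholder objective. In the paper the decision variables would be the inspection durations for each feeder section and the cost function would carry the load-point availability constraints; both are stand-ins here.

```python
# Minimal sketch of Teaching-Learning-Based Optimization (TLBO) for continuous
# minimization. The cost function below is a placeholder, not the paper's
# inspection/repair availability model.
import numpy as np

rng = np.random.default_rng(1)


def cost(x):
    return float(np.sum(x ** 2))  # placeholder objective


def tlbo(dim=5, pop=20, lower=-10.0, upper=10.0, iters=100):
    X = rng.uniform(lower, upper, size=(pop, dim))
    f = np.array([cost(x) for x in X])
    for _ in range(iters):
        # Teacher phase: move the class toward the best learner.
        teacher = X[np.argmin(f)]
        mean = X.mean(axis=0)
        TF = rng.integers(1, 3)              # teaching factor, 1 or 2
        for i in range(pop):
            cand = np.clip(X[i] + rng.random(dim) * (teacher - TF * mean),
                           lower, upper)
            fc = cost(cand)
            if fc < f[i]:
                X[i], f[i] = cand, fc
        # Learner phase: learn pairwise from a random classmate.
        for i in range(pop):
            j = rng.integers(pop)
            if j == i:
                continue
            direction = X[i] - X[j] if f[i] < f[j] else X[j] - X[i]
            cand = np.clip(X[i] + rng.random(dim) * direction, lower, upper)
            fc = cost(cand)
            if fc < f[i]:
                X[i], f[i] = cand, fc
    best = np.argmin(f)
    return X[best], f[best]


if __name__ == "__main__":
    x_best, f_best = tlbo()
    print("best cost:", round(f_best, 6))
```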

  16. Shape optimization of pulsatile ventricular assist devices using FSI to minimize thrombotic risk

    NASA Astrophysics Data System (ADS)

    Long, C. C.; Marsden, A. L.; Bazilevs, Y.

    2014-10-01

    In this paper we perform shape optimization of a pediatric pulsatile ventricular assist device (PVAD). The device simulation is carried out using fluid-structure interaction (FSI) modeling techniques within a computational framework that combines FEM for fluid mechanics and isogeometric analysis for structural mechanics modeling. The PVAD FSI simulations are performed under realistic conditions (i.e., flow speeds, pressure levels, boundary conditions, etc.), and account for the interaction of air, blood, and a thin structural membrane separating the two fluid subdomains. The shape optimization study is designed to reduce thrombotic risk, a major clinical problem in PVADs. Thrombotic risk is quantified in terms of particle residence time in the device blood chamber. Methods to compute particle residence time in the context of moving spatial domains are presented in a companion paper published in the same issue (Comput Mech, doi: 10.1007/s00466-013-0931-y, 2013). The surrogate management framework, a derivative-free pattern search optimization method that relies on surrogates for increased efficiency, is employed in this work. For the optimization study shown here, particle residence time is used to define a suitable cost or objective function, while four adjustable design optimization parameters are used to define the device geometry. The FSI-based optimization framework is implemented in a parallel computing environment, and deployed with minimal user intervention. Using five SEARCH/POLL steps, the optimization scheme identifies a PVAD design with significantly better throughput efficiency than the original device.

  17. Optimal dividends in the Brownian motion risk model with interest

    NASA Astrophysics Data System (ADS)

    Fang, Ying; Wu, Rong

    2009-07-01

    In this paper, we consider a Brownian motion risk model, and in addition, the surplus earns investment income at a constant force of interest. The objective is to find a dividend policy so as to maximize the expected discounted value of dividend payments. It is well known that optimality is achieved by using a barrier strategy for unrestricted dividend rate. However, ultimate ruin of the company is certain if a barrier strategy is applied. In many circumstances this is not desirable. This consideration leads us to impose a restriction on the dividend stream. We assume that dividends are paid to the shareholders according to admissible strategies whose dividend rate is bounded by a constant. Under this additional constraint, we show that the optimal dividend strategy is formed by a threshold strategy.
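    A Monte Carlo sketch (not the paper's analytic argument) of valuing a threshold dividend strategy for a Brownian surplus earning interest: dividends are paid at the maximum admissible rate M whenever the surplus exceeds a barrier b. The premium rate, volatility, interest and discount forces, and barrier values are illustrative assumptions.

```python
# Monte Carlo sketch of expected discounted dividends under a threshold
# strategy for a Brownian motion surplus with interest. All parameters are
# illustrative; the paper derives the optimal strategy analytically.
import numpy as np

rng = np.random.default_rng(42)


def discounted_dividends(x0, b, M, mu=1.0, sigma=2.0, r=0.02, delta=0.05,
                         dt=0.01, horizon=50.0):
    """Simulate one surplus path; return the discounted dividends paid."""
    x, t, total = x0, 0.0, 0.0
    while t < horizon and x > 0.0:
        rate = M if x > b else 0.0                   # threshold strategy
        drift = mu + r * x - rate                    # premium + interest - dividends
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        total += np.exp(-delta * t) * rate * dt      # discount at force delta
        t += dt
    return total


def value(x0, b, M, n_paths=500):
    return float(np.mean([discounted_dividends(x0, b, M) for _ in range(n_paths)]))


if __name__ == "__main__":
    # Compare a few barriers to see where the threshold strategy performs best.
    for b in (2.0, 5.0, 10.0):
        print(f"b = {b:4.1f}  value ~ {value(x0=5.0, b=b, M=1.5):.3f}")
```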

  18. Risk-based system refinement

    SciTech Connect

    Winter, V.L.; Berg, R.S.; Dalton, L.J.

    1998-06-01

    When designing a high consequence system, considerable care should be taken to ensure that the system cannot easily be placed into a high consequence failure state. A formal system design process should include a model that explicitly shows the complete state space of the system (including failure states) as well as those events (e.g., abnormal environmental conditions, component failures, etc.) that can cause a system to enter a failure state. In this paper the authors present such a model and formally develop a notion of risk-based refinement with respect to the model.

  19. Risk-based planning analysis for a single levee

    NASA Astrophysics Data System (ADS)

    Hui, Rui; Jachens, Elizabeth; Lund, Jay

    2016-04-01

    Traditional risk-based analysis for levee planning focuses primarily on overtopping failure. Although many levees fail before overtopping, few planning studies explicitly include intermediate geotechnical failures in flood risk analysis. This study develops a risk-based model for two simplified levee failure modes: overtopping failure and overall intermediate geotechnical failure from through-seepage, determined by the levee cross section represented by levee height and crown width. Overtopping failure is based only on water level and levee height, while through-seepage failure depends on many geotechnical factors as well, mathematically represented here as a function of levee crown width using levee fragility curves developed from professional judgment or analysis. These levee planning decisions are optimized to minimize the annual expected total cost, which sums expected (residual) annual flood damage and annualized construction costs. Applicability of this optimization approach to planning new levees or upgrading existing levees is demonstrated preliminarily for a levee on a small river protecting agricultural land, and a major levee on a large river protecting a more valuable urban area. Optimized results show higher likelihood of intermediate geotechnical failure than overtopping failure. The effects of uncertainty in levee fragility curves, economic damage potential, construction costs, and hydrology (changing climate) are explored. Optimal levee crown width is more sensitive to these uncertainties than height, while the derived general principles and guidelines for risk-based optimal levee planning remain the same.
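    A toy version of the optimization described above: choose levee height and crown width to minimize annualized construction cost plus expected annual damage from overtopping and through-seepage. The hydrology, fragility, damage and cost functions below are assumed stand-ins, not the study's calibrated models.

```python
# Illustrative sketch of risk-based levee sizing: pick height H and crown width
# W minimizing annualized construction cost + expected annual flood damage.
import numpy as np

rng = np.random.default_rng(7)
DAMAGE = 50e6          # assumed flood damage if the levee fails ($)
N_YEARS = 20000        # Monte Carlo sample of annual peak water levels
LEVELS = rng.gumbel(loc=3.0, scale=1.0, size=N_YEARS)   # annual peak stage (m)


def seepage_fragility(level, width):
    """Assumed conditional probability of through-seepage failure."""
    return 1.0 / (1.0 + np.exp(-(level - 2.0 - 0.8 * width)))


def expected_annual_damage(height, width):
    overtop = LEVELS > height
    seep = (~overtop) & (rng.random(N_YEARS) < seepage_fragility(LEVELS, width))
    return DAMAGE * np.mean(overtop | seep)


def annualized_cost(height, width, unit_cost=4e5, rate=0.05, life=50):
    capital = unit_cost * height * (width + 0.5 * height)   # crude volume proxy
    crf = rate / (1.0 - (1.0 + rate) ** -life)              # capital recovery factor
    return capital * crf


if __name__ == "__main__":
    best = min(((annualized_cost(h, w) + expected_annual_damage(h, w), h, w)
                for h in np.arange(3.0, 8.1, 0.5)
                for w in np.arange(1.0, 6.1, 0.5)),
               key=lambda t: t[0])
    print(f"optimal height {best[1]:.1f} m, crown width {best[2]:.1f} m, "
          f"annual cost ${best[0]:,.0f}")
```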

  20. Surrogate-based Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin

    2005-01-01

    A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time-consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
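    A minimal response-surface sketch of the SBAO workflow: sample an "expensive" model at a few design points, fit a cheap quadratic surrogate by least squares, and optimize the surrogate instead of the original model. The expensive function here is a stand-in, not a rocket injector analysis.

```python
# Minimal surrogate-based optimization sketch: design of experiments, quadratic
# response surface fit, and optimization of the cheap surrogate.
import numpy as np

rng = np.random.default_rng(3)


def expensive(x):
    """Placeholder for a high-fidelity simulation (e.g., CFD of an injector)."""
    return (x[..., 0] - 1.0) ** 2 + 2.0 * (x[..., 1] + 0.5) ** 2 \
        + 0.3 * np.sin(3.0 * x[..., 0])


def quad_features(x):
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2], axis=-1)


# 1. Design of experiments: a small random sample of the design space.
X = rng.uniform(-2.0, 2.0, size=(15, 2))
y = expensive(X)

# 2. Fit the surrogate (least-squares quadratic response surface).
beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# 3. Optimize the cheap surrogate on a dense grid of candidate designs.
g1, g2 = np.meshgrid(np.linspace(-2, 2, 201), np.linspace(-2, 2, 201))
cand = np.stack([g1.ravel(), g2.ravel()], axis=-1)
pred = quad_features(cand) @ beta
x_star = cand[np.argmin(pred)]

print("surrogate optimum:", np.round(x_star, 2),
      "true objective there:", round(float(expensive(x_star)), 3))
```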

  1. Estimating vegetation dryness to optimize fire risk assessment with spot vegetation satellite data in savanna ecosystems

    NASA Astrophysics Data System (ADS)

    Verbesselt, J.; Somers, B.; Lhermitte, S.; van Aardt, J.; Jonckheere, I.; Coppin, P.

    2005-10-01

    The lack of information on vegetation dryness prior to the use of fire as a management tool often leads to a significant deterioration of the savanna ecosystem. This paper therefore evaluated the capacity of SPOT VEGETATION time-series to monitor the vegetation dryness (i.e., vegetation moisture content per vegetation amount) in order to optimize fire risk assessment in the savanna ecosystem of Kruger National Park in South Africa. The integrated Relative Vegetation Index approach (iRVI) to quantify the amount of herbaceous biomass at the end of the rain season and the Accumulated Relative Normalized Difference vegetation index decrement (ARND) related to vegetation moisture content were selected. The iRVI and ARND related to vegetation amount and moisture content, respectively, were combined in order to monitor vegetation dryness and optimize fire risk assessment in the savanna ecosystems. In situ fire activity data was used to evaluate the significance of the iRVI and ARND to monitor vegetation dryness for fire risk assessment. Results from the binary logistic regression analysis confirmed that the assessment of fire risk was optimized by integration of both the vegetation quantity (iRVI) and vegetation moisture content (ARND) as statistically significant explanatory variables. Consequently, the integrated use of both iRVI and ARND to monitor vegetation dryness provides a more suitable tool for fire management and suppression compared to other traditional satellite-based fire risk assessment methods, only related to vegetation moisture content.
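    A hedged sketch of the kind of binary logistic regression used to relate fire activity to vegetation amount (iRVI) and moisture (ARND). The data below are synthetic stand-ins, not the SPOT VEGETATION time-series or the in situ fire records.

```python
# Synthetic illustration of a binary logistic regression linking fire
# occurrence to a fuel-amount proxy (iRVI) and a dryness proxy (ARND).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

irvi = rng.normal(0.0, 1.0, n)   # herbaceous biomass proxy
arnd = rng.normal(0.0, 1.0, n)   # vegetation dryness proxy

# Assumed ground truth: fires are more likely with more fuel and drier vegetation.
logit = -0.5 + 1.2 * irvi + 1.5 * arnd
fire = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([irvi, arnd])
model = LogisticRegression().fit(X, fire)

print("coefficients (iRVI, ARND):", np.round(model.coef_[0], 2))
print("fire probability for a wet, low-fuel pixel:",
      round(model.predict_proba([[-1.0, -1.0]])[0, 1], 3))
```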

  2. Local, Optimization-based Simplicial Mesh Smoothing

    Energy Science and Technology Software Center (ESTSC)

    1999-12-09

    OPT-MS is a C software package for the improvement and untangling of simplicial meshes (triangles in 2D, tetrahedra in 3D). Overall mesh quality is improved by iterating over the mesh vertices and adjusting their position to optimize some measure of mesh quality, such as element angle or aspect ratio. Several solution techniques (including Laplacian smoothing, "Smart" Laplacian smoothing, optimization-based smoothing and several combinations thereof) and objective functions (for example, element angle, sin(angle), and aspect ratio) are available to the user for both two- and three-dimensional meshes. If the mesh contains invalid elements (those with negative area) a different optimization algorithm for mesh untangling is provided.

  3. Assessment of Medical Risks and Optimization of their Management using Integrated Medical Model

    NASA Technical Reports Server (NTRS)

    Fitts, Mary A.; Madurai, Siram; Butler, Doug; Kerstman, Eric; Risin, Diana

    2008-01-01

    The Integrated Medical Model (IMM) Project is a software-based technique that will identify and quantify the medical needs and health risks of exploration crew members during space flight and evaluate the effectiveness of potential mitigation strategies. The IMM Project employs an evidence-based approach that will quantify probability and consequences of defined in-flight medical risks, mitigation strategies, and tactics to optimize crew member health. Using stochastic techniques, the IMM will ultimately inform decision makers at both programmatic and institutional levels and will enable objective assessment of crew health and optimization of mission success using data from relevant cohort populations and from the astronaut population. The objectives of the project include: 1) identification and documentation of conditions that may occur during exploration missions (Baseline Medical Conditions List [BMCL]), 2) assessment of the likelihood of conditions in the BMCL occurring during exploration missions (incidence rate), 3) determination of the risk associated with these conditions, quantified in terms of end states (Loss of Crew, Loss of Mission, Evacuation), 4) optimization of in-flight hardware mass, volume, power, bandwidth and cost for a given level of risk or uncertainty, and 5) validation of the methodologies used.

  4. A risk-reduction approach for optimal software release time determination with the delay incurred cost

    NASA Astrophysics Data System (ADS)

    Peng, Rui; Li, Yan-Fu; Zhang, Jun-Guang; Li, Xiang

    2015-07-01

    Most existing research on software release time determination assumes that parameters of the software reliability model (SRM) are deterministic and the reliability estimate is accurate. In practice, however, there exists a risk that the reliability requirement cannot be guaranteed due to the parameter uncertainties in the SRM, and such risk can be as high as 50% when the mean value is used. It is necessary for the software project managers to reduce the risk to a lower level by delaying the software release, which inevitably increases the software testing costs. In order to incorporate the managers' preferences over these two factors, a decision model based on multi-attribute utility theory (MAUT) is developed for the determination of optimal risk-reduction release time.

  5. LP based approach to optimal stable matchings

    SciTech Connect

    Teo, Chung-Piaw; Sethuraman, J.

    1997-06-01

    We study the classical stable marriage and stable roommates problems using a polyhedral approach. We propose a new LP formulation for the stable roommates problem. This formulation is non-empty if and only if the underlying roommates problem has a stable matching. Furthermore, for certain special weight functions on the edges, we construct a 2-approximation algorithm for the optimal stable roommates problem. Our technique uses a crucial geometry of the fractional solutions in this formulation. For the stable marriage problem, we show that a related geometry allows us to express any fractional solution in the stable marriage polytope as convex combination of stable marriage solutions. This leads to a genuinely simple proof of the integrality of the stable marriage polytope. Based on these ideas, we devise a heuristic to solve the optimal stable roommates problem. The heuristic combines the power of rounding and cutting-plane methods. We present some computational results based on preliminary implementations of this heuristic.

  6. EUD-based biological optimization for carbon ion therapy

    SciTech Connect

    Brüningk, Sarah C. Kamp, Florian; Wilkens, Jan J.

    2015-11-15

    therapy, the optimization by biological objective functions resulted in slightly superior treatment plans in terms of final EUD for the organs at risk (OARs) compared to voxel-based optimization approaches. This observation was made independent of the underlying objective function metric. An absolute gain in OAR sparing was observed for quadratic objective functions, whereas intersecting DVHs were found for logistic approaches. Even for considerable under- or overestimations of the used effect- or dose–volume parameters during the optimization, treatment plans were obtained that were of similar quality to the results of a voxel-based optimization. Conclusions: EUD-based optimization with either of the presented concepts can successfully be applied to treatment plan optimization. This makes EUD-based optimization for carbon ion therapy a useful tool to optimize more specifically in the sense of biological outcome while voxel-to-voxel variations of the biological effectiveness are still properly accounted for. This may be advantageous in terms of computational cost during treatment plan optimization but also enables a straightforward comparison of different fractionation schemes or treatment modalities.

  7. Base distance optimization for SQUID gradiometers

    SciTech Connect

    Garachtchenko, A.; Matlashov, A.; Kraus, R.

    1998-12-31

    The measurement of magnetic fields generated by weak nearby biomagnetic sources is affected by ambient noise generated by distant sources both internal and external to the subject under study. External ambient noise results from sources with numerous origins, many of which are unpredictable in nature. Internal noise sources are biomagnetic in nature and result from muscle activity (such as the heart, eye blinks, respiration, etc.), pulsation associated with blood flow, surgical implants, etc. Any magnetic noise will interfere with measurements of magnetic sources of interest, such as magnetoencephalography (MEG), in various ways. One of the most effective methods of reducing the magnetic noise measured by the SQUID sensor is to use properly designed superconducting gradiometers. Here, the authors optimized the baseline length of SQUID-based symmetric axial gradiometers using computer simulation. The signal-to-noise ratio (SNR) was used as the optimization criteria. They found that in most cases the optimal baseline is not equal to the depth of the primary source, rather it has a more complex dependence on the gradiometer balance and the ambient magnetic noise. They studied both first and second order gradiometers in simulated shielded environments and only second order gradiometers in a simulated unshielded environment. The noise source was simulated as a distant dipolar source for the shielded cases. They present optimal gradiometer baseline lengths for the various simulated situations below.

  8. Optimal network solution for proactive risk assessment and emergency response

    NASA Astrophysics Data System (ADS)

    Cai, Tianxing

    Coupled with continuous development in the field of industrial operation management, the requirement for operation optimization in large-scale manufacturing networks has provoked more interest in engineering research. Compared with the traditional way of taking remedial measures after the occurrence of an emergency event or abnormal situation, current operation control calls for more proactive risk assessment to set up early warning systems and comprehensive emergency response planning. Among all industries, the chemical and energy industries are more likely to face abnormal and emergency situations due to their own characteristics. Therefore, the purpose of this study is to develop methodologies to aid emergency response planning and proactive risk assessment in the above two industries. The efficacy of the developed methodologies is demonstrated via two real industrial problems. The first case handles energy network dispatch optimization under an emergency of local energy shortage under extreme conditions such as earthquake, tsunami, and hurricane, which may cause local areas to suffer from delayed rescues, widespread power outages, tremendous economic losses, and even public safety threats. In such urgent events of local energy shortage, agile energy dispatching through an effective energy transportation network, targeting the minimum energy recovery time, should be a top priority. The second case is a scheduling methodology to coordinate multiple chemical plants' start-ups in order to minimize regional air quality impacts under extreme meteorological conditions. The objective is to reschedule the multi-plant start-up sequence to achieve the minimum sum of delay time compared to the expected start-up time of each plant. All these approaches can provide quantitative decision support for multiple stakeholders, including government and environment agencies, the chemical industry, the energy industry and local

  9. Quantified Risk Ranking Model for Condition-Based Risk and Reliability Centered Maintenance

    NASA Astrophysics Data System (ADS)

    Chattopadhyaya, Pradip Kumar; Basu, Sushil Kumar; Majumdar, Manik Chandra

    2016-03-01

    In the recent past, the risk and reliability centered maintenance (RRCM) framework was introduced with a shift in methodological focus from reliability and probabilities (expected values) to reliability, uncertainty and risk. In this paper the authors explain a novel methodology for risk quantification and for ranking critical items to prioritize maintenance actions on the basis of condition-based risk and reliability centered maintenance (CBRRCM). The critical items are identified through criticality analysis of the RPN values of the items of a system, and the maintenance significant precipitating factors (MSPF) of the items are evaluated. The criticality of risk is assessed using three risk coefficients. The likelihood risk coefficient treats the probability as a fuzzy number. The abstract risk coefficient captures risk influenced by uncertainty and sensitivity, besides other factors. The third risk coefficient, called the hazardous risk coefficient, accounts for anticipated hazards which may occur in the future; its risk is deduced from criteria of consequences on safety, environment, maintenance and economic risks, with corresponding costs for the consequences. The characteristic values of all three risk coefficients are obtained with a particular test. With a few more tests on the system, the values may change significantly within the controlling range of each coefficient; hence, random number simulation is used to obtain one distinctive value for each coefficient. The risk coefficients are statistically added to obtain the final risk coefficient of each critical item, and the final rankings of the critical items are then estimated. The prioritization and ranking of critical items using the developed mathematical model for risk assessment shall be useful in optimizing financial losses and the timing of maintenance actions.

  10. Optimizing footwear for older people at risk of falls.

    PubMed

    Menant, Jasmine C; Steele, Julie R; Menz, Hylton B; Munro, Bridget J; Lord, Stephen R

    2008-01-01

    Footwear influences balance and the subsequent risk of slips, trips, and falls by altering somatosensory feedback to the foot and ankle and modifying frictional conditions at the shoe/floor interface. Walking indoors barefoot or in socks and walking indoors or outdoors in high-heel shoes have been shown to increase the risk of falls in older people. Other footwear characteristics such as heel collar height, sole hardness, and tread and heel geometry also influence measures of balance and gait. Because many older people wear suboptimal shoes, maximizing safe shoe use may offer an effective fall prevention strategy. Based on findings of a systematic literature review, older people should wear shoes with low heels and firm slip-resistant soles both inside and outside the home. Future research should investigate the potential benefits of tread sole shoes for preventing slips and whether shoes with high collars or flared soles can enhance balance when challenging tasks are undertaken. PMID:19235118

  11. Optimizing the plant-based diet.

    PubMed

    Mann, J I

    2000-09-01

    Any attempt to optimize a plant-based diet necessitates an identification of the features of the diet which confer benefit as well as any which may be associated with detrimental effects. The former task is more difficult than might be assumed as there is no doubt that some of the apparent health benefits observed amongst vegetarians are a consequence of environmental determinants of health which characterize groups of people who choose vegetarian diets, rather than dietary practices. This review will consider the major health benefits of plant-based diets, the specific foods or nutrients which confer the benefits as far as can be ascertained from present knowledge, potential nutrient deficiencies associated with a plant-based diet and nutritional strategies that can be employed to prevent any such deficiencies. PMID:24398280

  12. Optimal halftoning for network-based imaging

    NASA Astrophysics Data System (ADS)

    Ostromoukhov, Victor

    2000-12-01

    In this contribution, we introduce a multiple depth progressive representation for network-based still and moving images. A simple quantization algorithm associated with this representation provides optimal image quality. By optimum, we mean the best possible visual quality for a given value of information under real-life constraints such as physical, psychological, or legal constraints. A special variant of the algorithm, multi-depth coherent error diffusion, addresses a specific problem of temporal coherence between frames in moving images. The output produced with our algorithm is visually pleasant because its Fourier spectrum is close to 'blue noise'.

  13. GPU-based ultrafast IMRT plan optimization

    NASA Astrophysics Data System (ADS)

    Men, Chunhua; Gu, Xuejun; Choi, Dongju; Majumdar, Amitava; Zheng, Ziyi; Mueller, Klaus; Jiang, Steve B.

    2009-11-01

    The widespread adoption of on-board volumetric imaging in cancer radiotherapy has stimulated research efforts to develop online adaptive radiotherapy techniques to handle the inter-fraction variation of the patient's geometry. Such efforts face major technical challenges to perform treatment planning in real time. To overcome this challenge, we are developing a supercomputing online re-planning environment (SCORE) at the University of California, San Diego (UCSD). As part of the SCORE project, this paper presents our work on the implementation of an intensity-modulated radiation therapy (IMRT) optimization algorithm on graphics processing units (GPUs). We adopt a penalty-based quadratic optimization model, which is solved by using a gradient projection method with Armijo's line search rule. Our optimization algorithm has been implemented in CUDA for parallel GPU computing as well as in C for serial CPU computing for comparison purpose. A prostate IMRT case with various beamlet and voxel sizes was used to evaluate our implementation. On an NVIDIA Tesla C1060 GPU card, we have achieved speedup factors of 20-40 without losing accuracy, compared to the results from an Intel Xeon 2.27 GHz CPU. For a specific nine-field prostate IMRT case with 5 × 5 mm2 beamlet size and 2.5 × 2.5 × 2.5 mm3 voxel size, our GPU implementation takes only 2.8 s to generate an optimal IMRT plan. Our work has therefore solved a major problem in developing online re-planning technologies for adaptive radiotherapy.
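    A CPU-only NumPy sketch of the optimization model named above: a penalty-based quadratic objective over nonnegative beamlet intensities, solved by gradient projection with an Armijo line search. The dose-deposition matrix, prescriptions and weights are random stand-ins, not clinical data or the authors' CUDA implementation.

```python
# Sketch of penalty-based quadratic IMRT optimization solved by gradient
# projection with Armijo backtracking (illustrative data, CPU only).
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 400, 60
D = rng.random((n_voxels, n_beamlets)) * 0.1      # dose-deposition matrix
d_ref = rng.random(n_voxels)                      # prescribed / threshold dose
w = rng.uniform(0.5, 2.0, n_voxels)               # per-voxel penalty weights


def objective(x):
    r = D @ x - d_ref
    return float(np.sum(w * r * r))


def gradient(x):
    return 2.0 * D.T @ (w * (D @ x - d_ref))


def project(x):
    return np.maximum(x, 0.0)                     # beamlet intensities >= 0


def gradient_projection(x0, iters=200, alpha0=1.0, beta=0.5, c=1e-4):
    x = project(x0)
    for _ in range(iters):
        f, g = objective(x), gradient(x)
        alpha = alpha0
        while True:                               # Armijo backtracking
            x_new = project(x - alpha * g)
            if objective(x_new) <= f + c * g @ (x_new - x) or alpha < 1e-12:
                break
            alpha *= beta
        x = x_new
    return x


if __name__ == "__main__":
    x_opt = gradient_projection(np.zeros(n_beamlets))
    print("final objective:", round(objective(x_opt), 4))
```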

  14. GPU-based ultrafast IMRT plan optimization.

    PubMed

    Men, Chunhua; Gu, Xuejun; Choi, Dongju; Majumdar, Amitava; Zheng, Ziyi; Mueller, Klaus; Jiang, Steve B

    2009-11-01

    The widespread adoption of on-board volumetric imaging in cancer radiotherapy has stimulated research efforts to develop online adaptive radiotherapy techniques to handle the inter-fraction variation of the patient's geometry. Such efforts face major technical challenges to perform treatment planning in real time. To overcome this challenge, we are developing a supercomputing online re-planning environment (SCORE) at the University of California, San Diego (UCSD). As part of the SCORE project, this paper presents our work on the implementation of an intensity-modulated radiation therapy (IMRT) optimization algorithm on graphics processing units (GPUs). We adopt a penalty-based quadratic optimization model, which is solved by using a gradient projection method with Armijo's line search rule. Our optimization algorithm has been implemented in CUDA for parallel GPU computing as well as in C for serial CPU computing for comparison purpose. A prostate IMRT case with various beamlet and voxel sizes was used to evaluate our implementation. On an NVIDIA Tesla C1060 GPU card, we have achieved speedup factors of 20-40 without losing accuracy, compared to the results from an Intel Xeon 2.27 GHz CPU. For a specific nine-field prostate IMRT case with 5 x 5 mm(2) beamlet size and 2.5 x 2.5 x 2.5 mm(3) voxel size, our GPU implementation takes only 2.8 s to generate an optimal IMRT plan. Our work has therefore solved a major problem in developing online re-planning technologies for adaptive radiotherapy. PMID:19826201

  15. Cytogenetic bases for risk inference

    SciTech Connect

    Bender, M A

    1980-01-01

    Various environmental pollutants are suspected of being capable of causing cancers or genetic defects even at low levels of exposure. In order to estimate risk from exposure to these pollutants, it would be useful to have some indicator of exposure. It is suggested that chromosomes are ideally suited for this purpose. Through the phenomena of chromosome aberrations and sister chromatid exchanges (SCE), chromosomes respond to virtually all carcinogens and mutagens. Aberrations and SCE are discussed in the context of their use as indicators of increased risk to health by chemical pollutants. (ACR)

  16. Risk based ASME Code requirements

    SciTech Connect

    Gore, B.F.; Vo, T.V.; Balkey, K.R.

    1992-09-01

    The objective of this ASME Research Task Force is to develop and to apply a methodology for incorporating quantitative risk analysis techniques into the definition of in-service inspection (ISI) programs for a wide range of industrial applications. An additional objective, directed towards the field of nuclear power generation, is ultimately to develop a recommendation for comprehensive revisions to the ISI requirements of Section XI of the ASME Boiler and Pressure Vessel Code. This will require development of a firm technical basis for such requirements, which does not presently exist. Several years of additional research will be required before this can be accomplished. A general methodology suitable for application to any industry has been defined and published. It has recently been refined and further developed during application to the field of nuclear power generation. In the nuclear application probabilistic risk assessment (PRA) techniques and information have been incorporated. With additional analysis, PRA information is used to determine the consequence of a component rupture (increased reactor core damage probability). A procedure has also been recommended for using the resulting quantified risk estimates to determine target component rupture probability values to be maintained by inspection activities. Structural risk and reliability analysis (SRRA) calculations are then used to determine characteristics which an inspection strategy must possess in order to maintain component rupture probabilities below target values. The methodology, results of example applications, and plans for future work are discussed.

  17. Risk-Based Sampling: I Don't Want to Weight in Vain.

    PubMed

    Powell, Mark R

    2015-12-01

    Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. PMID:26033352
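    A toy illustration of the portfolio analogy: allocate inspection effort across producers using mean-variance weights estimated from a short history, and compare with the equal-allocation heuristic out of sample. The synthetic data and the simple inverse-covariance weighting are assumptions for illustration, not the paper's simulation model of lot inspection.

```python
# Mean-variance allocation vs. equal allocation under estimation error
# (synthetic example in the spirit of the portfolio analogy above).
import numpy as np

rng = np.random.default_rng(11)
n_producers, n_lots = 5, 60

# True (unknown) mean violation rates and their covariance across lots.
mu_true = np.array([0.02, 0.05, 0.10, 0.03, 0.08])
cov_true = 0.02 ** 2 * (np.eye(n_producers) + 0.3)

# Short historical sample used for estimation (the source of estimation error).
history = rng.multivariate_normal(mu_true, cov_true, size=n_lots)
mu_hat, cov_hat = history.mean(axis=0), np.cov(history, rowvar=False)

# Mean-variance weights proportional to Sigma^{-1} mu, clipped and normalized.
w_equal = np.full(n_producers, 1.0 / n_producers)
w_opt = np.clip(np.linalg.solve(cov_hat, mu_hat), 0.0, None)
w_opt = w_opt / w_opt.sum() if w_opt.sum() > 0 else w_equal

# Out-of-sample comparison on fresh draws from the true process.
future = rng.multivariate_normal(mu_true, cov_true, size=10000)
for name, w in [("mean-variance", w_opt), ("equal allocation", w_equal)]:
    hits = future @ w
    print(f"{name:>16}: mean detected risk {hits.mean():.4f}, std {hits.std():.4f}")
```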

  18. Optimal caching algorithm based on dynamic programming

    NASA Astrophysics Data System (ADS)

    Guo, Changjie; Xiang, Zhe; Zhong, Yuzhuo; Long, Jidong

    2001-07-01

    With the dramatic growth of multimedia streams, the efficient distribution of stored videos has become a major concern. There are two basic caching strategies: the whole caching strategy and the caching strategy based on layered encoded video, the latter can satisfy the requirement of the highly heterogeneous access to the Internet. Conventional caching strategies assign each object a cache gain by calculating popularity or density popularity, and determine which videos and which layers should be cached. In this paper, we first investigate the delivery model of stored video based on proxy, and propose two novel caching algorithms, DPLayer (for layered encoded caching scheme) and DPWhole (for whole caching scheme) for multimedia proxy caching. The two algorithms are based on the resource allocation model of dynamic programming to select the optimal subset of objects to be cached in proxy. Simulation proved that our algorithms achieve better performance than other existing schemes. We also analyze the computational complexity and space complexity of the algorithms, and introduce a regulative parameter to compress the states space of the dynamic programming problem and reduce the complexity of algorithms.
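    A sketch of the dynamic-programming resource allocation for layered caching: caching layer k of a video is only useful if the lower layers are cached too, so each video's options are "cache the first j layers", and a knapsack-style DP selects one option per video within the proxy's capacity. The sizes and cache gains below are made up.

```python
# Knapsack-style DP for layered video caching (illustrative sketch only).

def optimal_cache(videos, capacity):
    """videos: list of per-video layer lists [(size, gain), ...], base layer first.
    Returns (best total gain, number of layers cached per video)."""
    # dp[c] = (best total gain using exactly c units of cache, choices so far)
    dp = [(0.0, [])] + [(None, None)] * capacity
    for layers in videos:
        new_dp = [(None, None)] * (capacity + 1)
        # Option j: cache the first j layers (layered coding needs lower layers).
        options, size, gain = [(0, 0, 0.0)], 0, 0.0
        for j, (s, g) in enumerate(layers, start=1):
            size, gain = size + s, gain + g
            options.append((j, size, gain))
        for c in range(capacity + 1):
            if dp[c][0] is None:
                continue
            for j, used, g in options:
                nc = c + used
                if nc <= capacity:
                    cand = dp[c][0] + g
                    if new_dp[nc][0] is None or cand > new_dp[nc][0]:
                        new_dp[nc] = (cand, dp[c][1] + [j])
        dp = new_dp
    return max((t for t in dp if t[0] is not None), key=lambda t: t[0])


if __name__ == "__main__":
    videos = [
        [(3, 10.0), (2, 4.0), (2, 2.0)],   # (size, cache gain) per layer
        [(4, 8.0), (3, 5.0)],
        [(2, 6.0), (2, 3.0), (1, 1.5)],
    ]
    gain, counts = optimal_cache(videos, capacity=10)
    print("total gain:", gain, "layers cached per video:", counts)
```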

  19. Risk-sensitive optimal feedback control accounts for sensorimotor behavior under uncertainty.

    PubMed

    Nagengast, Arne J; Braun, Daniel A; Wolpert, Daniel M

    2010-01-01

    Many aspects of human motor behavior can be understood using optimality principles such as optimal feedback control. However, these proposed optimal control models are risk-neutral; that is, they are indifferent to the variability of the movement cost. Here, we propose the use of a risk-sensitive optimal controller that incorporates movement cost variance either as an added cost (risk-averse controller) or as an added value (risk-seeking controller) to model human motor behavior in the face of uncertainty. We use a sensorimotor task to test the hypothesis that subjects are risk-sensitive. Subjects controlled a virtual ball undergoing Brownian motion towards a target. Subjects were required to minimize an explicit cost, in points, that was a combination of the final positional error of the ball and the integrated control cost. By testing subjects on different levels of Brownian motion noise and relative weighting of the position and control cost, we could distinguish between risk-sensitive and risk-neutral control. We show that subjects change their movement strategy pessimistically in the face of increased uncertainty in accord with the predictions of a risk-averse optimal controller. Our results suggest that risk-sensitivity is a fundamental attribute that needs to be incorporated into optimal feedback control models. PMID:20657657

  20. Risk-Sensitive Optimal Feedback Control Accounts for Sensorimotor Behavior under Uncertainty

    PubMed Central

    Nagengast, Arne J.; Braun, Daniel A.; Wolpert, Daniel M.

    2010-01-01

    Many aspects of human motor behavior can be understood using optimality principles such as optimal feedback control. However, these proposed optimal control models are risk-neutral; that is, they are indifferent to the variability of the movement cost. Here, we propose the use of a risk-sensitive optimal controller that incorporates movement cost variance either as an added cost (risk-averse controller) or as an added value (risk-seeking controller) to model human motor behavior in the face of uncertainty. We use a sensorimotor task to test the hypothesis that subjects are risk-sensitive. Subjects controlled a virtual ball undergoing Brownian motion towards a target. Subjects were required to minimize an explicit cost, in points, that was a combination of the final positional error of the ball and the integrated control cost. By testing subjects on different levels of Brownian motion noise and relative weighting of the position and control cost, we could distinguish between risk-sensitive and risk-neutral control. We show that subjects change their movement strategy pessimistically in the face of increased uncertainty in accord with the predictions of a risk-averse optimal controller. Our results suggest that risk-sensitivity is a fundamental attribute that needs to be incorporated into optimal feedback control models. PMID:20657657

  1. Risk based inspection for atmospheric storage tank

    NASA Astrophysics Data System (ADS)

    Nugroho, Agus; Haryadi, Gunawan Dwi; Ismail, Rifky; Kim, Seon Jin

    2016-04-01

    Corrosion is an attack that occurs on a metallic material as a result of environment's reaction.Thus, it causes atmospheric storage tank's leakage, material loss, environmental pollution, equipment failure and affects the age of process equipment then finally financial damage. Corrosion risk measurement becomesa vital part of Asset Management at the plant for operating any aging asset.This paper provides six case studies dealing with high speed diesel atmospheric storage tank parts at a power plant. A summary of the basic principles and procedures of corrosion risk analysis and RBI applicable to the Process Industries were discussed prior to the study. Semi quantitative method based onAPI 58I Base-Resource Document was employed. The risk associated with corrosion on the equipment in terms of its likelihood and its consequences were discussed. The corrosion risk analysis outcome used to formulate Risk Based Inspection (RBI) method that should be a part of the atmospheric storage tank operation at the plant. RBI gives more concern to inspection resources which are mostly on `High Risk' and `Medium Risk' criteria and less on `Low Risk' shell. Risk categories of the evaluated equipment were illustrated through case study analysis outcome.

  2. Time-response-based evolutionary optimization

    NASA Astrophysics Data System (ADS)

    Avigad, Gideon; Goldvard, Alex; Salomon, Shaul

    2015-04-01

    Solutions to engineering problems are often evaluated by considering their time responses; thus, each solution is associated with a function. To avoid optimizing the functions, such optimization is usually carried out by setting auxiliary objectives (e.g. minimal overshoot). Therefore, in order to find different optimal solutions, alternative auxiliary optimization objectives may have to be defined prior to optimization. In the current study, a new approach is suggested that avoids the need to define auxiliary objectives. An algorithm is suggested that enables the optimization of solutions according to their transient behaviours. For this optimization, the functions are sampled and the problem is posed as a multi-objective problem. The recently introduced algorithm NSGA-II-PSA is adopted and tailored to solve it. Mathematical as well as engineering problems are utilized to explain and demonstrate the approach and its applicability to real life problems. The results highlight the advantages of avoiding the definition of artificial objectives.

  3. Multi-Point Combinatorial Optimization Method with Distance Based Interaction

    NASA Astrophysics Data System (ADS)

    Yasuda, Keiichiro; Jinnai, Hiroyuki; Ishigame, Atsushi

    This paper proposes a multi-point combinatorial optimization method based on Proximate Optimality Principle (POP), which method has several advantages for solving large-scale combinatorial optimization problems. The proposed algorithm uses not only the distance between search points but also the interaction among search points in order to utilize POP in several types of combinatorial optimization problems. The proposed algorithm is applied to several typical combinatorial optimization problems, a knapsack problem, a traveling salesman problem, and a flow shop scheduling problem, in order to verify the performance of the proposed algorithm. The simulation results indicate that the proposed method has higher optimality than the conventional combinatorial optimization methods.

  4. Community based intervention to optimize osteoporosis management: randomized controlled trial

    PubMed Central

    2010-01-01

    Background Osteoporosis-related fractures are a significant public health concern. Interventions that increase detection and treatment of osteoporosis are underutilized. This pragmatic randomised study was done to evaluate the impact of a multifaceted community-based care program aimed at optimizing evidence-based management in patients at risk for osteoporosis and fractures. Methods This was a 12-month randomized trial performed in Ontario, Canada. Eligible patients were community-dwelling, aged ≥55 years, and identified to be at risk for osteoporosis-related fractures. Two hundred and one patients were allocated to the intervention group or to usual care. Components of the intervention were directed towards primary care physicians and patients and included facilitated bone mineral density testing, patient education and patient-specific recommendations for osteoporosis treatment. The primary outcome was the implementation of appropriate osteoporosis management. Results 101 patients were allocated to intervention and 100 to control. Mean age of participants was 71.9 ± 7.2 years and 94% were women. Pharmacological treatment (alendronate, risedronate, or raloxifene) for osteoporosis was increased by 29% compared to usual care (56% [29/52] vs. 27% [16/60]; relative risk [RR] 2.09, 95% confidence interval [CI] 1.29 to 3.40). More individuals in the intervention group were taking calcium (54% [54/101] vs. 20% [20/100]; RR 2.67, 95% CI 1.74 to 4.12) and vitamin D (33% [33/101] vs. 20% [20/100]; RR 1.63, 95% CI 1.01 to 2.65). Conclusions A multi-faceted community-based intervention improved management of osteoporosis in high risk patients compared with usual care. Trial Registration This trial has been registered with clinicaltrials.gov (ID: NCT00465387) PMID:20799973

  5. CFD based draft tube hydraulic design optimization

    NASA Astrophysics Data System (ADS)

    McNabb, J.; Devals, C.; Kyriacou, S. A.; Murry, N.; Mullins, B. F.

    2014-03-01

    The draft tube design of a hydraulic turbine, particularly in low to medium head applications, plays an important role in determining the efficiency and power characteristics of the overall machine, since an important proportion of the available energy, being in kinetic form leaving the runner, needs to be recovered by the draft tube into static head. For large units, these efficiency and power characteristics can equate to large sums of money when considering the anticipated selling price of the energy produced over the machine's life-cycle. This same draft tube design is also a key factor in determining the overall civil costs of the powerhouse, primarily in excavation and concreting, which can amount to similar orders of magnitude as the price of the energy produced. Therefore, there is a need to find the optimum compromise between these two conflicting requirements. In this paper, an elaborate approach is described for dealing with this optimization problem. First, the draft tube's detailed geometry is defined as a function of a comprehensive set of design parameters (about 20 of which a subset is allowed to vary during the optimization process) and are then used in a non-uniform rational B-spline based geometric modeller to fully define the wetted surfaces geometry. Since the performance of the draft tube is largely governed by 3D viscous effects, such as boundary layer separation from the walls and swirling flow characteristics, which in turn governs the portion of the available kinetic energy which will be converted into pressure, a full 3D meshing and Navier-Stokes analysis is performed for each design. What makes this even more challenging is the fact that the inlet velocity distribution to the draft tube is governed by the runner at each of the various operating conditions that are of interest for the exploitation of the powerhouse. In order to determine these inlet conditions, a combined steady-state runner and an initial draft tube analysis, using a

  6. Probability-Based Software for Grid Optimization: Improved Power System Operations Using Advanced Stochastic Optimization

    SciTech Connect

    2012-02-24

    GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainties associated with bringing these new types of energy into the grid. Sandia’s software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia’s formula can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.

  7. Risk-Based Comparison of Carbon Capture Technologies

    SciTech Connect

    Engel, David W.; Dalton, Angela C.; Dale, Crystal; Jones, Edward

    2013-05-01

    In this paper, we describe an integrated probabilistic risk assessment methodological framework and a decision-support tool suite for implementing systematic comparisons of competing carbon capture technologies. Culminating from a collaborative effort among national laboratories under the Carbon Capture Simulation Initiative (CCSI), the risk assessment framework and the decision-support tool suite encapsulate three interconnected probabilistic modeling and simulation components. The technology readiness level (TRL) assessment component identifies specific scientific and engineering targets required by each readiness level and applies probabilistic estimation techniques to calculate the likelihood of graded as well as nonlinear advancement in technology maturity. The technical risk assessment component focuses on identifying and quantifying risk contributors, especially stochastic distributions for significant risk contributors, performing scenario-based risk analysis, and integrating with carbon capture process model simulations and optimization. The financial risk component estimates the long-term return on investment based on energy retail pricing, production cost, operating and power replacement cost, plan construction and retrofit expenses, and potential tax relief, expressed probabilistically as the net present value distributions over various forecast horizons.

  8. Risk-Based Object Oriented Testing

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda H.; Stapko, Ruth; Gallo, Albert

    2000-01-01

    Software testing is a well-defined phase of the software development life cycle. Functional ("black box") testing and structural ("white box") testing are two methods of test case design commonly used by software developers. A lesser known testing method is risk-based testing, which takes into account the probability of failure of a portion of code as determined by its complexity. For object oriented programs, a methodology is proposed for identification of risk-prone classes. Risk-based testing is a highly effective testing technique that can be used to find and fix the most important problems as quickly as possible.

  9. Science, science policy, and risk-based management

    SciTech Connect

    Midgley, L.P.

    1997-09-01

    Recent national awareness of the economic infeasibility of remediating hazardous waste sites to background levels has sparked increased interest in the role of science policy in the environmental risk assessment and risk management process. As individual states develop guidelines for addressing environmental risks at hazardous waste sites, the role of science policy decisions and uncertainty must be carefully evaluated to achieve long-term environmental goals and solutions that are economically feasible and optimally beneficial to all stakeholders. An amendment to Oregon Revised Statute 465.315 establishes policy, and the Utah Cleanup Action and Risk-Based Closure Standards set requirements, for risk-based cleanup and closure at sites where remediation or removal of hazardous constituents to background levels will not be achieved. This paper discusses the difficulties in effectively integrating potential current and future impacts on human health and the environment, technical feasibility, economic considerations, and political realities into environmental policy and standards, using these references as models. This paper considers the role of both objective and subjective criteria in the risk-based closure and management processes and makes suggestions for improving the system by which these sites may be reclaimed.

  10. 12 CFR Appendix B to Part 3 - Risk-Based Capital Guidelines; Market Risk

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 1 2014-01-01 2014-01-01 false Risk-Based Capital Guidelines; Market Risk B... ADEQUACY STANDARDS Pt. 3, App. B Appendix B to Part 3—Risk-Based Capital Guidelines; Market Risk Section... Application of the Market Risk Capital Rule Section 4Adjustments to the Risk-Based Capital Ratio...

  11. Risk Classification and Risk-based Safety and Mission Assurance

    NASA Technical Reports Server (NTRS)

    Leitner, Jesse A.

    2014-01-01

    Recent activities to revamp and emphasize the need to streamline processes and activities for Class D missions across the agency have led to various interpretations of Class D, including the lumping of a variety of low-cost projects into Class D. Sometimes terms such as Class D minus are used. In this presentation, mission risk classifications will be traced to official requirements and definitions as a measure to ensure that projects and programs align with the guidance and requirements that are commensurate with their defined risk posture. As part of this, the full suite of risk classifications, formal and informal, will be defined, followed by an introduction to the new GPR 8705.4 that is currently under review. GPR 8705.4 lays out guidance for the mission success activities performed at Classes A-D for NPR 7120.5 projects as well as for projects not under NPR 7120.5. Furthermore, the trends in stepping from Class A into higher risk posture classifications will be discussed. The talk will conclude with a discussion about risk-based safety and mission assurance at GSFC.

  12. Bell-Curve Based Evolutionary Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.

    1998-01-01

    The paper presents an optimization algorithm that falls in the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most genetic algorithms (GAs) in research and applications in America, some alternatives, also in the category of evolutionary algorithms but using a direct, geometrical approach, have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, with the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
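
    A minimal sketch of the child-generation step as described in the abstract (a weighted point on the line between two parents, perturbed with normally distributed deviations parallel and orthogonal to that line); parameter names and values are illustrative assumptions.

    ```python
    # Minimal sketch of bell-curve child generation: weighted point on the line
    # between two parents, perturbed along and orthogonal to that line with
    # normally distributed deviations. Parameters are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)

    def bcb_child(parent_a, parent_b, weight=0.5, sigma_par=0.1, sigma_orth=0.1):
        """Generate one offspring from two parent design vectors."""
        direction = parent_b - parent_a
        length = np.linalg.norm(direction)
        if length == 0.0:
            return parent_a + rng.normal(0.0, sigma_orth, parent_a.shape)
        unit = direction / length
        base = parent_a + weight * direction          # weighted point on the line
        # Deviation parallel to the connecting line
        child = base + rng.normal(0.0, sigma_par * length) * unit
        # Deviation orthogonal to the line (random direction in the orthogonal complement)
        rand = rng.normal(size=parent_a.shape)
        orth = rand - np.dot(rand, unit) * unit
        norm = np.linalg.norm(orth)
        if norm > 0.0:
            child += rng.normal(0.0, sigma_orth * length) * orth / norm
        return child

    child = bcb_child(np.array([0.0, 0.0]), np.array([1.0, 2.0]))
    print(child)
    ```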

  13. Temporal variation of optimal UV exposure time over Korea: risks and benefits of surface UV radiation

    NASA Astrophysics Data System (ADS)

    Lee, Y. G.; Koo, J. H.

    2015-12-01

    Solar UV radiation in the wavelength range between 280 and 400 nm has both positive and negative influences on the human body. Surface UV radiation is the main natural source of vitamin D, promoting bone and musculoskeletal health and reducing the risk of a number of cancers and other medical conditions. However, overexposure to surface UV radiation is strongly associated with the majority of skin cancers, in addition to other negative health effects such as sunburn, skin aging, and some forms of eye cataracts. Therefore, it is important to estimate the optimal UV exposure time, representing a balance between reducing negative health effects and ensuring sufficient vitamin D production. Previous studies calculated erythemal UV and vitamin-D UV from measured and modelled spectral irradiances, respectively, by weighting with the CIE erythema and vitamin D3 generation functions (Kazantzidis et al., 2009; Fioletov et al., 2010). In particular, McKenzie et al. (2009) suggested an algorithm to estimate vitamin-D-production UV from erythemal UV (or UV index) and determined the optimum conditions of UV exposure for skin type II according to Fitzpatrick (1988). Recently, there have been various demands for information on the risks and benefits of surface UV radiation for public health over Korea, so it is necessary to estimate optimal UV exposure times suitable for the skin types of East Asians. This study examined the relationship between erythemally weighted UV (UVEry) and vitamin D weighted UV (UVVitD) over Korea during 2004-2012. The temporal variations of the ratio (UVVitD/UVEry) were also analyzed, and the ratio as a function of UV index was applied in estimating the optimal UV exposure time. In summer, with high surface UV radiation, a short exposure time led to sufficient vitamin D as well as erythema, and vice versa in winter. Thus, the balancing time in winter was enough to maximize UV benefits and minimize UV risks.

  14. Optimizing Assurance: The Risk Regulation System in Relationships

    ERIC Educational Resources Information Center

    Murray, Sandra L.; Holmes, John G.; Collins, Nancy L.

    2006-01-01

    A model of risk regulation is proposed to explain how people balance the goal of seeking closeness to a romantic partner against the opposing goal of minimizing the likelihood and pain of rejection. The central premise is that confidence in a partner's positive regard and caring allows people to risk seeking dependence and connectedness. The risk…

  15. Model-based optimization of ultrasonic transducers.

    PubMed

    Heikkola, Erkki; Laitinen, Mika

    2005-01-01

    Numerical simulation and automated optimization of Langevin-type ultrasonic transducers are investigated. These kinds of transducers are standard components in various applications of high-power ultrasonics, such as ultrasonic cleaning and chemical processing. Vibration of the transducer is simulated numerically by a standard finite element method, and the dimensions and shape parameters of a transducer are optimized with respect to different criteria. The novelty of this work is the coupling of the simulation model and the optimization problem through efficient automatic differentiation techniques. The capabilities of this approach are demonstrated with practical test cases in which various aspects of the operation of a transducer are improved. PMID:15474952

  16. Risk-based and deterministic regulation

    SciTech Connect

    Fischer, L.E.; Brown, N.W.

    1995-07-01

    Both risk-based and deterministic methods are used for regulating the nuclear industry to protect the public safety and health from undue risk. The deterministic method is one where performance standards are specified for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events, which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is a need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining the need to use risk-based methods to supplement deterministic ones. However, it is recommended that more detailed guidance and criteria be developed for this purpose.

  17. 12 CFR 652.70 - Risk-based capital level.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Risk-based capital level. 652.70 Section 652.70... CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.70 Risk-based capital level. The risk-based capital level is the sum of the following amounts: (a) Credit and interest rate risk....

  18. 12 CFR 652.70 - Risk-based capital level.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Risk-based capital level. 652.70 Section 652.70... CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.70 Risk-based capital level. The risk-based capital level is the sum of the following amounts: (a) Credit and interest rate risk....

  19. 12 CFR 652.70 - Risk-based capital level.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 7 2012-01-01 2012-01-01 false Risk-based capital level. 652.70 Section 652.70... CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.70 Risk-based capital level. The risk-based capital level is the sum of the following amounts: (a) Credit and interest rate risk....

  20. 12 CFR 652.70 - Risk-based capital level.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 7 2013-01-01 2013-01-01 false Risk-based capital level. 652.70 Section 652.70... CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.70 Risk-based capital level. The risk-based capital level is the sum of the following amounts: (a) Credit and interest rate risk....

  1. 12 CFR 652.70 - Risk-based capital level.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 7 2014-01-01 2014-01-01 false Risk-based capital level. 652.70 Section 652.70... CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.70 Risk-based capital level. The risk-based capital level is the sum of the following amounts: (a) Credit and interest rate risk....

  2. Impact of an integrated treatment algorithm based on platelet function testing and clinical risk assessment: results of the TRIAGE Patients Undergoing Percutaneous Coronary Interventions To Improve Clinical Outcomes Through Optimal Platelet Inhibition study.

    PubMed

    Chandrasekhar, Jaya; Baber, Usman; Mehran, Roxana; Aquino, Melissa; Sartori, Samantha; Yu, Jennifer; Kini, Annapoorna; Sharma, Samin; Skurk, Carsten; Shlofmitz, Richard A; Witzenbichler, Bernhard; Dangas, George

    2016-08-01

    Assessment of platelet reactivity alone for thienopyridine selection with percutaneous coronary intervention (PCI) has not been associated with improved outcomes. In TRIAGE, a prospective multicenter observational pilot study, we sought to evaluate the benefit of an integrated algorithm combining clinical risk and platelet function testing to select the type of thienopyridine in patients undergoing PCI. Patients on chronic clopidogrel therapy underwent platelet function testing prior to PCI using the VerifyNow assay to determine high on-treatment platelet reactivity (HTPR, ≥230 P2Y12 reactivity units [PRU]). Based on both PRU and clinical (ischemic and bleeding) risks, patients were switched to prasugrel or continued on clopidogrel per the study algorithm. The primary endpoints were (i) 1-year major adverse cardiovascular events (MACE), a composite of death, non-fatal myocardial infarction, or definite or probable stent thrombosis; and (ii) major bleeding, Bleeding Academic Research Consortium type 2, 3, or 5. Of 318 clopidogrel-treated patients with a mean age of 65.9 ± 9.8 years, HTPR was noted in 33.3%. Ninety (28.0%) patients overall were switched to prasugrel and 228 (72.0%) continued clopidogrel. The prasugrel group had fewer smokers and more patients with heart failure. At 1 year, MACE occurred in 4.4% of the predominantly HTPR patients on prasugrel versus 3.5% of the primarily non-HTPR patients on clopidogrel (p = 0.7). Major bleeding (5.6% vs 7.9%, p = 0.47) was numerically higher with clopidogrel compared with prasugrel. Use of the study clinical risk algorithm for the choice and intensity of thienopyridine prescription following PCI resulted in similar ischemic outcomes in HTPR patients receiving prasugrel and primarily non-HTPR patients on clopidogrel, without an untoward increase in bleeding with prasugrel. However, the study was prematurely terminated and these findings are therefore hypothesis generating. PMID:27100112

  3. Optimal Hedge for Nodal Price Risk using FTR

    NASA Astrophysics Data System (ADS)

    Tanaka, Hiroaki; Makino, Michiko; Ichida, Yoshio; Akiyoshi, Masanori

    As the deregulation of the electricity business proceeds, each company needs to construct a risk hedging system. So far, many companies have not addressed this sufficiently. In this paper, we address the nodal price hedging issue. Most companies are exposed to nodal prices, which tend to be highly volatile. Such companies clearly need hedging products to stabilize their profits. We suggest the use of financial transmission rights (FTRs) for this purpose. First, we briefly review the mechanisms of nodal pricing in the PJM market and of FTRs, and present the mathematical formulations. Then we show some numerical examples and discuss our findings.
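
    To make the hedging idea concrete, the sketch below compares the hourly cost variability of a load settling at a volatile delivery node with and without an FTR between the injection and delivery nodes; the prices, the congestion model, and the FTR quantity are hypothetical, not the paper's formulation.

    ```python
    # Minimal sketch of how a financial transmission right (FTR) can offset nodal
    # price (congestion) risk for a load at a delivery node; prices and the FTR
    # quantity are hypothetical.
    import numpy as np

    rng = np.random.default_rng(2)
    hours = 24
    load_mw = 100.0

    source_price = 30.0 + rng.normal(0.0, 2.0, hours)        # $/MWh at injection node
    congestion   = np.maximum(rng.normal(5.0, 10.0, hours), 0.0)
    sink_price   = source_price + congestion                  # volatile delivery-node price

    ftr_mw = 100.0   # FTR from source to sink pays (sink - source) per MW held

    energy_cost = load_mw * sink_price
    ftr_payoff  = ftr_mw * (sink_price - source_price)
    hedged_cost = energy_cost - ftr_payoff

    print(f"unhedged cost std: {energy_cost.std():8.1f} $/h")
    print(f"hedged   cost std: {hedged_cost.std():8.1f} $/h")
    ```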

  4. Optimal separable bases and molecular collisions

    SciTech Connect

    Poirier, L W

    1997-12-01

    A new methodology is proposed for the efficient determination of Green's functions and eigenstates for quantum systems of two or more dimensions. For a given Hamiltonian, the best possible separable approximation is obtained from the set of all Hilbert space operators. It is shown that this determination itself, as well as the solution of the resultant approximation, are problems of reduced dimensionality for most systems of physical interest. Moreover, the approximate eigenstates constitute the optimal separable basis, in the sense of self-consistent field theory. These distorted waves give rise to a Born series with optimized convergence properties. Analytical results are presented for an application of the method to the two-dimensional shifted harmonic oscillator system. The primary interest, however, is quantum reactive scattering in molecular systems. For numerical calculations, the use of distorted waves corresponds to numerical preconditioning. The new methodology therefore gives rise to an optimized preconditioning scheme for the efficient calculation of reactive and inelastic scattering amplitudes, especially at intermediate energies. This scheme is particularly suited to discrete variable representations (DVRs) and iterative sparse matrix methods commonly employed in such calculations. State-to-state and cumulative reactive scattering results obtained via the optimized preconditioner are presented for the two-dimensional collinear H + H₂ → H₂ + H system. Computational time and memory requirements for this system are drastically reduced in comparison with other methods, and results are obtained for previously prohibitive energy regimes.

  5. Optimization of agricultural field workability predictions for improved risk management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Risks introduced by weather variability are key considerations in agricultural production. The sensitivity of agriculture to weather variability is of special concern in the face of climate change. In particular, the availability of workable days is an important consideration in agricultural practic...

  6. Nonlinear Inertia Weighted Teaching-Learning-Based Optimization for Solving Global Optimization Problem

    PubMed Central

    Wu, Zong-Sheng; Fu, Wei-Ping; Xue, Ru

    2015-01-01

    The teaching-learning-based optimization (TLBO) algorithm, proposed in recent years, simulates the teaching-learning phenomenon of a classroom to effectively solve global optimization of multidimensional, linear, and nonlinear problems over continuous spaces. In this paper, an improved teaching-learning-based optimization algorithm is presented, called the nonlinear inertia weighted teaching-learning-based optimization (NIWTLBO) algorithm. This algorithm introduces a nonlinear inertia weighted factor into the basic TLBO to control the memory rate of learners and uses a dynamic inertia weighted factor to replace the original random number in the teacher phase and learner phase. The proposed algorithm is tested on a number of benchmark functions, and its performance is compared against the basic TLBO and some other well-known optimization algorithms. The experimental results show that the proposed algorithm has a faster convergence rate and better performance than the basic TLBO and some other algorithms. PMID:26421005
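
    A minimal sketch of a TLBO iteration with a nonlinear, iteration-dependent inertia weight applied to the retained part of each learner, in the spirit described above; the exact NIWTLBO weighting scheme may differ, so treat this as an illustrative assumption rather than the authors' algorithm.

    ```python
    # Minimal sketch of TLBO with an assumed nonlinear inertia weight; not the
    # exact NIWTLBO update rule, just an illustration of the teacher/learner phases.
    import numpy as np

    rng = np.random.default_rng(3)

    def sphere(x):                                   # benchmark objective to minimize
        return float(np.sum(x * x))

    pop, dim, iters = 20, 5, 200
    X = rng.uniform(-5.0, 5.0, (pop, dim))
    fit = np.array([sphere(x) for x in X])

    for t in range(iters):
        w = 1.0 - (t / iters) ** 2                   # nonlinear inertia weight (assumed form)
        teacher = X[np.argmin(fit)]
        mean = X.mean(axis=0)
        for i in range(pop):
            TF = rng.integers(1, 3)                  # teaching factor in {1, 2}
            # Teacher phase: move toward the teacher, retaining a weighted part of the old solution
            cand = w * X[i] + rng.random(dim) * (teacher - TF * mean)
            if sphere(cand) < fit[i]:
                X[i], fit[i] = cand, sphere(cand)
            # Learner phase: learn from a randomly chosen peer
            j = rng.integers(pop)
            step = (X[j] - X[i]) if fit[j] < fit[i] else (X[i] - X[j])
            cand = w * X[i] + rng.random(dim) * step
            if sphere(cand) < fit[i]:
                X[i], fit[i] = cand, sphere(cand)

    print("best objective value:", fit.min())
    ```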

  7. Optimal predator risk assessment by the sonar-jamming arctiine moth Bertholdia trigona.

    PubMed

    Corcoran, Aaron J; Wagner, Ryan D; Conner, William E

    2013-01-01

    Nearly all animals face a tradeoff between seeking food and mates and avoiding predation. Optimal escape theory holds that an animal confronted with a predator should only flee when benefits of flight (increased survival) outweigh the costs (energetic costs, lost foraging time, etc.). We propose a model for prey risk assessment based on the predator's stage of attack. Risk level should increase rapidly from when the predator detects the prey to when it commits to the attack. We tested this hypothesis using a predator--the echolocating bat--whose active biosonar reveals its stage of attack. We used a prey defense--clicking used for sonar jamming by the tiger moth Bertholdia trigona--that can be readily studied in the field and laboratory and is enacted simultaneously with evasive flight. We predicted that prey employ defenses soon after being detected and targeted, and that prey defensive thresholds discriminate between legitimate predatory threats and false threats where a nearby prey is attacked. Laboratory and field experiments using playbacks of ultrasound signals and naturally behaving bats, respectively, confirmed our predictions. Moths clicked soon after bats detected and targeted them. Also, B. trigona clicking thresholds closely matched predicted optimal thresholds for discriminating legitimate and false predator threats for bats using search and approach phase echolocation--the period when bats are searching for and assessing prey. To our knowledge, this is the first quantitative study to correlate the sensory stimuli that trigger defensive behaviors with measurements of signals provided by predators during natural attacks in the field. We propose theoretical models for explaining prey risk assessment depending on the availability of cues that reveal a predator's stage of attack. PMID:23671686

  8. Optimal Predator Risk Assessment by the Sonar-Jamming Arctiine Moth Bertholdia trigona

    PubMed Central

    Corcoran, Aaron J.; Wagner, Ryan D.; Conner, William E.

    2013-01-01

    Nearly all animals face a tradeoff between seeking food and mates and avoiding predation. Optimal escape theory holds that an animal confronted with a predator should only flee when benefits of flight (increased survival) outweigh the costs (energetic costs, lost foraging time, etc.). We propose a model for prey risk assessment based on the predator's stage of attack. Risk level should increase rapidly from when the predator detects the prey to when it commits to the attack. We tested this hypothesis using a predator – the echolocating bat – whose active biosonar reveals its stage of attack. We used a prey defense – clicking used for sonar jamming by the tiger moth Bertholdia trigona– that can be readily studied in the field and laboratory and is enacted simultaneously with evasive flight. We predicted that prey employ defenses soon after being detected and targeted, and that prey defensive thresholds discriminate between legitimate predatory threats and false threats where a nearby prey is attacked. Laboratory and field experiments using playbacks of ultrasound signals and naturally behaving bats, respectively, confirmed our predictions. Moths clicked soon after bats detected and targeted them. Also, B. trigona clicking thresholds closely matched predicted optimal thresholds for discriminating legitimate and false predator threats for bats using search and approach phase echolocation – the period when bats are searching for and assessing prey. To our knowledge, this is the first quantitative study to correlate the sensory stimuli that trigger defensive behaviors with measurements of signals provided by predators during natural attacks in the field. We propose theoretical models for explaining prey risk assessment depending on the availability of cues that reveal a predator's stage of attack. PMID:23671686

  9. Mice can count and optimize count-based decisions.

    PubMed

    Çavdaroğlu, Bilgehan; Balcı, Fuat

    2016-06-01

    Previous studies showed that rats and pigeons can count their responses, and the resultant count-based judgments exhibit the scalar property (also known as Weber's Law), a psychophysical property that also characterizes interval-timing behavior. Animals were found to take a nearly normative account of these well-established endogenous uncertainty characteristics in their time-based decision-making. On the other hand, no study has yet tested the implications of the scalar property of numerosity representations for reward-rate maximization in count-based decision-making. The current study tested mice on a task that required them to press one lever a minimum number of times before pressing a second lever to collect the armed reward (fixed consecutive number schedule, FCN). Emitting fewer than the required number of responses reset the response count without reinforcement, whereas emitting at least the minimum number of responses reset the counter with reinforcement. Each mouse was tested with three different FCN schedules (FCN10, FCN20, FCN40). The number of responses emitted on the first lever before pressing the second lever constituted the main unit of analysis. Our findings showed for the first time that mice count their responses with the scalar property. We then defined the reward-rate-maximizing numerical decision strategies in this task based on subject-based estimates of the endogenous counting uncertainty. Our results showed that mice learn to maximize the reward rate by incorporating the uncertainty in their numerosity judgments into their count-based decisions. Our findings extend the scope of optimal temporal risk assessment to the domain of count-based decision-making. PMID:26463617

  10. CFD Optimization on Network-Based Parallel Computer System

    NASA Technical Reports Server (NTRS)

    Cheung, Samson H.; Holst, Terry L. (Technical Monitor)

    1994-01-01

    Combining multiple engineering workstations into a network-based heterogeneous parallel computer allows the application of aerodynamic optimization with advanced computational fluid dynamics codes, which is computationally expensive on a mainframe supercomputer. This paper introduces a nonlinear quasi-Newton optimizer designed for this network-based heterogeneous parallel computer, running on software called Parallel Virtual Machine. The paper also describes the methodology behind coupling a Parabolized Navier-Stokes flow solver to the nonlinear optimizer. This parallel optimization package has been applied to reduce the wave drag of a body of revolution and a wing/body configuration, with resulting drag reductions of 5% to 6%.

  11. An approximation based global optimization strategy for structural synthesis

    NASA Technical Reports Server (NTRS)

    Sepulveda, A. E.; Schmit, L. A.

    1991-01-01

    A global optimization strategy for structural synthesis based on approximation concepts is presented. The methodology involves the solution of a sequence of highly accurate approximate problems using a global optimization algorithm. The global optimization algorithm implemented consists of a branch and bound strategy based on the interval evaluation of the objective function and constraint functions, combined with a local feasible directions algorithm. The approximate design optimization problems are constructed using first order approximations of selected intermediate response quantities in terms of intermediate design variables. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.
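
    A minimal sketch of the branch-and-bound idea described above, here for a one-dimensional function with a crude interval lower bound used to prune subintervals; the objective, bound, and tolerances are illustrative assumptions, not the structural-synthesis formulation.

    ```python
    # Minimal sketch of interval-based branch and bound for global minimization of
    # a 1-D function; the bound is deliberately crude, for illustration only.
    import math
    import heapq

    def f(x):
        return math.sin(3.0 * x) + 0.1 * x * x           # objective to minimize

    def lower_bound(a, b):
        """Crude interval lower bound: sin term bounded by -1, quadratic bounded below."""
        quad_min = 0.0 if a <= 0.0 <= b else 0.1 * min(a * a, b * b)
        return -1.0 + quad_min

    best_x, best_val = 0.0, f(0.0)
    heap = [(lower_bound(-5.0, 5.0), -5.0, 5.0)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb >= best_val or b - a < 1e-4:
            continue                                      # prune, or stop refining this interval
        mid = 0.5 * (a + b)
        if f(mid) < best_val:
            best_x, best_val = mid, f(mid)
        heapq.heappush(heap, (lower_bound(a, mid), a, mid))
        heapq.heappush(heap, (lower_bound(mid, b), mid, b))

    print(f"minimum ≈ {best_val:.4f} at x ≈ {best_x:.4f}")
    ```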

  12. Defining a region of optimization based on engine usage data

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-08-04

    Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.
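
    A minimal sketch of the logic in the abstract: log operating conditions, take the most commonly visited range as the region of optimization, and trigger the routine only inside that region. Thresholds, condition names, and the percentile-based definition of "most common" are illustrative assumptions.

    ```python
    # Minimal sketch: define a region of optimization from logged engine usage and
    # gate the optimization routine on it. All values are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(4)
    # Simulated usage history: (speed in RPM, load fraction) samples
    history = np.column_stack([rng.normal(2200, 400, 5000),
                               rng.normal(0.45, 0.10, 5000)])

    # "Most commonly detected" range taken here as the central 60% of usage
    lo = np.percentile(history, 20, axis=0)
    hi = np.percentile(history, 80, axis=0)

    def in_region_of_optimization(speed, load):
        point = np.array([speed, load])
        return bool(np.all(point >= lo) and np.all(point <= hi))

    current_speed, current_load = 2300.0, 0.5
    if in_region_of_optimization(current_speed, current_load):
        print("inside region of optimization: run calibration routine")
    else:
        print("outside region: skip optimization")
    ```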

  13. A seismic risk for the lunar base

    NASA Technical Reports Server (NTRS)

    Oberst, Juergen; Nakamura, Yosio

    1992-01-01

    Shallow moonquakes, which were discovered during observations following the Apollo lunar landing missions, may pose a threat to lunar surface operations. The nature of these moonquakes is similar to that of intraplate earthquakes, which include infrequent but destructive events. Therefore, there is a need for detailed study to assess the possible seismic risk before establishing a lunar base.

  14. Optimal trajectories based on linear equations

    NASA Technical Reports Server (NTRS)

    Carter, Thomas E.

    1990-01-01

    The principal results of a recent theory of fuel-optimal space trajectories for linear differential equations are presented. Both impulsive and bounded-thrust problems are treated. A new form of the Lawden primer vector is found that is identical for both problems. For this reason, starting iterates from the solution of the impulsive problem are highly effective in the solution of the two-point boundary-value problem associated with bounded thrust. These results were applied to the problem of fuel-optimal maneuvers of a spacecraft near a satellite in circular orbit using the Clohessy-Wiltshire equations. For this case, two-point boundary-value problems were solved using a microcomputer, and optimal trajectory shapes displayed. The results of this theory can also be applied if the satellite is in an arbitrary Keplerian orbit through the use of the Tschauner-Hempel equations. A new form of the solution of these equations has been found that is identical for elliptical, parabolic, and hyperbolic orbits except in the way that a certain integral is evaluated. For elliptical orbits this integral is evaluated through the use of the eccentric anomaly. An analogous evaluation is performed for hyperbolic orbits.
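
    For reference, the Clohessy-Wiltshire equations mentioned above, in one common convention (x radial, y along-track, z cross-track, n the mean motion of the reference circular orbit, and a the thrust acceleration); axis and sign conventions vary across references.

    ```latex
    \begin{aligned}
    \ddot{x} - 3n^{2}x - 2n\dot{y} &= a_x, \\
    \ddot{y} + 2n\dot{x} &= a_y, \\
    \ddot{z} + n^{2}z &= a_z .
    \end{aligned}
    ```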

  15. Probability-based least square support vector regression metamodeling technique for crashworthiness optimization problems

    NASA Astrophysics Data System (ADS)

    Wang, Hu; Li, Enying; Li, G. Y.

    2011-03-01

    This paper presents a crashworthiness design optimization method based on a metamodeling technique. Crashworthiness optimization is a highly nonlinear and large-scale problem that involves various nonlinearities, such as geometry, material, and contact, and requires a large number of expensive evaluations. In order to obtain a robust approximation efficiently, a probability-based least square support vector regression is suggested to construct metamodels by considering structural risk minimization. Further, to save computational cost, an intelligent sampling strategy is applied to generate sample points at the design of experiment (DOE) stage. In this paper, a cylinder case and a full-vehicle frontal collision are considered. The results demonstrate that the proposed metamodel-based optimization is efficient and effective in solving crashworthiness design optimization problems.
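
    A minimal sketch of a least-squares support vector regression metamodel fitted to a small design-of-experiment sample, standing in for expensive crash simulations; the test function, kernel, and hyperparameters are illustrative assumptions, and the paper's probability-based variant and intelligent sampling are not reproduced here.

    ```python
    # Minimal sketch of a least-squares SVR (LS-SVR) metamodel: the dual problem
    # reduces to one linear system. Test function and hyperparameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(5)

    def expensive_response(x):                 # stand-in for a crash simulation output
        return np.sin(3.0 * x) + 0.5 * x

    X = rng.uniform(-2.0, 2.0, 25)             # design-of-experiment samples
    y = expensive_response(X)

    gamma, width = 100.0, 0.5                  # regularization and RBF kernel width

    def kernel(a, b):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * width ** 2))

    # LS-SVR dual system:
    # [ 0   1^T          ] [ b     ]   [ 0 ]
    # [ 1   K + I/gamma  ] [ alpha ] = [ y ]
    n = len(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = kernel(X, X) + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]

    x_test = np.array([0.3])
    pred = kernel(x_test, X) @ alpha + b
    print(f"metamodel {pred[0]:.3f} vs true {expensive_response(x_test)[0]:.3f}")
    ```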

  16. Laboratory quality control based on risk management.

    PubMed

    Nichols, James H

    2011-01-01

    Risk management is the systematic application of management policies, procedures, and practices to the tasks of analyzing, evaluating, controlling and monitoring risk (the effect of uncertainty on objectives). Clinical laboratories conduct a number of activities that could be considered risk management including verification of performance of new tests, troubleshooting instrument problems and responding to physician complaints. Development of a quality control plan for a laboratory test requires a process map of the testing process with consideration for weak steps in the preanalytic, analytic and postanalytic phases of testing where there is an increased probability of errors. Control processes that either prevent or improve the detection of errors can be implemented at these weak points in the testing process to enhance the overall quality of the test result. This manuscript is based on a presentation at the 2nd International Symposium on Point of Care Testing held at King Faisal Specialist Hospital in Riyadh, Saudi Arabia on October 12-13, 2010. Risk management principles will be reviewed and progress towards adopting a new Clinical and Laboratory Standards Institute Guideline for developing laboratory quality control plans based on risk management will be discussed. PMID:21623049

  17. Pediatric appendectomy: optimal surgical timing and risk assessment.

    PubMed

    Burjonrappa, Sathyaprasad; Rachel, Dana

    2014-05-01

    Appendicitis is one of the most common pediatric surgical problems. In the older surgical paradigm, appendectomy was considered to be an emergent procedure; however, with changes to resident work hours and other economic factors, the operation has evolved into an urgent and deliberately planned intervention. This paradigm shift in care has not necessarily seen universal buy-in by all stakeholders. Skeptics worry about the higher incidence of complications, particularly intra-abdominal abscess (IAA), associated with the delay to appendectomy with this strategy. Development of IAA after pediatric appendectomy greatly burdens the healthcare system, incapacitates patients, and limits family functionality. The risk factors that influence the development of IAA after appendectomy were evaluated in 220 children admitted to a large urban teaching hospital over a recent 1.5-year period. Preoperative risk factors included in the study were age, sex, weight, ethnicity, duration and nature of symptoms, white cell count, and ultrasound or computed tomography scan findings (appendicolith, peritoneal fluid, abscess, phlegmon), failed nonoperative management, antibiotics administered, and timing. Intraoperative factors included were timing of appendectomy, surgical and pathological findings of perforation, open or laparoscopic procedure, and use of staple or Endoloop to ligate the appendix. Postoperative factors included were duration and type of antibiotic therapy. There were 94 (43%) perforated and 126 (57%) nonperforated appendicitis during the study period. The incidence of postoperative IAA was 4.5 per cent (nine of 220). Children operated on after overnight antibiotics and resuscitation had a significantly lower risk of IAA as compared with children managed by other strategies (P < 0.0003). Of the preoperative factors, only the presence of a fever in the emergency department (P < 0.001) and identification of complicated appendicitis on imaging (P < 0.0001) were significant

  18. An Optimization Method for Condition Based Maintenance of Aircraft Fleet Considering Prognostics Uncertainty

    PubMed Central

    Chen, Yiran; Sun, Bo; Li, Songjie

    2014-01-01

    An optimization method for condition based maintenance (CBM) of an aircraft fleet considering prognostics uncertainty is proposed. The CBM and dispatch process of the aircraft fleet is analyzed first, and the alternative strategy sets for a single aircraft are given. Then, the optimization problem of fleet CBM with lower maintenance cost and dispatch risk is translated into the combinatorial optimization problem of single-aircraft strategies. The remaining useful life (RUL) distribution of the key line replaceable module (LRM) is transformed into the failure probability of the aircraft, and the fleet health status matrix is established. The calculation method for mission costs and risks based on the health status matrix and the maintenance matrix is then given. Further, an optimization method for fleet dispatch and CBM under acceptable risk is proposed based on an improved genetic algorithm. Finally, a fleet of 10 aircraft is studied to verify the proposed method. The results show that the method can realize optimization and control of the aircraft fleet oriented to mission success. PMID:24892046

  19. 13 CFR 120.1000 - Risk-Based Lender Oversight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Risk-Based Lender Oversight. 120.1000 Section 120.1000 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Risk-Based Lender Oversight Supervision § 120.1000 Risk-Based Lender Oversight. (a) Risk-Based...

  20. 13 CFR 120.1000 - Risk-Based Lender Oversight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Risk-Based Lender Oversight. 120.1000 Section 120.1000 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Risk-Based Lender Oversight Supervision § 120.1000 Risk-Based Lender Oversight. (a) Risk-Based...

  1. 13 CFR 120.1000 - Risk-Based Lender Oversight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Risk-Based Lender Oversight. 120.1000 Section 120.1000 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Risk-Based Lender Oversight Supervision § 120.1000 Risk-Based Lender Oversight. (a) Risk-Based...

  2. 13 CFR 120.1000 - Risk-Based Lender Oversight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Risk-Based Lender Oversight. 120.1000 Section 120.1000 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Risk-Based Lender Oversight Supervision § 120.1000 Risk-Based Lender Oversight. (a) Risk-Based...

  3. 13 CFR 120.1000 - Risk-Based Lender Oversight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Risk-Based Lender Oversight. 120.1000 Section 120.1000 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Risk-Based Lender Oversight Supervision § 120.1000 Risk-Based Lender Oversight. (a) Risk-Based...

  4. Ant colony optimization-based firewall anomaly mitigation engine.

    PubMed

    Penmatsa, Ravi Kiran Varma; Vatsavayi, Valli Kumari; Samayamantula, Srinivas Kumar

    2016-01-01

    A firewall is the most essential component of network perimeter security. Due to human error and the involvement of multiple administrators in configuring firewall rules, there exist common anomalies in firewall rulesets such as Shadowing, Generalization, Correlation, and Redundancy. There is a need for research on efficient ways of resolving such anomalies. The challenge is also to see that the reordered or resolved ruleset conforms to the organization's framed security policy. This study proposes an ant colony optimization (ACO)-based anomaly resolution and reordering of firewall rules called ACO-based firewall anomaly mitigation engine. Modified strategies are also introduced to automatically detect these anomalies and to minimize manual intervention of the administrator. Furthermore, an adaptive reordering strategy is proposed to aid faster reordering when a new rule is appended. The proposed approach was tested with different firewall policy sets. The results were found to be promising in terms of the number of conflicts resolved, with minimal availability loss and marginal security risk. This work demonstrated the application of a metaheuristic search technique, ACO, in improving the performance of a packet-filter firewall with respect to mitigating anomalies in the rules, and at the same time demonstrated conformance to the security policy. PMID:27441151

  5. Optimized entanglement purification schemes for modular based quantum computers

    NASA Astrophysics Data System (ADS)

    Krastanov, Stefan; Jiang, Liang

    The choice of entanglement purification scheme strongly depends on the fidelities of quantum gates and measurements, as well as the imperfection of the initial entanglement. For instance, the purification scheme optimal at low gate fidelities may not necessarily be the optimal scheme at higher gate fidelities. We employ an evolutionary algorithm that efficiently optimizes the entanglement purification circuit for given system parameters. Such optimized purification schemes will boost the performance of entanglement purification and consequently enhance the fidelity of teleportation-based non-local coupling gates, which are an indispensable building block for modular-based quantum computers. In addition, we study how these optimized purification schemes affect the resource overhead caused by error correction in modular-based quantum computers.

  6. Credit risk evaluation based on social media.

    PubMed

    Yang, Yang; Gu, Jing; Zhou, Zongfang

    2016-07-01

    Social media has been playing an increasingly important role in the sharing of individuals' opinions on many financial issues, including credit risk in investment decisions. This paper analyzes whether these opinions, which are transmitted through social media, can accurately predict enterprises' future credit risk. We consider financial statements oriented evaluation results based on logit and probit approaches as the benchmarks. We then conduct textual analysis to retrieve both posts and their corresponding commentaries published on two of the most popular social media platforms for financial investors in China. Professional advice from financial analysts is also investigated in this paper. We surprisingly find that the opinions extracted from both posts and commentaries surpass opinions of analysts in terms of credit risk prediction. PMID:26739372

  7. Optimal fractional order PID design via Tabu Search based algorithm.

    PubMed

    Ateş, Abdullah; Yeroglu, Celaleddin

    2016-01-01

    This paper presents an optimization method based on the Tabu Search Algorithm (TSA) to design a Fractional-Order Proportional-Integral-Derivative (FOPID) controller. All parameter computations of the FOPID employ random initial conditions, using the proposed optimization method. Illustrative examples demonstrate the performance of the proposed FOPID controller design method. PMID:26652128

  8. Optimization Research of Generation Investment Based on Linear Programming Model

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operations research and a mathematical method that assists people in carrying out scientific management. GAMS is an advanced simulation and optimization modeling language that combines many complex mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, optimal generation investment decision-making is simulated and analyzed. Finally, the optimal installed capacity of power plants and the total cost are obtained, which provides a rational basis for optimized investment decisions.
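
    A minimal sketch of a linear-programming capacity-investment model in the same spirit (the paper itself uses GAMS): choose installed capacities of two plant types to satisfy peak-capacity and annual-energy constraints at minimum cost. All cost and demand figures are illustrative assumptions.

    ```python
    # Minimal sketch of an LP capacity-investment model; figures are hypothetical.
    from scipy.optimize import linprog

    # Decision variables: installed MW of [coal, gas]
    invest_cost = [800.0, 400.0]        # $/kW annualized investment cost
    fuel_cost   = [20.0, 45.0]          # $/MWh
    hours       = [6000.0, 2000.0]      # assumed annual operating hours per type

    # Objective: annualized investment plus fuel cost per MW installed
    c = [invest_cost[i] * 1000.0 + fuel_cost[i] * hours[i] for i in range(2)]

    # Constraints (written as A_ub @ x <= b_ub):
    #   capacity:  coal + gas >= 1000 MW peak demand
    #   energy:    6000*coal + 2000*gas >= 5,000,000 MWh annual demand
    A_ub = [[-1.0, -1.0],
            [-hours[0], -hours[1]]]
    b_ub = [-1000.0, -5_000_000.0]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print("installed MW (coal, gas):", res.x, " total annual cost:", res.fun)
    ```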

  9. Optimization and analysis of a CFJ-airfoil using adaptive meta-model based design optimization

    NASA Astrophysics Data System (ADS)

    Whitlock, Michael D.

    Although strong potential for the Co-Flow Jet (CFJ) flow separation control system has been demonstrated in the existing literature, little effort has been applied toward optimizing the design for a given application. The high-dimensional design space makes any optimization computationally intensive. This work presents the optimization of a CFJ airfoil as applied to a low Reynolds number regime using meta-model based design optimization (MBDO). The approach consists of computational fluid dynamics (CFD) analysis coupled with a surrogate model derived using Kriging. A genetic algorithm (GA) is then used to perform optimization on the efficient surrogate model. MBDO was shown to be an effective and efficient approach to solving the CFJ design problem. The final solution set was found to decrease drag by 100% while increasing lift by 42%. When validated, the final solution was found to be within one standard deviation of the CFD model it was representing.

  10. Breast Cancer-Related Arm Lymphedema: Incidence Rates, Diagnostic Techniques, Optimal Management and Risk Reduction Strategies

    SciTech Connect

    Shah, Chirag; Vicini, Frank A.

    2011-11-15

    As more women survive breast cancer, long-term toxicities affecting their quality of life, such as lymphedema (LE) of the arm, gain importance. Although numerous studies have attempted to determine incidence rates, identify optimal diagnostic tests, enumerate efficacious treatment strategies and outline risk reduction guidelines for breast cancer-related lymphedema (BCRL), few groups have consistently agreed on any of these issues. As a result, standardized recommendations are still lacking. This review will summarize the latest data addressing all of these concerns in order to provide patients and health care providers with optimal, contemporary recommendations. Published incidence rates for BCRL vary substantially with a range of 2-65% based on surgical technique, axillary sampling method, radiation therapy fields treated, and the use of chemotherapy. Newer clinical assessment tools can potentially identify BCRL in patients with subclinical disease with prospective data suggesting that early diagnosis and management with noninvasive therapy can lead to excellent outcomes. Multiple therapies exist with treatments defined by the severity of BCRL present. Currently, the standard of care for BCRL in patients with significant LE is complex decongestive physiotherapy (CDP). Contemporary data also suggest that a multidisciplinary approach to the management of BCRL should begin prior to definitive treatment for breast cancer employing patient-specific surgical, radiation therapy, and chemotherapy paradigms that limit risks. Further, prospective clinical assessments before and after treatment should be employed to diagnose subclinical disease. In those patients who require aggressive locoregional management, prophylactic therapies and the use of CDP can help reduce the long-term sequelae of BCRL.

  11. RISK AND RISK ASSESSMENT IN WATER-BASED RECREATION

    EPA Science Inventory

    The great number of individuals using recreational water resources presents a challenge with regard to protecting the health of these recreationists. Risk assessment provides a framework for characterizing the risk associated with exposure to microbial hazards and for managing r...

  12. Performance optimization of web-based medical simulation.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2013-01-01

    This paper presents a technique for performance optimization of multimodal interactive web-based medical simulation. A web-based simulation framework is promising for easy access and wide dissemination of medical simulation. However, the real-time performance of the simulation highly depends on hardware capability on the client side. Providing consistent simulation in different hardware is critical for reliable medical simulation. This paper proposes a non-linear mixed integer programming model to optimize the performance of visualization and physics computation while considering hardware capability and application specific constraints. The optimization model identifies and parameterizes the rendering and computing capabilities of the client hardware using an exploratory proxy code. The parameters are utilized to determine the optimized simulation conditions including texture sizes, mesh sizes and canvas resolution. The test results show that the optimization model not only achieves a desired frame per second but also resolves visual artifacts due to low performance hardware. PMID:23400151

  13. Practice management based on risk assessment.

    PubMed

    Sandberg, Hans

    2004-01-01

    The management of a dental practice is most often focused on what clinicians do (production of items), and not so much on what is achieved in terms of oral health. The main reason for this is probably that it is easier to measure production and more difficult to measure health outcomes. This paper presents a model based on individual risk assessment that aims to achieve a financially sound economy and good oral health. The close-to-the-clinic management tool, the HIDEP Model (Health Improvement in a DEntal Practice), was pioneered in Sweden at the end of the 1980s. The experience over a 15-year period with different elements of the model is presented, including: the basis of examination and risk assessment; motivation; task delegation and leadership issues; health-finance evaluations; and quality development within a dental clinic. DentiGroupXL, a software program designed to support work based on the model, is also described. PMID:15646588

  14. Risk-based Classification of Incidents

    NASA Technical Reports Server (NTRS)

    Greenwell, William S.; Knight, John C.; Strunk, Elisabeth A.

    2003-01-01

    As the penetration of software into safety-critical systems progresses, accidents and incidents involving software will inevitably become more frequent. Identifying lessons from these occurrences and applying them to existing and future systems is essential if recurrences are to be prevented. Unfortunately, investigative agencies do not have the resources to fully investigate every incident under their jurisdictions and domains of expertise and thus must prioritize certain occurrences when allocating investigative resources. In the aviation community, most investigative agencies prioritize occurrences based on the severity of their associated losses, allocating more resources to accidents resulting in injury to passengers or extensive aircraft damage. We argue that this scheme is inappropriate because it undervalues incidents whose recurrence could have a high potential for loss while overvaluing fairly straightforward accidents involving accepted risks. We then suggest a new strategy for prioritizing occurrences based on the risk arising from incident recurrence.

  15. A Risk-Based Sensor Placement Methodology

    SciTech Connect

    Lee, Ronald W; Kulesz, James J

    2006-08-01

    A sensor placement methodology is proposed to solve the problem of optimal location of sensors or detectors to protect population against the exposure to and effects of known and/or postulated chemical, biological, and/or radiological threats. Historical meteorological data are used to characterize weather conditions as wind speed and direction pairs with the percentage of occurrence of the pairs over the historical period. The meteorological data drive atmospheric transport and dispersion modeling of the threats, the results of which are used to calculate population at risk against standard exposure levels. Sensor locations are determined via a dynamic programming algorithm where threats captured or detected by sensors placed in prior stages are removed from consideration in subsequent stages. Moreover, the proposed methodology provides a quantification of the marginal utility of each additional sensor or detector. Thus, the criterion for halting the iterative process can be the number of detectors available, a threshold marginal utility value, or the cumulative detection of a minimum factor of the total risk value represented by all threats.
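
    A minimal sketch of the staged placement logic described above: at each stage place the sensor that covers the most remaining population-weighted risk, remove the threats it detects, and record each sensor's marginal utility (here a simple greedy stand-in for the dynamic programming algorithm). The coverage matrix and risk values are illustrative assumptions.

    ```python
    # Minimal sketch of staged sensor placement with marginal-utility reporting;
    # greedy stand-in for the dynamic programming described in the abstract.
    import numpy as np

    rng = np.random.default_rng(6)
    n_sites, n_threats = 8, 40

    # covers[s, t] = True if a sensor at candidate site s would detect threat t
    covers = rng.random((n_sites, n_threats)) < 0.25
    risk = rng.uniform(10.0, 100.0, n_threats)       # population at risk per threat

    remaining = np.ones(n_threats, dtype=bool)
    placements = []
    while remaining.any():
        gains = covers[:, remaining] @ risk[remaining]   # marginal utility per site
        best = int(np.argmax(gains))
        if gains[best] <= 0.0:
            break                                        # no site detects anything new
        placements.append((best, float(gains[best])))
        remaining &= ~covers[best]                       # detected threats are removed

    for rank, (site, utility) in enumerate(placements, 1):
        print(f"sensor {rank}: site {site}, marginal risk covered {utility:.0f}")
    ```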

  16. 78 FR 76521 - Risk-Based Capital Guidelines; Market Risk

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-18

    ...\\ 78 FR 43829 (July 22, 2013). The Board's current market risk rule is at 12 CFR parts 208 and 225...) published a final rule on August 30, 2012 to revise their respective market risk rules (77 FR 53059 (August... proposed. \\5\\ 78 FR 62017 (October 11, 2013). II. Description of the Final Market Risk Rule A....

  17. 76 FR 1889 - Risk-Based Capital Guidelines: Market Risk

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-11

    ...The Office of the Comptroller of the Currency (OCC), Board of Governors of the Federal Reserve System (Board), and Federal Deposit Insurance Corporation (FDIC) are requesting comment on a proposal to revise their market risk capital rules to modify their scope to better capture positions for which the market risk capital rules are appropriate; reduce procyclicality in market risk capital......

  18. 77 FR 53059 - Risk-Based Capital Guidelines: Market Risk

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-30

    ...The Office of the Comptroller of the Currency (OCC), Board of Governors of the Federal Reserve System (Board), and Federal Deposit Insurance Corporation (FDIC) are revising their market risk capital rules to better capture positions for which the market risk capital rules are appropriate; reduce procyclicality; enhance the rules' sensitivity to risks that are not adequately captured under......

  19. Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.

    PubMed

    Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming

    2016-08-01

    In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, with hybridization of a known algorithm called NSGA-II and an adaptive population-based simulated annealing (APBSA) method is developed to solve the systems reliability optimization problems. In the first step, to create a good algorithm, we use a coevolutionary strategy. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate the appropriate parameters of the algorithm. Moreover, to examine the performance of our proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies have shown that the proposed algorithm is an effective approach for systems reliability and risk management. PMID:25622333

  20. Optimization of a photovoltaic pumping system based on the optimal control theory

    SciTech Connect

    Betka, A.; Attali, A.

    2010-07-15

    This paper suggests how optimal operation of a photovoltaic pumping system based on an induction motor driving a centrifugal pump can be realized. The optimization problem consists in maximizing the daily pumped water quantity via the optimization of the motor efficiency for every operating point. The proposed structure simultaneously allows minimization of the machine losses, field-oriented control, and maximum power tracking of the photovoltaic array. This is attained based on multi-input, multi-output optimal regulator theory. The effectiveness of the proposed algorithm is demonstrated by simulation, and the obtained results are compared to those of a system working with a constant air gap flux. (author)

  1. Teaching-learning-based optimization algorithm for unconstrained and constrained real-parameter optimization problems

    NASA Astrophysics Data System (ADS)

    Rao, R. V.; Savsani, V. J.; Balic, J.

    2012-12-01

    An efficient optimization algorithm called teaching-learning-based optimization (TLBO) is proposed in this article to solve continuous unconstrained and constrained optimization problems. The proposed method is based on the effect of the influence of a teacher on the output of learners in a class. The basic philosophy of the method is explained in detail. The algorithm is tested on 25 different unconstrained benchmark functions and 35 constrained benchmark functions with different characteristics. For the constrained benchmark functions, TLBO is tested with different constraint handling techniques such as superiority of feasible solutions, self-adaptive penalty, ɛ-constraint, stochastic ranking and ensemble of constraints. The performance of the TLBO algorithm is compared with that of other optimization algorithms and the results show the better performance of the proposed algorithm.

  2. Optimal hip fracture management in high-risk frail older adults.

    PubMed

    McNicoll, Lynn; Fitzgibbons, Peter G

    2009-07-01

    Management of high-risk hip fracture patients is complicated. The optimal surgical decision must be individualized and made promptly, with the assistance of all important team members, including primary care doctors, patient, family, and the orthopedic team. The risks of delaying surgery are significant and should be avoided if possible. Strategies for improving outcomes in these patients include collaborations with medicine and delirium prevention protocols, especially with early ambulation. PMID:19685643

  3. Optimizing ring-based CSR sources

    SciTech Connect

    Byrd, J.M.; De Santis, S.; Hao, Z.; Martin, M.C.; Munson, D.V.; Li, D.; Nis himura, H.; Robin, D.S.; Sannibale, F.; Schlueter, R.D.; Schoenlein, R.; Jung, J.Y.; Venturini, M.; Wan, W.; Zholents, A.A.; Zolotorev, M.

    2004-01-01

    Coherent synchrotron radiation (CSR) is a fascinating phenomenon recently observed in electron storage rings and shows tremendous promise as a high power source of radiation at terahertz frequencies. However, because of the properties of the radiation and the electron beams needed to produce it, there are a number of interesting features of the storage ring that can be optimized for CSR. Furthermore, CSR has been observed in three distinct forms: as steady pulses from short bunches, bursts from growth of spontaneous modulations in high current bunches, and from micro modulations imposed on a bunch from laser slicing. These processes have their relative merits as sources and can be improved via the ring design. The terahertz (THz) and sub-THz region of the electromagnetic spectrum lies between the infrared and the microwave . This boundary region is beyond the normal reach of optical and electronic measurement techniques and sources associated with these better-known neighbors. Recent research has demonstrated a relatively high power source of THz radiation from electron storage rings: coherent synchrotron radiation (CSR). Besides offering high power, CSR enables broadband optical techniques to be extended to nearly the microwave region, and has inherently sub-picosecond pulses. As a result, new opportunities for scientific research and applications are enabled across a diverse array of disciplines: condensed matter physics, medicine, manufacturing, and space and defense industries. CSR will have a strong impact on THz imaging, spectroscopy, femtosecond dynamics, and driving novel non-linear processes. CSR is emitted by bunches of accelerated charged particles when the bunch length is shorter than the wavelength being emitted. When this criterion is met, all the particles emit in phase, and a single-cycle electromagnetic pulse results with an intensity proportional to the square of the number of particles in the bunch. It is this quadratic dependence that can

  4. Data mining and tree-based optimization

    SciTech Connect

    Grossman, R.; Bodek, H.; Northcutt, D.; Poor, V.

    1996-12-31

    Consider a large collection of objects, each of which has a large number of attributes of several different sorts. We assume that there are data attributes representing data, attributes which are to be statistically estimated or predicted from these, and attributes which can be controlled or set. A motivating example is to assign a credit score to a credit card prospect indicating the likelihood that the prospect will make credit card payments and then to set a credit limit for each prospect in such a way as to maximize the overall expected revenue from the entire collection of prospects. In the terminology above, the credit score is called a predictive attribute and the credit limit a control attribute. The methodology we describe in the paper uses data mining to provide more accurate estimates of the predictive attributes and to provide better settings of the control attributes. We briefly describe how to parallelize these computations. We also briefly comment on some of the data management issues which arise for these types of problems in practice. We propose using object warehouses to provide low overhead, high performance access to large collections of objects as an underlying foundation for our data mining algorithms.
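
    A minimal Python sketch of the credit-score/credit-limit idea described above: estimate the predictive attribute, then choose the control attribute to maximize expected revenue. The logistic default model, the revenue expression, and all parameter values are hypothetical stand-ins, not the methodology of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Data attributes for a pool of prospects (hypothetical features).
income = rng.normal(50_000, 15_000, size=1_000)
debt_ratio = rng.uniform(0.0, 0.8, size=1_000)

# Predictive attribute: estimated probability of default (toy logistic model).
z = -2.0 + 3.0 * debt_ratio - 0.00002 * income
p_default = 1.0 / (1.0 + np.exp(-z))

# Control attribute: credit limit, chosen per prospect to maximize expected revenue.
candidate_limits = np.array([1_000, 2_000, 5_000, 10_000])
interest_rate, loss_given_default = 0.15, 0.60

def expected_revenue(limit, p):
    # Interest earned if the prospect pays, minus expected write-off if they default.
    return (1.0 - p) * interest_rate * limit - p * loss_given_default * limit

# Best limit for each prospect (vectorized search over the candidate limits).
rev = expected_revenue(candidate_limits[None, :], p_default[:, None])
best_limit = candidate_limits[rev.argmax(axis=1)]

print("total expected revenue:", rev.max(axis=1).sum().round(2))
print("share given the largest limit:", (best_limit == 10_000).mean())
```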

  5. Game theory and risk-based leveed river system planning with noncooperation

    NASA Astrophysics Data System (ADS)

    Hui, Rui; Lund, Jay R.; Madani, Kaveh

    2016-01-01

    Optimal risk-based levee designs are usually developed for economic efficiency. However, in river systems with multiple levees, the planning and maintenance of different levees are controlled by different agencies or groups. For example, along many rivers, levees on opposite riverbanks constitute a simple leveed river system with each levee designed and controlled separately. Collaborative planning of the two levees can be economically optimal for the whole system. Independent and self-interested landholders on opposite riversides often are willing to separately determine their individual optimal levee plans, resulting in a less efficient leveed river system from an overall society-wide perspective (the tragedy of the commons). We apply game theory to simple leveed river system planning where landholders on each riverside independently determine their optimal risk-based levee plans. Outcomes from noncooperative games are analyzed and compared with the overall economically optimal outcome, which minimizes net flood cost system-wide. The system-wide economically optimal solution generally transfers residual flood risk to the lower-valued side of the river, but is often impractical without compensating for flood risk transfer to improve outcomes for all individuals involved. Such compensation can be determined and implemented with landholders' agreements on collaboration to develop an economically optimal plan. Examining iterative multiple-shot noncooperative games with reversible and irreversible decisions reveals the cost of myopic levee planning and underscores the importance of considering the externalities and evolution path of dynamic water resource problems to improve decision-making.
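
    To make the noncooperative-versus-system-optimal contrast concrete, here is a small Python sketch of a two-landholder levee game. The cost model (construction cost plus expected flood damage, with risk pushed toward the bank with the lower levee) and every number in it are illustrative assumptions, not values from the study.

```python
import itertools

heights = [0, 1, 2, 3]          # candidate levee heights (arbitrary units)
value = {"A": 100.0, "B": 40.0} # property value protected on each bank
build_cost = 6.0                # cost per unit of levee height

def flood_cost(own, other, val):
    # A higher own levee lowers own damage; a higher opposite levee pushes water back.
    p_flood = max(0.05, 0.5 - 0.15 * own + 0.05 * other)
    return p_flood * val

def cost(side, hA, hB):
    own, other = (hA, hB) if side == "A" else (hB, hA)
    return build_cost * own + flood_cost(own, other, value[side])

# System optimum: minimize total cost over joint choices.
system_opt = min(itertools.product(heights, heights),
                 key=lambda h: cost("A", *h) + cost("B", *h))

# Pure-strategy Nash equilibria: no side can cut its own cost unilaterally.
def best_response(side, h_other):
    if side == "A":
        return min(heights, key=lambda h: cost("A", h, h_other))
    return min(heights, key=lambda h: cost("B", h_other, h))

nash = [(hA, hB) for hA in heights for hB in heights
        if hA == best_response("A", hB) and hB == best_response("B", hA)]

print("system optimum (hA, hB):", system_opt)
print("Nash equilibria:", nash)
```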

  6. Performance- and risk-based regulation

    SciTech Connect

    Sauter, G.D.

    1994-12-31

    Risk-based regulation (RBR) and performance-based regulation (PBR) are two relatively new concepts for the regulation of nuclear reactor power plants by the U.S. Nuclear Regulatory Commission (NRC). Although RBR and PBR are often considered to be somewhat equivalent, they, in fact, address two fundamentally different regulatory questions. To fruitfully discuss these two concepts, it is important to recognize what each entails. This paper identifies those two fundamental questions and discusses how they are addressed by RBR and PBR.

  7. 12 CFR 167.6 - Risk-based capital credit risk-weight categories.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 1 2014-01-01 2014-01-01 false Risk-based capital credit risk-weight categories. 167.6 Section 167.6 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY CAPITAL Regulatory Capital Requirements § 167.6 Risk-based capital credit risk-weight categories. (a) Risk-weighted assets. Risk-weighted assets...

  8. 12 CFR 167.6 - Risk-based capital credit risk-weight categories.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 1 2012-01-01 2012-01-01 false Risk-based capital credit risk-weight categories. 167.6 Section 167.6 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY CAPITAL Regulatory Capital Requirements § 167.6 Risk-based capital credit risk-weight categories. (a) Risk-weighted assets. Risk-weighted assets...

  9. 12 CFR 567.6 - Risk-based capital credit risk-weight categories.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 6 2013-01-01 2012-01-01 true Risk-based capital credit risk-weight categories. 567.6 Section 567.6 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY CAPITAL Regulatory Capital Requirements § 567.6 Risk-based capital credit risk-weight categories. (a) Risk-weighted assets. Risk-weighted assets...

  10. Optimal policy for value-based decision-making

    PubMed Central

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-01-01

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down. PMID:27535638
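
    The collapsing-boundary idea can be illustrated with a short simulation. The drift value, noise level, and the particular exponential collapse schedule below are arbitrary assumptions for demonstration, not parameters from the paper.

```python
import numpy as np

def ddm_trial(drift, sigma=1.0, dt=0.001, b0=1.5, tau=0.5, t_max=5.0, rng=None):
    """Simulate one drift-diffusion trial with an exponentially collapsing bound."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while t < t_max:
        bound = b0 * np.exp(-t / tau)          # boundary collapses over time
        if x >= bound:
            return +1, t                        # choose option 1
        if x <= -bound:
            return -1, t                        # choose option 2
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return 0, t_max                             # no decision within t_max

rng = np.random.default_rng(0)
choices, rts = zip(*(ddm_trial(drift=0.8, rng=rng) for _ in range(2000)))
print("P(choice 1):", np.mean(np.array(choices) == 1))
print("mean RT:", np.mean(rts))
```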

  11. Optimal policy for value-based decision-making.

    PubMed

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-01-01

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down. PMID:27535638

  12. Anatomy-Based Inverse Planning Simulated Annealing Optimization in High-Dose-Rate Prostate Brachytherapy: Significant Dosimetric Advantage Over Other Optimization Techniques

    SciTech Connect

    Jacob, Dayee; Raben, Adam; Sarkar, Abhirup; Grimm, Jimm; Simpson, Larry

    2008-11-01

    Purpose: To perform an independent validation of an anatomy-based inverse planning simulated annealing (IPSA) algorithm in obtaining superior target coverage and reducing the dose to the organs at risk. Method and Materials: In a recent prostate high-dose-rate brachytherapy protocol study by the Radiation Therapy Oncology Group (0321), our institution treated 20 patients between June 1, 2005 and November 30, 2006. These patients had received a high-dose-rate boost dose of 19 Gy to the prostate, in addition to an external beam radiotherapy dose of 45 Gy with intensity-modulated radiotherapy. Three-dimensional dosimetry was obtained for the following optimization schemes in the Plato Brachytherapy Planning System, version 14.3.2, using the same dose constraints for all the patients treated during this period: anatomy-based IPSA optimization, geometric optimization, and dose point optimization. Dose-volume histograms were generated for the planning target volume and organs at risk for each optimization method, from which the volume receiving at least 75% of the dose (V75%) for the rectum and bladder, volume receiving at least 125% of the dose (V125%) for the urethra, and total volume receiving the reference dose (V100%) and volume receiving 150% of the dose (V150%) for the planning target volume were determined. The dose homogeneity index and conformal index for the planning target volume for each optimization technique were compared. Results: Despite suboptimal needle position in some implants, the IPSA algorithm was able to comply with the tight Radiation Therapy Oncology Group dose constraints for 90% of the patients in this study. In contrast, the compliance was only 30% for dose point optimization and only 5% for geometric optimization. Conclusions: Anatomy-based IPSA optimization proved to be the superior technique and also the fastest for reducing the dose to the organs at risk without compromising the target coverage.

  13. Joint global optimization of tomographic data based on particle swarm optimization and decision theory

    NASA Astrophysics Data System (ADS)

    Paasche, H.; Tronicke, J.

    2012-04-01

    In many near-surface geophysical applications, multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently from the starting model. Additionally, they can be used to find sets of optimal models allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is not possible. Instead, only statements about the Pareto
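
    The basic single-objective PSO update that the joint-inversion scheme builds on can be sketched as follows; the inertia weight, acceleration constants, and the toy objective are generic textbook choices rather than settings used by the authors.

```python
import numpy as np

def pso(obj, dim=2, n_particles=30, iters=200, w=0.72, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(obj, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + pull toward personal best + pull toward swarm leader.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(obj, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

best_x, best_f = pso(lambda p: np.sum(p**2))
print(best_x, best_f)
```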

  14. Two-level optimization of composite wing structures based on panel genetic optimization

    NASA Astrophysics Data System (ADS)

    Liu, Boyang

    load. The resulting response surface is used for wing-level optimization. In general, complex composite structures consist of several laminates. A common problem in the design of such structures is that some plies in the adjacent laminates terminate in the boundary between the laminates. These discontinuities may cause stress concentrations and may increase manufacturing difficulty and cost. We developed measures of continuity of two adjacent laminates. We studied tradeoffs between weight and continuity through a simple composite wing design. Finally, we compared the two-level optimization to a single-level optimization based on flexural lamination parameters. The single-level optimization is efficient and feasible for a wing consisting of unstiffened panels.

  15. Monte Carlo vs. Pencil Beam based optimization of stereotactic lung IMRT

    PubMed Central

    2009-01-01

    Background The purpose of the present study is to compare finite size pencil beam (fsPB) and Monte Carlo (MC) based optimization of lung intensity-modulated stereotactic radiotherapy (lung IMSRT). Materials and methods A fsPB and a MC algorithm as implemented in a biological IMRT planning system were validated by film measurements in a static lung phantom. Then, they were applied for static lung IMSRT planning based on three different geometrical patient models (one phase static CT, density overwrite one phase static CT, average CT) of the same patient. Both 6 and 15 MV beam energies were used. The resulting treatment plans were compared by how well they fulfilled the prescribed optimization constraints both for the dose distributions calculated on the static patient models and for the accumulated dose, recalculated with MC on each of 8 CTs of a 4DCT set. Results In the phantom measurements, the MC dose engine showed discrepancies < 2%, while the fsPB dose engine showed discrepancies of up to 8% in the presence of lateral electron disequilibrium in the target. In the patient plan optimization, this translates into violations of organ at risk constraints and unpredictable target doses for the fsPB optimized plans. For the 4D MC recalculated dose distribution, MC optimized plans always underestimate the target doses, but the organ at risk doses were comparable. The results depend on the static patient model, and the smallest discrepancy was found for the MC optimized plan on the density overwrite one phase static CT model. Conclusions It is feasible to employ the MC dose engine for optimization of lung IMSRT and the plans are superior to fsPB. Use of static patient models introduces a bias in the MC dose distribution compared to the 4D MC recalculated dose, but this bias is predictable and therefore MC based optimization on static patient models is considered safe. PMID:20003380

  16. Response-time optimization of rule-based expert systems

    NASA Astrophysics Data System (ADS)

    Zupan, Blaz; Cheng, Albert M. K.

    1994-03-01

    Real-time rule-based decision systems are embedded AI systems and must make critical decisions within stringent timing constraints. In the case where the response time of the rule-based system is not acceptable, it has to be optimized to meet both timing and integrity constraints. This paper describes a novel approach to reduce the response time of rule-based expert systems. Our optimization method is twofold: the first phase constructs the reduced cycle-free finite state transition system corresponding to the input rule-based system, and the second phase further refines the constructed transition system using the simulated annealing approach. The method makes use of rule-base system decomposition, concurrency, and state equivalency. The new and optimized system is synthesized from the derived transition system. Compared with the original system, the synthesized system requires fewer rule firings to reach the fixed point, is inherently stable, and has no redundant rules.

  17. A new efficient optimal path planner for mobile robot based on Invasive Weed Optimization algorithm

    NASA Astrophysics Data System (ADS)

    Mohanty, Prases K.; Parhi, Dayal R.

    2014-12-01

    Planning of the shortest/optimal route is essential for efficient operation of an autonomous mobile robot or vehicle. In this paper Invasive Weed Optimization (IWO), a new meta-heuristic algorithm, has been implemented for solving the path planning problem of a mobile robot in partially or totally unknown environments. This meta-heuristic optimization is based on the colonizing property of weeds. First we have framed an objective function that satisfies the conditions of obstacle avoidance and the target-seeking behavior of the robot in partially or completely unknown environments. Depending upon the value of the objective function of each weed in the colony, the robot avoids obstacles and proceeds towards the destination. The optimal trajectory is generated with this navigational algorithm when the robot reaches its destination. The effectiveness, feasibility, and robustness of the proposed algorithm have been demonstrated through a series of simulation and experimental results. Finally, it has been found that the developed path planning algorithm can be effectively applied to many kinds of complex situations.

  18. A multiobjective memetic algorithm based on particle swarm optimization.

    PubMed

    Liu, Dasheng; Tan, K C; Goh, C K; Ho, W K

    2007-02-01

    In this paper, a new memetic algorithm (MA) for multiobjective (MO) optimization is proposed, which combines the global search ability of particle swarm optimization with a synchronous local search heuristic for directed local fine-tuning. A new particle updating strategy is proposed based upon the concept of fuzzy global-best to deal with the problem of premature convergence and diversity maintenance within the swarm. The proposed features are examined to show their individual and combined effects in MO optimization. The comparative study shows the effectiveness of the proposed MA, which produces solution sets that are highly competitive in terms of convergence, diversity, and distribution. PMID:17278557

  19. Genetic Algorithm Based Neural Networks for Nonlinear Optimization

    Energy Science and Technology Software Center (ESTSC)

    1994-09-28

    This software develops a novel approach to nonlinear optimization using genetic algorithm based neural networks. To our best knowledge, this approach represents the first attempt at applying both neural network and genetic algorithm techniques to solve a nonlinear optimization problem. The approach constructs a neural network structure and an appropriately shaped energy surface whose minima correspond to optimal solutions of the problem. A genetic algorithm is employed to perform a parallel and powerful search of the energy surface.

  20. Genetic-evolution-based optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
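
    A compact Python sketch of the reproduction/crossover/mutation loop for a continuous design problem; the tournament selection, blend crossover, Gaussian mutation, and the toy objective are common generic choices and are not taken from the paper.

```python
import numpy as np

def ga_minimize(obj, dim=4, pop=40, gens=150, pc=0.9, pm=0.1, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    for _ in range(gens):
        f = np.apply_along_axis(obj, 1, X)
        # Reproduction: binary tournament selection.
        idx = rng.integers(pop, size=(pop, 2))
        parents = X[np.where(f[idx[:, 0]] < f[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Crossover: blend pairs of parents with probability pc.
        children = parents.copy()
        for i in range(0, pop - 1, 2):
            if rng.random() < pc:
                a = rng.random(dim)
                children[i]     = a * parents[i] + (1 - a) * parents[i + 1]
                children[i + 1] = a * parents[i + 1] + (1 - a) * parents[i]
        # Mutation: small Gaussian perturbation with probability pm per gene.
        mask = rng.random(children.shape) < pm
        children[mask] += rng.normal(0, 0.3, size=mask.sum())
        X = children
    f = np.apply_along_axis(obj, 1, X)
    return X[f.argmin()], f.min()

x, val = ga_minimize(lambda p: np.sum(p**2))
print(x, val)
```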

  1. Trading risk and performance for engineering design optimization using multifidelity analyses

    NASA Astrophysics Data System (ADS)

    Rajnarayan, Dev Gorur

    Computers pervade our lives today: from communication to calculation, their influence percolates many spheres of our existence. With continuing advances in computing, simulations are becoming increasingly complex and accurate. Powerful high-fidelity simulations mimic and predict a variety of real-life scenarios, with applications ranging from entertainment to engineering. The most accurate of such engineering simulations come at a high cost in terms of computing resources and time. Engineers use such simulations to predict the real-world performance of products they design; that is, they use them for analysis. Needless to say, the emphasis is on accuracy of the prediction. For such analysis, one would like to use the most accurate simulation available, and such a simulation is likely to be at the limits of available computing power, quite independently of advances in computing. In engineering design, however, the goal is somewhat different. Engineering design is generally posed as an optimization problem, where the goal is to tweak a set of available inputs or parameters, called design variables, to create a design that is optimal in some way, and meets some preset requirements. In other words, we would like to modify the design variables in order to optimize some figure of merit, called an objective function, subject to a set of constraints, typically formulated as equations or inequalities to be satisfied. Typically, a complex engineering system such as an aircraft is described by thousands of design variables, all of which are optimized during the design process. Nevertheless, do we always need to use the highest-fidelity simulations as the objective function and constraints for engineering design? Or can we afford to use lower-fidelity simulations with appropriate corrections? In this thesis, we present a new methodology for surrogate-based optimization. Existing methods combine the possibly erroneous predictions of the low-fidelity surrogate with estimates of

  2. Knowledge-Based Optimization of Molecular Geometries Using Crystal Structures.

    PubMed

    Cole, Jason C; Groom, Colin R; Korb, Oliver; McCabe, Patrick; Shields, Gregory P

    2016-04-25

    This paper describes a novel way to use the structural information contained in the Cambridge Structural Database (CSD) to drive geometry optimization of organic molecules. We describe how CSD structural information is transformed into objective functions for gradient-based optimization to provide good quality geometries for a large variety of organic molecules. Performance is assessed by minimizing different sets of organic molecules reporting RMSD movements for bond lengths, valence angles, torsion angles, and heavy atom positions. PMID:26977906

  3. Optimal "image-based" weighting for energy-resolved CT

    SciTech Connect

    Schmidt, Taly Gilat

    2009-07-15

    This paper investigates a method of reconstructing images from energy-resolved CT data with negligible beam-hardening artifacts and improved contrast-to-noise ratio (CNR) compared to conventional energy-weighting methods. Conceptually, the investigated method first reconstructs separate images from each energy bin. The final image is a linear combination of the energy-bin images, with the weights chosen to maximize the CNR in the final image. The optimal weight of a particular energy-bin image is derived to be proportional to the contrast-to-noise-variance ratio in that image. The investigated weighting method is referred to as "image-based" weighting, although, as will be described, the weights can be calculated and the energy-bin data combined prior to reconstruction. The performance of optimal image-based energy weighting with respect to CNR and beam-hardening artifacts was investigated through simulations and compared to that of energy integrating, photon counting, and previously studied optimal "projection-based" energy weighting. Two acquisitions were simulated: dedicated breast CT and a conventional thorax scan. The energy-resolving detector was simulated with five energy bins. Four methods of estimating the optimal weights were investigated, including task-specific and task-independent methods and methods that require a single reconstruction versus multiple reconstructions. Results demonstrated that optimal image-based weighting improved the CNR compared to energy-integrating weighting by factors of 1.15-1.6 depending on the task. Compared to photon-counting weighting, the CNR improvement ranged from 1.0 to 1.3. The CNR improvement factors were comparable to those of projection-based optimal energy weighting. The beam-hardening cupping artifact increased from 5.2% for energy-integrating weighting to 12.8% for optimal projection-based weighting, while optimal image-based weighting reduced the cupping to 0.6%. Overall, optimal image-based energy weighting
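
    A minimal Python sketch of the image-based weighting rule described above: each energy-bin image is weighted in proportion to its contrast-to-noise-variance ratio and the weighted images are summed. The synthetic bin images, the way contrast is measured from region-of-interest means, and the noise estimate are all assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 5 energy-bin images with a small circular "contrast" insert.
n_bins, size = 5, 64
yy, xx = np.mgrid[:size, :size]
insert = (xx - size // 2) ** 2 + (yy - size // 2) ** 2 < 8 ** 2
true_contrast = np.array([0.5, 0.9, 1.4, 1.0, 0.6])      # per-bin signal difference
noise_sigma = np.array([1.2, 1.0, 0.8, 0.9, 1.1])         # per-bin noise level
bins = np.array([10.0 + c * insert + rng.normal(0, s, (size, size))
                 for c, s in zip(true_contrast, noise_sigma)])

# Estimate per-bin contrast (ROI difference) and noise variance (background ROI).
bg = ~insert
contrast = bins[:, insert].mean(axis=1) - bins[:, bg].mean(axis=1)
variance = bins[:, bg].var(axis=1)

# Optimal image-based weights: proportional to contrast / noise variance.
w = contrast / variance
w /= w.sum()

combined = np.tensordot(w, bins, axes=1)
cnr = (combined[insert].mean() - combined[bg].mean()) / combined[bg].std()
print("weights:", np.round(w, 3), " combined-image CNR:", round(cnr, 2))
```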

  4. Stackelberg Game of Buyback Policy in Supply Chain with a Risk-Averse Retailer and a Risk-Averse Supplier Based on CVaR

    PubMed Central

    Zhou, Yanju; Chen, Qian; Chen, Xiaohong; Wang, Zongrun

    2014-01-01

    This paper considers a decentralized supply chain in which a single supplier sells a perishable product to a single retailer facing uncertain demand. We assume that the supplier and the retailer are both risk averse and utilize Conditional Value at Risk (CVaR), a risk measure popularized in financial risk management, to estimate their risk attitude. We establish a buyback policy model based on Stackelberg game theory that accounts for the supply chain members' risk preferences and derive expressions for the supplier's optimal repurchase price and the retailer's optimal order quantity, which are compared with those in the risk-neutral case. Finally, a numerical example is used to simulate the model and verify the related conclusions. PMID:25247605
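
    For reference, Conditional Value at Risk at confidence level α is simply the expected loss in the worst (1 − α) tail. The Python sketch below computes empirical VaR and CVaR on simulated profit/loss data; the Monte Carlo demand model and all numbers are generic illustrations, not the supply-chain model of the paper.

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical VaR and CVaR of a sample of losses at confidence level alpha."""
    losses = np.asarray(losses, float)
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()   # mean loss in the worst (1 - alpha) tail
    return var, cvar

# Toy retailer example: loss = cost of ordered units minus revenue from sold units.
rng = np.random.default_rng(0)
order_qty, wholesale, retail = 120, 6.0, 10.0
demand = rng.poisson(100, size=100_000)
sold = np.minimum(demand, order_qty)
loss = wholesale * order_qty - retail * sold

v, cv = var_cvar(loss, alpha=0.95)
print(f"VaR 95%: {v:.1f}   CVaR 95%: {cv:.1f}")
```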

  5. Stackelberg game of buyback policy in supply chain with a risk-averse retailer and a risk-averse supplier based on CVaR.

    PubMed

    Zhou, Yanju; Chen, Qian; Chen, Xiaohong; Wang, Zongrun

    2014-01-01

    This paper considers a decentralized supply chain in which a single supplier sells a perishable product to a single retailer facing uncertain demand. We assume that the supplier and the retailer are both risk averse and utilize Conditional Value at Risk (CVaR), a risk measure popularized in financial risk management, to estimate their risk attitude. We establish a buyback policy model based on Stackelberg game theory that accounts for the supply chain members' risk preferences and derive expressions for the supplier's optimal repurchase price and the retailer's optimal order quantity, which are compared with those in the risk-neutral case. Finally, a numerical example is used to simulate the model and verify the related conclusions. PMID:25247605

  6. Trust regions in Kriging-based optimization with expected improvement

    NASA Astrophysics Data System (ADS)

    Regis, Rommel G.

    2016-06-01

    The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
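
    The Expected Improvement acquisition that both TRIKE and CYCLONE maximize has the standard closed form below (for minimization, with Kriging predictive mean μ(x) and standard deviation σ(x)); this is the textbook expression, sketched in Python, not code from the article.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI(x) = (f_best - mu) * Phi(z) + sigma * phi(z), with z = (f_best - mu) / sigma."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    ei = np.zeros_like(mu)
    pos = sigma > 0
    z = (f_best - mu[pos]) / sigma[pos]
    ei[pos] = (f_best - mu[pos]) * norm.cdf(z) + sigma[pos] * norm.pdf(z)
    return ei

# Example: EI at three candidate points given a current best observed value of 1.0.
print(expected_improvement(mu=[0.8, 1.2, 1.0], sigma=[0.3, 0.5, 0.0], f_best=1.0))
```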

  7. Demonstrating the benefits of template-based design-technology co-optimization

    NASA Astrophysics Data System (ADS)

    Liebmann, Lars; Hibbeler, Jason; Hieter, Nathaniel; Pileggi, Larry; Jhaveri, Tejas; Moe, Matthew; Rovner, Vyacheslav

    2010-03-01

    The concept of template-based design-technology co-optimization as a means of curbing escalating design complexity and increasing technology qualification risk is described. Data is presented highlighting the design efficacy of this proposal in terms of power, performance, and area benefits, quantifying the specific contributions of complex logic gates in this design optimization. Experimental results from 32nm technology node bulk CMOS wafers are presented to quantify the variability and design-margin reductions as well as yield and manufacturability improvements achievable with the proposed template-based design-technology co-optimization technique. The paper closes with data showing the predictable composability of individual templates, demonstrating a fundamental requirement of this proposal.

  8. Fatigue reliability based optimal design of planar compliant micropositioning stages

    NASA Astrophysics Data System (ADS)

    Wang, Qiliang; Zhang, Xianmin

    2015-10-01

    Conventional compliant micropositioning stages are usually developed based on static strength and deterministic methods, which may lead to either unsafe or excessive designs. This paper presents a fatigue reliability analysis and optimal design of a three-degree-of-freedom (3 DOF) flexure-based micropositioning stage. Kinematic, modal, static, and fatigue stress modelling of the stage were conducted using the finite element method. The maximum equivalent fatigue stress in the hinges was derived using sequential quadratic programming. The fatigue strength of the hinges was obtained by considering various influencing factors. On this basis, the fatigue reliability of the hinges was analysed using the stress-strength interference method. Fatigue-reliability-based optimal design of the stage was then conducted using the genetic algorithm and MATLAB. To make fatigue life testing easier, a 1 DOF stage was then optimized and manufactured. Experimental results demonstrate the validity of the approach.

  9. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    PubMed Central

    Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
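
    The abstract states that AAPSO replaces the random acceleration coefficients with values computed from particle fitness, but it does not give the exact formula; the sketch below therefore uses one plausible fitness-based rule (normalizing each particle's fitness against the current best) purely to illustrate the idea, and is not the authors' implementation.

```python
import numpy as np

def aapso_step(x, v, pbest, gbest, f, w=0.72):
    """One AAPSO-style velocity/position update with fitness-derived coefficients (illustrative)."""
    f = np.asarray(f, float)
    # Assumed adaptive rule: better (lower) fitness -> smaller pull, worse -> larger pull.
    rel = f / (f.min() + 1e-12)                 # 1 for the best particle, >1 otherwise
    c1 = 1.0 + 1.0 / rel                        # cognitive coefficient, per particle
    c2 = 1.0 + (1.0 - 1.0 / rel)                # social coefficient, per particle
    v_new = (w * v
             + c1[:, None] * (pbest - x)
             + c2[:, None] * (gbest - x))
    return x + v_new, v_new

# Tiny usage example with random data standing in for an SVM-hyperparameter swarm.
rng = np.random.default_rng(0)
x = rng.random((5, 2)); v = np.zeros_like(x)
pbest = x.copy(); gbest = x[0]; f = rng.random(5)
x, v = aapso_step(x, v, pbest, gbest, f)
print(x)
```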

  10. Optimizing medical data quality based on multiagent web service framework.

    PubMed

    Wu, Ching-Seh; Khoury, Ibrahim; Shah, Hemant

    2012-07-01

    One of the most important issues in e-healthcare information systems is to optimize the medical data quality extracted from distributed and heterogeneous environments, which can greatly improve diagnostic and treatment decision making. This paper proposes a multiagent web service framework based on service-oriented architecture for the optimization of medical data quality in the e-healthcare information system. Based on the design of the multiagent web service framework, an evolutionary algorithm (EA) for the dynamic optimization of the medical data quality is proposed. The framework consists of two main components; first, an EA will be used to dynamically optimize the composition of medical processes into an optimal task sequence according to specific quality attributes. Second, a multiagent framework will be proposed to discover, monitor, and report any inconsistency between the optimized task sequence and the actual medical records. To demonstrate the proposed framework, experimental results for a breast cancer case study are provided. Furthermore, to show the unique performance of our algorithm, a comparison with other works in the literature is presented. PMID:22614723

  11. Support vector machine based on adaptive acceleration particle swarm optimization.

    PubMed

    Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584

  12. Efficiency Improvements to the Displacement Based Multilevel Structural Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Plunkett, C. L.; Striz, A. G.; Sobieszczanski-Sobieski, J.

    2001-01-01

    Multilevel Structural Optimization (MSO) continues to be an area of research interest in engineering optimization. In the present project, the weight optimization of beams and trusses using Displacement based Multilevel Structural Optimization (DMSO), a member of the MSO set of methodologies, is investigated. In the DMSO approach, the optimization task is subdivided into a single system and multiple subsystems level optimizations. The system level optimization minimizes the load unbalance resulting from the use of displacement functions to approximate the structural displacements. The function coefficients are then the design variables. Alternately, the system level optimization can be solved using the displacements themselves as design variables, as was shown in previous research. Both approaches ensure that the calculated loads match the applied loads. In the subsystems level, the weight of the structure is minimized using the element dimensions as design variables. The approach is expected to be very efficient for large structures, since parallel computing can be utilized in the different levels of the problem. In this paper, the method is applied to a one-dimensional beam and a large three-dimensional truss. The beam was tested to study possible simplifications to the system level optimization. In previous research, polynomials were used to approximate the global nodal displacements. The number of coefficients of the polynomials equally matched the number of degrees of freedom of the problem. Here it was desired to see if it is possible to only match a subset of the degrees of freedom in the system level. This would lead to a simplification of the system level, with a resulting increase in overall efficiency. However, the methods tested for this type of system level simplification did not yield positive results. The large truss was utilized to test further improvements in the efficiency of DMSO. In previous work, parallel processing was applied to the

  13. Optimal weight based on energy imbalance and utility maximization

    NASA Astrophysics Data System (ADS)

    Sun, Ruoyan

    2016-01-01

    This paper investigates the optimal weight for both male and female using energy imbalance and utility maximization. Based on the difference of energy intake and expenditure, we develop a state equation that reveals the weight gain from this energy gap. We construct an objective function considering food consumption, eating habits and survival rate to measure utility. Through applying mathematical tools from optimal control methods and qualitative theory of differential equations, we obtain some results. For both male and female, the optimal weight is larger than the physiologically optimal weight calculated by the Body Mass Index (BMI). We also study the corresponding trajectories to steady state weight respectively. Depending on the value of a few parameters, the steady state can either be a saddle point with a monotonic trajectory or a focus with dampened oscillations.

  14. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem. PMID:27505357

  15. Aerodynamic Shape Optimization Based on Free-form Deformation

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    2004-01-01

    This paper presents a free-form deformation technique suitable for aerodynamic shape optimization. Because the proposed technique is independent of grid topology, we can treat structured and unstructured computational fluid dynamics grids in the same manner. The proposed technique is an alternative shape parameterization technique to a trivariate volume technique. It retains the flexibility and freedom of trivariate volumes for CFD shape optimization, but it uses a bivariate surface representation. This reduces the number of design variables by an order of magnitude, and it provides much better control for surface shape changes. The proposed technique is simple, compact, and efficient. The analytical sensitivity derivatives are independent of the design variables and are easily computed for use in a gradient-based optimization. The paper includes the complete formulation and aerodynamic shape optimization results.

  16. An Optimization-based Atomistic-to-Continuum Coupling Method

    DOE PAGES Beta

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; Shapeev, Alexander V.

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  17. Measurement matrix optimization method based on matrix orthogonal similarity transformation

    NASA Astrophysics Data System (ADS)

    Pan, Jinfeng

    2016-05-01

    Optimization of the measurement matrix is one of the important research aspects of compressive sensing theory. A measurement matrix optimization method is presented based on the orthogonal similarity transformation of the information operator's Gram matrix. Because the information operator's Gram matrix is a singular symmetric matrix, a simplified orthogonal similarity transformation is deduced, and thus the simplified diagonal matrix that is orthogonally similar to it is obtained. Then an approximation of the Gram matrix is obtained by letting all the nonzero diagonal entries of the simplified diagonal matrix equal their average value. Thus an optimized measurement matrix can be acquired according to its relationship with the information operator. Experimental results show that the optimized measurement matrix is less coherent with dictionaries than the random measurement matrix. The relative signal recovery error also declines when the proposed measurement matrix is utilized.
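
    One way to turn the procedure described above into code is sketched below: form the information operator D = ΦΨ, take the eigendecomposition of its singular symmetric Gram matrix, replace the nonzero eigenvalues by their average, rebuild an approximate Gram matrix, and recover a measurement matrix from it through the dictionary's pseudoinverse. The problem sizes, the random starting matrix, and the use of the pseudoinverse for the final recovery step are assumptions of this sketch rather than details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 32, 128                                        # measurements x dictionary atoms (illustrative)
Psi = np.linalg.qr(rng.standard_normal((n, n)))[0]    # toy orthonormal dictionary
Phi = rng.standard_normal((m, n))                     # initial random measurement matrix

D = Phi @ Psi                                         # information operator
G = D.T @ D                                           # singular, symmetric Gram matrix (rank <= m)

# Orthogonal similarity transformation (eigendecomposition of the symmetric Gram matrix).
lam, V = np.linalg.eigh(G)
nz = lam > 1e-10 * lam.max()
lam_avg = lam[nz].mean()

# Approximate Gram: all nonzero eigenvalues set to their average value.
D_new = np.sqrt(lam_avg) * V[:, nz].T                 # any matrix whose Gram equals the approximation
Phi_new = D_new @ np.linalg.pinv(Psi)                 # optimized measurement matrix (assumed recovery step)

def mutual_coherence(A):
    A = A / np.linalg.norm(A, axis=0, keepdims=True)
    g = np.abs(A.T @ A)
    np.fill_diagonal(g, 0.0)
    return g.max()

print("coherence before:", round(mutual_coherence(Phi @ Psi), 3))
print("coherence after :", round(mutual_coherence(Phi_new @ Psi), 3))
```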

  18. An Optimization-based Atomistic-to-Continuum Coupling Method

    SciTech Connect

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; Shapeev, Alexander V.

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  19. Optimization algorithm based characterization scheme for tunable semiconductor lasers.

    PubMed

    Chen, Quanan; Liu, Gonghai; Lu, Qiaoyin; Guo, Weihua

    2016-09-01

    In this paper, an optimization algorithm based characterization scheme for tunable semiconductor lasers is proposed and demonstrated. In the process of optimization, the ratio between the power at the desired frequency and the power outside the desired frequency is used as the figure of merit, which approximately represents the side-mode suppression ratio. In practice, we use tunable optical band-pass and band-stop filters to obtain the power at the desired frequency and the power outside the desired frequency separately. With the assistance of optimization algorithms, such as the particle swarm optimization (PSO) algorithm, we can get stable operation conditions for tunable lasers at designated frequencies directly and efficiently. PMID:27607701

  20. Optimization of Designs for Nanotube-based Scanning Probes

    NASA Technical Reports Server (NTRS)

    Harik, V. M.; Gates, T. S.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Optimization of designs for nanotube-based scanning probes, which may be used for high-resolution characterization of nanostructured materials, is examined. Continuum models to analyze the nanotube deformations are proposed to help guide selection of the optimum probe. The limitations on the use of these models that must be accounted for before applying to any design problem are presented. These limitations stem from the underlying assumptions and the expected range of nanotube loading, end conditions, and geometry. Once the limitations are accounted for, the key model parameters along with the appropriate classification of nanotube structures may serve as a basis for the design optimization of nanotube-based probe tips.

  1. Adjoint-based airfoil shape optimization in transonic flow

    NASA Astrophysics Data System (ADS)

    Gramanzini, Joe-Ray

    The primary focus of this work is efficient aerodynamic shape optimization in transonic flow. Adjoint-based optimization techniques are employed on airfoil sections and evaluated in terms of computational accuracy as well as efficiency. This study examines two test cases proposed by the AIAA Aerodynamic Design Optimization Discussion Group. The first is a two-dimensional, transonic, inviscid, non-lifting optimization of a Modified-NACA 0012 airfoil. The second is a two-dimensional, transonic, viscous optimization problem using a RAE 2822 airfoil. The FUN3D CFD code of NASA Langley Research Center is used as the flow solver for the gradient-based optimization cases. Two shape parameterization techniques are employed to study the effect of the parameterization and of the number of design variables on the final optimized shape: Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD) and the BandAids free-form deformation technique. For the two airfoil cases, angle of attack is treated as a global design variable. The thickness and camber distributions are the local design variables for MASSOUD, and selected airfoil surface grid points are the local design variables for BandAids. Using the MASSOUD technique, a drag reduction of 72.14% is achieved for the NACA 0012 case, reducing the total number of drag counts from 473.91 to 130.59. Employing the BandAids technique yields a 78.67% drag reduction, from 473.91 to 99.98. The RAE 2822 case exhibited a drag reduction from 217.79 to 132.79 counts, a 39.05% decrease using BandAids.
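
    The efficiency of the adjoint approach referred to above rests on the standard identity below, written here in generic notation (R is the flow-solver residual, Q the flow state, D a design variable, J the objective, and ψ the adjoint vector); one adjoint solve yields the sensitivity of J with respect to every design variable, which is why large numbers of shape parameters cost little more than one extra flow solution.

    \[
    \frac{dJ}{dD} \;=\; \frac{\partial J}{\partial D} \;+\; \psi^{T}\,\frac{\partial R}{\partial D},
    \qquad\text{where}\qquad
    \left(\frac{\partial R}{\partial Q}\right)^{T}\psi \;=\; -\left(\frac{\partial J}{\partial Q}\right)^{T}.
    \]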

  2. Entropy-based optimization of wavelet spatial filters.

    PubMed

    Farina, Dario; Kamavuako, Ernest Nlandu; Wu, Jian; Naddeo, Francesco

    2008-03-01

    A new class of spatial filters for surface electromyographic (EMG) signal detection is proposed. These filters are based on the 2-D spatial wavelet decomposition of the surface EMG recorded with a grid of electrodes and inverse transformation after zeroing a subset of the transformation coefficients. The filter transfer function depends on the selected mother wavelet in the two spatial directions. Wavelet parameterization is proposed with the aim of signal-based optimization of the transfer function of the spatial filter. The optimization criterion was the minimization of the entropy of the time samples of the output signal. The optimized spatial filter is linear and space invariant. In simulated and experimental recordings, the optimized wavelet filter showed increased selectivity with respect to previously proposed filters. For example, in simulation, the ratio between the peak-to-peak amplitude of action potentials generated by motor units 20 degrees apart in the transversal direction was 8.58% (with monopolar recording), 2.47% (double differential), 2.59% (normal double differential), and 0.47% (optimized wavelet filter). In experimental recordings, the duration of the detected action potentials decreased from (mean +/- SD) 6.9 +/- 0.3 ms (monopolar recording), to 4.5 +/- 0.2 ms (normal double differential), 3.7 +/- 0.2 ms (double differential), and 3.0 +/- 0.1 ms (optimized wavelet filter). In conclusion, the new class of spatial filters with the proposed signal-based optimization of the transfer function allows better discrimination of individual motor unit activities in surface EMG recordings than was previously possible. PMID:18334382

  3. Reliability-based design optimization under stationary stochastic process loads

    NASA Astrophysics Data System (ADS)

    Hu, Zhen; Du, Xiaoping

    2016-08-01

    Time-dependent reliability-based design ensures the satisfaction of reliability requirements for a given period of time, but with a high computational cost. This work improves the computational efficiency by extending the sequential optimization and reliability analysis (SORA) method to time-dependent problems with both stationary stochastic process loads and random variables. The challenge of the extension is the identification of the most probable point (MPP) associated with time-dependent reliability targets. Since a direct relationship between the MPP and reliability target does not exist, this work defines the concept of equivalent MPP, which is identified by the extreme value analysis and the inverse saddlepoint approximation. With the equivalent MPP, the time-dependent reliability-based design optimization is decomposed into two decoupled loops: deterministic design optimization and reliability analysis, and both are performed sequentially. Two numerical examples are used to show the efficiency of the proposed method.

  4. TRUST-TECH based Methods for Optimization and Learning

    NASA Astrophysics Data System (ADS)

    Reddy, Chandan K.

    2007-12-01

    Many problems that arise in the machine learning domain deal with nonlinearity and quite often demand users to obtain globally optimal solutions rather than locally optimal ones. Optimization problems are inherent in machine learning algorithms and hence many methods in machine learning were inherited from the optimization literature. In what is popularly known as the initialization problem, the final set of parameters obtained depends significantly on the given initialization values. The recently developed TRUST-TECH (TRansformation Under STability-reTaining Equilibria CHaracterization) methodology systematically explores the subspace of the parameters to obtain a complete set of local optimal solutions. In this thesis work, we propose TRUST-TECH based methods for solving several optimization and machine learning problems. Two stages, namely the local stage and the neighborhood-search stage, are repeated alternately in the solution space to achieve improvements in the quality of the solutions. Our methods were tested on both synthetic and real datasets and the advantages of using this novel framework are clearly manifested. This framework not only reduces the sensitivity to initialization, but also allows the flexibility for the practitioners to use various global and local methods that work well for a particular problem of interest. Other hierarchical stochastic algorithms like evolutionary algorithms and smoothing algorithms are also studied and frameworks for combining these methods with TRUST-TECH have been proposed and evaluated on several test systems.

  5. 78 FR 43829 - Risk-Based Capital Guidelines; Market Risk

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-22

    .... Frierson, Secretary, Board of Governors of the Federal Reserve System, 20th Street and Constitution Avenue... transparency through enhanced disclosures. \\1\\ 77 FR 53060 (August 30, 2012). The agencies' market risk rules... additional detail on this history in the preamble to the August 2012 final rule. See, 77 FR 53060,...

  6. 12 CFR 390.466 - Risk-based capital credit risk-weight categories.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 5 2014-01-01 2014-01-01 false Risk-based capital credit risk-weight categories. 390.466 Section 390.466 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION REGULATIONS AND STATEMENTS OF GENERAL POLICY REGULATIONS TRANSFERRED FROM THE OFFICE OF THRIFT SUPERVISION Capital § 390.466 Risk-based capital credit...

  7. Optimal Test Design with Rule-Based Item Generation

    ERIC Educational Resources Information Center

    Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.

    2013-01-01

    Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…

  8. Information fusion based optimal control for large civil aircraft system.

    PubMed

    Zhen, Ziyang; Jiang, Ju; Wang, Xinhua; Gao, Chen

    2015-03-01

    Wind disturbance has a great influence on the landing security of a Large Civil Aircraft. Through simulation research and engineering experience, it can be found that PID control is not good enough to solve the problem of restraining the wind disturbance. This paper focuses on anti-wind attitude control for Large Civil Aircraft in the landing phase. In order to improve the riding comfort and the flight security, an information fusion based optimal control strategy is presented to restrain the wind in the landing phase while maintaining attitudes and airspeed. Data of the Boeing 707 are used to establish a nonlinear model with total variables of a Large Civil Aircraft, and then two linear models are obtained, which are divided into longitudinal and lateral equations. Based on engineering experience, the longitudinal channel adopts PID control and C inner control to keep longitudinal attitude constant, and applies an autothrottle system to keep airspeed constant, while an information fusion based optimal regulator in the lateral control channel is designed to achieve lateral attitude holding. According to information fusion estimation, by fusing hard constraint information of the system dynamic equations and the soft constraint information of the performance index function, an optimal estimate of the control sequence is derived. Based on this, an information fusion state regulator is deduced for a discrete-time linear system with disturbance. The simulation results of the nonlinear aircraft model indicate that the information fusion optimal control is better than traditional PID control, LQR control and LQR control with integral action, in anti-wind disturbance performance in the landing phase. PMID:25440950

  9. Electrochemical model based charge optimization for lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Pramanik, Sourav; Anwar, Sohel

    2016-05-01

    In this paper, we propose the design of a novel optimal strategy for charging the lithium-ion battery based on an electrochemical battery model that is aimed at improved performance. A performance index that aims at minimizing the charging effort along with a minimum deviation from the rated maximum thresholds for cell temperature and charging current has been defined. The method proposed in this paper aims at achieving a faster charging rate while maintaining safe limits for various battery parameters. Safe operation of the battery is achieved by including the battery bulk temperature as a control component in the performance index, which is of critical importance for electric vehicles. Another important aspect of the performance objective proposed here is the efficiency of the algorithm that would allow higher charging rates without compromising the internal electrochemical kinetics of the battery, which would prevent abusive conditions, thereby improving the long term durability. A more realistic model, based on battery electrochemistry, has been used for the design of the optimal algorithm as opposed to the conventional equivalent circuit models. To solve the optimization problem, Pontryagin's principle has been used, which is very effective for constrained optimization problems with both state and input constraints. Simulation results show that the proposed optimal charging algorithm is capable of shortening the charging time of a lithium-ion cell while maintaining the temperature constraint when compared with standard constant-current charging. The designed method also maintains the internal states within limits that can avoid abusive operating conditions.

  10. 12 CFR 932.3 - Risk-based capital requirement.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Risk-based capital requirement. 932.3 Section 932.3 Banks and Banking FEDERAL HOUSING FINANCE BOARD FEDERAL HOME LOAN BANK RISK MANAGEMENT AND CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL REQUIREMENTS § 932.3 Risk-based capital requirement....

  11. 12 CFR 932.3 - Risk-based capital requirement.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Risk-based capital requirement. 932.3 Section 932.3 Banks and Banking FEDERAL HOUSING FINANCE BOARD FEDERAL HOME LOAN BANK RISK MANAGEMENT AND CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL REQUIREMENTS § 932.3 Risk-based capital requirement....

  12. 12 CFR 932.3 - Risk-based capital requirement.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 8 2014-01-01 2014-01-01 false Risk-based capital requirement. 932.3 Section 932.3 Banks and Banking FEDERAL HOUSING FINANCE BOARD FEDERAL HOME LOAN BANK RISK MANAGEMENT AND CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL REQUIREMENTS § 932.3 Risk-based capital requirement....

  13. 12 CFR 932.3 - Risk-based capital requirement.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 7 2011-01-01 2011-01-01 false Risk-based capital requirement. 932.3 Section 932.3 Banks and Banking FEDERAL HOUSING FINANCE BOARD FEDERAL HOME LOAN BANK RISK MANAGEMENT AND CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL REQUIREMENTS § 932.3 Risk-based capital requirement....

  14. 12 CFR 932.3 - Risk-based capital requirement.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Risk-based capital requirement. 932.3 Section 932.3 Banks and Banking FEDERAL HOUSING FINANCE BOARD FEDERAL HOME LOAN BANK RISK MANAGEMENT AND CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL REQUIREMENTS § 932.3 Risk-based capital requirement....

  15. Pixel-based ant colony algorithm for source mask optimization

    NASA Astrophysics Data System (ADS)

    Kuo, Hung-Fei; Wu, Wei-Chen; Li, Frederick

    2015-03-01

    Source mask optimization (SMO) was considered to be one of the key resolution enhancement techniques for node technology below 20 nm prior to the availability of extreme-ultraviolet tools. SMO has been shown to enlarge the process margins for the critical layer in SRAM and memory cells. In this study, a new illumination shape optimization approach was developed on the basis of the ant colony optimization (ACO) principle. The use of this heuristic pixel-based ACO method in the SMO process offers an advantage over existing SMO methods that rely on the gradient of the cost function, owing to the rapid and stable searching capability of the proposed method. This study was conducted to provide lithography engineers with references for the quick determination of the optimal illumination shape for complex mask patterns. The test pattern used in this study was a contact layer for an SRAM design, with a critical dimension and a minimum pitch of 55 and 110 nm, respectively. The optimized freeform source shape obtained using the ACO method was numerically verified by an aerial image investigation, which showed that the optimized freeform source shape generated an aerial image profile that deviated from the nominal image profile with an overall error rate of 9.64%. Furthermore, the overall average critical shape difference was determined to be 1.41, which was lower than that for the other off-axis illumination exposures. The process window results showed an improvement in exposure latitude (EL) and depth of focus (DOF) for the ACO-based freeform source shape compared with those of the Quasar source shape. The maximum EL of the ACO-based freeform source shape reached 7.4%, and the DOF was 56 nm at an EL of 5%.

  16. Pragmatic fluid optimization in high-risk surgery patients: when pragmatism dilutes the benefits.

    PubMed

    Reuter, Daniel A

    2012-01-01

    There is increasing evidence that hemodynamic optimization by fluid loading, particularly when performed in the early phase of surgery, is beneficial in high-risk surgery patients: it leads to a reduction in postoperative complications and even to improved long-term outcome. However, it is also true that goal-directed strategies of fluid optimization focusing on cardiac output optimization have not been applied in the clinical routine of many institutions. Reasons are manifold: disbelief in the level of evidence and in the accuracy and practicability of the required monitoring systems, and economics. The FOCCUS trial examined perioperative fluid optimization with a very basic approach: a standardized volume load with 25 ml/kg crystalloids over 6 hours immediately prior to scheduled surgery in high-risk patients. The hypothesis was that this intervention would compensate for the preoperative fluid deficit caused by overnight fasting and would result in improved perioperative fluid homeostasis with fewer postoperative complications and earlier hospital discharge. However, the primary study endpoints did not improve significantly. This observation points towards the facts that: firstly, the differentiation between interstitial fluid deficit caused by fasting and intravascular volume loss due to acute blood loss must be recognized in treatment strategies; secondly, the type of fluid replacement may play an important role; and thirdly, protocolized treatment strategies should also always be tailored to suit the patients' individual needs in every individual clinical situation. PMID:22410167

  17. Parallel Harmony Search Based Distributed Energy Resource Optimization

    SciTech Connect

    Ceylan, Oguzhan; Liu, Guodong; Tomsovic, Kevin

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three-phase unbalanced electrical distribution systems and to maximize the active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile over the course of a day as photovoltaic (PV) output or electric vehicle (EV) charging changes. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that, by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution system operation.
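
    The sketch below shows a generic, serial harmony search loop of the kind the abstract builds on, minimizing a placeholder objective. The unbalanced three-phase power flow, the DR/EV models, and the parallelization are not reproduced here, and all parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)

        def objective(x):
            # Placeholder for a voltage-deviation cost; the paper evaluates an
            # unbalanced three-phase power flow, which is not reproduced here.
            return np.sum((x - 0.5)**2)

        dim, hms, hmcr, par, bw, iters = 5, 20, 0.9, 0.3, 0.05, 2000
        lower, upper = 0.0, 1.0

        # Initialize the harmony memory with random candidate solutions.
        memory = rng.uniform(lower, upper, (hms, dim))
        fitness = np.array([objective(h) for h in memory])

        for _ in range(iters):
            new = np.empty(dim)
            for d in range(dim):
                if rng.random() < hmcr:                    # memory consideration
                    new[d] = memory[rng.integers(hms), d]
                    if rng.random() < par:                 # pitch adjustment
                        new[d] += bw * rng.uniform(-1, 1)
                else:                                      # random selection
                    new[d] = rng.uniform(lower, upper)
            new = np.clip(new, lower, upper)
            f = objective(new)
            worst = np.argmax(fitness)
            if f < fitness[worst]:                         # replace the worst harmony
                memory[worst], fitness[worst] = new, f

        print("best cost found:", fitness.min())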

  18. Optimization of Polarimetric Contrast Enhancement Based on Fisher Criterion

    NASA Astrophysics Data System (ADS)

    Deng, Qiming; Chen, Jiong; Yang, Jian

    The optimization of polarimetric contrast enhancement (OPCE) is a widely used method for maximizing the received power ratio of a desired target versus an undesired target (clutter). In this letter, a new model of the OPCE is proposed based on the Fisher criterion. By introducing the well known two-class problem of linear discriminant analysis (LDA), the proposed model is to enlarge the normalized distance of mean value between the target and the clutter. In addition, a cross-iterative numerical method is proposed for solving the optimization with a quadratic constraint. Experimental results with the polarimetric SAR (POLSAR) data demonstrate the effectiveness of the proposed method.
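
    As a rough illustration of the Fisher-criterion idea invoked above, the sketch below maximizes the two-class Fisher ratio by solving a generalized eigenproblem on synthetic "target" and "clutter" samples. It is not the paper's polarimetric formulation or its cross-iterative solver with a quadratic constraint; the data and dimensions are invented.

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(2)

        # Synthetic "target" and "clutter" feature samples; stand-ins for the
        # polarimetric quantities used in the paper.
        target = rng.normal([2.0, 1.0, 0.5], 0.3, (200, 3))
        clutter = rng.normal([0.5, 0.8, 1.5], 0.6, (200, 3))

        mt, mc = target.mean(axis=0), clutter.mean(axis=0)
        Sw = np.cov(target.T) + np.cov(clutter.T)        # within-class scatter
        Sb = np.outer(mt - mc, mt - mc)                  # between-class scatter

        # The Fisher criterion J(w) = (w^T Sb w) / (w^T Sw w) is maximized by the
        # leading generalized eigenvector of the pair (Sb, Sw).
        vals, vecs = eigh(Sb, Sw)
        w = vecs[:, -1]
        print("Fisher ratio:", (w @ Sb @ w) / (w @ Sw @ w))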

  19. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. The solution to the stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traces out an inverted-S-shaped graph. The center of the inverted-S graph corresponds to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be chosen differently for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas the reliability can be relaxed somewhat for a raked wingtip. The SDO capability is obtained by combining three codes: (1) the MSC/Nastran code was the deterministic analysis tool, (2) the fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.

  20. Improving Discrete-Sensitivity-Based Approach for Practical Design Optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Cordero, Yvette; Pandya, Mohagna J.

    1997-01-01

    In developing automated methodologies for simulation-based optimal shape design, their accuracy, efficiency, and practicality are the defining factors of their success. To that end, four recent improvements to the building blocks of such a methodology, intended for more practical design optimization, are reported. First, in addition to a polynomial-based parameterization, a partial differential equation (PDE) based parameterization was shown to be a practical tool for a number of reasons. Second, an alternative was incorporated for one of the tedious phases of developing such a methodology, namely, the automatic differentiation of the computer code for the flow analysis in order to generate the sensitivities. Third, by extending the methodology to thin-layer Navier-Stokes (TLNS) based flow simulations, more accurate flow physics was made available. However, the computer storage requirement for shape optimization of a practical configuration with higher-fidelity simulations (TLNS and dense-grid based simulations) demanded substantial computational resources. Therefore, the final improvement reported herein responded to this point by including an alternating-direction-implicit (ADI) based system solver as an alternative to the preconditioned biconjugate gradient (PbCG) and other direct solvers.

  1. SADA: Ecological Risk Based Decision Support System for Selective Remediation

    EPA Science Inventory

    Spatial Analysis and Decision Assistance (SADA) is freeware that implements terrestrial ecological risk assessment and yields a selective remediation design using its integral geographical information system, based on ecological and risk assessment inputs. Selective remediation ...

  2. Optimization of image-based aberration metrology for EUV lithography

    NASA Astrophysics Data System (ADS)

    Levinson, Zac; Fenger, Germain; Burbine, Andrew; Schepis, Anthony R.; Smith, Bruce W.

    2014-04-01

    EUV lithography is likely more sensitive to drift from thermal and degradation effects than its optical counterparts. We have developed an automated approach to photoresist image-based aberration metrology. The approach uses binary or phase mask targets and iterative simulation-based solutions to retrieve an aberrated pupil function. It is well known that a partially coherent source both allows the diffraction information of smaller features to be collected by the condenser system and introduces pupil averaging. In general, smaller features are more sensitive to aberrations than larger features, so there is a trade-off between target sensitivity and printability. Therefore, metrology targets using this technique must be optimized for maximum sensitivity with each illumination system. This study examines aberration metrology target optimization and suggests an optimization scheme for use with any source. Interrogation of both low- and high-order aberrations is considered. High-order aberration terms are interrogated using two separate fitting algorithms. While the optimized targets do show the lowest RMS error under the test conditions, a desirable RMS error is not achieved by either high-order interrogation scheme. The implementation of a previously developed algorithm for image-based aberration metrology is used to support this work.

  3. Reliability-based analysis and design optimization for durability

    NASA Astrophysics Data System (ADS)

    Choi, Kyung K.; Youn, Byeng D.; Tang, Jun; Hardee, Edward

    2005-05-01

    In Army mechanical systems, fatigue caused by external and inertial transient loads over the service life often leads to structural failure due to accumulated damage. Structural durability analysis, which predicts the fatigue life of mechanical components subject to dynamic stresses and strains, is a compute-intensive multidisciplinary simulation process, since it requires the integration of several computer-aided engineering tools and considerable data communication and computation. Uncertainties in geometric dimensions due to manufacturing tolerances make the fatigue life of a mechanical component nondeterministic. Because uncertainty propagation to structural fatigue under transient dynamic loading is not only numerically complicated but also extremely computationally expensive, it is a challenging task to develop a structural durability-based design optimization process and a reliability analysis to ascertain whether the optimal design is reliable. The objective of this paper is the demonstration of an integrated CAD-based computer-aided engineering process to effectively carry out design optimization for structural durability, yielding a durable and cost-effectively manufacturable product. This paper shows preliminary results of reliability-based durability design optimization for the Army Stryker A-Arm.

  4. Bare-Bones Teaching-Learning-Based Optimization

    PubMed Central

    Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye

    2014-01-01

    The teaching-learning-based optimization (TLBO) algorithm, which simulates the teaching-learning process of the classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, which is a hybridization of the learning strategy of the teacher phase in standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner in the learner phase employs either the learning strategy of the learner phase in standard TLBO or the new neighborhood search strategy. To verify the performance of our approach, 20 benchmark functions and two real-world problems are utilized. The conducted experiments show that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms. PMID:25013844
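
    For readers unfamiliar with TLBO, the sketch below implements the standard teacher and learner phases on a benchmark function. The bare-bones Gaussian sampling and neighborhood-search modifications described in the abstract are not included, and the population size and iteration count are arbitrary.

        import numpy as np

        rng = np.random.default_rng(3)

        def sphere(x):
            return np.sum(x**2)

        pop_size, dim, iters = 20, 5, 200
        lower, upper = -5.0, 5.0
        pop = rng.uniform(lower, upper, (pop_size, dim))
        fit = np.array([sphere(p) for p in pop])

        for _ in range(iters):
            # Teacher phase: move learners toward the best solution (the "teacher").
            teacher = pop[np.argmin(fit)]
            mean = pop.mean(axis=0)
            tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
            for i in range(pop_size):
                cand = np.clip(pop[i] + rng.random(dim) * (teacher - tf * mean), lower, upper)
                f = sphere(cand)
                if f < fit[i]:
                    pop[i], fit[i] = cand, f
            # Learner phase: learn from a randomly chosen peer.
            for i in range(pop_size):
                j = rng.integers(pop_size)
                if j == i:
                    continue
                direction = pop[j] - pop[i] if fit[j] < fit[i] else pop[i] - pop[j]
                cand = np.clip(pop[i] + rng.random(dim) * direction, lower, upper)
                f = sphere(cand)
                if f < fit[i]:
                    pop[i], fit[i] = cand, f

        print("best value:", fit.min())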

  5. Multiobjective inverse planning for intensity modulated radiotherapy with constraint-free gradient-based optimization algorithms

    NASA Astrophysics Data System (ADS)

    Lahanas, Michael; Schreibmann, Eduard; Baltas, Dimos

    2003-09-01

    We consider the behaviour of the limited memory L-BFGS algorithm as a representative constraint-free gradient-based algorithm which is used for multiobjective (MO) dose optimization for intensity modulated radiotherapy (IMRT). Using a parameter transformation, the positivity constraint problem of negative beam fluences is entirely eliminated: a feature which to date has not been fully understood by all investigators. We analyse the global convergence properties of L-BFGS by searching for the existence and the influence of possible local minima. With a fast simulated annealing (FSA) algorithm we examine whether the L-BFGS solutions are globally Pareto optimal. The three examples used in our analysis are a brain tumour, a prostate tumour and a test case with a C-shaped PTV. In 1% of the optimizations global convergence is violated. A simple mechanism practically eliminates the influence of this failure and the obtained solutions are globally optimal. A single-objective dose optimization requires less than 4 s for 5400 parameters and 40 000 sampling points. The elimination of the problem of negative beam fluences and the high computational speed permit constraint-free gradient-based optimization algorithms to be used for MO dose optimization. In this situation, a representative spectrum of possible solutions is obtained which contains information such as the trade-off between the objectives and range of dose values. Using simple decision making tools the best of all the possible solutions can be chosen. We perform an MO dose optimization for the three examples and compare the spectra of solutions, firstly using recommended critical dose values for the organs at risk and secondly, setting these dose values to zero.
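
    The key trick mentioned above, removing the positivity constraint on beam fluences by a parameter transformation, can be illustrated on a toy least-squares dose problem: optimize over w with x = w² so that any unconstrained quasi-Newton step keeps fluences non-negative. The dose matrix below is a random placeholder, not clinical data, and SciPy's L-BFGS-B routine is used in place of the authors' L-BFGS implementation.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)

        # Toy dose model: dose d = D @ x, with beamlet fluences x required to be >= 0.
        n_vox, n_beamlets = 50, 20
        D = rng.uniform(0.0, 1.0, (n_vox, n_beamlets))   # random placeholder dose matrix
        d_presc = np.full(n_vox, 1.0)                    # prescribed dose per voxel

        def objective_w(w):
            x = w**2                                     # x = w^2 removes the x >= 0 constraint
            r = D @ x - d_presc
            return r @ r

        def gradient_w(w):
            x = w**2
            r = D @ x - d_presc
            return 4.0 * w * (D.T @ r)                   # chain rule through x = w^2

        w0 = np.sqrt(np.full(n_beamlets, 0.1))
        res = minimize(objective_w, w0, jac=gradient_w, method="L-BFGS-B")
        x_opt = res.x**2
        print("objective:", res.fun, "minimum fluence:", x_opt.min())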

  6. The Integrated Medical Model - Optimizing In-flight Space Medical Systems to Reduce Crew Health Risk and Mission Impacts

    NASA Technical Reports Server (NTRS)

    Kerstman, Eric; Walton, Marlei; Minard, Charles; Saile, Lynn; Myers, Jerry; Butler, Doug; Lyengar, Sriram; Fitts, Mary; Johnson-Throop, Kathy

    2009-01-01

    The Integrated Medical Model (IMM) is a decision support tool used by medical system planners and designers as they prepare for exploration planning activities of the Constellation program (CxP). IMM provides an evidence-based approach to help optimize the allocation of in-flight medical resources for a specified level of risk within spacecraft operational constraints. Eighty medical conditions and associated resources are represented in IMM. Nine conditions are due to Space Adaptation Syndrome. The IMM helps answer fundamental medical mission planning questions such as "What medical conditions can be expected?", "What type and quantity of medical resources are most likely to be used?", and "What is the probability of crew death or evacuation due to medical events?" For a specified mission and crew profile, the IMM effectively characterizes the sequence of events that could potentially occur should a medical condition happen. The mathematical relationships among mission and crew attributes, medical conditions and incidence data, in-flight medical resources, potential clinical and crew health end states are established to generate end state probabilities. A Monte Carlo computational method is used to determine the probable outcomes and requires up to 25,000 mission trials to reach convergence. For each mission trial, the pharmaceuticals and supplies required to diagnose and treat prevalent medical conditions are tracked and decremented. The uncertainty of patient response to treatment is bounded via a best-case, worst-case, untreated case algorithm. A Crew Health Index (CHI) metric, developed to account for functional impairment due to a medical condition, provides a quantified measure of risk and enables risk comparisons across mission scenarios. The use of historical in-flight medical data, terrestrial surrogate data as appropriate, and space medicine subject matter expertise has enabled the development of a probabilistic, stochastic decision support tool capable of
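
    A minimal Monte Carlo trial loop in the spirit described above is sketched below: invented condition incidences, supply usage, and evacuation probabilities are sampled per crew member, supplies are decremented when a condition is treated, and the fraction of trials ending in evacuation is reported. None of the numbers are IMM data.

        import numpy as np

        rng = np.random.default_rng(5)

        # Hypothetical per-person incidence probabilities, supply usage, and evacuation
        # probabilities if untreated; illustrative numbers only, not IMM data.
        conditions = {
            "space motion sickness":   {"p": 0.60, "units": 2, "p_evac_untreated": 0.00},
            "urinary tract infection": {"p": 0.10, "units": 3, "p_evac_untreated": 0.05},
            "kidney stone":            {"p": 0.02, "units": 5, "p_evac_untreated": 0.30},
        }
        supply_stock, crew, trials = 20, 4, 25000

        evacuations = 0
        for _ in range(trials):
            stock, evac = supply_stock, False
            for cond in conditions.values():
                for _ in range(crew):
                    if rng.random() < cond["p"]:
                        if stock >= cond["units"]:
                            stock -= cond["units"]          # treated: decrement supplies
                        elif rng.random() < cond["p_evac_untreated"]:
                            evac = True                     # untreated case may force evacuation
            if evac:
                evacuations += 1

        print("estimated evacuation probability:", evacuations / trials)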

  7. On the Integration of Risk Aversion and Average-Performance Optimization in Reservoir Control

    NASA Astrophysics Data System (ADS)

    Nardini, Andrea; Piccardi, Carlo; Soncini-Sessa, Rodolfo

    1992-02-01

    The real-time operation of a reservoir is a matter of trade-off between the two criteria of risk aversion (to avoid dramatic failures) and average-performance optimization (to yield the best long-term average performance). A methodology taking into account both criteria is presented in this paper to derive "off-line" infinite-horizon control policies for a single multipurpose reservoir, where the management goals are water supply and flood control. According to this methodology, the reservoir control policy is derived in two steps: First, a (min-max) risk aversion problem is formulated, whose solution is not unique, but rather a whole set of policies, all equivalent from the point of view of the risk-aversion objectives. Second, a stochastic average-performance optimization problem is solved, to select from the set previously obtained the best policy from the point of view of the average-performance objectives. The methodology has several interesting features: the min-max (or "guaranteed performance") approach, which is particularly suited whenever "weak" users are affected by the consequences of the decision-making process; the flexible definition of a "risk aversion degree," by the selection of those inflow sequences which are particularly feared; and the two-objective analysis which provides the manager with a whole set of alternatives among which he (she) will select the one that yields the desired trade-off between the management goals.

  8. Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources

    SciTech Connect

    Huang, Weihong; Sun, Kai; Qi, Junjian; Xu, Yan

    2015-01-01

    Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimizing the sizes of dynamic var sources at candidate locations by a Voronoi diagram based algorithm. It first disperses sample points of potential solutions in the search space, evaluates a cost function at each point by barycentric interpolation over the subspaces around the point, and then constructs a Voronoi diagram of cost function values over the entire space. Accordingly, the final optimal solution can be obtained. Case studies on the WSCC 9-bus system and the NPCC 140-bus system have validated that the new approach can quickly identify the boundary of feasible solutions in the search space and converge to the global optimal solution.

  9. Vision-based stereo ranging as an optimal control problem

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Sridhar, B.; Chatterji, G. B.

    1992-01-01

    The recent interest in the use of machine vision for flight vehicle guidance is motivated by the need to automate the nap-of-the-earth flight regime of helicopters. The vision-based stereo ranging problem is cast as an optimal control problem in this paper. A quadratic performance index, consisting of the integral of the error between observed image irradiances and those predicted by a Pade approximation of the correspondence hypothesis, is then used to define an optimization problem. The necessary conditions for optimality yield a set of linear two-point boundary-value problems. These two-point boundary-value problems are solved in feedback form using a version of the backward sweep method. Application of the ranging algorithm is illustrated using a laboratory image pair.

  10. A Danger-Theory-Based Immune Network Optimization Algorithm

    PubMed Central

    Li, Tao; Xiao, Xin; Shi, Yuanquan

    2013-01-01

    Existing artificial immune optimization algorithms exhibit a number of shortcomings, such as premature convergence and poor local search ability. This paper proposes a danger-theory-based immune network optimization algorithm, named dt-aiNet. The danger theory emphasizes that danger signals generated by changes in the environment will guide different levels of immune response, and the areas around danger signals are called danger zones. By defining the danger zone to calculate danger signals for each antibody, the algorithm adjusts antibodies' concentrations through their own danger signals and then triggers immune responses of self-regulation, so that population diversity can be maintained. Experimental results show that the algorithm has advantages in solution quality and population diversity. Compared with the influential optimization algorithms CLONALG, opt-aiNet, and dopt-aiNet, the algorithm has smaller error values and higher success rates and can find solutions that meet the required accuracies within the specified number of function evaluations. PMID:23483853

  11. Reentry trajectory optimization based on a multistage pseudospectral method.

    PubMed

    Zhao, Jiang; Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with the unexpected situations in reentry flight. The strategy typically includes two subproblems: the trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization. PMID:24574929

  12. Reentry Trajectory Optimization Based on a Multistage Pseudospectral Method

    PubMed Central

    Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with the unexpected situations in reentry flight. The strategy typically includes two subproblems: the trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization. PMID:24574929

  13. Similarity-based global optimization of buildings in urban scene

    NASA Astrophysics Data System (ADS)

    Zhu, Quansheng; Zhang, Jing; Jiang, Wanshou

    2013-10-01

    In this paper, an approach for the similarity-based global optimization of buildings in urban scenes is presented. In the past, most research concentrated on single building reconstruction, making it difficult to reconstruct reliable models from noisy or incomplete point clouds. To obtain a better result, a new trend is to utilize the similarity among buildings. Therefore, a new similarity detection and global optimization strategy is adopted to correct local-fitting geometric errors. Firstly, a hierarchical structure that consists of geometric, topological and semantic features is constructed to represent complex roof models. Secondly, similar roof models can be detected by combining primitive structure and connection similarities. Finally, the global optimization strategy is applied to preserve the consistency and precision of similar roof structures. Moreover, non-local consolidation is adapted to detect small roof parts. The experiments reveal that the proposed method can obtain convincing roof models and improve the reconstruction quality of 3D buildings in urban scenes.

  14. Level set based structural topology optimization for minimizing frequency response

    NASA Astrophysics Data System (ADS)

    Shu, Lei; Wang, Michael Yu; Fang, Zongde; Ma, Zhengdong; Wei, Peng

    2011-11-01

    For the purpose of structure vibration reduction, a structural topology optimization for minimizing frequency response is proposed based on the level set method. The objective of the present study is to minimize the frequency response at the specified points or surfaces on the structure with an excitation frequency or a frequency range, subject to the given amount of the material over the admissible design domain. The sensitivity analysis with respect to the structural boundaries is carried out, while the Extended finite element method (X-FEM) is employed for solving the state equation and the adjoint equation. The optimal structure with smooth boundaries is obtained by the level set evolution with advection velocity, derived from the sensitivity analysis and the optimization algorithm. A number of numerical examples, in the frameworks of two-dimension (2D) and three-dimension (3D), are presented to demonstrate the feasibility and effectiveness of the proposed approach.

  15. Optimal high speed CMOS inverter design using craziness based Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    De, Bishnu P.; Kar, Rajib; Mandal, Durbadal; Ghoshal, Sakti P.

    2015-07-01

    The inverter is the most fundamental logic gate, performing a Boolean operation on a single input variable. In this paper, an optimal design of a CMOS inverter using an improved version of the particle swarm optimization technique called craziness based particle swarm optimization (CRPSO) is proposed. CRPSO is simple in concept, easy to implement, and computationally efficient, with two main advantages: it has fast, near-global convergence, and it uses reasonably robust control parameters. The performance of PSO depends on its control parameters and may be affected by premature convergence and stagnation problems. To overcome these problems, the PSO algorithm has been modified into CRPSO in this paper and is used for CMOS inverter design. In birds' flocking or fish schooling, a bird or a fish often changes direction suddenly. In the proposed technique, this sudden change of velocity is modelled by a direction reversal factor associated with the previous velocity and a "craziness" velocity factor associated with another direction reversal factor. The second condition is introduced, depending on a predefined craziness probability, to maintain the diversity of the particles. The performance of CRPSO is compared with the real-coded genetic algorithm (RGA) and conventional PSO reported in the recent literature. CRPSO based design results are also compared with PSPICE based results. The simulation results show that CRPSO is superior to the other algorithms for the examples considered and can be efficiently used for CMOS inverter design.
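
    The velocity update with occasional direction reversal and a "craziness" perturbation can be sketched as below on a standard test function. This is a generic PSO variant written in that spirit, not the exact CRPSO update or the CMOS inverter sizing problem, and all constants are illustrative.

        import numpy as np

        rng = np.random.default_rng(6)

        def rastrigin(x):
            return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        n, dim, iters = 30, 4, 300
        w, c1, c2 = 0.7, 1.5, 1.5
        p_rev, p_craz, v_craz = 0.05, 0.05, 0.1   # reversal/craziness probabilities, craziness velocity

        pos = rng.uniform(-5.12, 5.12, (n, dim))
        vel = rng.uniform(-1.0, 1.0, (n, dim))
        pbest, pbest_f = pos.copy(), np.array([rastrigin(p) for p in pos])
        gbest = pbest[np.argmin(pbest_f)].copy()

        for _ in range(iters):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            reverse = np.where(rng.random((n, 1)) < p_rev, -1.0, 1.0)   # occasional direction reversal
            vel = reverse * (w * vel) + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            # Craziness: with small probability add a random velocity component so the
            # swarm keeps some diversity (sudden direction changes in a flock).
            crazy = rng.random((n, 1)) < p_craz
            vel = vel + crazy * v_craz * rng.uniform(-1.0, 1.0, (n, dim))
            pos = pos + vel
            f = np.array([rastrigin(p) for p in pos])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
            gbest = pbest[np.argmin(pbest_f)].copy()

        print("best value found:", pbest_f.min())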

  16. Optimal management of asymptomatic workers at high risk of bladder cancer.

    PubMed

    Schulte, P A; Ringen, K; Hemstreet, G P

    1986-01-01

    Many cohorts of industrial workers at increased risk of occupationally induced bladder cancer are still in the preclinical disease stage. A large proportion of workers in these populations have been exposed to aromatic amines but have not yet passed the average latent period for bladder cancer. A need exists to define what constitutes optimal management for asymptomatic workers in these cohorts. Promising advances in the epidemiology, pathology, detection, and treatment of bladder cancer argue for a reassessment of current practices and the application of the most current scientific knowledge. Some of these apparent advances, however, have not yet been rigorously evaluated. The time has come to evaluate these advances so that their application can occur while high-risk cohorts are still amenable to and likely to benefit from intervention. This commentary calls for such an evaluation, leading to a comprehensive approach to managing cohorts at high risk of bladder cancer. PMID:3950777

  17. Modification of species-based differential evolution for multimodal optimization

    NASA Astrophysics Data System (ADS)

    Idrus, Said Iskandar Al; Syahputra, Hermawan; Firdaus, Muliawan

    2015-12-01

    Optimization now plays an important role in various fields, among them operations research, industry, finance, and management. An optimization problem is the problem of maximizing or minimizing a function of one or many variables; such functions include unimodal and multimodal functions. Differential Evolution (DE) is a random search technique that uses vectors as alternative solutions in the search for the optimum. To locate all local maxima and minima of a multimodal function, the function domain can be divided into several fitness regions using a niching method. The species-based niching method is one method that builds sub-populations, or species, within the function domain. This paper describes a modification of the earlier species-based approach that reduces the computational complexity and runs more efficiently. Results on the test functions show that the modified species-based method is able to locate all the local optima in a single run of the program.
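
    A compact one-dimensional illustration of species-based niching in DE is sketched below: species seeds are identified by a radius rule, each individual is mutated around its species seed, and the surviving seeds approximate the set of local optima. It is only a toy version of the idea, not the specific modification proposed in the paper; the test function, radius, and DE constants are arbitrary.

        import numpy as np

        rng = np.random.default_rng(7)

        def negative_peaks(x):
            # Minimization form of a 1-D multimodal test function with five equal optima.
            return -np.sin(5 * np.pi * x)**6

        pop_size, F, CR, radius, gens = 50, 0.5, 0.9, 0.1, 150
        pop = rng.uniform(0, 1, pop_size)
        fit = negative_peaks(pop)

        def species_seeds(pop, fit, radius):
            # Best-ranked individuals become seeds if no better seed lies within the radius.
            seeds = []
            for i in np.argsort(fit):
                if all(abs(pop[i] - pop[s]) > radius for s in seeds):
                    seeds.append(i)
            return seeds

        for _ in range(gens):
            seeds = species_seeds(pop, fit, radius)
            seed_pos = pop[seeds]
            # Assign each individual to its nearest seed (its species).
            species_of = np.array([np.argmin(abs(x - seed_pos)) for x in pop])
            for i in range(pop_size):
                members = np.where(species_of == species_of[i])[0]
                source = members if len(members) >= 3 else np.arange(pop_size)
                a, b = rng.choice(source, 2, replace=False)
                base = pop[seeds[species_of[i]]]            # species seed as base vector
                mutant = base + F * (pop[a] - pop[b])
                trial = mutant if rng.random() < CR else pop[i]
                trial = float(np.clip(trial, 0, 1))
                tf = negative_peaks(trial)
                if tf <= fit[i]:
                    pop[i], fit[i] = trial, tf

        print("located optima (species seeds):", np.sort(pop[species_seeds(pop, fit, radius)]))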

  18. CACONET: Ant Colony Optimization (ACO) Based Clustering Algorithm for VANET

    PubMed Central

    Bajwa, Khalid Bashir; Khan, Salabat; Chaudary, Nadeem Majeed; Akram, Adeel

    2016-01-01

    A vehicular ad hoc network (VANET) is a wirelessly connected network of vehicular nodes. A number of techniques, such as message ferrying, data aggregation, and vehicular node clustering, aim to improve communication efficiency in VANETs. Cluster heads (CHs), selected in the process of clustering, manage inter-cluster and intra-cluster communication. The lifetime of clusters and the number of CHs determine the efficiency of the network. In this paper a Clustering algorithm based on Ant Colony Optimization (ACO) for VANETs (CACONET) is proposed. CACONET forms optimized clusters for robust communication. CACONET is compared empirically with state-of-the-art baseline techniques like Multi-Objective Particle Swarm Optimization (MOPSO) and Comprehensive Learning Particle Swarm Optimization (CLPSO). Experiments varying the grid size of the network, the transmission range of nodes, and the number of nodes in the network were performed to evaluate the comparative effectiveness of these algorithms. For optimized clustering, the parameters considered are the transmission range, direction, and speed of the nodes. The results indicate that CACONET significantly outperforms MOPSO and CLPSO. PMID:27149517

  19. Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao

    2016-07-01

    Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as a balanced robot performance, a uniform scanning trajectory, and stable parameters (e.g., nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in obtaining an optimal coating quality. In this study, the robot performance was optimized from the aspect of nozzle mounting on the robot. An optimized nozzle mounting for a type F4 nozzle was designed on the basis of the conventional mounting method, from the point of view of robot kinematics, and validated on a virtual robot. Robot kinematic parameters were obtained from simulation using offline programming software and analyzed by statistical methods. The energy consumption of the different nozzle mounting methods was also compared. The results showed that it was possible to reasonably distribute the amount of robot motion among the axes during the process, thus achieving a constant nozzle speed. In this way, it is possible to optimize robot performance and to economize robot energy.

  20. Optimization Model for Web Based Multimodal Interactive Simulations

    PubMed Central

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-01-01

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach. PMID:26085713

  1. Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao

    2016-08-01

    Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as a balanced robot performance, a uniform scanning trajectory, and stable parameters (e.g., nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in obtaining an optimal coating quality. In this study, the robot performance was optimized from the aspect of nozzle mounting on the robot. An optimized nozzle mounting for a type F4 nozzle was designed on the basis of the conventional mounting method, from the point of view of robot kinematics, and validated on a virtual robot. Robot kinematic parameters were obtained from simulation using offline programming software and analyzed by statistical methods. The energy consumption of the different nozzle mounting methods was also compared. The results showed that it was possible to reasonably distribute the amount of robot motion among the axes during the process, thus achieving a constant nozzle speed. In this way, it is possible to optimize robot performance and to economize robot energy.

  2. Component-based integration of chemistry and optimization software.

    PubMed

    Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L

    2004-11-15

    Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform. PMID:15362128

  3. Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin; Cheng, Runwei

    Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network. For example, cost and flow measures are both important in networks. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem, which is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve it can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach that uses a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto optimal solutions that give the maximum possible flow with minimum cost. This paper also incorporates the Adaptive Weight Approach (AWA), which utilizes information from the current population to readjust weights and obtain search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems demonstrate the effectiveness of the proposed method. A small decoding example for the priority-based chromosome is sketched below.
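
    The central encoding idea, a priority-based chromosome, can be illustrated with the small decoder below: each gene is a node priority, and a path is grown from source to sink by always moving to the unvisited successor with the highest priority. The graph, priorities, and single-objective cost are invented for illustration; the GA loop, the flow models, and the adaptive weighting are omitted.

        import numpy as np

        rng = np.random.default_rng(8)

        # Small directed network: adjacency list of (successor, cost) pairs (invented).
        graph = {
            0: [(1, 4.0), (2, 2.0)],
            1: [(3, 5.0)],
            2: [(1, 1.0), (3, 8.0)],
            3: [],
        }
        source, sink, n_nodes = 0, 3, 4

        def decode(priorities):
            # Grow a path from source to sink: at each node, move to the unvisited
            # successor with the highest priority value in the chromosome.
            path, node, visited = [source], source, {source}
            while node != sink:
                candidates = [s for s, _ in graph[node] if s not in visited]
                if not candidates:
                    return None, float("inf")              # dead end: infeasible chromosome
                node = max(candidates, key=lambda s: priorities[s])
                path.append(node)
                visited.add(node)
            cost = sum(dict(graph[u])[v] for u, v in zip(path, path[1:]))
            return path, cost

        # One random chromosome (a permutation of node priorities) and its decoded path.
        chromosome = rng.permutation(n_nodes) + 1
        path, cost = decode(chromosome)
        print("chromosome:", chromosome, "path:", path, "cost:", cost)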

  4. CACONET: Ant Colony Optimization (ACO) Based Clustering Algorithm for VANET.

    PubMed

    Aadil, Farhan; Bajwa, Khalid Bashir; Khan, Salabat; Chaudary, Nadeem Majeed; Akram, Adeel

    2016-01-01

    A vehicular ad hoc network (VANET) is a wirelessly connected network of vehicular nodes. A number of techniques, such as message ferrying, data aggregation, and vehicular node clustering, aim to improve communication efficiency in VANETs. Cluster heads (CHs), selected in the process of clustering, manage inter-cluster and intra-cluster communication. The lifetime of clusters and the number of CHs determine the efficiency of the network. In this paper a Clustering algorithm based on Ant Colony Optimization (ACO) for VANETs (CACONET) is proposed. CACONET forms optimized clusters for robust communication. CACONET is compared empirically with state-of-the-art baseline techniques like Multi-Objective Particle Swarm Optimization (MOPSO) and Comprehensive Learning Particle Swarm Optimization (CLPSO). Experiments varying the grid size of the network, the transmission range of nodes, and the number of nodes in the network were performed to evaluate the comparative effectiveness of these algorithms. For optimized clustering, the parameters considered are the transmission range, direction, and speed of the nodes. The results indicate that CACONET significantly outperforms MOPSO and CLPSO. PMID:27149517

  5. Optimal assignment methods for ligand-based virtual screening

    PubMed Central

    2009-01-01

    Background Ligand-based virtual screening experiments are an important task in the early drug discovery stage. An ambitious aim in each experiment is to disclose active structures based on new scaffolds. To perform these "scaffold-hoppings" for individual problems and targets, a plethora of different similarity methods based on diverse techniques have been published in recent years. The optimal assignment approach on molecular graphs, a successful method in the field of quantitative structure-activity relationships, has not been tested as a ligand-based virtual screening method so far. Results We evaluated two already published and two new optimal assignment methods on various data sets. To emphasize the "scaffold-hopping" ability, we used the information from chemotype clustering analyses in our evaluation metrics. Comparisons with literature results show an improved early recognition performance and comparable results over the complete data set. A new method based on two different assignment steps shows increased "scaffold-hopping" behavior together with good early recognition performance. Conclusion The presented methods show a good combination of chemotype discovery and enrichment of active structures. Additionally, the optimal assignment on molecular graphs has the advantage that the mappings can be investigated and interpreted, allowing precise modifications of internal parameters of the similarity measure for specific targets. All methods have low computation times, which makes them applicable to screening large data sets. PMID:20150995
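
    A minimal sketch of the optimal-assignment idea is given below: atoms of two molecules are matched one-to-one so that total pairwise similarity is maximal, using the Hungarian algorithm from SciPy. The toy feature vectors and the RBF similarity are stand-ins for the descriptors and kernels used in the paper.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # Toy atom feature vectors for two molecules (e.g. element, charge, degree);
        # real optimal-assignment kernels use much richer local descriptors.
        mol_a = np.array([[6, 0.10, 3], [8, -0.40, 1], [6, 0.00, 2], [7, -0.30, 2]], float)
        mol_b = np.array([[6, 0.05, 3], [7, -0.35, 2], [8, -0.42, 1]], float)

        # Pairwise atom similarity via an RBF kernel on the feature vectors.
        diff = mol_a[:, None, :] - mol_b[None, :, :]
        sim = np.exp(-np.sum(diff**2, axis=2) / 10.0)

        # The optimal assignment maximizes total similarity; linear_sum_assignment
        # minimizes cost, so the similarity matrix is negated.
        rows, cols = linear_sum_assignment(-sim)
        score = sim[rows, cols].sum() / max(len(mol_a), len(mol_b))   # normalized similarity
        print("matched atom pairs:", list(zip(rows.tolist(), cols.tolist())))
        print("optimal-assignment similarity:", round(score, 3))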

  6. Risk perception, risk evaluation and human values: cognitive bases of acceptability of a radioactive waste repository

    SciTech Connect

    Earle, T.C.; Lindell, M.K.; Rankin, W.L.

    1981-07-01

    Public acceptance of radioactive waste management alternatives depends in part on public perception of the associated risks. Three aspects of those perceived risks were explored in this study: (1) synthetic measures of risk perception based on judgments of probability and consequences; (2) acceptability of hypothetical radioactive waste policies, and (3) effects of human values on risk perception. Both the work on synthetic measures of risk perception and on the acceptability of hypothetical policies included investigations of three categories of risk: (1) Short-term public risk (affecting persons living when the wastes are created), (2) Long-term public risk (affecting persons living after the time the wastes were created), and (3) Occupational risk (affecting persons working with the radioactive wastes). The human values work related to public risk perception in general, across categories of persons affected. Respondents were selected according to a purposive sampling strategy.

  7. Data-based robust multiobjective optimization of interconnected processes: energy efficiency case study in papermaking.

    PubMed

    Afshar, Puya; Brown, Martin; Maciejowski, Jan; Wang, Hong

    2011-12-01

    Reducing energy consumption is a major challenge for "energy-intensive" industries such as papermaking. A commercially viable energy saving solution is to employ data-based optimization techniques to obtain a set of "optimized" operational settings that satisfy certain performance indices. The difficulties of this are: 1) the problems of this type are inherently multicriteria in the sense that improving one performance index might result in compromising the other important measures; 2) practical systems often exhibit unknown complex dynamics and several interconnections which make the modeling task difficult; and 3) as the models are acquired from the existing historical data, they are valid only locally and extrapolations incorporate risk of increasing process variability. To overcome these difficulties, this paper presents a new decision support system for robust multiobjective optimization of interconnected processes. The plant is first divided into serially connected units to model the process, product quality, energy consumption, and corresponding uncertainty measures. Then multiobjective gradient descent algorithm is used to solve the problem in line with user's preference information. Finally, the optimization results are visualized for analysis and decision making. In practice, if further iterations of the optimization algorithm are considered, validity of the local models must be checked prior to proceeding to further iterations. The method is implemented by a MATLAB-based interactive tool DataExplorer supporting a range of data analysis, modeling, and multiobjective optimization techniques. The proposed approach was tested in two U.K.-based commercial paper mills where the aim was reducing steam consumption and increasing productivity while maintaining the product quality by optimization of vacuum pressures in forming and press sections. The experimental results demonstrate the effectiveness of the method. PMID:22147299

  8. Mars Mission Optimization Based on Collocation of Resources

    NASA Technical Reports Server (NTRS)

    Chamitoff, G. E.; James, G. H.; Barker, D. C.; Dershowitz, A. L.

    2003-01-01

    This paper presents a powerful approach for analyzing Martian data and for optimizing mission site selection based on resource collocation. This approach is implemented in a program called PROMT (Planetary Resource Optimization and Mapping Tool), which provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in-situ resource utilization. Optimization results are shown for a number of mission scenarios.

  9. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    PubMed Central

    Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng

    2015-01-01

    This paper puts forward a prediction model for chaos time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of phase space reconstruction (τ, m) and least squares support vector machine (LS-SVM) (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the change trend of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt an optimal action. The model presented in this paper is then used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares it with conventional similar models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best based on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249

  10. Computer Based Porosity Design by Multi Phase Topology Optimization

    NASA Astrophysics Data System (ADS)

    Burblies, Andreas; Busse, Matthias

    2008-02-01

    A numerical simulation technique called Multi Phase Topology Optimization (MPTO), based on the finite element method, has been developed and refined by Fraunhofer IFAM during the last five years. MPTO is able to determine the optimum distribution of two or more different materials in components under thermal and mechanical loads. The objective of the optimization is to minimize the component's elastic energy. Conventional topology optimization methods, which simulate adaptive bone mineralization, have the disadvantage that there is a continuous change of mass due to growth processes. MPTO keeps all initial material concentrations and uses methods adapted from molecular dynamics to find the energy minimum. Applying MPTO to mechanically loaded components with a high number of different material densities, the optimization results show graded and sometimes anisotropic porosity distributions which are very similar to natural bone structures. It is now possible to design the macro- and microstructure of a mechanical component in one step. Computer based porosity design structures can be manufactured by new Rapid Prototyping technologies. Fraunhofer IFAM has successfully applied 3D-Printing and Selective Laser Sintering methods in order to produce very stiff, lightweight components with graded porosities calculated by MPTO.

  11. The optimal community detection of software based on complex networks

    NASA Astrophysics Data System (ADS)

    Huang, Guoyan; Zhang, Peng; Zhang, Bing; Yin, Tengteng; Ren, Jiadong

    2016-02-01

    The community structure is important for software in terms of understanding design patterns and controlling the development and maintenance process. In order to detect the optimal community structure in a software network, a method called Optimal Partition Software Network (OPSN) is proposed based on the dependency relationships among software functions. First, by analyzing the information of multiple execution traces of one software system, we construct the Software Execution Dependency Network (SEDN). Second, based on the relationships among the function nodes in the network, we define Fault Accumulation (FA) to measure the importance of each function node and sort the nodes by the measured results. Third, we select the top K (K=1,2,…) nodes as the cores of the primal communities (each containing only one core node). By comparing the dependency relationships between each remaining node and the K communities, we put the node into the existing community with which it has the closest relationship. Finally, we calculate the modularity for different initial K to obtain the optimal division. Experiments verify that OPSN efficiently detects the optimal community structure in various software systems.
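
    A simplified version of the described procedure is sketched below on a toy dependency graph: node degree stands in for the Fault Accumulation measure, the top-K nodes seed the communities, remaining nodes join the community they share the most edges with, and modularity selects the best K. The graph and the importance measure are placeholders, not the SEDN construction from execution traces.

        import networkx as nx
        from networkx.algorithms.community import modularity

        # Toy dependency graph standing in for a Software Execution Dependency Network.
        edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5), (5, 6), (6, 7), (7, 5)]
        G = nx.Graph(edges)

        # Node degree is used here as a simple stand-in for the Fault Accumulation
        # importance measure when ranking candidate core nodes.
        ranked = sorted(G.nodes, key=G.degree, reverse=True)

        def partition_around_cores(G, cores):
            communities = {c: {c} for c in cores}
            for node in G.nodes:
                if node in communities:
                    continue
                # Assign the node to the core community it shares the most edges with.
                best = max(cores, key=lambda c: len(set(G.neighbors(node)) & communities[c]))
                communities[best].add(node)
            return list(communities.values())

        best_q, best_partition = -1.0, None
        for k in range(2, 5):
            partition = partition_around_cores(G, ranked[:k])
            q = modularity(G, partition)
            if q > best_q:
                best_q, best_partition = q, partition

        print("best modularity:", round(best_q, 3))
        print("communities:", best_partition)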

  12. Chaos time series prediction based on membrane optimization algorithms.

    PubMed

    Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng; Peng, Hong

    2015-01-01

    This paper puts forward a prediction model based on membrane computing optimization algorithm for chaos time series; the model optimizes simultaneously the parameters of phase space reconstruction (τ, m) and least squares support vector machine (LS-SVM) (γ, σ) by using membrane computing optimization algorithm. It is an important basis for spectrum management to predict accurately the change trend of parameters in the electromagnetic environment, which can help decision makers to adopt an optimal action. Then, the model presented in this paper is used to forecast band occupancy rate of frequency modulation (FM) broadcasting band and interphone band. To show the applicability and superiority of the proposed model, this paper will compare the forecast model presented in it with conventional similar models. The experimental results show that whether single-step prediction or multistep prediction, the proposed model performs best based on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249

  13. Mesh Optimization for Monte Carlo-Based Optical Tomography

    PubMed Central

    Edmans, Andrew; Intes, Xavier

    2015-01-01

    Mesh-based Monte Carlo techniques for optical imaging allow for accurate modeling of light propagation in complex biological tissues. Recently, they have been developed within an efficient computational framework to be used as a forward model in optical tomography. However, commonly employed adaptive mesh discretization techniques have not yet been implemented for Monte Carlo based tomography. Herein, we propose a methodology to optimize the mesh discretization and analytically rescale the associated Jacobian based on the characteristics of the forward model. We demonstrate that this method maintains the accuracy of the forward model even in the case of temporal data sets while allowing for significant coarsening or refinement of the mesh. PMID:26566523

  14. [Optimized Spectral Indices Based Estimation of Forage Grass Biomass].

    PubMed

    An, Hai-bo; Li, Fei; Zhao, Meng-li; Liu, Ya-jun

    2015-11-01

    As an important indicator of forage production, aboveground biomass directly reflects the growth of forage grass. Real-time monitoring of forage grass biomass therefore plays a crucial role in suitable grazing and management of artificial and natural grassland. However, traditional sampling and measurement are time-consuming and labor-intensive. Recently, the development of hyperspectral remote sensing has made it feasible to derive forage grass biomass in a timely and nondestructive manner. In the present study, the main objective was to explore the robustness of published and optimized spectral indices for estimating the biomass of forage grass in natural and artificial pasture. A natural pasture experiment with four grazing densities (control, light grazing, moderate grazing and high grazing) was designed in desert steppe, and different forage cultivars with different N rates were grown in artificial forage fields in Inner Mongolia. The canopy reflectance and biomass in each plot were measured during critical stages. The results showed that, owing to differences in canopy structure and biomass, the canopy reflectance differed greatly among forage grass types. The best performing spectral index varied across forage grass species and treatments (R² = 0.00-0.69). The predictive ability of spectral indices decreased under the low biomass of desert steppe, while red-band-based spectral indices lost sensitivity under the moderate-to-high biomass of forage maize. When the band combinations of simple ratio and normalized difference spectral indices were optimized over the combined datasets of natural and artificial grassland, the optimized spectral indices significantly increased predictive ability, and the model between biomass and the optimized spectral indices had the highest R² (R² = 0.72) compared to published spectral indices. Sensitivity analysis further confirmed that the optimized index had the lowest noise equivalent and was the best performing index in
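
    The band-combination optimization mentioned above amounts to an exhaustive search over band pairs for the index that correlates best with biomass. The sketch below illustrates that search for a normalized-difference index on synthetic data; the reflectance matrix, band count, and biomass model are hypothetical stand-ins for the field measurements.

```python
import numpy as np

def r2(x, y):
    """Coefficient of determination of a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

def best_ndsi(reflectance, biomass):
    """Exhaustively search band pairs (i, j) for the normalized-difference index
    (R_i - R_j) / (R_i + R_j) that best predicts biomass (highest linear R^2)."""
    n_bands = reflectance.shape[1]
    best = (-np.inf, None)
    for i in range(n_bands):
        for j in range(i + 1, n_bands):
            ndsi = (reflectance[:, i] - reflectance[:, j]) / \
                   (reflectance[:, i] + reflectance[:, j] + 1e-12)
            score = r2(ndsi, biomass)
            if score > best[0]:
                best = (score, (i, j))
    return best

if __name__ == "__main__":
    # Hypothetical data: 120 plots x 50 spectral bands, biomass loosely tied to a red/NIR contrast.
    rng = np.random.default_rng(1)
    refl = rng.uniform(0.05, 0.6, size=(120, 50))
    biomass = 3.0 * (refl[:, 40] - refl[:, 10]) / (refl[:, 40] + refl[:, 10]) \
              + rng.normal(0, 0.1, 120)
    score, (i, j) = best_ndsi(refl, biomass)
    print(f"best band pair: ({i}, {j}) with R^2 = {score:.2f}")
```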

  15. Parameter optimization in differential geometry based solvation models.

    PubMed

    Wang, Bao; Wei, G W

    2015-10-01

    Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of the strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules. PMID:26450304

  16. Finite Element Based HWB Centerbody Structural Optimization and Weight Prediction

    NASA Technical Reports Server (NTRS)

    Gern, Frank H.

    2012-01-01

    This paper describes a scalable structural model suitable for Hybrid Wing Body (HWB) centerbody analysis and optimization. The geometry of the centerbody and primary wing structure is based on a Vehicle Sketch Pad (VSP) surface model of the aircraft and a FLOPS compatible parameterization of the centerbody. Structural analysis, optimization, and weight calculation are based on a Nastran finite element model of the primary HWB structural components, featuring centerbody, mid section, and outboard wing. Different centerbody designs like single bay or multi-bay options are analyzed and weight calculations are compared to current FLOPS results. For proper structural sizing and weight estimation, internal pressure and maneuver flight loads are applied. Results are presented for aerodynamic loads, deformations, and centerbody weight.

  17. GA-Based Image Restoration by Isophote Constraint Optimization

    NASA Astrophysics Data System (ADS)

    Kim, Jong Bae; Kim, Hang Joon

    2003-12-01

    We propose an efficient technique for image restoration based on a genetic algorithm (GA) with an isophote constraint. In our technique, the image restoration problem is modeled as an optimization problem in which a cost function with an isophote constraint is minimized using a GA. We consider an image to be decomposed into isophotes based on connected components of constant intensity. The technique creates an optimal connection of all pairs of isophotes disconnected by a caption in the frame. To connect the disconnected isophotes, we estimate the smoothness value given by the best chromosomes of the GA and project this value in the isophote direction. Experimental results show great promise for the automatic restoration of regions in advertisement scenes.

  18. Bell-Curve Based Evolutionary Strategies for Structural Optimization

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.

    2000-01-01

    Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity (Reeves 1997). However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPs) have gained a significant foothold (Glover 1998). One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general, and ES or EP in particular, see Back (1996) and Dasgupta and Michalewicz (1997). We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.
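
    To make the contrast with binary-coded GAs concrete, the sketch below shows a minimal (mu + lambda) evolution strategy operating directly on real-valued design variables. It is a generic illustration under stated assumptions, not the BCB operator itself: mutation here uses a single normal distribution, whereas BCB combines two.

```python
import numpy as np

def evolution_strategy(f, x0, sigma=0.3, mu=10, lam=40, generations=100, seed=0):
    """Minimal (mu + lambda) evolution strategy: real-valued design variables are
    perturbed with normally distributed mutations, with no binary encoding involved."""
    rng = np.random.default_rng(seed)
    parents = x0 + sigma * rng.normal(size=(mu, len(x0)))
    for _ in range(generations):
        # Each offspring mutates a randomly chosen parent with a Gaussian step.
        idx = rng.integers(0, mu, size=lam)
        offspring = parents[idx] + sigma * rng.normal(size=(lam, len(x0)))
        pool = np.vstack([parents, offspring])
        fitness = np.apply_along_axis(f, 1, pool)
        parents = pool[np.argsort(fitness)[:mu]]          # keep the mu best designs
    best = parents[0]
    return best, f(best)

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))              # simple stand-in objective
    best, fbest = evolution_strategy(sphere, x0=np.full(5, 3.0))
    print("best design:", np.round(best, 3), "objective:", round(fbest, 6))
```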

  19. Bell-Curve Based Evolutionary Strategies for Structural Optimization

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.

    2001-01-01

    Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity. However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPs) have gained a significant foothold. One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general, and ES or EP in particular, see Back, and Dasgupta and Michalewicz. We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.

  20. On combining Laplacian and optimization-based mesh smoothing techniques

    SciTech Connect

    Freitag, L.A.

    1997-07-01

    Local mesh smoothing algorithms have been shown to be effective in repairing distorted elements in automatically generated meshes. The simplest such algorithm is Laplacian smoothing, which moves grid points to the geometric center of incident vertices. Unfortunately, this method operates heuristically and can create invalid meshes or elements of worse quality than those contained in the original mesh. In contrast, optimization-based methods are designed to maximize some measure of mesh quality and are very effective at eliminating extremal angles in the mesh. These improvements come at a higher computational cost, however. In this article the author proposes three smoothing techniques that combine a smart variant of Laplacian smoothing with an optimization-based approach. Several numerical experiments are performed that compare the mesh quality and computational cost for each of the methods in two and three dimensions. The author finds that the combined approaches are very cost effective and yield high-quality meshes.
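
    The "smart" Laplacian variant combined here can be sketched as follows: move each free vertex to the centroid of its neighbours, but keep the move only if the worst quality of its incident triangles does not degrade. The quality measure, the tiny mesh, and the single-sweep structure below are illustrative assumptions, not the article's implementation.

```python
import numpy as np

def tri_quality(p0, p1, p2):
    """Shape quality: 4*sqrt(3)*area / (sum of squared edge lengths);
    1 for an equilateral triangle, approaching 0 for degenerate ones."""
    area = 0.5 * abs((p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1]))
    denom = sum(float(np.dot(e, e)) for e in (p1 - p0, p2 - p1, p0 - p2))
    return 4.0 * np.sqrt(3.0) * area / denom if denom > 0.0 else 0.0

def smart_laplacian_pass(points, triangles, free_vertices):
    """One sweep of 'smart' Laplacian smoothing: move each free vertex to the centroid
    of its neighbours only if the worst incident-triangle quality does not get worse."""
    for v in free_vertices:
        incident = [t for t in triangles if v in t]
        neighbours = sorted({u for t in incident for u in t if u != v})
        centroid = points[neighbours].mean(axis=0)
        worst_before = min(tri_quality(*(points[i] for i in t)) for t in incident)
        old = points[v].copy()
        points[v] = centroid
        worst_after = min(tri_quality(*(points[i] for i in t)) for t in incident)
        if worst_after < worst_before:      # reject moves that degrade the local mesh
            points[v] = old
    return points

if __name__ == "__main__":
    # Tiny patch: one movable interior vertex (index 4) surrounded by four fixed corners.
    pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.9, 0.8]], dtype=float)
    tris = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]
    smart_laplacian_pass(pts, tris, free_vertices=[4])
    print("smoothed interior vertex:", pts[4])   # moves toward (0.5, 0.5)
```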

  1. An Optimality-Based Fully-Distributed Watershed Ecohydrological Model

    NASA Astrophysics Data System (ADS)

    Chen, L., Jr.

    2015-12-01

    Watershed ecohydrological models are essential tools to assess the impact of climate change and human activities on hydrological and ecological processes for watershed management. Existing models can be classified as empirically based, quasi-mechanistic, and mechanistic models. The empirically based and quasi-mechanistic models usually adopt empirical or quasi-empirical equations, which may be incapable of capturing the non-stationary dynamics of target processes. Mechanistic models that are designed to represent process feedbacks may capture vegetation dynamics, but often have more demanding spatial and temporal parameterization requirements to represent vegetation physiological variables. In recent years, optimality based ecohydrological models have been proposed, which have the advantage of reducing the need for model calibration by deriving critical aspects of system behavior from an optimality assumption. However, this work to date has been limited to the plot scale, considering only one-dimensional exchange of soil moisture, carbon and nutrients in vegetation parameterization, without lateral hydrological transport. Conceptual isolation of individual ecosystem patches from upslope and downslope flow paths compromises the ability to represent and test the relationships between hydrology and vegetation in mountainous and hilly terrain. This work presents an optimality-based watershed ecohydrological model, which incorporates the influence of lateral hydrological processes on the hydrological flow-path patterns that emerge from the optimality assumption. The model has been tested in the Walnut Gulch watershed and shows good agreement with observed temporal and spatial patterns of evapotranspiration (ET) and gross primary productivity (GPP). The spatial variability of ET and GPP produced by the model matches the spatial distributions of TWI, SCA, and slope well over the area. Compared with the one-dimensional vegetation optimality model (VOM), we find that the distributed VOM (DisVOM) produces more reasonable spatial

  2. Process optimization electrospinning fibrous material based on polyhydroxybutyrate

    NASA Astrophysics Data System (ADS)

    Olkhov, A. A.; Tyubaeva, P. M.; Staroverova, O. V.; Mastalygina, E. E.; Popov, A. A.; Ischenko, A. A.; Iordanskii, A. L.

    2016-05-01

    The article analyzes the influence of the main technological parameters of electrostatic spinning on the morphology and properties of ultrathin fibers based on polyhydroxybutyrate (PHB). It is found that the electrical conductivity and viscosity of the spinning solution affect the formation of the fiber macrostructure. Adjusting and optimizing the viscosity and conductivity of the spinning solution therefore allows the geometry of the PHB-based fiber materials to be controlled. The resulting fibers have found use in medicine, particularly in structural elements for the musculoskeletal system.

  3. Study of risk based on web software testing

    NASA Astrophysics Data System (ADS)

    Wang, Xin

    2013-03-01

    Web-based software systems pose particular difficulties and challenges for testing. This article identifies the security risks of Web application systems and, through an analysis of the implementation issues involved in Web-based testing, proposes a workflow for Web testing together with a detailed study of how to select the risks to address in that process. The security, performance, accuracy, compatibility, reliability and other risk factors are discussed in detail. These risks need to be accounted for when a Web application testing program is established in order to produce a better Web-based test plan.

  4. A Power Grid Optimization Algorithm by Observing Timing Error Risk by IR Drop

    NASA Astrophysics Data System (ADS)

    Kawakami, Yoshiyuki; Terao, Makoto; Fukui, Masahiro; Tsukiyama, Shuji

    With the advent of the deep submicron age, circuit performance is strongly impacted by process variations, and the sensitivity of circuit delay to the power-supply voltage increases more and more as CMOS feature sizes shrink. Power grid optimization that considers the timing error risk caused by variations and IR drop therefore becomes very important for the stable, high-speed operation of a system-on-chip. Many power grid optimization algorithms have been proposed, and most of them use IR drop in their objective functions. However, IR drop is an indirect metric, and we suspect that it is too vague a metric for the real goal of LSI design. In this paper, we first propose an approach that uses the “timing error risk caused by IR drop” as a direct objective function. Second, a critical path map is introduced to express the distribution of critical paths across the entire chip. The timing error risk is decreased by using the critical path map and the new objective function. Experimental results show the effectiveness of the approach.

  5. Mode-tracking based stationary-point optimization.

    PubMed

    Bergeler, Maike; Herrmann, Carmen; Reiher, Markus

    2015-07-15

    In this work, we present a transition-state optimization protocol based on the Mode-Tracking algorithm [Reiher and Neugebauer, J. Chem. Phys., 2003, 118, 1634]. By calculating only the eigenvector of interest instead of diagonalizing the full Hessian matrix and performing an eigenvector following search based on the selectively calculated vector, we can efficiently optimize transition-state structures. The initial guess structures and eigenvectors are either chosen from a linear interpolation between the reactant and product structures, from a nudged-elastic band search, from a constrained-optimization scan, or from the minimum-energy structures. Alternatively, initial guess vectors based on chemical intuition may be defined. We then iteratively refine the selected vectors by the Davidson subspace iteration technique. This procedure accelerates finding transition states for large molecules of a few hundred atoms. It is also beneficial in cases where the starting structure is very different from the transition-state structure or where the desired vector to follow is not the one with lowest eigenvalue. Explorative studies of reaction pathways are feasible by following manually constructed molecular distortions. PMID:26073318

  6. Biological Based Risk Assessment for Space Exploration

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.

    2011-01-01

    Exposures from galactic cosmic rays (GCR) - made up of high-energy protons and high-energy and charge (HZE) nuclei - and solar particle events (SPEs) - comprised largely of low- to medium-energy protons - are the primary health concern for astronauts on long-term space missions. Experimental studies have shown that HZE nuclei produce both qualitative and quantitative differences in biological effects compared to terrestrial radiation, making risk assessments for cancer and degenerative risks, such as central nervous system effects and heart disease, highly uncertain. The goal for space radiation protection at NASA is to reduce the uncertainties in risk assessments for Mars exploration to levels small enough to ensure that acceptable levels of risk are not exceeded and to adequately assess the efficacy of mitigation measures such as shielding or biological countermeasures. We review the recent BEIR VII and UNSCEAR-2006 models of cancer risks and their uncertainties. These models are shown to have an inherent 2-fold uncertainty, defined as the ratio of the 95% confidence level to the mean projection, even before radiation quality is considered. In order to overcome the uncertainties in these models, new approaches to risk assessment are warranted. We consider new computational biology approaches to modeling cancer risks. A basic program of research is described that ranges from stochastic descriptions of the physics and chemistry of radiation tracks and the biochemistry of metabolic pathways to the emerging biological understanding of cellular and tissue modifications leading to cancer.

  7. Nanodosimetry-Based Plan Optimization for Particle Therapy

    PubMed Central

    Casiraghi, Margherita; Schulte, Reinhard W.

    2015-01-01

    Treatment planning for particle therapy is currently an active field of research due to uncertainty in how to modify physical dose in order to create a uniform biological dose response in the target. A novel treatment plan optimization strategy based on measurable nanodosimetric quantities rather than biophysical models is proposed in this work. Simplified proton and carbon treatment plans were simulated in a water phantom to investigate the optimization feasibility. Track structures of the mixed radiation field produced at different depths in the target volume were simulated with Geant4-DNA and nanodosimetric descriptors were calculated. The fluences of the treatment field pencil beams were optimized in order to create a mixed field with equal nanodosimetric descriptors at each of the multiple positions in spread-out particle Bragg peaks. For both proton and carbon ion plans, a uniform spatial distribution of nanodosimetric descriptors could be obtained by optimizing opposing-field but not single-field plans. The results obtained indicate that uniform nanodosimetrically weighted plans, which may also be radiobiologically uniform, can be obtained with this approach. Future investigations need to demonstrate that this approach is also feasible for more complicated beam arrangements and that it leads to biologically uniform response in tumor cells and tissues. PMID:26167202
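
    The fluence optimization step described above (choosing pencil-beam weights so that a nanodosimetric descriptor becomes uniform across the spread-out peak) can be illustrated with a toy least-squares problem. Everything in the sketch below is hypothetical: Gaussian curves stand in for the per-beam descriptor profiles, the target region is arbitrary, and non-negative least squares replaces whatever optimizer the authors used.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical setup: 6 pencil beams, descriptor sampled at 40 depth positions.
rng = np.random.default_rng(2)
depth = np.linspace(0.0, 1.0, 40)
peaks = np.linspace(0.3, 0.8, 6)
# Each column: a Bragg-peak-like profile of a nanodosimetric descriptor for one beam.
A = np.exp(-((depth[:, None] - peaks[None, :]) ** 2) / 0.004) + 0.05

target = np.logical_and(depth >= 0.3, depth <= 0.8)        # spread-out peak region
b = np.where(target, 1.0, 0.0)                             # desired uniform descriptor level

# Weight the target rows more strongly so uniformity there dominates the fit.
W = np.where(target, 10.0, 1.0)
weights, residual = nnls(W[:, None] * A, W * b)

combined = A @ weights
print("beam weights:", np.round(weights, 3))
print("descriptor spread inside target region: %.3f" % combined[target].std())
```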

  8. Nanodosimetry-Based Plan Optimization for Particle Therapy.

    PubMed

    Casiraghi, Margherita; Schulte, Reinhard W

    2015-01-01

    Treatment planning for particle therapy is currently an active field of research due to uncertainty in how to modify physical dose in order to create a uniform biological dose response in the target. A novel treatment plan optimization strategy based on measurable nanodosimetric quantities rather than biophysical models is proposed in this work. Simplified proton and carbon treatment plans were simulated in a water phantom to investigate the optimization feasibility. Track structures of the mixed radiation field produced at different depths in the target volume were simulated with Geant4-DNA and nanodosimetric descriptors were calculated. The fluences of the treatment field pencil beams were optimized in order to create a mixed field with equal nanodosimetric descriptors at each of the multiple positions in spread-out particle Bragg peaks. For both proton and carbon ion plans, a uniform spatial distribution of nanodosimetric descriptors could be obtained by optimizing opposing-field but not single-field plans. The results obtained indicate that uniform nanodosimetrically weighted plans, which may also be radiobiologically uniform, can be obtained with this approach. Future investigations need to demonstrate that this approach is also feasible for more complicated beam arrangements and that it leads to biologically uniform response in tumor cells and tissues. PMID:26167202

  9. Optimal network topology for structural robustness based on natural connectivity

    NASA Astrophysics Data System (ADS)

    Peng, Guan-sheng; Wu, Jun

    2016-02-01

    The structural robustness of the infrastructure of various real-life systems, which can be represented by networks, is of great importance. Thus we have proposed a tabu search algorithm to optimize the structural robustness of a given network by rewiring the links and fixing the node degrees. The objective of our algorithm is to maximize a new structural robustness measure, natural connectivity, which provides a sensitive and reliable measure of the structural robustness of complex networks and has lower computation complexity. We initially applied this method to several networks with different degree distributions for contrast analysis and investigated the basic properties of the optimal network. We discovered that the optimal network based on the power-law degree distribution exhibits a roughly "eggplant-like" topology, where there is a cluster of high-degree nodes at the head and other low-degree nodes scattered across the body of "eggplant". Additionally, the cost to rewire links in practical applications is considered; therefore, we optimized this method by employing the assortative rewiring strategy and validated its efficiency.
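
    Natural connectivity has a closed form, ln((1/N) Σ_i exp(λ_i)) over the adjacency eigenvalues, so the rewiring search is easy to sketch. The code below is a simplified greedy hill climber with degree-preserving double-edge swaps; the tabu bookkeeping and the assortative rewiring strategy of the paper are omitted, and the test graph is an arbitrary choice.

```python
import numpy as np
import networkx as nx

def natural_connectivity(G):
    """Natural connectivity: ln of the average of exp(eigenvalues) of the adjacency matrix."""
    lam = np.linalg.eigvalsh(nx.to_numpy_array(G))
    return float(np.log(np.mean(np.exp(lam))))

def optimize_robustness(G, steps=1000, seed=0):
    """Greedy degree-preserving rewiring: propose double-edge swaps and keep only
    those that raise natural connectivity (the paper adds tabu-search bookkeeping)."""
    rng = np.random.default_rng(seed)
    best = natural_connectivity(G)
    for _ in range(steps):
        edges = list(G.edges)
        i, j = rng.choice(len(edges), size=2, replace=False)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4 or G.has_edge(a, d) or G.has_edge(c, b):
            continue                                  # swap would create a multi-edge or self-loop
        G.remove_edges_from([(a, b), (c, d)])
        G.add_edges_from([(a, d), (c, b)])            # degrees of a, b, c, d are unchanged
        score = natural_connectivity(G)
        if score > best:
            best = score
        else:                                         # revert unhelpful swaps
            G.remove_edges_from([(a, d), (c, b)])
            G.add_edges_from([(a, b), (c, d)])
    return G, best

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(60, 2, seed=1)       # scale-free-like test graph
    print("natural connectivity before: %.4f" % natural_connectivity(G))
    G, after = optimize_robustness(G)
    print("natural connectivity after:  %.4f" % after)
```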

  10. A global optimization paradigm based on change of measures.

    PubMed

    Sarkar, Saikat; Roy, Debasish; Vasu, Ram Mohan

    2015-07-01

    A global optimization framework, COMBEO (Change Of Measure Based Evolutionary Optimization), is proposed. An important aspect in the development is a set of derivative-free additive directional terms, obtainable through a change of measures en route to the imposition of any stipulated conditions aimed at driving the realized design variables (particles) to the global optimum. The generalized setting offered by the new approach also enables several basic ideas, used with other global search methods such as the particle swarm or the differential evolution, to be rationally incorporated in the proposed set-up via a change of measures. The global search may be further aided by imparting to the directional update terms additional layers of random perturbations such as 'scrambling' and 'selection'. Depending on the precise choice of the optimality conditions and the extent of random perturbation, the search can be readily rendered either greedy or more exploratory. As numerically demonstrated, the new proposal appears to provide for a more rational, more accurate and, in some cases, a faster alternative to many available evolutionary optimization schemes. PMID:26587268

  11. Genetic Algorithm (GA)-Based Inclinometer Layout Optimization.

    PubMed

    Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo

    2015-01-01

    This paper presents numerical simulation results of an airflow inclinometer with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout for an airflow inclinometer based on a genetic algorithm (GA). Due to the working principle of the gas sensor, the changes of the ambient temperature may cause dramatic voltage drifts of sensors. Therefore, eliminating the influence of the external environment for the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on the sensitivity of the inclinometer will be examined by the ANSYS-FLOTRAN CFD program. The results show that with changes of the ambient temperature on the sensing element, the sensitivity of the airflow inclinometer is inversely proportional to the ambient temperature and decreases when the ambient temperature increases. GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the results of our optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts that are related to sensitivity improvement of gas sensors. PMID:25897500

  12. Efficient variational Bayesian approximation method based on subspace optimization.

    PubMed

    Zheng, Yuling; Fraysse, Aurélia; Rodet, Thomas

    2015-02-01

    Variational Bayesian approximations have been widely used in fully Bayesian inference for approximating an intractable posterior distribution by a separable one. Nevertheless, the classical variational Bayesian approximation (VBA) method suffers from slow convergence to the approximate solution when tackling large dimensional problems. To address this problem, we propose in this paper a more efficient VBA method. Actually, variational Bayesian issue can be seen as a functional optimization problem. The proposed method is based on the adaptation of subspace optimization methods in Hilbert spaces to the involved function space, in order to solve this optimization problem in an iterative way. The aim is to determine an optimal direction at each iteration in order to get a more efficient method. We highlight the efficiency of our new VBA method and demonstrate its application to image processing by considering an ill-posed linear inverse problem using a total variation prior. Comparisons with state of the art variational Bayesian methods through a numerical example show a notable improvement in computation time. PMID:25532179

  13. A global optimization paradigm based on change of measures

    PubMed Central

    Sarkar, Saikat; Roy, Debasish; Vasu, Ram Mohan

    2015-01-01

    A global optimization framework, COMBEO (Change Of Measure Based Evolutionary Optimization), is proposed. An important aspect in the development is a set of derivative-free additive directional terms, obtainable through a change of measures en route to the imposition of any stipulated conditions aimed at driving the realized design variables (particles) to the global optimum. The generalized setting offered by the new approach also enables several basic ideas, used with other global search methods such as the particle swarm or the differential evolution, to be rationally incorporated in the proposed set-up via a change of measures. The global search may be further aided by imparting to the directional update terms additional layers of random perturbations such as ‘scrambling’ and ‘selection’. Depending on the precise choice of the optimality conditions and the extent of random perturbation, the search can be readily rendered either greedy or more exploratory. As numerically demonstrated, the new proposal appears to provide for a more rational, more accurate and, in some cases, a faster alternative to many available evolutionary optimization schemes. PMID:26587268

  14. Genetic Algorithm (GA)-Based Inclinometer Layout Optimization

    PubMed Central

    Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo

    2015-01-01

    This paper presents numerical simulation results of an airflow inclinometer with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout for an airflow inclinometer based on a genetic algorithm (GA). Due to the working principle of the gas sensor, the changes of the ambient temperature may cause dramatic voltage drifts of sensors. Therefore, eliminating the influence of the external environment for the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on the sensitivity of the inclinometer will be examined by the ANSYS-FLOTRAN CFD program. The results show that with changes of the ambient temperature on the sensing element, the sensitivity of the airflow inclinometer is inversely proportional to the ambient temperature and decreases when the ambient temperature increases. GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the results of our optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts that are related to sensitivity improvement of gas sensors. PMID:25897500

  15. Heuristic ternary error-correcting output codes via weight optimization and layered clustering-based approach.

    PubMed

    Zhang, Xiao-Lei

    2015-02-01

    One important classifier ensemble for multiclass classification problems is error-correcting output codes (ECOCs). It bridges multiclass problems and binary-class classifiers by decomposing multiclass problems into a series of binary-class problems. In this paper, we present a heuristic ternary code, named weight optimization and layered clustering-based ECOC (WOLC-ECOC). It starts with an arbitrary valid ECOC and iterates the following two steps until the training risk converges. The first step, named layered clustering-based ECOC (LC-ECOC), constructs multiple strong classifiers on the most confusing binary-class problem. The second step adds the new classifiers to the ECOC via a novel optimized weighted (OW) decoding algorithm, where the optimization problem of the decoding is solved by the cutting plane algorithm. Technically, LC-ECOC keeps the heuristic training process from being blocked by some difficult binary-class problem, while OW decoding guarantees that the training risk does not increase, ensuring a small code length. Results on 14 UCI datasets and a music genre classification problem demonstrate the effectiveness of WOLC-ECOC. PMID:25486660

  16. Credibility theory based dynamic control bound optimization for reservoir flood limited water level

    NASA Astrophysics Data System (ADS)

    Jiang, Zhiqiang; Sun, Ping; Ji, Changming; Zhou, Jianzhong

    2015-10-01

    Dynamic control of the reservoir flood limited water level (FLWL) can resolve the conflict between reservoir flood control and beneficial operation, and it is an important measure to ensure flood control safety and realize flood utilization. The dynamic control bound of the FLWL is a fundamental element in implementing reservoir dynamic control operation. In order to optimize the dynamic control bound of the FLWL while accounting for flood forecasting error, this paper treats the forecasting error as a fuzzy variable and describes it with credibility theory, which has emerged in recent years. By combining this with a quantitative model of the flood forecasting error, a credibility-based fuzzy chance constrained model for optimizing the dynamic control bound is proposed, and fuzzy simulation technology is used to solve the model. The FENGTAN reservoir in China was selected as a case study, and the results show that, compared with the original operation water level, the initial operation water level (IOWL) of the FENGTAN reservoir can be raised by 4 m, 2 m and 5.5 m, respectively, in the three division stages of the flood season, without increasing flood control risk. In addition, the rationality and feasibility of the proposed forecasting error quantitative model and the credibility-based dynamic control bound optimization model are verified by calculations based on extreme risk theory.

  17. Optimal control and optimal trajectories of regional macroeconomic dynamics based on the Pontryagin maximum principle

    NASA Astrophysics Data System (ADS)

    Bulgakov, V. K.; Strigunov, V. V.

    2009-05-01

    The Pontryagin maximum principle is used to prove a theorem concerning optimal control in regional macroeconomics. A boundary value problem for optimal trajectories of the state and adjoint variables is formulated, and optimal curves are analyzed. An algorithm is proposed for solving the boundary value problem of optimal control. The performance of the algorithm is demonstrated by computing an optimal control and the corresponding optimal trajectories.
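
    As a concrete illustration of the boundary value structure that the Pontryagin maximum principle produces, the sketch below solves a small scalar linear-quadratic problem by shooting on the unknown initial adjoint value. The specific dynamics, cost and horizon are hypothetical textbook choices, not the regional macroeconomic model of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Hypothetical scalar problem: minimize J = integral_0^1 (x^2 + u^2) dt with x' = u, x(0) = 1.
# Pontryagin: H = x^2 + u^2 + p*u, so u* = -p/2, the adjoint obeys p' = -dH/dx = -2x,
# and the free right end gives the transversality condition p(1) = 0.

def rhs(t, z):
    x, p = z
    u = -0.5 * p                      # minimizing control from dH/du = 0
    return [u, -2.0 * x]

def p_final(p0):
    """Integrate state and adjoint forward from a guessed p(0); return p(1)."""
    sol = solve_ivp(rhs, (0.0, 1.0), [1.0, p0], rtol=1e-9, atol=1e-9)
    return sol.y[1, -1]

# Shooting: find the initial adjoint value that satisfies the transversality condition.
p0 = brentq(p_final, -5.0, 5.0)
sol = solve_ivp(rhs, (0.0, 1.0), [1.0, p0], dense_output=True, rtol=1e-9, atol=1e-9)
t = np.linspace(0.0, 1.0, 5)
x, p = sol.sol(t)
print("p(0) = %.4f (analytic value 2*tanh(1) = %.4f)" % (p0, 2 * np.tanh(1.0)))
print("optimal state x(t):  ", np.round(x, 4))
print("optimal control u(t):", np.round(-0.5 * p, 4))
```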

  18. Rapid pedobarographic image registration based on contour curvature and optimization.

    PubMed

    Oliveira, Francisco P M; Tavares, João Manuel R S; Pataky, Todd C

    2009-11-13

    Image registration, the process of optimally aligning homologous structures in multiple images, has recently been demonstrated to support automated pixel-level analysis of pedobarographic images and, subsequently, to extract unique and biomechanically relevant information from plantar pressure data. Recent registration methods have focused on robustness, with slow but globally powerful algorithms. In this paper, we present an alternative registration approach that affords both speed and accuracy, with the goal of making pedobarographic image registration more practical for near-real-time laboratory and clinical applications. The current algorithm first extracts centroid-based curvature trajectories from pressure image contours, and then optimally matches these curvature profiles using optimization based on dynamic programming. Special cases of disconnected images (that occur in high-arched subjects, for example) are dealt with by introducing an artificial spatially linear bridge between adjacent image clusters. Two registration algorithms were developed: a 'geometric' algorithm, which exclusively matched geometry, and a 'hybrid' algorithm, which performed subsequent pseudo-optimization. After testing the two algorithms on 30 control image pairs considered in a previous study, we found that, when compared with previously published results, the hybrid algorithm improved overlap ratio (p=0.010), but both current algorithms had slightly higher mean-squared error, assumedly because they did not consider pixel intensity. Nonetheless, both algorithms greatly improved the computational efficiency (25+/-8 and 53+/-9 ms per image pair for geometric and hybrid registrations, respectively). These results imply that registration-based pixel-level pressure image analyses can, eventually, be implemented for practical clinical purposes. PMID:19647829

  19. Chaotic Teaching-Learning-Based Optimization with Lévy Flight for Global Numerical Optimization.

    PubMed

    He, Xiangzhu; Huang, Jida; Rao, Yunqing; Gao, Liang

    2016-01-01

    Recently, teaching-learning-based optimization (TLBO), as one of the emerging nature-inspired heuristic algorithms, has attracted increasing attention. In order to enhance its convergence rate and prevent it from getting stuck in local optima, a novel metaheuristic has been developed in this paper, where particular characteristics of the chaos mechanism and Lévy flight are introduced to the basic framework of TLBO. The new algorithm is tested on several large-scale nonlinear benchmark functions with different characteristics and compared with other methods. Experimental results show that the proposed algorithm outperforms other algorithms and achieves a satisfactory improvement over TLBO. PMID:26941785
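
    A compact sketch of this algorithm family is given below: the standard TLBO teacher and learner phases with an added Lévy-flight perturbation generated by Mantegna's algorithm. The step-size constant, the way the Lévy term enters the teacher phase, and the omission of the chaotic initialization are all simplifying assumptions rather than the authors' exact formulation.

```python
import numpy as np

def levy_step(size, beta=1.5, rng=None):
    """Mantegna's algorithm for Lévy-stable step lengths."""
    from math import gamma, pi, sin
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def tlbo_levy(f, bounds, pop=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        teacher, mean = X[fit.argmin()], X.mean(axis=0)
        # Teacher phase: move the class toward the teacher, plus a small Lévy-flight
        # perturbation intended to help learners escape local optima.
        Tf = rng.integers(1, 3, size=(pop, 1))                     # teaching factor 1 or 2
        new = X + rng.random((pop, len(lo))) * (teacher - Tf * mean) \
                + 0.01 * levy_step((pop, len(lo)), rng=rng) * (X - teacher)
        new = np.clip(new, lo, hi)
        new_fit = np.apply_along_axis(f, 1, new)
        better = new_fit < fit
        X[better], fit[better] = new[better], new_fit[better]
        # Learner phase: each learner moves relative to a randomly chosen peer.
        partners = rng.permutation(pop)
        step = np.where((fit < fit[partners])[:, None], X - X[partners], X[partners] - X)
        new = np.clip(X + rng.random((pop, len(lo))) * step, lo, hi)
        new_fit = np.apply_along_axis(f, 1, new)
        better = new_fit < fit
        X[better], fit[better] = new[better], new_fit[better]
    return X[fit.argmin()], fit.min()

if __name__ == "__main__":
    rastrigin = lambda x: 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))
    best, fbest = tlbo_levy(rastrigin, bounds=[(-5.12, 5.12)] * 5)
    print("best value: %.4f at %s" % (fbest, np.round(best, 3)))
```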

  20. Chaotic Teaching-Learning-Based Optimization with Lévy Flight for Global Numerical Optimization

    PubMed Central

    He, Xiangzhu; Huang, Jida; Rao, Yunqing; Gao, Liang

    2016-01-01

    Recently, teaching-learning-based optimization (TLBO), as one of the emerging nature-inspired heuristic algorithms, has attracted increasing attention. In order to enhance its convergence rate and prevent it from getting stuck in local optima, a novel metaheuristic has been developed in this paper, where particular characteristics of the chaos mechanism and Lévy flight are introduced to the basic framework of TLBO. The new algorithm is tested on several large-scale nonlinear benchmark functions with different characteristics and compared with other methods. Experimental results show that the proposed algorithm outperforms other algorithms and achieves a satisfactory improvement over TLBO. PMID:26941785

  1. Managing simulation-based training: A framework for optimizing learning, cost, and time

    NASA Astrophysics Data System (ADS)

    Richmond, Noah Joseph

    This study provides a management framework for optimizing training programs for learning, cost, and time when using simulation based training (SBT) and reality based training (RBT) as resources. Simulation is shown to be an effective means for implementing activity substitution as a way to reduce risk. The risk profile of 22 US Air Force vehicles are calculated, and the potential risk reduction is calculated under the assumption of perfect substitutability of RBT and SBT. Methods are subsequently developed to relax the assumption of perfect substitutability. The transfer effectiveness ratio (TER) concept is defined and modeled as a function of the quality of the simulator used, and the requirements of the activity trained. The Navy F/A-18 is then analyzed in a case study illustrating how learning can be maximized subject to constraints in cost and time, and also subject to the decision maker's preferences for the proportional and absolute use of simulation. Solution methods for optimizing multiple activities across shared resources are next provided. Finally, a simulation strategy including an operations planning program (OPP), an implementation program (IP), an acquisition program (AP), and a pedagogical research program (PRP) is detailed. The study provides the theoretical tools to understand how to leverage SBT, a case study demonstrating these tools' efficacy, and a set of policy recommendations to enable the US military to better utilize SBT in the future.
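
    The trade-off the framework manages, namely how many hours to train for real versus in the simulator when simulator hours are discounted by a transfer effectiveness ratio (TER), can be posed as a small linear program. The sketch below uses entirely hypothetical cost, budget, time and TER figures together with scipy's linprog; it illustrates the idea rather than the study's actual model.

```python
from scipy.optimize import linprog

# Hypothetical per-hour figures for one training activity.
TER = 0.7            # learning per simulator hour relative to one real flight hour
cost_real, cost_sim = 10_000.0, 1_500.0      # dollars per hour
budget, max_hours = 250_000.0, 60.0          # total budget and calendar-time limit
min_real_fraction = 0.25                     # policy: at least 25% of hours flown for real

# Decision variables: x = [real_hours, sim_hours]. linprog minimizes, so negate learning.
c = [-1.0, -TER]
A_ub = [
    [cost_real, cost_sim],                               # budget constraint
    [1.0, 1.0],                                          # total-time constraint
    [-(1 - min_real_fraction), min_real_fraction],       # real_hours >= fraction * total_hours
]
b_ub = [budget, max_hours, 0.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
real, sim = res.x
print(f"real hours: {real:.1f}, simulator hours: {sim:.1f}, learning units: {-res.fun:.1f}")
```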

  2. Air Quality Monitoring: Risk-Based Choices

    NASA Technical Reports Server (NTRS)

    James, John T.

    2009-01-01

    Air monitoring is secondary to rigid control of risks to air quality. Air quality monitoring requires us to target the credible residual risks. Constraints on monitoring devices are severe. We must transition from archival to real-time, on-board monitoring, and we must provide data to the crew in a way that they can interpret the findings. Dust management and monitoring may be a major concern for exploration class missions.

  3. Promoting justified risk-based decisions in contaminated land management.

    PubMed

    Reinikainen, Jussi; Sorvari, Jaana

    2016-09-01

    Decision making and regulatory policies on contaminated land management (CLM) are commonly governed by risk assessment. Risk assessment, thus, has to comply with legislation, but also provide valid information in terms of actual risks to correctly focus the potentially required measures and allocate the available resources. Hence, reliable risk assessment is a prerequisite for justified and sustainable risk management. This paper gives an introduction to the Finnish risk-based regulatory framework, outlines the challenges within the policies and the practice and provides an overview of the new guidance document to promote risk-based and sustainable CLM. We argue that the current risk assessment approaches in the policy frameworks are not necessarily efficient enough in supporting justified risk-based decisions. One of the main reasons for this is the excessive emphasis put on conservative risk assessments and on generic guideline values without contributing to their appropriate application. This paper presents how some of the challenges in risk-based decision making have been tackled in the Finnish regulatory framework on contaminated land. We believe that our study will also stimulate interest with regard to policy frameworks in other countries. PMID:26767620

  4. Stochastically optimized monocular vision-based navigation and guidance

    NASA Astrophysics Data System (ADS)

    Watanabe, Yoko

    -effort guidance (MEG) law for multiple target tracking is applied for a guidance design to achieve the mission. Through simulations, it is shown that the control effort can be reduced by using the MEG-based guidance design instead of a conventional proportional navigation-based one. The navigation and guidance designs are implemented and evaluated in a 6 DoF UAV flight simulation. Furthermore, the vision-based obstacle avoidance system is also tested in a flight test using a balloon as an obstacle. For monocular vision-based control problems, it is well-known that the separation principle between estimation and control does not hold. In other words, that vision-based estimation performance highly depends on the relative motion of the vehicle with respect to the target. Therefore, this thesis aims to derive an optimal guidance law to achieve a given mission under the condition of using the EKF-based relative navigation. Unlike many other works on observer trajectory optimization, this thesis suggests a stochastically optimized guidance design that minimizes the expected value of a cost function of the guidance error and the control effort subject to the EKF prediction and update procedures. A suboptimal guidance law is derived based on an idea of the one-step-ahead (OSA) optimization, in which the optimization is performed under the assumption that there will be only one more final measurement at the one time step ahead. The OSA suboptimal guidance law is applied to problems of vision-based rendezvous and vision-based obstacle avoidance. Simulation results are presented to show that the suggested guidance law significantly improves the guidance performance. The OSA suboptimal optimization approach is generalized as the n-step-ahead (nSA) optimization for an arbitrary number of n. Furthermore, the nSA suboptimal guidance law is extended to the p %-ahead suboptimal guidance by changing the value of n at each time step depending on the current time. The nSA (including the OSA) and

  5. Optimizing legacy molecular dynamics software with directive-based offload

    SciTech Connect

    Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; Thakkar, Foram M.; Plimpton, Steven J.

    2015-05-14

    The directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In our paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We also demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel (R) Xeon Phi (TM) coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS. (C) 2015 Elsevier B.V. All rights reserved.

  6. Optimizing legacy molecular dynamics software with directive-based offload

    DOE PAGESBeta

    Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; Thakkar, Foram M.; Plimpton, Steven J.

    2015-05-14

    The directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In our paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We also demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel (R) Xeon Phi (TM) coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS. (C) 2015 Elsevier B.V. All rights reserved.

  7. Blunt-body drag reduction through base cavity shape optimization

    NASA Astrophysics Data System (ADS)

    Lorite-Díez, Manuel; Jiménez-González, José Ignacio; Gutiérrez-Montes, Cándido; Martínez-Bazán, Carlos

    2015-11-01

    We present a numerical study on the drag reduction of a turbulent incompressible flow around two different blunt bodies, of height H and length L, at a Reynolds number Re = ρU∞ H / μ = 2000, where U∞ is the free-stream velocity, ρ the fluid density and μ its viscosity. The study is based on the optimization of the geometry of a cavity placed at the rear part of the body with the aim of increasing the base pressure. Thus, we have used an optimization algorithm, which implements the adjoint method, to compute the two-dimensional incompressible turbulent steady flow sensitivity field of axial forces on both bodies, and consequently modify the shape of the cavity to reduce the induced drag force. In addition, we have performed three-dimensional numerical simulations using an IDDES model in order to analyze the drag reduction effect of the optimized cavities at higher Reynolds numbers. The results show average drag reductions of 17 and 25% for Re=2000, as well as more regularized and less chaotic wake flows behind both bodies. Supported by the Spanish MINECO, Junta de Andalucía and EU Funds under projects DPI2014-59292-C3-3-P and P11-TEP7495.

  8. Optimizing legacy molecular dynamics software with directive-based offload

    NASA Astrophysics Data System (ADS)

    Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; Thakkar, Foram M.; Plimpton, Steven J.

    2015-10-01

    Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In this paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel®  Xeon Phi™ coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS.

  9. Constrained Multiobjective Optimization Algorithm Based on Immune System Model.

    PubMed

    Qian, Shuqu; Ye, Yongqiang; Jiang, Bin; Wang, Jianhong

    2016-09-01

    An immune optimization algorithm, based on the model of the biological immune system, is proposed to solve multiobjective optimization problems with multimodal nonlinear constraints. First, the initial population is divided into a feasible nondominated population and an infeasible/dominated population. The feasible nondominated individuals focus on exploring the nondominated front through clone and hypermutation based on a proposed affinity design approach, while the infeasible/dominated individuals are exploited and improved via the simulated binary crossover and polynomial mutation operations. Then, to accelerate the convergence of the proposed algorithm, a transformation technique is applied to the combined population of the above two offspring populations. Finally, a crowded-comparison strategy is used to create the next generation population. In numerical experiments, a series of benchmark constrained multiobjective optimization problems are considered to evaluate the performance of the proposed algorithm, and it is also compared to several state-of-the-art algorithms in terms of the inverted generational distance and hypervolume indicators. The results indicate that the new method achieves competitive performance and even statistically significantly better results than previous algorithms on most of the benchmark suite. PMID:26285230

  10. Optimal alignment of mirror based pentaprisms for scanning deflectometric devices

    SciTech Connect

    Barber, Samuel K.; Geckeler, Ralf D.; Yashchuk, Valeriy V.; Gubarev, Mikhail V.; Buchheim, Jana; Siewert, Frank; Zeschke, Thomas

    2011-03-04

    In the recent work [Proc. of SPIE 7801, 7801-2/1-12 (2010), Opt. Eng. 50(5) (2011), in press], we have reported on improvement of the Developmental Long Trace Profiler (DLTP), a slope measuring profiler available at the Advanced Light Source Optical Metrology Laboratory, achieved by replacing the bulk pentaprism with a mirror based pentaprism (MBPP). An original experimental procedure for optimal mutual alignment of the MBPP mirrors has been suggested and verified with numerical ray tracing simulations. It has been experimentally shown that the optimally aligned MBPP allows the elimination of systematic errors introduced by inhomogeneity of the optical material and fabrication imperfections of the bulk pentaprism. In the present article, we provide the analytical derivation and verification of easily executed optimal alignment algorithms for two different designs of mirror based pentaprisms. We also provide an analytical description for the mechanism for reduction of the systematic errors introduced by a typical high quality bulk pentaprism. It is also shown that residual misalignments of an MBPP introduce entirely negligible systematic errors in surface slope measurements with scanning deflectometric devices.

  11. Divergent nematic susceptibility of optimally doped Fe-based superconductors

    NASA Astrophysics Data System (ADS)

    Chu, Jiun-Haw; Kuo, Hsueh-Hui; Fisher, Ian

    2015-03-01

    By performing differential elastoresistivity measurements on a wider range of iron based superconductors, including electron doped (Ba(Fe1-xCox)2As2, Ba(Fe1-xNix)2As2), hole doped (Ba1-xKxFe2As2), isovalent substituted pnictides (BaFe2(As1-xPx)2) and chalcogenides (FeTe1-xSex), we show that a divergent nematic susceptibility in the B2g symmetry channel appears to be a generic feature of optimally doped compositions. For the specific case of optimally "doped" BaFe2(As1-xPx)2, the nematic susceptibility can be well fitted by a Curie-Weiss temperature dependence with critical temperature close to zero, consistent with expectations of quantum critical behavior in the absence of disorder. However, for all the other optimally doped iron based superconductors, the nematic susceptibility exhibits a downward deviation from Curie-Weiss behavior, suggestive of an important role played by disorder.

  12. OPTIMIZATION BIAS IN ENERGY-BASED STRUCTURE PREDICTION

    PubMed Central

    Petrella, Robert J.

    2014-01-01

    Physics-based computational approaches to predicting the structure of macromolecules such as proteins are gaining increased use, but there are remaining challenges. In the current work, it is demonstrated that in energy-based prediction methods, the degree of optimization of the sampled structures can influence the prediction results. In particular, discrepancies in the degree of local sampling can bias the predictions in favor of the oversampled structures by shifting the local probability distributions of the minimum sampled energies. In simple systems, it is shown that the magnitude of the errors can be calculated from the energy surface, and for certain model systems, derived analytically. Further, it is shown that for energy wells whose forms differ only by a randomly assigned energy shift, the optimal accuracy of prediction is achieved when the sampling around each structure is equal. Energy correction terms can be used in cases of unequal sampling to reproduce the total probabilities that would occur under equal sampling, but optimal corrections only partially restore the prediction accuracy lost to unequal sampling. For multiwell systems, the determination of the correction terms is a multibody problem; it is shown that the involved cross-correlation multiple integrals can be reduced to simpler integrals. The possible implications of the current analysis for macromolecular structure prediction are discussed. PMID:25552783
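
    The local-sampling bias described here is easy to demonstrate numerically: when two structures have identical minimum-energy distributions but one is sampled more densely, the densely sampled one is reported as lower in energy far more than half the time. The sketch below is a self-contained toy experiment with an assumed exponential energy model, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 20_000
n_a, n_b = 50, 5          # structure A is locally sampled 10x more densely than structure B

# Both wells share the same underlying distribution of sampled energies (exponential above
# a common true minimum), so an unbiased comparison should favor each structure half the time.
wins_a = 0
for _ in range(trials):
    best_a = rng.exponential(1.0, n_a).min()   # lowest energy found around structure A
    best_b = rng.exponential(1.0, n_b).min()   # lowest energy found around structure B
    wins_a += best_a < best_b

print("P(A predicted lower in energy) = %.3f (unbiased value would be 0.5)" % (wins_a / trials))
```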

  13. Probabilistic risk assessment techniques help in identifying optimal equipment design for in-situ vitrification

    SciTech Connect

    Lucero, V.; Meale, B.M.; Purser, F.E.

    1990-01-01

    The analysis discussed in this paper was performed as part of the buried waste remediation efforts at the Idaho National Engineering Laboratory (INEL). The specific type of remediation discussed herein involves a thermal treatment process for converting contaminated soil and waste into a stable, chemically-inert form. Models of the proposed process were developed using probabilistic risk assessment (PRA) fault tree and event tree modeling techniques. The models were used to determine the appropriateness of the conceptual design by identifying potential hazards of system operations. Additional models were developed to represent the reliability aspects of the system components. By performing various sensitivities with the models, optimal design modifications are being identified to substantiate an integrated, cost-effective design representing minimal risk to the environment and/or public with maximum component reliability. 4 figs.

  14. Optimal structural design of the midship of a VLCC based on the strategy integrating SVM and GA

    NASA Astrophysics Data System (ADS)

    Sun, Li; Wang, Deyu

    2012-03-01

    In this paper a hybrid process of modeling and optimization, which integrates a support vector machine (SVM) and genetic algorithm (GA), was introduced to reduce the high time cost in structural optimization of ships. SVM, which is rooted in statistical learning theory and an approximate implementation of the method of structural risk minimization, can provide a good generalization performance in metamodeling the input-output relationship of real problems and consequently cuts down on high time cost in the analysis of real problems, such as FEM analysis. The GA, as a powerful optimization technique, possesses remarkable advantages for the problems that can hardly be optimized with common gradient-based optimization methods, which makes it suitable for optimizing models built by SVM. Based on the SVM-GA strategy, optimization of structural scantlings in the midship of a very large crude carrier (VLCC) ship was carried out according to the direct strength assessment method in common structural rules (CSR), which eventually demonstrates the high efficiency of SVM-GA in optimizing the ship structural scantlings under heavy computational complexity. The time cost of this optimization with SVM-GA has been sharply reduced, many more loops have been processed within a small amount of time and the design has been improved remarkably.
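
    A generic version of the SVM-GA strategy can be sketched as follows: fit a support vector regression metamodel on a limited number of expensive evaluations, then let a simple GA search the cheap surrogate. The sketch uses scikit-learn's SVR, an analytic stand-in for the FE analysis, and an ad hoc real-coded GA; none of these details come from the paper.

```python
import numpy as np
from sklearn.svm import SVR

def expensive_analysis(x):
    """Stand-in for an expensive FE structural response (e.g., a weight/stress trade-off)."""
    return np.sum((x - 0.3) ** 2, axis=-1) + 0.1 * np.sin(5 * x).sum(axis=-1)

rng = np.random.default_rng(0)
dim, n_samples = 4, 80

# 1) Build the metamodel from a limited design-of-experiments sample.
X = rng.uniform(0.0, 1.0, size=(n_samples, dim))
y = expensive_analysis(X)
surrogate = SVR(kernel="rbf", C=100.0, gamma="scale").fit(X, y)

# 2) Run a simple GA on the cheap surrogate instead of the expensive analysis.
pop = rng.uniform(0.0, 1.0, size=(60, dim))
for _ in range(150):
    fit = surrogate.predict(pop)
    parents = pop[np.argsort(fit)[:30]]                          # truncation selection
    mates = parents[rng.permutation(30)]
    alpha = rng.random((30, dim))
    children = np.clip(alpha * parents + (1 - alpha) * mates     # blend crossover
                       + rng.normal(0, 0.02, (30, dim)), 0.0, 1.0)  # small mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin(surrogate.predict(pop))]
print("surrogate optimum:", np.round(best, 3),
      "| true objective there: %.4f" % expensive_analysis(best))
```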

  15. Neural network based optimal control of HVAC&R systems

    NASA Astrophysics Data System (ADS)

    Ning, Min

    Heating, Ventilation, Air-Conditioning and Refrigeration (HVAC&R) systems have wide applications in providing a desired indoor environment for different types of buildings. It is well acknowledged that 30%-40% of the total energy generated is consumed by buildings, and HVAC&R systems alone account for more than 50% of the building energy consumption. Low operational efficiency, especially under partial load conditions, and poor control are among the reasons for such high energy consumption. To improve energy efficiency, HVAC&R systems should be properly operated to maintain a comfortable and healthy indoor environment under dynamic ambient and indoor conditions with the least energy consumption. This research focuses on the optimal operation of HVAC&R systems. The optimization problem is formulated and solved to find the optimal set points for the chilled water supply temperature, discharge air temperature and AHU (air handling unit) fan static pressure such that the indoor environment is maintained with the least chiller and fan energy consumption. To achieve this objective, a dynamic system model is developed first to simulate the system behavior under different control schemes and operating conditions. The system model is modular in structure, which includes a water-cooled vapor compression chiller model and a two-zone VAV system model. A fuzzy-set based extended transformation approach is then applied to investigate the uncertainties of this model caused by uncertain parameters and the sensitivities of the control inputs with respect to the model outputs of interest. A multi-layer feed forward neural network is constructed and trained in unsupervised mode to minimize the cost function, which comprises the overall energy cost and a penalty cost when one or more constraints are violated. After training, the network is implemented as a supervisory controller to compute the optimal settings for the system. In order to implement the optimal set points predicted by the
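
    The supervisory optimization described above can be illustrated with a toy cost of the same shape (energy plus a penalty when comfort constraints are violated), minimized over the three set points. The models and coefficients below are invented placeholders, and the sketch uses a generic derivative-free optimizer rather than the neural-network controller of the thesis.

      import numpy as np
      from scipy.optimize import minimize

      # Toy stand-ins for chiller/fan energy and zone comfort response.
      def total_energy(setpoints, ambient_temp):
          t_chw, t_da, p_fan = setpoints                      # chilled water, discharge air, fan pressure
          chiller_kw = 50.0 + 4.0 * (ambient_temp - t_chw)    # colder water -> more chiller work
          fan_kw = 5.0 + 0.8 * p_fan ** 1.5                   # higher static pressure -> more fan work
          return chiller_kw + fan_kw

      def comfort_penalty(setpoints):
          t_chw, t_da, p_fan = setpoints
          zone_temp = 18.0 + 0.5 * t_da - 0.3 * p_fan                 # crude zone response
          return 100.0 * max(0.0, abs(zone_temp - 23.0) - 1.0)        # penalize leaving the 22-24 C band

      def supervisory_cost(setpoints, ambient_temp=32.0):
          return total_energy(setpoints, ambient_temp) + comfort_penalty(setpoints)

      x0 = np.array([7.0, 13.0, 1.0])
      bounds = [(5.0, 10.0), (11.0, 16.0), (0.5, 2.5)]
      res = minimize(supervisory_cost, x0, bounds=bounds, method="Powell")
      print("optimal set points [t_chw, t_da, p_fan]:", res.x, "cost:", res.fun)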

  16. Optimizing transition states via kernel-based machine learning.

    PubMed

    Pozun, Zachary D; Hansen, Katja; Sheppard, Daniel; Rupp, Matthias; Müller, Klaus-Robert; Henkelman, Graeme

    2012-05-01

    We present a method for optimizing transition state theory dividing surfaces with support vector machines. The resulting dividing surfaces require no a priori information or intuition about reaction mechanisms. To generate optimal dividing surfaces, we apply a cycle of machine-learning and refinement of the surface by molecular dynamics sampling. We demonstrate that the machine-learned surfaces contain the relevant low-energy saddle points. The mechanisms of reactions may be extracted from the machine-learned surfaces in order to identify unexpected chemically relevant processes. Furthermore, we show that the machine-learned surfaces significantly increase the transmission coefficient for an adatom exchange involving many coupled degrees of freedom on a (100) surface when compared to a distance-based dividing surface. PMID:22583204

  17. Stress-Based Crossover Operator for Structural Topology Optimization

    NASA Astrophysics Data System (ADS)

    Li, Cuimin; Hiroyasu, Tomoyuki; Miki, Mitsunori

    In this paper, we propose a stress-based crossover (SX) operator to address the checkerboard-like material distribution and disconnected topologies that are common when a simple genetic algorithm (SGA) is applied to structural topology optimization problems (STOPs). A penalty function is defined to evaluate the fitness of each individual. A number of constrained problems are adopted to test the effectiveness of SX for STOPs. Comparison of 2-point crossover (2X) with SX indicates that SX can markedly suppress the checkerboard-like material distribution phenomenon. Comparison of evolutionary structural optimization (ESO) and SX demonstrates the global search ability and flexibility of SX. Experiments on a Michell-type problem verify the effectiveness of SX for STOPs. For a multi-loaded problem, SX finds alternative solutions with the same parameters, which shows the global search ability of the GA.

  18. A new approach to optimization-based defibrillation.

    PubMed

    Muzdeka, S; Barbieri, E

    2001-01-01

    The purpose of this paper is to develop a new model for optimal cardiac defibrillation, based on simultaneous minimization of energy consumption and defibrillation time requirements. In order to generate optimal defibrillation waveforms that will accomplish the objective stated above, a parameter rho has been introduced as part of the performance measure to weigh the relative importance of time and energy. All the results of this theoretical study have been obtained for the proposed model, under the assumption that cardiac tissue can be represented by a simple parallel resistor-capacitor circuit. It is well known from modern control theory that the selection of a numerical value of the weight factor is a matter of subjective judgment of the designer. However, it has been shown that defining a cost function can help in selecting a value for rho. Some results of the mathematical development of the algorithm and computer simulations will be included in the paper. PMID:11347410

  19. Physics-Based Prognostics for Optimizing Plant Operation

    SciTech Connect

    Leonard J. Bond; Don B. Jarrell

    2005-03-01

    Scientists at the Pacific Northwest National Laboratory (PNNL) have examined the necessity for optimization of energy plant operation using DSOM® (Decision Support Operation and Maintenance), and this has been deployed at several sites. This approach has been expanded to include a prognostics component and tested on a pilot-scale service water system, modeled on the design employed in a nuclear power plant. A key element in plant optimization is understanding and controlling the aging process of safety-specific nuclear plant components. This paper reports the development and demonstration of a physics-based approach to prognostic analysis that combines distributed computing, RF data links, the measurement of aging precursor metrics and their correlation with degradation rate and projected machine failure.

  20. Convergence and optimization of agent-based coalition formation

    NASA Astrophysics Data System (ADS)

    Wang, Yuanshi; Wu, Hong

    2005-03-01

    In this paper, we analyze the model of agent-based coalition formation in markets. Our goal is to study the convergence of the coalition formation and optimize agents’ strategies. We show that the model has a unique steady state (equilibrium) and prove that all solutions converge to it in the case that the maximum size of coalitions is not larger than three. The stability of the steady state in other cases is not studied while numerical simulations are given to show the convergence. The steady state, which determines both the global system gain and the average gain per agent, is expressed by the agents’ strategies in the coalition formation. Through the steady state, we give the relationship between the gains and the agents’ strategies, and present a series of results for the optimization of agents’ strategies.

  1. Optimization of surface acoustic wave-based rate sensors.

    PubMed

    Xu, Fangqian; Wang, Wen; Shao, Xiuting; Liu, Xinlu; Liang, Yong

    2015-01-01

    The optimization of a surface acoustic wave (SAW)-based rate sensor incorporating metallic dot arrays was performed by using the approach of partial-wave analysis in layered media. The optimal sensor chip designs, including the material choice of piezoelectric crystals and metallic dots, dot thickness, and sensor operation frequency, were determined theoretically. The theoretical predictions were confirmed experimentally by using the developed SAW sensor composed of differential delay line-oscillators and a metallic dot array deposited along the acoustic wave propagation path of the SAW delay lines. A significant improvement in sensor sensitivity was achieved when 128° YX LiNbO₃, a thicker Au dot array, and a low operation frequency were used to structure the sensor. PMID:26473865

  2. Optimization of Surface Acoustic Wave-Based Rate Sensors

    PubMed Central

    Xu, Fangqian; Wang, Wen; Shao, Xiuting; Liu, Xinlu; Liang, Yong

    2015-01-01

    The optimization of a surface acoustic wave (SAW)-based rate sensor incorporating metallic dot arrays was performed by using the approach of partial-wave analysis in layered media. The optimal sensor chip designs, including the material choice of piezoelectric crystals and metallic dots, dot thickness, and sensor operation frequency, were determined theoretically. The theoretical predictions were confirmed experimentally by using the developed SAW sensor composed of differential delay line-oscillators and a metallic dot array deposited along the acoustic wave propagation path of the SAW delay lines. A significant improvement in sensor sensitivity was achieved when 128° YX LiNbO3, a thicker Au dot array, and a low operation frequency were used to structure the sensor. PMID:26473865

  3. Optimizing Locomotion Controllers Using Biologically-Based Actuators and Objectives

    PubMed Central

    Wang, Jack M.; Hamner, Samuel R.; Delp, Scott L.; Koltun, Vladlen

    2015-01-01

    We present a technique for automatically synthesizing walking and running controllers for physically-simulated 3D humanoid characters. The sagittal hip, knee, and ankle degrees-of-freedom are actuated using a set of eight Hill-type musculotendon models in each leg, with biologically-motivated control laws. The parameters of these control laws are set by an optimization procedure that satisfies a number of locomotion task terms while minimizing a biological model of metabolic energy expenditure. We show that the use of biologically-based actuators and objectives measurably increases the realism of gaits generated by locomotion controllers that operate without the use of motion capture data, and that metabolic energy expenditure provides a simple and unifying measurement of effort that can be used for both walking and running control optimization. PMID:26251560

  4. Optimizing Dendritic Cell-Based Approaches for Cancer Immunotherapy

    PubMed Central

    Datta, Jashodeep; Terhune, Julia H.; Lowenfeld, Lea; Cintolo, Jessica A.; Xu, Shuwen; Roses, Robert E.; Czerniecki, Brian J.

    2014-01-01

    Dendritic cells (DC) are professional antigen-presenting cells uniquely suited for cancer immunotherapy. They induce primary immune responses, potentiate the effector functions of previously primed T-lymphocytes, and orchestrate communication between innate and adaptive immunity. The remarkable diversity of cytokine activation regimens, DC maturation states, and antigen-loading strategies employed in current DC-based vaccine design reflect an evolving, but incomplete, understanding of optimal DC immunobiology. In the clinical realm, existing DC-based cancer immunotherapy efforts have yielded encouraging but inconsistent results. Despite the recent U.S. Food and Drug Administration (FDA) approval of DC-based sipuleucel-T for metastatic castration-resistant prostate cancer, clinically effective DC immunotherapy as monotherapy for a majority of tumors remains a distant goal. Recent work has identified strategies that may allow for more potent “next-generation” DC vaccines. Additionally, multimodality approaches incorporating DC-based immunotherapy may improve clinical outcomes. PMID:25506283

  5. Risk-based versus deterministic explosives safety criteria

    SciTech Connect

    Wright, R.E.

    1996-12-01

    The Department of Defense Explosives Safety Board (DDESB) is actively considering ways to apply risk-based approaches in its decision-making processes. As such, an understanding of the impact of converting to risk-based criteria is required. The objectives of this project are to examine the benefits and drawbacks of risk-based criteria and to define the impact of converting from deterministic to risk-based criteria. Conclusions will be couched in terms that allow meaningful comparisons of deterministic and risk-based approaches. To this end, direct comparisons of the consequences and impacts of both deterministic and risk-based criteria at selected military installations are made. Deterministic criteria used in this report are those in DoD 6055.9-STD, "DoD Ammunition and Explosives Safety Standard." Risk-based criteria selected for comparison are those used by the government of Switzerland, "Technical Requirements for the Storage of Ammunition (TLM 75)." The risk-based criteria used in Switzerland were selected because they have been successfully applied for over twenty-five years.

  6. Behavior-Based Safety and Occupational Risk Management

    ERIC Educational Resources Information Center

    Geller, E. Scott

    2005-01-01

    The behavior-based approach to managing occupational risk and preventing workplace injuries is reviewed. Unlike the typical top-down control approach to industrial safety, behavior-based safety (BBS) provides tools and procedures workers can use to take personal control of occupational risks. Strategies the author and his colleagues have been…

  7. Strategies for Model Reduction: Comparing Different Optimal Bases.

    NASA Astrophysics Data System (ADS)

    Crommelin, D. T.; Majda, A. J.

    2004-09-01

    Several different ways of constructing optimal bases for efficient dynamical modeling are compared: empirical orthogonal functions (EOFs), optimal persistence patterns (OPPs), and principal interaction patterns (PIPs). Past studies on fluid-dynamical topics have pointed out that EOF-based models can have difficulties reproducing behavior dominated by irregular transitions between different dynamical states. This issue is addressed in a geophysical context, by assessing the ability of these strategies for efficient dynamical modeling to reproduce the chaotic regime transitions in a simple atmosphere model. The atmosphere model is the well-known Charney-DeVore model, a six-dimensional truncation of the equations describing barotropic flow over topography in a β-plane channel geometry. This model is able to generate regime transitions for well-chosen parameter settings. The models based on PIPs are found to be superior to the EOF- and OPP-based models, in spite of some undesirable sensitivities inherent to the PIP method.


  8. Interdependency between Risk Assessments for Self and Other in the Field of Comparative Optimism: The Contribution of Response Times

    ERIC Educational Resources Information Center

    Spitzenstetter, Florence; Schimchowitsch, Sarah

    2012-01-01

    By introducing a response-time measure in the field of comparative optimism, this study was designed to explore how people estimate risk to self and others depending on the evaluation order (self/other or other/self). Our results show the interdependency between self and other answers. Indeed, while response time for risk assessment for the self…

  9. Risk-based decisionmaking in the DOE: Challenges and status

    SciTech Connect

    Henry, C.J.; Alchowiak, J.; Moses, M.

    1995-12-31

    The primary mission of the Environmental Management Program is to protect human health and the environment, the first goal of which must be to address urgent risks and threats. Another is to provide for a safe workplace. Without credible risk assessments and good risk management practices, the central environmental goals cannot be met. Principles for risk analysis, which include principles for risk assessment, management, communication, and priority setting, were adopted. As recommended, Environmental Management is using risk-based decision making in its budget process and in the implementation of its program. The challenges presented in using a risk-based decision-making process are to integrate risk assessment methods and cultural and social values so as to produce meaningful priorities. The different laws and regulations governing the Department define risk differently in implementing activities to protect human health and the environment; therefore, assumptions and judgments in risk analysis vary. Currently, the Environmental Management Program is developing and improving a framework to incorporate risk into the budget process and to link the budget, compliance requirements, and risk reduction/pollution prevention activities.

  10. Optimization and determination of polycyclic aromatic hydrocarbons in biochar-based fertilizers.

    PubMed

    Chen, Ping; Zhou, Hui; Gan, Jay; Sun, Mingxing; Shang, Guofeng; Liu, Liang; Shen, Guoqing

    2015-03-01

    The agronomic benefit of biochar has attracted widespread attention to biochar-based fertilizers. However, the inevitable presence of polycyclic aromatic hydrocarbons in biochar is a matter of concern because of the health and ecological risks of these compounds. The strong adsorption of polycyclic aromatic hydrocarbons to biochar complicates their analysis and extraction from biochar-based fertilizers. In this study, we optimized and validated a method for determining the 16 priority polycyclic aromatic hydrocarbons in biochar-based fertilizers. Results showed that accelerated solvent extraction exhibited high extraction efficiency. Based on a Box-Behnken design with a triplicate central point, accelerated solvent extraction was used under the following optimal operational conditions: extraction temperature of 78°C, extraction time of 17 min, and two static cycles. The optimized method was validated by assessing the linearity of analysis, limit of detection, limit of quantification, recovery, and application to real samples. The results showed that the 16 polycyclic aromatic hydrocarbons exhibited good linearity, with a correlation coefficient of 0.996. The limits of detection varied between 0.001 (phenanthrene) and 0.021 mg/g (benzo[ghi]perylene), and the limits of quantification varied between 0.004 (phenanthrene) and 0.069 mg/g (benzo[ghi]perylene). The relative recoveries of the 16 polycyclic aromatic hydrocarbons were 70.26-102.99%. PMID:25546393

  11. Developing points-based risk-scoring systems in the presence of competing risks.

    PubMed

    Austin, Peter C; Lee, Douglas S; D'Agostino, Ralph B; Fine, Jason P

    2016-09-30

    Predicting the occurrence of an adverse event over time is an important issue in clinical medicine. Clinical prediction models and associated points-based risk-scoring systems are popular statistical methods for summarizing the relationship between a multivariable set of patient risk factors and the risk of the occurrence of an adverse event. Points-based risk-scoring systems are popular amongst physicians as they permit a rapid assessment of patient risk without the use of computers or other electronic devices. The use of such points-based risk-scoring systems facilitates evidence-based clinical decision making. There is a growing interest in cause-specific mortality and in non-fatal outcomes. However, when considering these types of outcomes, one must account for competing risks whose occurrence precludes the occurrence of the event of interest. We describe how points-based risk-scoring systems can be developed in the presence of competing events. We illustrate the application of these methods by developing risk-scoring systems for predicting cardiovascular mortality in patients hospitalized with acute myocardial infarction. Code in the R statistical programming language is provided for the implementation of the described methods. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:27197622

  12. Estimation of the effects of normal tissue sparing using equivalent uniform dose-based optimization.

    PubMed

    Senthilkumar, K; Maria Das, K J; Balasubramanian, K; Deka, A C; Patil, B R

    2016-01-01

    In this study, we intend to estimate the effects of normal tissue sparing between intensity modulated radiotherapy (IMRT) treatment plans generated with and without a dose volume (DV)-based physical cost function using equivalent uniform dose (EUD). Twenty prostate cancer patients were retrospectively selected for this study. For each patient, two IMRT plans were generated: (i) EUD-based optimization with a DV-based physical cost function to control inhomogeneity (EUDWith DV) and (ii) EUD-based optimization without a DV-based physical cost function to allow inhomogeneity (EUDWithout DV). The generated plans were prescribed a dose of 72 Gy in 36 fractions to the planning target volume (PTV). Mean dose, D30%, and D5% were evaluated for all organs at risk (OARs). Normal tissue complication probability was also calculated for all OARs using BioSuite software. The average volume of PTV for all patients was 103.02 ± 27 cm(3). The PTV mean dose for EUDWith DV plans was 73.67 ± 1.7 Gy, whereas that for EUDWithout DV plans was 80.42 ± 2.7 Gy. It was found that the PTV volume receiving more than 115% of the prescription dose was negligible in EUDWith DV plans, whereas it was 28% in EUDWithout DV plans. In almost all dosimetric parameters evaluated, dose to OARs in EUDWith DV plans was higher than in EUDWithout DV plans. Allowing inhomogeneous dose (EUDWithout DV) inside the target would achieve better normal tissue sparing compared to a homogeneous dose distribution (EUDWith DV). Hence, this inhomogeneous dose could be intentionally dumped on the high-risk volume to achieve high local control. Therefore, it was concluded that EUD-optimized plans offer the added advantage of less OAR dose as well as selectively boosting dose to gross tumor volume. PMID:27217624
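
    The quantity at the core of such plans is the generalized EUD. A minimal sketch of how it is computed from a differential DVH is given below; the DVH bins and volume-effect parameters are hypothetical.

      import numpy as np

      def generalized_eud(doses_gy, volumes, a):
          """Niemierko's generalized EUD: (sum_i v_i * d_i^a)^(1/a),
          with v_i the fractional volume receiving dose d_i."""
          v = np.asarray(volumes, dtype=float)
          v = v / v.sum()                      # normalize to fractional volumes
          d = np.asarray(doses_gy, dtype=float)
          return (np.sum(v * d ** a)) ** (1.0 / a)

      # Hypothetical differential DVH bins for a target and an organ at risk.
      target_dose = [70.0, 72.0, 74.0, 80.0]
      target_vol  = [0.10, 0.60, 0.25, 0.05]
      oar_dose    = [10.0, 30.0, 50.0, 65.0]
      oar_vol     = [0.40, 0.30, 0.20, 0.10]

      # a << 0 emphasizes cold spots in targets; a >> 1 emphasizes hot spots in serial OARs.
      print("target EUD (a = -10):", round(generalized_eud(target_dose, target_vol, -10), 2), "Gy")
      print("OAR    EUD (a = +8) :", round(generalized_eud(oar_dose, oar_vol, 8), 2), "Gy")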

  13. Estimation of the effects of normal tissue sparing using equivalent uniform dose-based optimization

    PubMed Central

    Senthilkumar, K.; Maria Das, K. J.; Balasubramanian, K.; Deka, A. C.; Patil, B. R.

    2016-01-01

    In this study, we intend to estimate the effects of normal tissue sparing between intensity modulated radiotherapy (IMRT) treatment plans generated with and without a dose volume (DV)-based physical cost function using equivalent uniform dose (EUD). Twenty prostate cancer patients were retrospectively selected for this study. For each patient, two IMRT plans were generated: (i) EUD-based optimization with a DV-based physical cost function to control inhomogeneity (EUDWith DV) and (ii) EUD-based optimization without a DV-based physical cost function to allow inhomogeneity (EUDWithout DV). The generated plans were prescribed a dose of 72 Gy in 36 fractions to the planning target volume (PTV). Mean dose, D30%, and D5% were evaluated for all organs at risk (OARs). Normal tissue complication probability was also calculated for all OARs using BioSuite software. The average volume of PTV for all patients was 103.02 ± 27 cm3. The PTV mean dose for EUDWith DV plans was 73.67 ± 1.7 Gy, whereas that for EUDWithout DV plans was 80.42 ± 2.7 Gy. It was found that the PTV volume receiving more than 115% of the prescription dose was negligible in EUDWith DV plans, whereas it was 28% in EUDWithout DV plans. In almost all dosimetric parameters evaluated, dose to OARs in EUDWith DV plans was higher than in EUDWithout DV plans. Allowing inhomogeneous dose (EUDWithout DV) inside the target would achieve better normal tissue sparing compared to a homogeneous dose distribution (EUDWith DV). Hence, this inhomogeneous dose could be intentionally dumped on the high-risk volume to achieve high local control. Therefore, it was concluded that EUD-optimized plans offer the added advantage of less OAR dose as well as selectively boosting dose to gross tumor volume. PMID:27217624

  14. Optimization-based mesh correction with volume and convexity constraints

    DOE PAGES Beta

    D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; Bochev, Pavel; Shashkov, Mikhail

    2016-02-24

    Here, we consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.
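
    A one-dimensional analogue of this volume-correction problem can be written down directly: minimize the distance to the source mesh subject to prescribed cell "volumes" (lengths) and positivity. The sketch below uses a generic SQP solver (SLSQP) rather than the authors' multigrid-preconditioned implementation; the mesh and target volumes are invented.

      import numpy as np
      from scipy.optimize import minimize

      source_nodes = np.array([0.0, 0.2, 0.45, 0.8, 1.0])     # given source mesh (endpoints fixed)
      target_vols  = np.array([0.25, 0.25, 0.25, 0.25])       # prescribed cell volumes

      def objective(x):
          # squared distance of the corrected interior nodes to the source mesh
          return np.sum((x - source_nodes[1:-1]) ** 2)

      def cell_volumes(x):
          nodes = np.concatenate(([source_nodes[0]], x, [source_nodes[-1]]))
          return np.diff(nodes)

      constraints = [
          # equality: the first three cell volumes match their prescribed values
          # (the fourth then follows because the endpoints are fixed)
          {"type": "eq", "fun": lambda x: cell_volumes(x)[:-1] - target_vols[:-1]},
          # inequality (mesh validity): every cell keeps strictly positive volume
          {"type": "ineq", "fun": lambda x: cell_volumes(x) - 1e-6},
      ]

      res = minimize(objective, source_nodes[1:-1], method="SLSQP", constraints=constraints)
      corrected = np.concatenate(([0.0], res.x, [1.0]))
      print("corrected nodes:", np.round(corrected, 4))
      print("cell volumes   :", np.round(np.diff(corrected), 4))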

  15. Simulation-based optimal Bayesian experimental design for nonlinear systems

    SciTech Connect

    Huan, Xun; Marzouk, Youssef M.

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics.
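
    The expected-information-gain objective at the heart of this framework can be estimated with a nested (two-stage) Monte Carlo sampler. The sketch below uses a deliberately simple one-parameter model with a Gaussian prior and Gaussian noise as stand-ins; it is not the polynomial-chaos accelerated estimator of the paper.

      import numpy as np
      from scipy.special import logsumexp

      rng = np.random.default_rng(1)

      # Toy nonlinear model y = g(theta, d) + noise; the design d changes sensitivity.
      def model(theta, d):
          return np.exp(-d) * theta + d * theta ** 2

      sigma = 0.1   # observation noise standard deviation

      def expected_information_gain(d, n_outer=300, n_inner=300):
          """Nested Monte Carlo estimate of the expected KL divergence from prior
          to posterior (expected information gain) for design d, theta ~ N(0, 1)."""
          theta_outer = rng.normal(0.0, 1.0, n_outer)
          y = model(theta_outer, d) + rng.normal(0.0, sigma, n_outer)
          # log p(y | theta_outer, d); Gaussian constants cancel against the evidence term
          log_like = -0.5 * ((y - model(theta_outer, d)) / sigma) ** 2
          # log p(y | d) estimated with an inner prior sample (evidence term)
          theta_inner = rng.normal(0.0, 1.0, n_inner)
          inner = -0.5 * ((y[:, None] - model(theta_inner[None, :], d)) / sigma) ** 2
          log_evidence = logsumexp(inner, axis=1) - np.log(n_inner)
          return np.mean(log_like - log_evidence)

      designs = np.linspace(0.0, 2.0, 9)
      eig = [expected_information_gain(d) for d in designs]
      print("best design d* =", designs[int(np.argmax(eig))])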

  16. Weather forecast-based optimization of integrated energy systems.

    SciTech Connect

    Zavala, V. M.; Constantinescu, E. M.; Krause, T.; Anitescu, M.

    2009-03-01

    In this work, we establish an on-line optimization framework to exploit detailed weather forecast information in the operation of integrated energy systems, such as buildings and photovoltaic/wind hybrid systems. We first discuss how the use of traditional reactive operation strategies that neglect the future evolution of the ambient conditions can translate into high operating costs. To overcome this problem, we propose the use of a supervisory dynamic optimization strategy that can lead to more proactive and cost-effective operations. The strategy is based on the solution of a receding-horizon stochastic dynamic optimization problem. This permits the direct incorporation of economic objectives, statistical forecast information, and operational constraints. To obtain the weather forecast information, we employ a state-of-the-art forecasting model initialized with real meteorological data. The statistical ambient information is obtained from a set of realizations generated by the weather model executed in an operational setting. We present proof-of-concept simulation studies to demonstrate that the proposed framework can lead to significant savings (more than 18% reduction) in operating costs.
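
    The receding-horizon idea can be sketched with a one-state building model: at every hour the controller re-optimizes the input over a forecast window and applies only the first move. The thermal model, price and forecast below are invented placeholders, not the paper's building or weather model.

      import numpy as np
      from scipy.optimize import minimize

      horizon, alpha = 12, 0.8                       # forecast length [h], thermal inertia
      price = 0.15                                   # energy price [$ / kWh], assumed constant
      forecast = 10 + 8 * np.sin(np.linspace(0, np.pi, 48))   # hypothetical ambient forecast [C]

      def simulate(T0, u, ambient):
          T, out = T0, []
          for uk, Ta in zip(u, ambient):
              T = alpha * T + (1 - alpha) * Ta + 0.5 * uk     # simple zone model
              out.append(T)
          return np.array(out)

      def stage_cost(u, T0, ambient):
          T = simulate(T0, u, ambient)
          return price * np.sum(u) + 2.0 * np.sum((T - 21.0) ** 2)   # energy + discomfort

      T, applied = 18.0, []
      for k in range(24):                            # closed loop over one day
          window = forecast[k:k + horizon]
          res = minimize(stage_cost, np.ones(horizon), args=(T, window),
                         bounds=[(0.0, 5.0)] * horizon, method="L-BFGS-B")
          u0 = res.x[0]                              # apply only the first optimized move
          T = alpha * T + (1 - alpha) * forecast[k] + 0.5 * u0
          applied.append(u0)
      print("first few control moves:", np.round(applied[:6], 2))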

  17. Robustness-Based Design Optimization Under Data Uncertainty

    NASA Technical Reports Server (NTRS)

    Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence

    2010-01-01

    This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to the solutions of the design problem that are least sensitive to variations in the input random variables.

  18. Optimization-based mesh correction with volume and convexity constraints

    NASA Astrophysics Data System (ADS)

    D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; Bochev, Pavel; Shashkov, Mikhail

    2016-05-01

    We consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.

  19. On optimizing distance-based similarity search for biological databases.

    PubMed

    Mao, Rui; Xu, Weijia; Ramakrishnan, Smriti; Nuckolls, Glen; Miranker, Daniel P

    2005-01-01

    Similarity search leveraging distance-based index structures is increasingly being used for both multimedia and biological database applications. We consider distance-based indexing for three important biological data types: protein k-mers with the metric PAM model, DNA k-mers with Hamming distance, and peptide fragmentation spectra with a pseudo-metric derived from cosine distance. To date, the primary driver of this research has been multimedia applications, where similarity functions are often Euclidean norms on high dimensional feature vectors. We develop results showing that the character of these biological workloads is different from multimedia workloads. In particular, they are not intrinsically very high dimensional and deserve different optimization heuristics. Based on MVP-trees, we develop a pivot selection heuristic seeking centers and show it outperforms the most widely used corner-seeking heuristic. Similarly, we develop a data partitioning approach sensitive to the actual data distribution in lieu of median splits. PMID:16447992
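
    A toy version of the pivot-selection comparison, for DNA k-mers under Hamming distance, is sketched below. The "center-seeking" and "corner-seeking" rules here are simplified stand-ins (closest to / farthest from a per-position consensus), not the exact MVP-tree heuristics of the paper.

      import numpy as np

      rng = np.random.default_rng(2)

      # Toy workload: random DNA 20-mers (encoded 0-3) compared under Hamming distance.
      kmers = rng.integers(0, 4, size=(2000, 20))

      def hamming(a, B):
          return np.sum(a != B, axis=1)

      # Per-position majority consensus acts as a crude "center" of the dataset.
      consensus = np.array([np.bincount(col, minlength=4).argmax() for col in kmers.T])
      d_to_consensus = hamming(consensus, kmers)
      center_pivot = kmers[int(np.argmin(d_to_consensus))]   # "center-seeking" stand-in
      corner_pivot = kmers[int(np.argmax(d_to_consensus))]   # "corner-seeking" stand-in

      # Compare how the two pivots spread the distances used to split partitions.
      for name, p in [("center", center_pivot), ("corner", corner_pivot)]:
          d = hamming(p, kmers)
          lo, hi = np.percentile(d, [25, 75])
          print(f"{name} pivot: distance IQR [{lo:.0f}, {hi:.0f}], std {d.std():.2f}")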

  20. Parallel performance optimizations on unstructured mesh-based simulations

    SciTech Connect

    Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; Huck, Kevin; Hollingsworth, Jeffrey; Malony, Allen; Williams, Samuel; Oliker, Leonid

    2015-06-01

    This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code, MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh, and develops methods to generate mesh partitioning with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.

  1. LAWRENCE RISK-BASED AIR SCREENING

    EPA Science Inventory

    The pediatric asthma rate in the city of Lawrence is the highest in the state of Massachusetts. This project will evaluate whether the cumulative risks due to air pollution in Lawrence are contributing to the high asthma rates and other respiratory problems. The project will...

  2. An optimized hybrid encode based compression algorithm for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Miao, Zhuang; Feng, Weiyi; He, Weiji; Chen, Qian; Gu, Guohua

    2013-12-01

    Compression is a key procedure in hyperspectral image processing because the massive data volume creates great difficulty in data storage and transmission. In this paper, a novel hyperspectral compression algorithm based on hybrid encoding, which combines optimized band grouping with the wavelet transform, is proposed. Given the characteristics of the correlation coefficients between adjacent spectral bands, an optimized band grouping and reference frame selection method is first utilized to group bands adaptively. Then, according to the number of bands in each group, the redundancy in the spatial and spectral domains is removed through spatial-domain entropy coding and a minimum-residual-based linear prediction method. Thus, embedded code streams are obtained by encoding the residual images using the improved embedded-zerotree-wavelet-based SPIHT encoding method. In the experiments, hyperspectral images collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) were used to validate the performance of the proposed algorithm. The results show that the proposed approach achieves a good performance in reconstructed image quality and computational complexity. The average peak signal-to-noise ratio (PSNR) is increased by 0.21-0.81 dB compared with other off-the-shelf algorithms under the same compression ratio.
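
    The adaptive band-grouping step can be sketched as follows: compute correlation coefficients between adjacent bands, start a new group whenever the correlation drops below a threshold, and pick as reference frame the band closest to the group mean. The synthetic cube and threshold below are illustrative only.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic "hyperspectral" cube: 64x64 pixels, 40 bands with drifting content
      # so that adjacent-band correlation varies along the spectrum.
      base = rng.normal(size=(64, 64))
      cube = np.stack([base * np.cos(0.15 * b) + 0.3 * rng.normal(size=(64, 64)) for b in range(40)])

      def adjacent_correlation(cube):
          flat = cube.reshape(cube.shape[0], -1)
          return np.array([np.corrcoef(flat[b], flat[b + 1])[0, 1] for b in range(len(flat) - 1)])

      def group_bands(corr, threshold=0.8):
          groups, current = [], [0]
          for b, r in enumerate(corr, start=1):
              if r >= threshold:
                  current.append(b)            # adjacent bands stay in the same group
              else:
                  groups.append(current)       # correlation drop starts a new group
                  current = [b]
          groups.append(current)
          return groups

      corr = adjacent_correlation(cube)
      groups = group_bands(corr)
      # Reference frame per group: the band whose image is closest to the group mean.
      refs = [g[int(np.argmin([np.abs(cube[g].mean(0) - cube[b]).sum() for b in g]))] for g in groups]
      print("band groups     :", groups)
      print("reference bands :", refs)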

  3. The biopharmaceutics risk assessment roadmap for optimizing clinical drug product performance.

    PubMed

    Selen, Arzu; Dickinson, Paul A; Müllertz, Anette; Crison, John R; Mistry, Hitesh B; Cruañes, Maria T; Martinez, Marilyn N; Lennernäs, Hans; Wigal, Tim L; Swinney, David C; Polli, James E; Serajuddin, Abu T M; Cook, Jack A; Dressman, Jennifer B

    2014-11-01

    The biopharmaceutics risk assessment roadmap (BioRAM) optimizes drug product development and performance by using therapy-driven target drug delivery profiles as a framework to achieve the desired therapeutic outcome. Hence, clinical relevance is directly built into early formulation development. Biopharmaceutics tools are used to identify and address potential challenges to optimize the drug product for patient benefit. For illustration, BioRAM is applied to four relatively common therapy-driven drug delivery scenarios: rapid therapeutic onset, multiphasic delivery, delayed therapeutic onset, and maintenance of target exposure. BioRAM considers the therapeutic target with the drug substance characteristics and enables collection of critical knowledge for development of a dosage form that can perform consistently for meeting the patient's needs. Accordingly, the key factors are identified and in vitro, in vivo, and in silico modeling and simulation techniques are used to elucidate the optimal drug delivery rate and pattern. BioRAM enables (1) feasibility assessment for the dosage form, (2) development and conduct of appropriate "learning and confirming" studies, (3) transparency in decision-making, (4) assurance of drug product quality during lifecycle management, and (5) development of robust linkages between the desired clinical outcome and the necessary product quality attributes for inclusion in the quality target product profile. PMID:25256402

  4. Adjoint-based optimization for understanding and suppressing jet noise

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan B.

    2011-08-01

    Advanced simulation tools, particularly large-eddy simulation techniques, are becoming capable of making quality predictions of jet noise for realistic nozzle geometries and at engineering relevant flow conditions. Increasing computer resources will be a key factor in improving these predictions still further. Quality prediction, however, is only a necessary condition for the use of such simulations in design optimization. Predictions do not themselves lead to quieter designs. They must be interpreted or harnessed in some way that leads to design improvements. As yet, such simulations have not yielded any simplifying principles that offer general design guidance. The turbulence mechanisms leading to jet noise remain poorly described in their complexity. In this light, we have implemented and demonstrated an aeroacoustic adjoint-based optimization technique that automatically calculates gradients that point the direction in which to adjust controls in order to improve designs. This is done with only a single flow solution and a solution of an adjoint system, which is solved at a computational cost comparable to that for the flow. Optimization requires iterations, but having the gradient information provided via the adjoint accelerates convergence in a manner that is insensitive to the number of parameters to be optimized. This paper, which follows from a presentation at the 2010 IUTAM Symposium on Computational Aero-Acoustics for Aircraft Noise Prediction, reviews recent and ongoing efforts by the author and co-workers. It provides a new formulation of the basic approach and demonstrates the approach on a series of model flows, culminating with a preliminary result for a turbulent jet.
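
    The key property of the adjoint approach, one forward and one backward sweep yielding the gradient with respect to all controls at once, can be shown on a toy discrete-time linear system; the actual application involves the compressible flow equations and their adjoint, not this model.

      import numpy as np

      rng = np.random.default_rng(4)
      n, m, N = 6, 3, 50                       # state dim, control dim, time steps
      A = 0.95 * np.eye(n) + 0.05 * rng.normal(size=(n, n))
      B = rng.normal(size=(n, m))

      def forward(u):
          """March the toy state equation and accumulate the cost functional."""
          x = np.zeros(n); xs = [x]; J = 0.0
          for k in range(N):
              x = A @ x + B @ u[k]
              xs.append(x)
              J += 0.5 * x @ x                 # stand-in for a radiated-sound measure
          return J, xs

      def adjoint_gradient(u):
          """One forward + one backward (adjoint) sweep gives dJ/du for all N*m controls."""
          J, xs = forward(u)
          lam = np.zeros(n)
          grad = np.zeros_like(u)
          for k in reversed(range(N)):
              lam = A.T @ lam + xs[k + 1]      # adjoint equation marched backwards in time
              grad[k] = B.T @ lam              # gradient w.r.t. the control at step k
          return J, grad

      u = rng.normal(size=(N, m))
      J, g = adjoint_gradient(u)

      # Finite-difference check of one gradient component.
      eps = 1e-6
      u_pert = u.copy(); u_pert[10, 1] += eps
      print("adjoint:", g[10, 1], " finite diff:", (forward(u_pert)[0] - J) / eps)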

  5. Vehicle Shield Optimization and Risk Assessment for Future Human Space Missions

    NASA Technical Reports Server (NTRS)

    Nounu, Hatem N.; Kim, Myung-Hee; Cucinotta, Francis A.

    2011-01-01

    As the focus of future human space missions shifts to destinations beyond low Earth orbit such as Near Earth Objects (NEO), the moon, or Mars, risks associated with extended stays in a hostile radiation environment need to be well understood and assessed. Since future spacecraft designs and shapes are evolving, continuous assessments of shielding and radiation risks are needed. In this study, we use a predictive software capability that calculates risks to humans inside a spacecraft prototype that builds on previous designs. The software uses the CAD software Pro/Engineer and the Fishbowl tool kit to quantify radiation shielding provided by the spacecraft geometry by calculating the areal density seen at a certain point (dose point) inside the spacecraft. Shielding results are used by NASA-developed software, BRYNTRN, to quantify organ doses received in a human body located in the vehicle in case of a solar particle event (SPE) during such prolonged space missions. Organ doses are used to quantify risks to astronauts' health and life using the NASA Space Cancer Model. The software can also locate shielding weak points (hotspots) on the spacecraft's outer surface. This capability is used to reinforce weak areas in the design. Results of shielding optimization and risk calculation on an exploration vehicle design for missions of 6 months and 30 months are provided in this study. The vehicle capsule is made of an aluminum shell that includes the main cabin and airlock. The capsule contains 5 sets of racks that surround the working and living areas. A water shelter is provided in the main cabin of the vehicle to enhance shielding in case of an SPE.

  6. Changing the patterns of failure for high-risk prostate cancer patients by optimizing local control

    SciTech Connect

    Stock, Richard G. . E-mail: richard.stock@msnyuhealth.org; Ho, Alice; Cesaretti, Jamie A.; Stone, Nelson N.

    2006-10-01

    Purpose: Standard therapies for high-risk prostate cancer have resulted in suboptimal outcomes with both local and distant failures. Prostate-specific antigen (PSA) and distant metastases rates as well as biopsy outcomes are reported after a regimen of trimodality therapy with hormonal, radioactive seed, and external beam radiation therapy to demonstrate how patterns of failure are changed when local control is optimized. Methods and Materials: From 1994 to 2003, a total of 360 patients with high-risk prostate cancer were treated with trimodality therapy. Patients were defined as being at high risk if they possessed at least one of the following high-risk features: Gleason score 8 to 10, PSA >20, clinical stage t2c to t3, or two or more intermediate risk features: Gleason score 7, PSA >10 to 20, or stage t2b. Patients were followed for a median of 4.25 years (range, 2 to 10 years). Results: The actuarial 7-year freedom from PSA failure and freedom from distant metastases (FFDM) rates were 83% and 89% respectively. Patients (n = 51) developing PSA failure exhibited aggressive disease behavior with short PSA doubling times (median, 5 months) and a 7-year freedom from distant metastases rate of 48%. Local control was high. The last posttreatment biopsy results were negative in 97% of cases (68 of 70 patients). In multivariate analysis, only PSA >20 predicted biochemical failure (p = 0.04), and only seminal vesicle status predicted developing distant failure (p = 0.01). Conclusions: Trimodality therapy results in excellent local control that alters patterns of failure, resulting in similar actuarial biochemical and distant failure rates. Most failures appear to be distant and exhibit biologically aggressive behavior.

  7. Optimal Cutoff Points of Anthropometric Parameters to Identify High Coronary Heart Disease Risk in Korean Adults

    PubMed Central

    2016-01-01

    Several published studies have reported the need to change the cutoff points of anthropometric indices for obesity. We therefore conducted a cross-sectional study to estimate anthropometric cutoff points predicting high coronary heart disease (CHD) risk in Korean adults. We analyzed the Korean National Health and Nutrition Examination Survey data from 2007 to 2010. A total of 21,399 subjects aged 20 to 79 yr were included in this study (9,204 men and 12,195 women). We calculated the 10-yr Framingham coronary heart disease risk score for all individuals. We then estimated receiver-operating characteristic (ROC) curves for body mass index (BMI), waist circumference, and waist-to-height ratio to predict a 10-yr CHD risk of 20% or more. For sensitivity analysis, we conducted the same analysis for a 10-yr CHD risk of 10% or more. For a CHD risk of 20% or more, the area under the curve of waist-to-height ratio was the highest, followed by waist circumference and BMI. The optimal cutoff points in men and women were 22.7 kg/m2 and 23.3 kg/m2 for BMI, 83.2 cm and 79.7 cm for waist circumference, and 0.50 and 0.52 for waist-to-height ratio, respectively. In sensitivity analysis, the results were the same as those reported above except for BMI in women. Our results support the re-classification of anthropometric indices and suggest the clinical use of waist-to-height ratio as a marker for obesity in Korean adults. PMID:26770039
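
    Cutoff estimation of this kind is typically done from the ROC curve. The sketch below uses synthetic data and the Youden index as one common cutoff criterion (the abstract does not state which criterion the authors used); the data-generating link is invented for illustration only.

      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      rng = np.random.default_rng(5)

      # Synthetic stand-in data: waist-to-height ratio and a binary "10-yr CHD risk >= 20%" label.
      n = 5000
      whtr = rng.normal(0.52, 0.06, n)
      p_high_risk = 1 / (1 + np.exp(-(whtr - 0.55) * 40))     # assumed link, illustration only
      high_risk = rng.random(n) < p_high_risk

      fpr, tpr, thresholds = roc_curve(high_risk, whtr)
      youden = tpr - fpr                                       # one common cutoff criterion
      best = int(np.argmax(youden))

      print("AUC              :", round(roc_auc_score(high_risk, whtr), 3))
      print("optimal cutoff   :", round(thresholds[best], 3))
      print("sens / spec there:", round(tpr[best], 3), "/", round(1 - fpr[best], 3))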

  8. Development of optimization-based probabilistic earthquake scenarios for the city of Tehran

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Peyghaleh, E.

    2016-01-01

    This paper presents the methodology, and a practical example, for applying an optimization process to select earthquake scenarios that best represent the probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes representing the long-term seismotectonic characteristics of the region. The simulation process uses Monte-Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computational power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach results in a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture, by minimizing the error between hazard curves derived from the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate a set of events worth 10,000 years, consisting of some 84,000 earthquakes. The optimization model is then performed multiple times with various input data, taking into account the probabilistic seismic hazard for the city of Tehran as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return period error-weight is also assessed. The methodology could speed up full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes; however, it requires far less

  9. An Improved Teaching-Learning-Based Optimization with the Social Character of PSO for Global Optimization.

    PubMed

    Zou, Feng; Chen, Debao; Wang, Jiangtao

    2016-01-01

    An improved teaching-learning-based optimization combined with the social character of PSO (TLBO-PSO), which considers the influence of the teacher's behavior on the students and the mean grade of the class, is proposed in this paper to find the global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified: the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO might stop when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the process of removing duplicate individuals in the original TLBO is not adopted in the improved algorithm. Moreover, the probability of local convergence of the improved method is decreased by a mutation operator. The effectiveness of the proposed method is tested on some benchmark functions, and the results are competitive with respect to some other methods. PMID:27057157
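
    A compact sketch of a TLBO teacher phase augmented with a PSO-like pull toward the best individual, plus a mutation operator, is given below; the update coefficients are illustrative and not the exact formula of the paper.

      import numpy as np

      rng = np.random.default_rng(6)

      def sphere(x):                 # benchmark objective to minimize
          return np.sum(x ** 2, axis=-1)

      dim, pop_size, iters = 10, 30, 200
      pop = rng.uniform(-5, 5, (pop_size, dim))

      for _ in range(iters):
          fit = sphere(pop)
          teacher = pop[np.argmin(fit)]
          mean = pop.mean(axis=0)
          tf = rng.integers(1, 3)                       # teaching factor in {1, 2}
          # Modified teacher phase: move using the old position, the class mean and the
          # best (teacher) position; coefficients here are illustrative, not the paper's.
          r1, r2 = rng.random((pop_size, dim)), rng.random((pop_size, dim))
          candidate = pop + r1 * (teacher - tf * mean) + r2 * (teacher - pop)
          # Mutation operator to reduce the chance of premature convergence.
          mutate = rng.random((pop_size, dim)) < 0.02
          candidate = np.where(mutate, rng.uniform(-5, 5, (pop_size, dim)), candidate)
          # Greedy selection: keep a candidate only if it improves the individual.
          improved = sphere(candidate) < fit
          pop[improved] = candidate[improved]

      print("best objective found:", sphere(pop).min())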

  10. An Improved Teaching-Learning-Based Optimization with the Social Character of PSO for Global Optimization

    PubMed Central

    Zou, Feng; Chen, Debao; Wang, Jiangtao

    2016-01-01

    An improved teaching-learning-based optimization combined with the social character of PSO (TLBO-PSO), which considers the influence of the teacher's behavior on the students and the mean grade of the class, is proposed in this paper to find the global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified: the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO might stop when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the process of removing duplicate individuals in the original TLBO is not adopted in the improved algorithm. Moreover, the probability of local convergence of the improved method is decreased by a mutation operator. The effectiveness of the proposed method is tested on some benchmark functions, and the results are competitive with respect to some other methods. PMID:27057157

  11. Optimal multi-floor plant layout based on the mathematical programming and particle swarm optimization

    PubMed Central

    LEE, Chang Jun

    2015-01-01

    In research on plant layout optimization, the main goal is to minimize the costs of pipelines and pumping between connected equipment under various constraints. What is lacking in previous studies, however, is the transformation of various heuristics and safety regulations into mathematical equations. For example, proper safety distances between pieces of equipment have to be maintained to prevent dangerous accidents in a complex plant. Moreover, most studies have handled single-floor plants, whereas many multi-floor plants have been constructed over the last decade. Therefore, a proper algorithm handling various regulations and multi-floor plants should be developed. In this study, a Mixed Integer Non-Linear Programming (MINLP) problem including safety distances, maintenance spaces, etc. is formulated based on mathematical equations. The objective function is the sum of pipeline and pumping costs. Also, various safety and maintenance issues are transformed into inequality or equality constraints. However, this problem is very hard to solve because of its complex nonlinear constraints; thus, conventional MINLP solvers that rely on derivatives of the equations cannot be used. In this study, the Particle Swarm Optimization (PSO) technique is therefore employed. An ethylene oxide plant is illustrated to verify the efficacy of this study. PMID:26027708
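
    A stripped-down version of the idea, piping and pumping costs with safety distances handled as penalties, optimized by a small PSO, looks like the sketch below; the plant, connectivity and cost coefficients are invented, unlike the ethylene oxide case study of the paper.

      import numpy as np

      rng = np.random.default_rng(7)

      # Toy 2-floor layout: 4 equipment items, each with coordinates (x, y, floor in {0, 1}).
      connections = [(0, 1), (1, 2), (2, 3)]         # pairs that need piping
      safety_dist, floor_height = 3.0, 5.0

      def cost(layout):
          pts = layout.reshape(4, 3)
          xy, floor = pts[:, :2], np.round(np.clip(pts[:, 2], 0, 1))
          total, penalty = 0.0, 0.0
          for i, j in connections:                   # piping + vertical pumping cost
              total += np.linalg.norm(xy[i] - xy[j]) + 2.0 * floor_height * abs(floor[i] - floor[j])
          for i in range(4):                         # safety-distance constraint as a penalty
              for j in range(i + 1, 4):
                  if floor[i] == floor[j]:
                      gap = np.linalg.norm(xy[i] - xy[j])
                      penalty += max(0.0, safety_dist - gap) ** 2
          return total + 50.0 * penalty

      # Minimal PSO over the 12 layout variables.
      n_particles, dims = 40, 12
      x = rng.uniform(0, 10, (n_particles, dims)); v = np.zeros_like(x)
      pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
      for _ in range(300):
          gbest = pbest[np.argmin(pbest_f)]
          r1, r2 = rng.random(x.shape), rng.random(x.shape)
          v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
          x = np.clip(x + v, 0, 10)
          f = np.array([cost(p) for p in x])
          better = f < pbest_f
          pbest[better], pbest_f[better] = x[better], f[better]
      print("best layout cost:", pbest_f.min())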

  12. Optimal multi-floor plant layout based on the mathematical programming and particle swarm optimization.

    PubMed

    Lee, Chang Jun

    2015-01-01

    In research on plant layout optimization, the main goal is to minimize the costs of pipelines and pumping between connected equipment under various constraints. What is lacking in previous studies, however, is the transformation of various heuristics and safety regulations into mathematical equations. For example, proper safety distances between pieces of equipment have to be maintained to prevent dangerous accidents in a complex plant. Moreover, most studies have handled single-floor plants, whereas many multi-floor plants have been constructed over the last decade. Therefore, a proper algorithm handling various regulations and multi-floor plants should be developed. In this study, a Mixed Integer Non-Linear Programming (MINLP) problem including safety distances, maintenance spaces, etc. is formulated based on mathematical equations. The objective function is the sum of pipeline and pumping costs. Also, various safety and maintenance issues are transformed into inequality or equality constraints. However, this problem is very hard to solve because of its complex nonlinear constraints; thus, conventional MINLP solvers that rely on derivatives of the equations cannot be used. In this study, the Particle Swarm Optimization (PSO) technique is therefore employed. An ethylene oxide plant is illustrated to verify the efficacy of this study. PMID:26027708

  13. Exploring sub-optimal use of an electronic risk assessment tool for venous thromboembolism.

    PubMed

    Baysari, Melissa T; Jackson, Nicola; Ramasamy, Sheena; Santiago, Priscila; Xiong, Juan; Westbrook, Johanna; Omari, Abdullah; Day, Richard O

    2016-07-01

    International guidelines and consensus groups recommend using a risk assessment tool (RAT) to assess Venous Thromboembolism (VTE) risk prior to the prescription of prophylaxis. We set out to examine how an electronic RAT was being used (i.e. if by the right clinician, at the right time, for the right purpose) and to identify factors influencing utilization of the RAT. A sample of 112 risk assessments was audited and 12 prescribers were interviewed. The RAT was used as intended in only 40 (35.7%) cases (i.e. completed by a doctor within 24 h of admission, prior to the prescription of prophylaxis). We identified several reasons for sub-optimal use of the RAT, including beliefs about the need for a RAT, poor awareness of the tool, and poor RAT design. If a user-centred approach had been adopted, it is likely that a RAT would not have been implemented or that problematic design issues would have been identified. PMID:26995037

  14. Risk-based zoning for urbanizing floodplains.

    PubMed

    Porse, Erik

    2014-01-01

    Urban floodplain development brings economic benefits and enhanced flood risks. Rapidly growing cities must often balance the economic benefits and increased risks of floodplain settlement. Planning can provide multiple flood mitigation and environmental benefits by combining traditional structural measures such as levees, increasingly popular landscape and design features (green infrastructure), and non-structural measures such as zoning. Flexibility in both structural and non-structural options, including zoning procedures, can reduce flood risks. This paper presents a linear programming formulation to assess cost-effective urban floodplain development decisions that consider benefits and costs of development along with expected flood damages. It uses a probabilistic approach to identify combinations of land-use allocations (residential and commercial development, flood channels, distributed runoff management) and zoning regulations (development zones in channel) to maximize benefits. The model is applied to a floodplain planning analysis for an urbanizing region in the Baja Sur peninsula of Mexico. The analysis demonstrates how (1) economic benefits drive floodplain development, (2) flexible zoning can improve economic returns, and (3) cities can use landscapes, enhanced by technology and design, to manage floods. The framework can incorporate additional green infrastructure benefits, and bridges typical disciplinary gaps for planning and engineering. PMID:25500464
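
    A toy linear-programming version of such a land-allocation problem, with invented benefit, damage and zoning numbers, can be written as follows.

      import numpy as np
      from scipy.optimize import linprog

      # Decision variables (hectares): [residential, commercial, flood channel, green infrastructure]
      total_area = 100.0
      benefit = np.array([4.0, 6.0, 0.0, 0.5])        # annualized net benefit per hectare (assumed)
      base_damage = 120.0                              # expected annual flood damage with no mitigation
      mitigation = np.array([0.0, 0.0, 2.5, 1.0])     # damage reduction per hectare of channel / GI

      # Maximize: benefit . x - (base_damage - mitigation . x)  ==  (benefit + mitigation) . x - const
      c = -(benefit + mitigation)                      # linprog minimizes, so negate

      A_ub = [np.ones(4),                              # land budget: sum(x) <= total_area
              mitigation]                              # damage cannot go below zero: mitigation . x <= base_damage
      b_ub = [total_area, base_damage]
      bounds = [(0, None), (0, None), (10.0, None), (0, None)]   # assumed zoning rule: >= 10 ha of channel

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
      for name, area in zip(["residential", "commercial", "channel", "green infra"], res.x):
          print(f"{name:12s}: {area:6.1f} ha")
      print("net benefit:", round(-res.fun - base_damage, 1))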

  15. Risk Assessment and Hierarchical Risk Management of Enterprises in Chemical Industrial Parks Based on Catastrophe Theory

    PubMed Central

    Chen, Yu; Song, Guobao; Yang, Fenglin; Zhang, Shushen; Zhang, Yun; Liu, Zhenyu

    2012-01-01

    According to risk systems theory and the characteristics of the chemical industry, an index system was established for risk assessment of enterprises in chemical industrial parks (CIPs) based on the inherent risk of the source, effectiveness of the prevention and control mechanism, and vulnerability of the receptor. A comprehensive risk assessment method based on catastrophe theory was then proposed and used to analyze the risk levels of ten major chemical enterprises in the Songmu Island CIP, China. According to the principle of equal distribution function, the chemical enterprise risk level was divided into the following five levels: 1.0 (very safe), 0.8 (safe), 0.6 (generally recognized as safe, GRAS), 0.4 (unsafe), 0.2 (very unsafe). The results revealed five enterprises (50%) with an unsafe risk level, and another five enterprises (50%) at the generally recognized as safe risk level. This method solves the multi-objective evaluation and decision-making problem. Additionally, this method involves simple calculations and provides an effective technique for risk assessment and hierarchical risk management of enterprises in CIPs. PMID:23208298

  16. A Localization Method for Multistatic SAR Based on Convex Optimization

    PubMed Central

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated and the influence of BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in SAR image, the model functions of T/R pairs are constructed. Each model function’s maximum is on the circumference of the ellipse which is the iso-range for its model function’s T/R pair. Secondly, the target function whose maximum is located at the position of the target is obtained by adding all model functions. Thirdly, the target function is optimized based on gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectivity of the localization approach is validated by simulation experiment. PMID:26566031
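
    The summed-model-function idea can be reproduced in a few lines: each T/R pair contributes a function peaking on its iso-range ellipse, and the target estimate is obtained by gradient ascent on the sum. The geometry, noise level and step size below are illustrative; the paper additionally applies principal component analysis during the iterations, which is omitted here.

      import numpy as np

      rng = np.random.default_rng(8)

      # Transmitter / receiver positions of three T/R pairs and the true target (km).
      tx = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
      rx = np.array([[20.0, 5.0], [5.0, 20.0], [18.0, 18.0]])
      target = np.array([12.0, 9.0])

      # Bistatic range sums measured by each pair (with a little estimation error).
      brs = np.array([np.linalg.norm(target - t) + np.linalg.norm(target - r) for t, r in zip(tx, rx)])
      brs += rng.normal(0.0, 0.05, brs.shape)

      def objective(p, width=1.0):
          """Sum of per-pair model functions; each peaks on that pair's iso-range ellipse."""
          val = 0.0
          for t, r, s in zip(tx, rx, brs):
              resid = np.linalg.norm(p - t) + np.linalg.norm(p - r) - s
              val += np.exp(-(resid / width) ** 2)
          return val

      # Gradient ascent (central-difference gradient) from a rough initial guess.
      p, step, eps = np.array([10.0, 8.0]), 0.05, 1e-4
      for _ in range(3000):
          grad = np.array([(objective(p + eps * e) - objective(p - eps * e)) / (2 * eps)
                           for e in np.eye(2)])
          p = p + step * grad
      print("estimated target:", np.round(p, 3), " true target:", target)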

  17. Optimal scheduling of multispacecraft refueling based on cooperative maneuver

    NASA Astrophysics Data System (ADS)

    Du, Bingxiao; Zhao, Yong; Dutta, Atri; Yu, Jing; Chen, Xiaoqian

    2015-06-01

    The scheduling of multispacecraft refueling based on cooperative maneuver in a circular orbit is studied in this paper. In the proposed scheme, both the single service vehicle (SSV) and the target satellite (TS) perform orbital transfers to complete the rendezvous at the service places. When a TS is refueled by the SSV, it returns to its original working slot to continue its normal function. In this way, the SSV refuels the TSs one by one. An MINLP model for the mission is first built, then a two-level hybrid optimization approach is proposed for determining the strategy, and the optimal solution is successfully obtained by using an algorithm that combines the Multi-island Genetic Algorithm with Sequential Quadratic Programming. Results show that, in the example considered, the cooperative strategy saves around 27.31% of fuel compared with the non-cooperative strategy in which only the SSV maneuvers. Three conclusions can be drawn from the numerical simulations for evenly distributed constellations. First, in the cooperative strategy one of the service positions is the initial location of the SSV, the other service positions coincide with target slots (i.e., not all targets need to maneuver), and more than one TS may be serviced at a given service position. Second, the efficiency gains of the cooperative strategy are higher for larger transferred fuel mass. Third, the cooperative strategy is less efficient for targets with larger spacecraft mass.

  18. Optimal pattern distributions in Rete-based production systems

    NASA Technical Reports Server (NTRS)

    Scott, Stephen L.

    1994-01-01

    Since its introduction into the AI community in the early 1980s, the Rete algorithm has been widely used. This algorithm has formed the basis for many AI tools, including NASA's CLIPS. One drawback of Rete-based implementations, however, is that the network structures used internally by the Rete algorithm make it sensitive to the arrangement of individual patterns within rules. Thus, while rules may be more or less arbitrarily placed within source files, the distribution of individual patterns within these rules can significantly affect overall system performance. Some heuristics have been proposed to optimize pattern placement; however, these suggestions can be conflicting. This paper describes a systematic effort to measure the effect of pattern distribution on production system performance. An overview of the Rete algorithm is presented to provide context. The methods used to explore the pattern-ordering problem are described, using internal production system metrics such as the number of partial matches, and coarse-grained operating system data such as memory usage and time. The results of this study should be of interest to those developing and optimizing software for Rete-based production systems.

  19. A Localization Method for Multistatic SAR Based on Convex Optimization.

    PubMed

    Zhong, Xuqi; Wu, Junjie; Yang, Jianyu; Sun, Zhichao; Huang, Yuling; Li, Zhongyu

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated and the influence of BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in SAR image, the model functions of T/R pairs are constructed. Each model function's maximum is on the circumference of the ellipse which is the iso-range for its model function's T/R pair. Secondly, the target function whose maximum is located at the position of the target is obtained by adding all model functions. Thirdly, the target function is optimized based on gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectivity of the localization approach is validated by simulation experiment. PMID:26566031

  20. [Study on the land use optimization based on PPI].

    PubMed

    Wu, Xiao-Feng; Li, Ting

    2012-03-01

    Land use type and management method, which are greatly influenced by human activities, are among the most important factors in non-point source pollution. Based on the collection and analysis of non-point source pollution control methods and the concept of the three ecological fronts, nine optimized land use scenarios were designed according to a rationality analysis of the current land use situation in three typical small watersheds in the Miyun reservoir basin. Taking the Caojialu watershed as an example, the environmental influence of the different scenarios was analyzed and compared on the basis of the potential pollution index (PPI) and the river section potential pollution index (R-PPI), and the best combination scenario was identified. Designing and comparing land use scenarios on the basis of PPI and R-PPI can help identify the best combination of land use type and management method, optimize the spatial distribution and management of land use in a basin, reduce soil erosion, and provide powerful support for the formulation of land use plans and pollution control projects. PMID:22624396

  1. Research on Taxiway Path Optimization Based on Conflict Detection.

    PubMed

    Zhou, Hang; Jiang, Xinxin

    2015-01-01

    Taxiway path planning is an effective measure for making full use of airport resources, and optimized paths can ensure aircraft safety during taxiing. In this paper, taxiway path planning based on conflict detection is considered. The specific steps are as follows: first, the A* algorithm is improved by adding a conflict detection strategy to search for the shortest safe path in the static taxiway network. Then, according to the taxiing speed of the aircraft, a timetable for each node is determined, and the safety interval is treated as a constraint to judge whether a conflict exists. An intelligent initial path planning model is established based on these results. Finally, an example is run in an airport simulation environment, in which conflicts are detected and resolved to ensure safety. The results indicate that the model established in this paper is effective and feasible. A comparison of the improved A* algorithm with other intelligent algorithms shows that the improved A* algorithm has clear advantages: it not only optimizes the taxiway path but also ensures safety during taxiing and improves operational efficiency. PMID:26226485
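
    The core idea, shortest-path search that rejects moves violating a safety interval, can be sketched on a toy graph. The taxiway graph, travel times, occupancy intervals, and safety interval below are illustrative assumptions, not data from the paper.

```python
# Toy sketch of taxi routing with a conflict check: standard A* on a small,
# hypothetical taxiway graph, where a successor node is rejected if another
# aircraft occupies it within a safety interval of our arrival time.
import heapq

graph = {"A": {"B": 2, "C": 4}, "B": {"C": 1, "D": 5},
         "C": {"D": 2}, "D": {}}
heuristic = {"A": 3, "B": 2, "C": 1, "D": 0}      # admissible lower bounds to D
occupied = {"C": [(2, 4)]}                        # node C is busy during t in [2, 4]
SAFETY = 1.0                                      # required separation (time units)

def conflict(node, t):
    return any(start - SAFETY <= t <= end + SAFETY
               for start, end in occupied.get(node, []))

def a_star(start, goal):
    frontier = [(heuristic[start], 0.0, start, [start])]
    best = {}
    while frontier:
        f, t, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, t
        if best.get(node, float("inf")) <= t:
            continue
        best[node] = t
        for nxt, cost in graph[node].items():
            arrival = t + cost
            if conflict(nxt, arrival):
                continue                          # would violate separation
            heapq.heappush(frontier,
                           (arrival + heuristic[nxt], arrival, nxt, path + [nxt]))
    return None, float("inf")

print(a_star("A", "D"))   # the direct A-C-D route is blocked, so A-B-D is chosen
```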

  2. Research on Taxiway Path Optimization Based on Conflict Detection

    PubMed Central

    Zhou, Hang; Jiang, Xinxin

    2015-01-01

    Taxiway path planning is an effective measure for making full use of airport resources, and optimized paths can ensure aircraft safety during taxiing. In this paper, taxiway path planning based on conflict detection is considered. The specific steps are as follows: first, the A* algorithm is improved by adding a conflict detection strategy to search for the shortest safe path in the static taxiway network. Then, according to the taxiing speed of the aircraft, a timetable for each node is determined, and the safety interval is treated as a constraint to judge whether a conflict exists. An intelligent initial path planning model is established based on these results. Finally, an example is run in an airport simulation environment, in which conflicts are detected and resolved to ensure safety. The results indicate that the model established in this paper is effective and feasible. A comparison of the improved A* algorithm with other intelligent algorithms shows that the improved A* algorithm has clear advantages: it not only optimizes the taxiway path but also ensures safety during taxiing and improves operational efficiency. PMID:26226485

  3. Discrete-Time ARMAv Model-Based Optimal Sensor Placement

    SciTech Connect

    Song Wei; Dyke, Shirley J.

    2008-07-08

    This paper concentrates on the optimal sensor placement problem in ambient vibration based structural health monitoring. More specifically, the paper examines the covariance of estimated parameters during system identification using auto-regressive and moving average vector (ARMAv) model. By utilizing the discrete-time steady state Kalman filter, this paper realizes the structure's finite element (FE) model under broad-band white noise excitations using an ARMAv model. Based on the asymptotic distribution of the parameter estimates of the ARMAv model, both a theoretical closed form and a numerical estimate form of the covariance of the estimates are obtained. Introducing the information entropy (differential entropy) measure, as well as various matrix norms, this paper attempts to find a reasonable measure to the uncertainties embedded in the ARMAv model estimates. Thus, it is possible to select the optimal sensor placement that would lead to the smallest uncertainties during the ARMAv identification process. Two numerical examples are provided to demonstrate the methodology and compare the sensor placement results upon various measures.

  4. CFD-Based Design Optimization Tool Developed for Subsonic Inlet

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The traditional approach to the design of engine inlets for commercial transport aircraft is a tedious process that ends with a less-than-optimum design. With the advent of high-speed computers and the availability of more accurate and reliable computational fluid dynamics (CFD) solvers, numerical optimization processes can effectively be used to design an aerodynamic inlet lip that enhances engine performance. The designers' experience at Boeing Corporation showed that when the peak Mach number on the inlet surface exceeds some upper limit, the performance of the engine degrades excessively. Thus, our objective was to optimize efficiency (minimize the peak Mach number) at maximum cruise without compromising performance at other operating conditions. Using the CFD code NPARC and the numerical optimization code ADS, the NASA Lewis Research Center, in collaboration with Boeing, developed an integrated procedure at Lewis to find the optimum shape of a subsonic inlet lip. We used a GRAPE-based three-dimensional grid generator to help automate the optimization procedure. The inlet lip shape at the crown and the keel was described as a superellipse, and the superellipse exponents and radii ratios were considered as design variables. Three operating conditions: cruise, takeoff, and rolling takeoff, were considered in this study. Three-dimensional Euler computations were carried out to obtain the flow field. At the initial design, the peak Mach numbers for maximum cruise, takeoff, and rolling takeoff conditions were 0.88, 1.772, and 1.61, respectively. The acceptable upper limits on the takeoff and rolling takeoff Mach numbers were 1.55 and 1.45. Since the initial design provided by Boeing was found to be optimum with respect to the maximum cruise condition, the sum of the peak Mach numbers at takeoff and rolling takeoff was minimized in the current study while the maximum cruise Mach number was constrained to be close to that at the existing design. With this objective, the

  5. Design of ultra-compact triplexer with function-expansion based topology optimization.

    PubMed

    Zhang, Zejun; Tsuji, Yasuhide; Yasui, Takashi; Hirayama, Koichi

    2015-02-23

    In this paper, in order to optimize wavelength-selective photonic devices using the function-expansion-based topology optimization method, several expansion functions were considered and the influence of each expansion function on the optimized structure was investigated. Although the Fourier series is conventionally used in the function-expansion-based method, the optimized structure sometimes has a complicated refractive index distribution. Therefore, we employed a sampling function and a pyramid function to obtain a simpler structure through the optimal design. A triplexer was designed using our method, and the optimized structures obtained with the three expansion functions are compared in detail. PMID:25836433

  6. Spectrum reconstruction based on the constrained optimal linear inverse methods.

    PubMed

    Ren, Wenyi; Zhang, Chunmin; Mu, Tingkui; Dai, Haishan

    2012-07-01

    The dispersion effect of birefringent material results in spectrally varying Nyquist frequency for the Fourier transform spectrometer based on birefringent prism. Correct spectral information cannot be retrieved from the observed interferogram if the dispersion effect is not appropriately compensated. Some methods, such as nonuniform fast Fourier transforms and compensation method, were proposed to reconstruct the spectrum. In this Letter, an alternative constrained spectrum reconstruction method is suggested for the stationary polarization interference imaging spectrometer (SPIIS) based on the Savart polariscope. In the theoretical model of the interferogram, the noise and the total measurement error are included, and the spectrum reconstruction is performed by using the constrained optimal linear inverse methods. From numerical simulation, it is found that the proposed method is much more effective and robust than the nonconstrained spectrum reconstruction method proposed by Jian, and provides a useful spectrum reconstruction approach for the SPIIS. PMID:22743461
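
    The constrained-inversion idea can be illustrated with a small simulation. The sketch below uses a simplified cosine-transform forward matrix as a stand-in for the dispersive SPIIS model (an assumption of this sketch) and recovers a non-negative spectrum by bounded linear least squares with scipy's lsq_linear.

```python
# Minimal sketch of constrained linear inversion for spectrum reconstruction:
# simulate an interferogram y = A s + noise with a cosine-transform-like
# forward matrix A (a stand-in for the dispersive SPIIS model, assumed here),
# then recover a non-negative spectrum by bounded least squares.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n_sigma, n_opd = 80, 120
sigma = np.linspace(1.0, 2.0, n_sigma)            # wavenumber samples (arb. units)
opd = np.linspace(0.0, 10.0, n_opd)               # optical path differences
A = np.cos(2 * np.pi * np.outer(opd, sigma))      # simplified forward model

true_spec = np.exp(-0.5 * ((sigma - 1.4) / 0.05) ** 2)    # one spectral line
y = A @ true_spec + 0.05 * rng.standard_normal(n_opd)     # noisy interferogram

res = lsq_linear(A, y, bounds=(0.0, np.inf))      # non-negativity constraint
print("relative reconstruction error:",
      np.linalg.norm(res.x - true_spec) / np.linalg.norm(true_spec))
```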

  7. Vision-based coaching: optimizing resources for leader development

    PubMed Central

    Passarelli, Angela M.

    2015-01-01

    Leaders develop in the direction of their dreams, not in the direction of their deficits. Yet many coaching interactions intended to promote a leader’s development fail to leverage the benefits of the individual’s personal vision. Drawing on intentional change theory, this article postulates that coaching interactions that emphasize a leader’s personal vision (future aspirations and core identity) evoke a psychophysiological state characterized by positive emotions, cognitive openness, and optimal neurobiological functioning for complex goal pursuit. Vision-based coaching, via this psychophysiological state, generates a host of relational and motivational resources critical to the developmental process. These resources include: formation of a positive coaching relationship, expansion of the leader’s identity, increased vitality, activation of learning goals, and a promotion–orientation. Organizational outcomes as well as limitations to vision-based coaching are discussed. PMID:25926803

  8. A Triangle Mesh Standardization Method Based on Particle Swarm Optimization

    PubMed Central

    Duan, Liming; Bai, Yang; Wang, Haoyu; Shao, Hui; Zhong, Siyang

    2016-01-01

    To enhance the triangle quality of a reconstructed triangle mesh, a novel triangle mesh standardization method based on particle swarm optimization (PSO) is proposed. First, each vertex of the mesh and its first order vertices are fitted to a cubic curve surface by using least square method. Additionally, based on the condition that the local fitted surface is the searching region of PSO and the best average quality of the local triangles is the goal, the vertex position of the mesh is regulated. Finally, the threshold of the normal angle between the original vertex and regulated vertex is used to determine whether the vertex needs to be adjusted to preserve the detailed features of the mesh. Compared with existing methods, experimental results show that the proposed method can effectively improve the triangle quality of the mesh while preserving the geometric features and details of the original mesh. PMID:27509129

  9. Rule-based circuit optimization for CMOS VLSI

    SciTech Connect

    Lai, F.

    1987-01-01

    A closed-loop design system iJADE was developed in Franz LISP. iJADE is a hierarchical CMOS VLSI circuit optimizer. Using a switch-level timing simulator and a timing analyzer, the program pinpoints the critical paths. The path-delay reduction algorithms and a rule-based expert system are then applied to adjust transistor sizes such that the speed of the circuit can be improved while keeping constraints satisfied. iJADE is also capable of detecting and correcting the timing errors of synchronous circuits. The circuit is described in SPICE-like input format, and then partitioned into blocks. Delays are computed on a block-by-block basis hierarchically, using a simple model based on input rise time, block type, and output load.

  10. Vision-based coaching: optimizing resources for leader development.

    PubMed

    Passarelli, Angela M

    2015-01-01

    Leaders develop in the direction of their dreams, not in the direction of their deficits. Yet many coaching interactions intended to promote a leader's development fail to leverage the benefits of the individual's personal vision. Drawing on intentional change theory, this article postulates that coaching interactions that emphasize a leader's personal vision (future aspirations and core identity) evoke a psychophysiological state characterized by positive emotions, cognitive openness, and optimal neurobiological functioning for complex goal pursuit. Vision-based coaching, via this psychophysiological state, generates a host of relational and motivational resources critical to the developmental process. These resources include: formation of a positive coaching relationship, expansion of the leader's identity, increased vitality, activation of learning goals, and a promotion-orientation. Organizational outcomes as well as limitations to vision-based coaching are discussed. PMID:25926803

  11. Optimizing bulk milk dioxin monitoring based on costs and effectiveness.

    PubMed

    Lascano-Alcoser, V H; Velthuis, A G J; van der Fels-Klerx, H J; Hoogenboom, L A P; Oude Lansink, A G J M

    2013-07-01

    concentration equal to the EC maximum level. This study shows that the effectiveness of finding an incident depends not only on the ratio at which, for testing, collected truck samples are mixed into a pooled sample (aiming at detecting certain concentration), but also the number of collected truck samples. In conclusion, the optimal cost-effective monitoring depends on the number of contaminated farms and the concentration aimed at detection. The models and study results offer quantitative support to risk managers of food industries and food safety authorities. PMID:23628245

  12. A novel surrogate-based approach for optimal design of electromagnetic-based circuits

    NASA Astrophysics Data System (ADS)

    Hassan, Abdel-Karim S. O.; Mohamed, Ahmed S. A.; Rabie, Azza A.; Etman, Ahmed S.

    2016-02-01

    A new geometric design centring approach for optimal design of central processing unit-intensive electromagnetic (EM)-based circuits is introduced. The approach uses norms related to the probability distribution of the circuit parameters to find distances from a point to the feasible region boundaries by solving nonlinear optimization problems. Based on these normed distances, the design centring problem is formulated as a max-min optimization problem. A convergent iterative boundary search technique is exploited to find the normed distances. To alleviate the computation cost associated with the EM-based circuits design cycle, space-mapping (SM) surrogates are used to create a sequence of iteratively updated feasible region approximations. In each SM feasible region approximation, the centring process using normed distances is implemented, leading to a better centre point. The process is repeated until a final design centre is attained. Practical examples are given to show the effectiveness of the new design centring method for EM-based circuits.

  13. Bioassay-based risk assessment of complex mixtures

    SciTech Connect

    Donnelly, K.C.; Huebner, H.J.

    1996-12-31

    The baseline risk assessment often plays an integral role in various decision-making processes at Superfund sites. The present study reports on risk characterizations prepared for seven complex mixtures using biological and chemical analysis. Three of the samples (A, B, and C) were complex mixtures of polycyclic aromatic hydrocarbons (PAHs) extracted from coal tar, while four samples extracted from munitions-contaminated soil contained primarily nitroaromatic hydrocarbons. The chemical-based risk assessment ranked sample C as least toxic, while the risk associated with samples A and B was approximately equal. The microbial bioassay was in general agreement for the coal tar samples. The weighted activity of the coal tar extracts in Salmonella was 4,960 for sample C, and 162,000 and 206,000 for samples A and B, respectively. The bacterial mutagenicity of 2,4,6-trinitrotoluene contaminated soils exhibited an indirect correlation with the chemical-based risk assessment. The aqueous extract of sample 004 induced 1,292 net revertants in Salmonella, while the estimated risk from ingestion and dermal absorption was 2E-9. The data indicate that the chemical-based risk assessment accurately predicted the genotoxicity of the PAHs, while the accuracy of the risk assessment for munitions-contaminated soils was limited due to the presence of metabolites of TNT degradation. The biological tests used in this research provide a valuable complement to chemical analysis for characterizing the genotoxic risk of complex mixtures.

  14. Stochastic Optimized Relevance Feedback Particle Swarm Optimization for Content Based Image Retrieval

    PubMed Central

    Hashim, Rathiah; Noor Elaiza, Abd Khalid; Irtaza, Aun

    2014-01-01

    One of the major challenges for CBIR is to bridge the gap between low-level features and high-level semantics according to the need of the user. To overcome this gap, relevance feedback (RF) coupled with support vector machines (SVM) has been applied successfully. However, when the feedback sample is small, the performance of SVM-based RF is often poor. To improve the performance of RF, this paper proposes a new technique, namely PSO-SVM-RF, which combines SVM-based RF with particle swarm optimization (PSO). The aims of this technique are to enhance the performance of SVM-based RF and to minimize the user's interaction with the system by reducing the number of RF rounds. PSO-SVM-RF was tested on the Corel photo gallery containing 10908 images. The experimental results showed that the proposed PSO-SVM-RF achieved 100% accuracy in 8 feedback iterations for the top 10 retrievals and 80% accuracy in 6 iterations for the top 100 retrievals. This implies that the PSO-SVM-RF technique achieves a high accuracy rate within a small number of iterations. PMID:25121136
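
    The SVM-based relevance-feedback loop that this work builds on can be sketched in a few lines; the PSO coupling itself is omitted here. The synthetic features, the oracle standing in for the user, and the parameter choices are assumptions of the sketch.

```python
# Sketch of the SVM-based relevance-feedback (RF) loop that PSO-SVM-RF builds
# on; the PSO coupling is omitted. The synthetic features, the oracle that
# stands in for the user, and all parameter choices are assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
features = rng.normal(size=(500, 16))              # synthetic image features
relevant_truth = features[:, 0] > 1.0              # hidden "user intent" oracle

labels = {int(i): 1 for i in np.flatnonzero(relevant_truth)[:2]}         # initial positives
labels.update({int(i): 0 for i in np.flatnonzero(~relevant_truth)[:5]})  # initial negatives

for rf_round in range(4):
    idx = np.array(sorted(labels))
    clf = SVC(kernel="rbf", C=10.0).fit(features[idx], [labels[i] for i in idx])
    scores = clf.decision_function(features)       # higher = more likely relevant
    top = np.argsort(-scores)[:10]                 # top-10 images shown to the user
    print(f"round {rf_round}: precision@10 = {relevant_truth[top].mean():.2f}")
    for i in top:                                  # the user marks the shown images
        labels[int(i)] = int(relevant_truth[i])
```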

  15. Research needs for risk-informed, performance-based regulations

    SciTech Connect

    Thadani, A.C.

    1997-01-01

    This article summarizes the activities of the NRC Office of Research, both historically and as they apply to risk-based decision making. The office has been actively involved in understanding the risks of core accidents, the aging of reactor components and materials after years of service, and the understanding and analysis of severe accidents. In addition, new policy statements regarding the role of risk assessment in regulatory applications have focused attention on the need for further work. The NRC has used risk assessment in regulatory questions in the past, but in a fairly ad hoc manner. The new policies will require a better-defined application of risk assessment and guidance for staff who must judge the acceptability of submittals when a component of them is based on risk-based decision making. To address this, standard review plans are being prepared to serve as guides for such questions. In addition, if regulatory decisions are allowed to be based on risk, it is necessary to have an adequate database prepared, and made publicly available, to support such a position.

  16. Risk-based objectives for the allocation of chemical, biological, and radiological air emissions sensors.

    PubMed

    Lambert, James H; Farrington, Mark W

    2006-12-01

    This article addresses the problem of allocating devices for localized hazard protection across a region. Each identical device provides only local protection, and the devices serve localities that are exposed to nonidentical intensities of hazard. A method for seeking the optimal allocation policy decisions is described, highlighting the potentially competing objectives of maximizing local risk reductions and coverage risk reductions. The metric for local risk reductions is the sum of the local economic risks avoided. The metric for coverage risk reductions is adapted from the p-median problem and is equal to the sum of squares of the distances from all unserved localities to their closest associated served locality. Three linked graphical techniques for interpreting the policy decisions are presented and applied serially. The first technique identifies policy decisions that are nearly Pareto optimal. The second identifies locations where sensor placements are most justified, based on a risk-cost-benefit analysis under uncertainty. The third displays the decision space for any particular policy decision. The method is illustrated in an application to chemical, biological, and/or radiological weapon sensor placement, but has implications for disaster preparedness, transportation safety, and other arenas of public safety. PMID:17184404
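
    The two objectives can be written down directly for a candidate allocation. In the sketch below, locality coordinates, risks, and the number of sensors are illustrative assumptions; the non-dominated (nearly Pareto-optimal) allocations are then listed by brute force.

```python
# Sketch of the two allocation objectives named in the abstract: (1) local risk
# reduction = sum of economic risk avoided at served localities, and (2) a
# p-median-style coverage penalty = sum of squared distances from each unserved
# locality to its nearest served one. Coordinates, risks, and the sensor count
# are illustrative assumptions; non-dominated allocations are found by brute force.
import itertools
import numpy as np

rng = np.random.default_rng(2)
coords = rng.uniform(0, 100, size=(12, 2))        # locality positions (km)
risk = rng.uniform(1, 10, size=12)                # economic risk avoided if served

def objectives(served):
    served = list(served)
    unserved = [i for i in range(len(coords)) if i not in served]
    local_reduction = risk[served].sum()          # to maximize
    coverage_penalty = sum(                       # to minimize
        min(np.sum((coords[u] - coords[s]) ** 2) for s in served)
        for u in unserved)
    return local_reduction, coverage_penalty

candidates = [(objectives(c), c) for c in itertools.combinations(range(12), 3)]
pareto = [(obj, c) for obj, c in candidates
          if not any(o2[0] >= obj[0] and o2[1] <= obj[1] and o2 != obj
                     for o2, _ in candidates)]
print(f"{len(pareto)} non-dominated 3-sensor allocations, e.g. {pareto[0]}")
```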

  17. Experience with the implementation of a risk-based ISI program and inspection qualification

    SciTech Connect

    Chapman, O.J.V.

    1996-12-01

    Rolls Royce and Associates (RRA) are the Design Authority (DA) for Nuclear Steam Raising Plant (NSRP) used for the Royal Naval Nuclear Fleet. Over the past seven years RRA, with support from the Ministry of Defense, has developed and implemented a risk based in-service inspection (RBISI) strategy for the NSRP. Having used risk as a means of optimizing where to inspect, an inspection qualification (IQ) process has now been put in place to ensure that proposed inspections deliver the expected gains assumed. This qualification process follows very closely that currently being put forward by the European Network on Inspection Qualification (ENIQ).

  18. MAOA-L carriers are better at making optimal financial decisions under risk.

    PubMed

    Frydman, Cary; Camerer, Colin; Bossaerts, Peter; Rangel, Antonio

    2011-07-01

    Genes can affect behaviour towards risks through at least two distinct neurocomputational mechanisms: they may affect the value assigned to different risky options, or they may affect the way in which the brain adjudicates between options based on their value. We combined methods from neuroeconomics and behavioural genetics to investigate the impact that the genes encoding for monoamine oxidase-A (MAOA), the serotonin transporter (5-HTT) and the dopamine D4 receptor (DRD4) have on these two computations. Consistent with previous literature, we found that carriers of the MAOA-L polymorphism were more likely to take financial risks. Our computational choice model, rooted in established decision theory, showed that MAOA-L carriers exhibited such behaviour because they are able to make better financial decisions under risk, and not because they are more impulsive. In contrast, we found no behavioural or computational differences among the 5-HTT and DRD4 polymorphisms. PMID:21147794

  19. Microwave-based medical diagnosis using particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Modiri, Arezoo

    This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). Furthermore, this investigation proposes novel modifications of particle swarm optimization algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990's and has shown significant promise in early detection of some specific health threats. In comparison to the X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation; and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence, this study focuses on the same modality. A novel radiator device and detection technique is proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been done in this regard which will be reported in ensuing chapters of this dissertation. It is noteworthy that an important element of any detection system is the algorithm used for extracting signatures. Herein, the strong intrinsic potential of the swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear. This task is accomplished through addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm for the chosen benchmark problems, the algorithm is applied to MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than particle swarm optimization (PSO) algorithm, the results of which can be found in the literature. However, due to the relatively high level

  20. Graph-based optimization algorithm and software on kidney exchanges.

    PubMed

    Chen, Yanhua; Li, Yijiang; Kalbfleisch, John D; Zhou, Yan; Leichtman, Alan; Song, Peter X-K

    2012-07-01

    Kidney transplantation is typically the most effective treatment for patients with end-stage renal disease. However, the supply of kidneys is far short of the fast-growing demand. Kidney paired donation (KPD) programs provide an innovative approach for increasing the number of available kidneys. In a KPD program, willing but incompatible donor-candidate pairs may exchange donor organs to achieve mutual benefit. Recently, research on exchanges initiated by altruistic donors (ADs) has attracted great attention because the resultant organ exchange mechanisms offer advantages that increase the effectiveness of KPD programs. Currently, most KPD programs focus on rule-based strategies of prioritizing kidney donation. In this paper, we consider and compare two graph-based organ allocation algorithms to optimize an outcome-based strategy defined by the overall expected utility of kidney exchanges in a KPD program with both incompatible pairs and ADs. We develop an interactive software-based decision support system to model, monitor, and visualize a conceptual KPD program, which aims to assist clinicians in the evaluation of different kidney allocation strategies. Using this system, we demonstrate empirically that an outcome-based strategy for kidney exchanges leads to improvement in both the quantity and quality of kidney transplantation through comprehensive simulation experiments. PMID:22542649
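
    A toy version of outcome-based cycle selection can be sketched with a small compatibility graph. The edges, utilities, and the greedy disjoint-cycle selection below are illustrative assumptions; operational KPD programs typically solve this as an integer program and also handle altruistic-donor chains, which are omitted here.

```python
# Toy sketch of outcome-based selection of kidney-exchange cycles: build a
# directed compatibility graph over incompatible donor-candidate pairs,
# enumerate cycles of length <= 3, and greedily pick vertex-disjoint cycles
# with the highest total expected utility. The graph and utilities are
# synthetic assumptions.
import itertools

# edge (i, j): donor of pair i is compatible with candidate of pair j
utility = {(0, 1): 0.9, (1, 0): 0.7, (1, 2): 0.8, (2, 3): 0.6,
           (3, 1): 0.5, (2, 0): 0.4, (0, 2): 0.3}
pairs = range(4)

def cycles(max_len=3):
    found = []
    for length in range(2, max_len + 1):
        for combo in itertools.permutations(pairs, length):
            if combo[0] != min(combo):            # skip duplicate rotations
                continue
            edges = list(zip(combo, combo[1:] + combo[:1]))
            if all(e in utility for e in edges):
                found.append((sum(utility[e] for e in edges), combo))
    return sorted(found, reverse=True)            # best expected utility first

used, chosen = set(), []
for score, cyc in cycles():
    if not used.intersection(cyc):                # keep selected cycles disjoint
        chosen.append((score, cyc))
        used.update(cyc)
print("selected exchange cycles:", chosen)
```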

  1. Biological Bases of Space Radiation Risk

    NASA Technical Reports Server (NTRS)

    1997-01-01

    In this session, Session JP4, the discussion focuses on the following topics: Hematopoiesis Dynamics in Irradiated Mammals, Mathematical Modeling; Estimating Health Risks in Space from Galactic Cosmic Rays; Failure of Heavy Ions to Affect Physiological Integrity of the Corneal Endothelial Monolayer; Application of an Unbiased Two-Gel CDNA Library Screening Method to Expression Monitoring of Genes in Irradiated Versus Control Cells; Detection of Radiation-Induced DNA Strand Breaks in Mammalian Cells By Enzymatic Post-Labeling; Evaluation of Bleomycin-Induced Chromosome Aberrations Under Microgravity Conditions in Human Lymphocytes, Using "Fish" Techniques; Technical Description of the Space Exposure Biology Assembly Seba on ISS; and Cytogenetic Research in Biological Dosimetry.

  2. A superstructure-based optimal synthesis of PSA cycles for post-combustion CO2 capture

    SciTech Connect

    Agarwal, A.; Biegler, L.; Zitney, S.

    2010-07-01

    Recent developments have shown pressure/vacuum swing adsorption (PSA/VSA) to be a promising option to effectively capture CO2 from flue gas streams. In most commercial PSA cycles, the weakly adsorbed component in the mixture is the desired product, and enriching the strongly adsorbed CO2 is not a concern. On the other hand, it is necessary to concentrate CO2 to high purity to reduce CO2 sequestration costs and minimize safety and environmental risks. Thus, it is necessary to develop PSA processes specifically targeted at obtaining a pure, strongly adsorbed component. A multitude of PSA/VSA cycles have been developed in the literature for CO2 capture from feedstocks low in CO2 concentration. However, no systematic methodology has been suggested to develop, evaluate, and optimize PSA cycles for high-purity CO2 capture. This study presents a systematic optimization-based formulation to synthesize novel PSA cycles for a given application. In particular, a novel PSA superstructure is presented to design optimal PSA cycle configurations and evaluate CO2 capture strategies. The superstructure is rich enough to predict a number of different PSA operating steps. The bed connections in the superstructure are governed by time-dependent control variables, which can be varied to realize most PSA operating steps. An optimal sequence of operating steps is achieved through the formulation of an optimal control problem with the partial differential and algebraic equations of the PSA system and the cyclic steady state condition. Large-scale optimization capabilities have enabled us to adopt a complete discretization methodology to solve the optimal control problem as a large-scale nonlinear program, using the nonlinear optimization solver IPOPT. The superstructure approach is demonstrated for case studies related to post-combustion CO2 capture. In particular, optimal PSA cycles were synthesized, which maximize CO2 recovery for a given purity, and minimize overall power consumption. The

  3. Developing a risk-based air quality health index

    NASA Astrophysics Data System (ADS)

    Wong, Tze Wai; Tam, Wilson Wai San; Yu, Ignatius Tak Sun; Lau, Alexis Kai Hon; Pang, Sik Wing; Wong, Andromeda H. S.

    2013-09-01

    We developed a risk-based, multi-pollutant air quality health index (AQHI) reporting system in Hong Kong, based on the Canadian approach. We performed time series studies to obtain the relative risks of hospital admissions for respiratory and cardiovascular diseases associated with four air pollutants: sulphur dioxide, nitrogen dioxide, ozone, and particulate matter with an aerodynamic diameter less than 10 μm (PM10). We then calculated the sum of excess risks of the hospital admissions associated with these air pollutants. The cut-off points of the summed excess risk, for the issuance of different health warnings, were based on the concentrations of these pollutants recommended as short-term Air Quality Guidelines by the World Health Organization. The excess risks were adjusted downwards for young children and the elderly. Health risk was grouped into five categories and sub-divided into eleven bands, with equal increments in excess risk from band 1 up to band 10 (the 11th band is 'band 10+'). We developed health warning messages for the general public, including at-risk groups: young children, the elderly, and people with pre-existing cardiac or respiratory diseases. The new system addressed two major shortcomings of the current standard-based system; namely, the time lag between a sudden rise in air pollutant concentrations and the issue of a health warning, and the reliance on one dominant pollutant to calculate the index. Hence, the AQHI represents an improvement over Hong Kong's existing air pollution index.
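
    The additive excess-risk construction can be sketched directly. The beta coefficients, concentrations, and band width below are illustrative assumptions, not the published Hong Kong AQHI values; the sketch only shows how summed excess risks map onto equally spaced bands.

```python
# Sketch of the additive excess-risk construction described in the abstract:
# the percentage excess risk of admissions from each pollutant is summed and
# the total is mapped onto equally spaced bands 1-10 (plus 10+). The beta
# coefficients, band width, and concentrations are illustrative assumptions.
import math

betas = {"NO2": 0.0004, "SO2": 0.0005, "O3": 0.0003, "PM10": 0.0002}  # per ug/m3 (assumed)

def added_risk_percent(conc):
    """Summed percentage excess risk of hospital admissions."""
    return sum(100.0 * (math.exp(betas[p] * c) - 1.0) for p, c in conc.items())

def aqhi_band(total_excess_risk, band_width=1.9):
    band = math.ceil(total_excess_risk / band_width)
    return "10+" if band > 10 else str(max(band, 1))

concentrations = {"NO2": 90.0, "SO2": 20.0, "O3": 60.0, "PM10": 50.0}  # ug/m3 (assumed)
ar = added_risk_percent(concentrations)
print(f"summed excess risk = {ar:.2f}% -> AQHI band {aqhi_band(ar)}")
```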

  4. A school-based intervention for diabetes risk reduction

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We examined the effects of a multicomponent, school-based program, addressing risk factors for diabetes among children whose race, or ethnic group and socioeconomic status placed them at high risk for obesity and type 2 diabetes. Using a cluster design, we randomly assigned 42 schools to either a mu...

  5. Risk-Based Educational Accountability in Dutch Primary Education

    ERIC Educational Resources Information Center

    Timmermans, A. C.; de Wolf, I. F.; Bosker, R. J.; Doolaard, S.

    2015-01-01

    A recent development in educational accountability is a risk-based approach, in which intensity and frequency of school inspections vary across schools to make educational accountability more efficient and effective by enabling inspectorates to focus on organizations at risk. Characteristics relevant in predicting which schools are "at risk…

  6. Role of Context in Risk-Based Reasoning

    ERIC Educational Resources Information Center

    Pratt, Dave; Ainley, Janet; Kent, Phillip; Levinson, Ralph; Yogui, Cristina; Kapadia, Ramesh

    2011-01-01

    In this article we report the influence of contextual factors on mathematics and science teachers' reasoning in risk-based decision-making. We examine previous research that presents judgments of risk as being subjectively influenced by contextual factors and other research that explores the role of context in mathematical problem-solving. Our own…

  7. Nonlinear model predictive control based on collective neurodynamic optimization.

    PubMed

    Yan, Zheng; Wang, Jun

    2015-04-01

    In general, nonlinear model predictive control (NMPC) entails solving a sequential global optimization problem with a nonconvex cost function or constraints. This paper presents a novel collective neurodynamic optimization approach to NMPC without linearization. Utilizing a group of recurrent neural networks (RNNs), the proposed collective neurodynamic optimization approach searches for optimal solutions to global optimization problems by emulating brainstorming. Each RNN is guaranteed to converge to a candidate solution by performing constrained local search. By exchanging information and iteratively improving the starting and restarting points of each RNN using the information of local and global best known solutions in a framework of particle swarm optimization, the group of RNNs is able to reach global optimal solutions to global optimization problems. The essence of the proposed collective neurodynamic optimization approach lies in the integration of capabilities of global search and precise local search. The simulation results of many cases are discussed to substantiate the effectiveness and the characteristics of the proposed approach. PMID:25608315

  8. Developing interpretable models with optimized set reduction for identifying high risk software components

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.

    1993-01-01

    Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high risk system components. We present experimental results obtained by classifying Ada components into two classes: is or is not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error making process.

  9. Optimization of hyaluronan-based eye drop formulations.

    PubMed

    Salzillo, Rosanna; Schiraldi, Chiara; Corsuto, Luisana; D'Agostino, Antonella; Filosa, Rosanna; De Rosa, Mario; La Gatta, Annalisa

    2016-11-20

    Hyaluronan (HA) is frequently incorporated in eye drops to extend the pre-corneal residence time, due to its viscosifying and mucoadhesive properties. Hydrodynamic and rheological evaluations of commercial products are first accomplished, revealing molecular weights varying from about 360 to about 1200 kDa and viscosity values in the range 3.7-24.2 mPa·s. The latter suggest that most products could be optimized towards resistance to drainage from the ocular surface. Then, a study aiming to maximize the viscosity and mucoadhesiveness of HA-based preparations is performed. The effect of polymer chain length and concentration is investigated. For the whole range of molecular weights encountered in commercial products, the concentration maximizing performance is identified. Such concentration varies from 0.3 (wt%) for a 1100 kDa HA up to 1.0 (wt%) for a 250 kDa HA, which is 3-fold higher than the highest concentration on the market. The viscosity and mucoadhesion profiles of the optimized formulations are superior to those of commercial products, especially under conditions simulating in vivo blinking. Thus, longer retention on the corneal epithelium can be predicted. An enhanced capacity to protect porcine corneal epithelial cells from dehydration is also demonstrated in vitro. Overall, the results predict formulations with improved efficacy. PMID:27561497

  10. [Optimal allocation of irrigation water resources based on systematical strategy].

    PubMed

    Cheng, Shuai; Zhang, Shu-qing

    2015-01-01

    With the development of society and the economy, as well as the rapid increase in population, more and more water is needed by humans, which has intensified the shortage of water resources. The scarcity of water resources and the growing competition for water among different water use sectors reduce water availability for irrigation, so it is important to plan and manage irrigation water resources scientifically and rationally to improve water use efficiency (WUE) and ensure food security. Many investigations indicate that WUE can be increased by optimizing water use. However, previous studies have focused primarily on a particular aspect or scale and lack a systematic analysis of the irrigation water allocation problem. By summarizing previous related studies, especially those based on intelligent algorithms, this article proposes a multi-level, multi-scale framework for allocating irrigation water and illustrates the basic theory of each component of the framework. A systematic strategy for optimal irrigation water allocation can not only control the total volume of irrigation water on the time scale but also reduce water loss on the spatial scale. It could provide a scientific basis and technical support for improving irrigation water management and ensuring food security. PMID:25985685

  11. CFD-Based Design Optimization for Single Element Rocket Injector

    NASA Technical Reports Server (NTRS)

    Vaidyanathan, Rajkumar; Tucker, Kevin; Papila, Nilay; Shyy, Wei

    2003-01-01

    To develop future Reusable Launch Vehicle concepts, we have conducted design optimization for a single element rocket injector, with overall goals of improving reliability and performance while reducing cost. Computational solutions based on the Navier-Stokes equations, finite rate chemistry, and the k-ε turbulence closure are generated with design of experiment techniques, and the response surface method is employed as the optimization tool. The design considerations are guided by four design objectives motivated by considerations of both performance and life, namely, the maximum temperature on the oxidizer post tip, the maximum temperature on the injector face, the adiabatic wall temperature, and the length of the combustion zone. Four design variables are selected, namely, H2 flow angle, H2 and O2 flow areas with fixed flow rates, and O2 post tip thickness. In addition to establishing optimum designs by varying emphasis on the individual objectives, better insight into the interplay between design variables and their impact on the design objectives is gained. The investigation indicates that improvement in performance or life comes at the cost of the other. Best compromise is obtained when improvements in both performance and life are given equal importance.

  12. Tree-Based Visualization and Optimization for Image Collection.

    PubMed

    Han, Xintong; Zhang, Chongyang; Lin, Weiyao; Xu, Mingliang; Sheng, Bin; Mei, Tao

    2016-06-01

    The visualization of an image collection is the process of displaying a collection of images on a screen under some specific layout requirements. This paper focuses on an important problem that is not well addressed by the previous methods: visualizing image collections into arbitrary layout shapes while arranging images according to user-defined semantic or visual correlations (e.g., color or object category). To this end, we first propose a property-based tree construction scheme to organize images of a collection into a tree structure according to user-defined properties. In this way, images can be adaptively placed with the desired semantic or visual correlations in the final visualization layout. Then, we design a two-step visualization optimization scheme to further optimize image layouts. As a result, multiple layout effects including layout shape and image overlap ratio can be effectively controlled to guarantee a satisfactory visualization. Finally, we also propose a tree-transfer scheme such that visualization layouts can be adaptively changed when users select different "images of interest." We demonstrate the effectiveness of our proposed approach through the comparisons with state-of-the-art visualization techniques. PMID:26186799

  13. An optimization-based parallel particle filter for multitarget tracking

    NASA Astrophysics Data System (ADS)

    Sutharsan, S.; Sinha, A.; Kirubarajan, T.; Farooq, M.

    2005-09-01

    Particle filter based estimation is becoming more popular because it has the capability to effectively solve nonlinear and non-Gaussian estimation problems. However, the particle filter has high computational requirements and the problem becomes even more challenging in the case of multitarget tracking. In order to perform data association and estimation jointly, typically an augmented state vector of target dynamics is used. As the number of targets increases, the computation required for each particle increases exponentially. Thus, parallelization is a possibility in order to achieve the real time feasibility in large-scale multitarget tracking applications. In this paper, we present a real-time feasible scheduling algorithm that minimizes the total computation time for the bus connected heterogeneous primary-secondary architecture. This scheduler is capable of selecting the optimal number of processors from a large pool of secondary processors and mapping the particles among the selected processors. Furthermore, we propose a less communication intensive parallel implementation of the particle filter without sacrificing tracking accuracy using an efficient load balancing technique, in which optimal particle migration is ensured. In this paper, we present the mathematical formulations for scheduling the particles as well as for particle migration via load balancing. Simulation results show the tracking performance of our parallel particle filter and the speedup achieved using parallelization.

  14. Binary Particle Swarm Optimization based Biclustering of Web Usage Data

    NASA Astrophysics Data System (ADS)

    Rathipriya, R.; Thangavel, K.; Bagyamani, J.

    2011-07-01

    Web mining is the nontrivial process of discovering valid, novel, and potentially useful knowledge from web data using data mining techniques or methods. It can provide information that is useful for improving the services offered by web portals and information access and retrieval tools. With the rapid development of biclustering, more researchers have applied the biclustering technique to different fields in recent years. When the biclustering approach is applied to web usage data, it automatically captures hidden browsing patterns in the form of biclusters. In this work, a swarm intelligence technique is combined with the biclustering approach to propose an algorithm called Binary Particle Swarm Optimization (BPSO)-based biclustering for web usage data. The main objective of this algorithm is to retrieve the global optimal bicluster from the web usage data. These biclusters contain relationships between web users and web pages that are useful for e-commerce applications such as web advertising and marketing. Experiments are conducted on a real dataset to demonstrate the efficiency of the proposed algorithm.
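
    A minimal binary-PSO bicluster search can be sketched as follows. The particle encoding (a 0/1 mask over users and pages), the sigmoid velocity-to-probability rule, the mean-value fitness, and the synthetic usage matrix are assumptions of this sketch rather than the paper's exact formulation.

```python
# Minimal binary-PSO bicluster search: each particle is a 0/1 mask selecting
# rows (users) and columns (pages); velocities become bit probabilities through
# a sigmoid, and fitness is the mean value of the selected submatrix.
import numpy as np

rng = np.random.default_rng(3)
usage = rng.random((30, 20))                      # synthetic user x page matrix
usage[5:12, 4:9] += 0.8                           # hidden dense browsing pattern
n_rows = usage.shape[0]
n_bits = usage.shape[0] + usage.shape[1]

def fitness(bits):
    rows, cols = bits[:n_rows].astype(bool), bits[n_rows:].astype(bool)
    if rows.sum() < 2 or cols.sum() < 2:
        return 0.0
    return usage[np.ix_(rows, cols)].mean()

n_particles, iters, w, c1, c2 = 20, 150, 0.7, 1.5, 1.5
x = rng.integers(0, 2, size=(n_particles, n_bits))
v = rng.normal(scale=0.1, size=(n_particles, n_bits))
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = (rng.random(x.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)   # sigmoid rule
    f = np.array([fitness(p) for p in x])
    better = f > pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()

print("best bicluster mean value:", fitness(gbest))
```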

  15. Risky business: the risk-based, risk-sharing capitated HMO.

    PubMed

    Kazahaya, G I

    1986-08-01

    Hospitals are encountering a new type of HMO--the risk-based, risk-sharing capitated HMO. This new HMO arrangement redefines the role of the hospital, the physicians, and the HMO plan involved. Instead of placing the HMO at risk, the hospital and physicians are now financially responsible for services covered under the HMO plan. The capitated HMO is reduced to a third-party payer, serving as a broker between subscribers and providers. In this first of two articles on capitated HMOs, the risk-based, risk-sharing capitated HMO and its relationship to hospitals and physicians is defined. The second article will take this definition and apply it to managing, monitoring, and reporting on these types of programs from an accounting perspective. PMID:10277301

  16. Chemical Mixture Risk Assessment Additivity-Based Approaches

    EPA Science Inventory

    Powerpoint presentation includes additivity-based chemical mixture risk assessment methods. Basic concepts, theory and example calculations are included. Several slides discuss the use of "common adverse outcomes" in analyzing phthalate mixtures.
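
    The most common additivity-based screening calculation is a hazard index, i.e., the sum of component hazard quotients. The exposures and reference doses below are illustrative assumptions, not values from the presentation.

```python
# Sketch of the simplest additivity-based mixture method, a hazard index:
# component hazard quotients (exposure / reference dose) are summed, and a
# total above 1 flags potential concern. Exposures and reference doses are
# illustrative assumptions.
exposures = {"DEHP": 0.02, "DBP": 0.005, "BBP": 0.01}       # mg/kg-day (assumed)
reference_dose = {"DEHP": 0.02, "DBP": 0.01, "BBP": 0.2}    # mg/kg-day (assumed)

hazard_quotients = {c: exposures[c] / reference_dose[c] for c in exposures}
hazard_index = sum(hazard_quotients.values())
print(hazard_quotients)
print(f"hazard index = {hazard_index:.2f}",
      "(> 1: potential concern)" if hazard_index > 1 else "(<= 1)")
```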

  17. Evidence-based risk assessment and recommendations for physical activity clearance: respiratory disease.

    PubMed

    Eves, Neil D; Davidson, Warren J

    2011-07-01

    The 2 most common respiratory diseases are chronic obstructive pulmonary disease (COPD) and asthma. Growing evidence supports the benefits of exercise for all patients with these diseases. Due to the etiology of COPD and the pathophysiology of asthma, there may be some additional risks of exercise for these patients, and hence accurate risk assessment and clearance is needed before patients start exercising. The purpose of this review was to evaluate the available literature regarding the risks of exercise for patients with respiratory disease and provide evidence-based recommendations to guide the screening process. A systematic review of 4 databases was performed. The literature was searched to identify adverse events specific to exercise. For COPD, 102 randomized controlled trials that involved an exercise intervention were included (n = 6938). No study directly assessed the risk of exercise, and only 15 commented on exercise-related adverse events. For asthma, 30 studies of mixed methodologies were included (n = 1278). One study directly assessed the risk of exercise, and 15 commented on exercise-related adverse events. No exercise-related fatalities were reported. The majority of adverse events in COPD patients were musculoskeletal or cardiovascular in nature. In asthma patients, exercise-induced bronchoconstriction and (or) asthma symptoms were the primary adverse events. There is no direct evidence regarding the risk of exercise for patients with COPD or asthma. However, based on the available literature, it would appear that with adequate screening and optimal medical therapy, the risk of exercise for these respiratory patients is low. PMID:21800949

  18. Bone Mineral Density and Fracture Risk Assessment to Optimize Prosthesis Selection in Total Hip Replacement

    PubMed Central

    Pétursson, Þröstur; Edmunds, Kyle Joseph; Gíslason, Magnús Kjartan; Magnússon, Benedikt; Magnúsdóttir, Gígja; Halldórsson, Grétar; Jónsson, Halldór; Gargiulo, Paolo

    2015-01-01

    The variability in patient outcome and propensity for surgical complications in total hip replacement (THR) necessitates the development of a comprehensive, quantitative methodology for prescribing the optimal type of prosthetic stem: cemented or cementless. The objective of the research presented herein was to describe a novel approach to this problem as a first step towards creating a patient-specific, presurgical application for determining the optimal prosthesis procedure. Finite element analysis (FEA) and bone mineral density (BMD) calculations were performed with ten voluntary primary THR patients to estimate the status of their operative femurs before surgery. A compilation model of the press-fitting procedure was generated to define a fracture risk index (FRI) from incurred forces on the periprosthetic femoral head. Comparing these values to patient age, sex, and gender elicited a high degree of variability between patients grouped by implant procedure, reinforcing the notion that age and gender alone are poor indicators for prescribing prosthesis type. Additionally, correlating FRI and BMD measurements indicated that at least two of the ten patients may have received nonideal implants. This investigation highlights the utility of our model as a foundation for presurgical software applications to assist orthopedic surgeons with selecting THR prostheses. PMID:26417376

  19. Bone Mineral Density and Fracture Risk Assessment to Optimize Prosthesis Selection in Total Hip Replacement.

    PubMed

    Pétursson, Þröstur; Edmunds, Kyle Joseph; Gíslason, Magnús Kjartan; Magnússon, Benedikt; Magnúsdóttir, Gígja; Halldórsson, Grétar; Jónsson, Halldór; Gargiulo, Paolo

    2015-01-01

    The variability in patient outcome and propensity for surgical complications in total hip replacement (THR) necessitates the development of a comprehensive, quantitative methodology for prescribing the optimal type of prosthetic stem: cemented or cementless. The objective of the research presented herein was to describe a novel approach to this problem as a first step towards creating a patient-specific, presurgical application for determining the optimal prosthesis procedure. Finite element analysis (FEA) and bone mineral density (BMD) calculations were performed with ten voluntary primary THR patients to estimate the status of their operative femurs before surgery. A compilation model of the press-fitting procedure was generated to define a fracture risk index (FRI) from incurred forces on the periprosthetic femoral head. Comparing these values to patient age, sex, and gender elicited a high degree of variability between patients grouped by implant procedure, reinforcing the notion that age and gender alone are poor indicators for prescribing prosthesis type. Additionally, correlating FRI and BMD measurements indicated that at least two of the ten patients may have received nonideal implants. This investigation highlights the utility of our model as a foundation for presurgical software applications to assist orthopedic surgeons with selecting THR prostheses. PMID:26417376

  20. Biomechanical optimization of subject-specific implant positioning for femoral head resurfacing to reduce fracture risk.

    PubMed

    Miles, Brad; Kolos, Elizabeth; Appleyard, Richard; Theodore, Willy; Zheng, Keke; Li, Qing; Ruys, Andrew J

    2016-07-01

    Peri-prosthetic femoral neck fracture after femoral head resurfacing can be either patient-related or surgical technique-related. The study aimed to develop a patient-specific finite element modelling technique that can reliably predict an optimal implant position and give minimal strain in the peri-prosthetic bone tissue, thereby reducing the risk of peri-prosthetic femoral neck fracture. The subject-specific finite element modelling was integrated with optimization techniques, including design of experiments, to find the implant position that achieves minimal strain in femoral head resurfacing. The sample space was defined by varying the floating point to find the extremes at which the cylindrical reaming operation would cut into the femoral neck and cause a notch during hip resurfacing surgery. The study showed that the location of the maximum strain, for all non-notching positions, was on the superior femoral neck, in the peri-prosthetic bone tissue. It also demonstrated that varus positioning resulted in a higher strain, whereas valgus positioning reduced the strain and neutral version gave a lower strain. PMID:27098752

  1. A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network

    NASA Astrophysics Data System (ADS)

    Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.

    A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low-Earth-orbit objects relies mainly on ground-based radar; because of the limited capability of existing radar facilities, a large number of ground-based radars will need to be built in the next few years to meet current space surveillance demands. How to optimize the embattling (deployment) of a ground-based radar surveillance network is therefore a problem that needs to be solved. The traditional optimization method is to run detection simulations for every possible station against cataloged data, carry out a comprehensive comparative analysis of the simulation results with a combinatorial method, and then select an optimal result as the station layout scheme. A single simulation is time consuming and the combinatorial analysis has high computational complexity; as the number of stations increases, the complexity of the optimization problem grows exponentially and becomes intractable for the traditional method, and no better way of solving it has been available until now. In this paper, the target detection procedure is simplified. First, the space coverage of ground-based radar is simplified and a space coverage projection model of radar facilities at different orbit altitudes is built; then a simplified model of objects crossing the radar coverage is established according to the characteristics of space object orbital motion. After these two simplifications, the computational complexity of target detection is greatly reduced, and simulation results confirm the correctness of the simplified models. In addition, the detection areas of a ground-based radar network can be easily computed with the

  2. Risk based tiered approach (RBTASM) for pollution prevention.

    PubMed

    Elves, R G; Sweeney, L M; Tomljanovic, C

    1997-11-01

    Effective management of human health and ecological hazards in the manufacturing and maintenance environment can be achieved by focusing on the risks associated with these operations. The NDCEE Industrial Health Risk Assessment (IHRA) Program is developing a comprehensive approach to risk analysis applied to existing processes and used to evaluate alternatives. The IHRA Risk-Based Tiered Approach (RBTASM) builds on the American Society for Testing and Materials (ASTM) Risk-Based Corrective Action (RBCA) effort to remediate underground storage tanks. Using readily available information, a semi-quantitative ranking of alternatives based on environmental, safety, and occupational health criteria was produced. A Rapid Screening Assessment of alternative corrosion protection products was performed on behalf of the Joint Group on Acquisition Pollution Prevention (JG-APP). Using the RBTASM in pollution prevention alternative selection required higher-tiered analysis and more detailed assessment of human health risks under site-specific conditions. This example illustrates the RBTASM for an organic finishing line using three different products (one conventional spray and two alternative powder coats). The human health risk information developed using the RBTASM is considered along with product performance, regulatory, and cost information by risk managers downselecting alternatives for implementation or further analysis. PMID:9433667

  3. Quantifying the risks of unexploded ordnance at closed military bases.

    PubMed

    MacDonald, Jacqueline A; Small, Mitchell J; Morgan, M Granger

    2009-01-15

    Some 1,976 sites at closed military bases in the United States are contaminated with unexploded ordnance (UXO) left over from live-fire weapons training. These sites present risks to civilians who might come into contact with the UXO and cause it to explode. This paper presents the first systems analysis model for assessing the explosion risks of UXO at former military training ranges. We develop a stochastic model for estimating the probability of exposure to and explosion of UXO, before and after site cleanup. An application of the model to a 310-acre parcel at Fort Ord, California, shows that substantial risk can remain even after a site is declared clean. We estimate that risk to individual construction workers of encountering UXO that explodes would range from 4 × 10^-4 to 5 × 10^-2, depending on model assumptions, well above typical Occupational Safety and Health Administration (OSHA) and U.S. Environmental Protection Agency (EPA) target risk levels of 10^-4 to 10^-6. In contrast, a qualitative UXO risk assessment method, the Munitions and Explosives of Concern Hazard Assessment (MEC HA), developed by an interagency work group led by the EPA, indicates that the explosion risk at the case study site is low and "compatible with current and determined or reasonably anticipated future risk." We argue that a quantitative approach, like that illustrated in this paper, is necessary to provide a more complete picture of risks and the opportunities for risk reduction. PMID:19238949
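
    The abstract reports per-worker risks in the 10^-4 to 10^-2 range but does not reproduce the underlying stochastic model. As a purely illustrative sketch of how such a figure might be estimated by Monte Carlo simulation (every parameter value and the Poisson/binomial exposure structure below are assumptions, not taken from the paper):

        import numpy as np

        rng = np.random.default_rng(0)

        # All parameter values are invented for illustration; they are not from the paper.
        uxo_density = 0.8             # residual UXO items per acre after cleanup
        parcel_acres = 310            # parcel size from the Fort Ord case study
        intrusive_actions = 200       # ground-intrusive actions by one construction worker
        p_contact_per_action = 1e-6   # chance one action contacts a given UXO item
        p_detonate_on_contact = 0.05  # chance a contacted item detonates

        n_trials = 100_000
        worker_hit = 0
        for _ in range(n_trials):
            n_items = rng.poisson(uxo_density * parcel_acres)
            # expected contacts across all items and all actions for this worker
            expected_contacts = n_items * intrusive_actions * p_contact_per_action
            n_contacts = rng.poisson(expected_contacts)
            detonations = rng.binomial(n_contacts, p_detonate_on_contact)
            worker_hit += int(detonations > 0)

        print(f"Estimated per-worker risk of a UXO detonation: {worker_hit / n_trials:.1e}")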

  4. Risk-based assessment of the surety of information systems

    SciTech Connect

    Jansma, R.; Fletcher, S.; Halbgewachs, R.; Lim, J.; Murphy, M.; Sands, P.; Wyss, G.

    1995-03-01

    Correct operation of an information system requires a balance of "surety" domains -- access control (confidentiality), integrity, utility, availability, and safety. However, traditional approaches provide little help on how to systematically analyze and balance the combined impact of surety requirements on a system. The key to achieving information system surety is identifying, prioritizing, and mitigating the sources of risk that may lead to system failure. Consequently, the authors propose a risk assessment methodology that provides a framework to guide the analyst in identifying and prioritizing sources of risk and selecting mitigation techniques. The framework leads the analyst to develop a risk-based system model for balancing the surety requirements and quantifying the effectiveness and combined impact of the mitigation techniques. Such a model allows the information system designer to make informed trade-offs based on the most effective risk-reduction measures.

  5. Risk-based decision making for terrorism applications.

    PubMed

    Dillon, Robin L; Liebe, Robert M; Bestafka, Thomas

    2009-03-01

    This article describes the anti-terrorism risk-based decision aid (ARDA), a risk-based decision-making approach for prioritizing anti-terrorism measures. The ARDA model was developed as part of a larger effort to assess investments for protecting U.S. Navy assets at risk and determine whether the most effective anti-terrorism alternatives are being used to reduce the risk to the facilities and war-fighting assets. With ARDA and some support from subject matter experts, we examine thousands of scenarios composed of 15 attack modes against 160 facility types on two installations and hundreds of portfolios of 22 mitigation alternatives. ARDA uses multiattribute utility theory to solve some of the commonly identified challenges in security risk analysis. This article describes the process and documents lessons learned from applying the ARDA model for this application. PMID:19187486

  6. Multi Agent System Based Path Optimization Service for Mobile Robot

    NASA Astrophysics Data System (ADS)

    Kim, Huyn; Chung, Taechoong

    If a person drives an optimized route recommended by a navigation system, but that person has specific driving habits and preferences and the surrounding circumstances keep changing, then the recommended route can no longer be considered optimal.

  7. An optimization-based iterative algorithm for recovering fluorophore location

    NASA Astrophysics Data System (ADS)

    Yi, Huangjian; Peng, Jinye; Jin, Chen; He, Xiaowei

    2015-10-01

    Fluorescence molecular tomography (FMT) is a non-invasive technique that allows three-dimensional visualization of fluorophores in vivo in small animals. In practical applications of FMT, however, image reconstruction is challenging because it is a highly ill-posed problem, owing to the diffusive behaviour of light transport in tissue and the limited measurement data. In this paper, we present an iterative algorithm based on an optimization problem for three-dimensional reconstruction of a fluorescent target. The method alternates the weighted algebraic reconstruction technique (WART) with the steepest descent method (SDM) for image reconstruction. Numerical simulation experiments and a physical phantom experiment were performed to validate the method. Furthermore, compared to the conjugate gradient method, the proposed method provides better three-dimensional (3D) localization of the fluorescent target.
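
    The abstract names the two ingredients of the alternation (WART and SDM) but not their update rules. A minimal, hedged sketch of what such an alternation can look like for a linear forward model Ax = b, using a generic Kaczmarz-style row sweep in place of the paper's specific weighting (which the abstract does not give), followed by a steepest-descent step and a nonnegativity clip:

        import numpy as np

        def alternating_recon(A, b, n_outer=30, relax=0.5):
            # Alternate an ART (Kaczmarz-style) sweep with a steepest-descent step on
            # ||Ax - b||^2, clipping x to stay nonnegative. This is an illustrative
            # stand-in for the WART/SDM alternation; the paper's weights are not given.
            x = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2       # safe step for the gradient move
            for _ in range(n_outer):
                for i in range(A.shape[0]):              # row-by-row ART sweep
                    ai = A[i]
                    x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
                x -= step * (A.T @ (A @ x - b))          # steepest-descent step
                np.clip(x, 0.0, None, out=x)             # fluorophore yield is nonnegative
            return x

        # Tiny synthetic example: recover a 3-voxel "fluorophore" from 200 measurements
        rng = np.random.default_rng(1)
        A = rng.standard_normal((200, 100))
        x_true = np.zeros(100)
        x_true[30:33] = 1.0
        x_hat = alternating_recon(A, b=A @ x_true)
        print("strongest recovered voxels:", np.sort(np.argsort(x_hat)[-3:]))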

  8. Optimizing Maintenance of Constraint-Based Database Caches

    NASA Astrophysics Data System (ADS)

    Klein, Joachim; Braun, Susanne

    Caching data reduces user-perceived latency and often enhances availability in case of server crashes or network failures. DB caching aims at local processing of declarative queries in a DBMS-managed cache close to the application. Query evaluation must produce the same results as if done at the remote database backend, which implies that all data records needed to process such a query must be present and controlled by the cache, i. e., to achieve “predicate-specific” loading and unloading of such record sets. Hence, cache maintenance must be based on cache constraints such that “predicate completeness” of the caching units currently present can be guaranteed at any point in time. We explore how cache groups can be maintained to provide the data currently needed. Moreover, we design and optimize loading and unloading algorithms for sets of records keeping the caching units complete, before we empirically identify the costs involved in cache maintenance.

  9. Further developments in LP-based optimal power flow

    SciTech Connect

    Alsac, O.; Bright, J.; Prais, M.; Stott, B.P.

    1990-08-01

    Over the past twenty five years, the optimal power flow (OPF) approach that has received the most widespread practical application is the one based on linear programming (LP). Special customized LP methods have been utilized primarily for fast reliable security-constrained dispatch using decoupled separable OPF problem formulations. They have been used in power system planning, operations and control. Nevertheless, while the LP approach has a number of important attributes, its range of application in the OPF field has remained somewhat restricted. This paper describes further developments that have transformed the LP approach into a truly general-purpose OPF solver, with computational and other advantages over even recent nonlinear programming (NLP) methods. The nonseparable loss-minimization problem can now be solved, giving the same results as NLP on power systems of any size and type.

  10. Isogeometric analysis for parameterized LSM-based structural topology optimization

    NASA Astrophysics Data System (ADS)

    Wang, Yingjun; Benson, David J.

    2016-01-01

    In this paper, we present an accurate and efficient isogeometric topology optimization method that integrates the non-uniform rational B-splines based isogeometric analysis and the parameterized level set method for minimal compliance problems. The same NURBS basis functions are used to parameterize the level set function and evaluate the objective function, and therefore the design variables are associated with the control points. The coefficient matrix that parameterizes the level set function is set up by a collocation method that uses the Greville abscissae. The zero-level set boundary is obtained from the interpolation points corresponding to the vertices of the knot spans. Numerical examples demonstrate the validity and efficiency of the proposed method.

  11. Density-based penalty parameter optimization on C-SVM.

    PubMed

    Liu, Yun; Lian, Jie; Bartolacci, Michael R; Zeng, Qing-An

    2014-01-01

    The support vector machine (SVM) is one of the most widely used approaches for data classification and regression. SVM achieves the largest distance between the positive and negative support vectors, which neglects the remote instances away from the SVM interface. To avoid a shift of the SVM interface caused by an outlying instance, C-SVM was introduced to decrease the influence of such outliers. Traditional C-SVM holds a uniform penalty parameter C for both positive and negative instances; however, according to the differing class proportions and data distributions, positive and negative instances should be given different weights for the penalty parameter of the error terms. Therefore, in this paper, we propose density-based penalty parameter optimization of C-SVM. The experimental results indicate that our proposed algorithm has outstanding performance with respect to both precision and recall. PMID:25114978
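
    The core idea, penalizing errors on the two classes with different C values chosen from the class proportions and data distribution, can be approximated with off-the-shelf tooling. In the sketch below, inverse class frequency ("balanced" weights) stands in for the paper's density-based rule, which the abstract does not spell out:

        from sklearn.datasets import make_classification
        from sklearn.metrics import precision_score, recall_score
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        # Imbalanced toy data: roughly 10% positive instances
        X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        # Traditional C-SVM: one penalty parameter C for both classes
        uniform = SVC(C=1.0).fit(X_tr, y_tr)

        # Class-dependent penalties: inverse class frequency as a stand-in for the
        # density-based weighting proposed in the paper
        weighted = SVC(C=1.0, class_weight="balanced").fit(X_tr, y_tr)

        for name, model in [("uniform C", uniform), ("class-weighted C", weighted)]:
            pred = model.predict(X_te)
            print(f"{name}: precision={precision_score(y_te, pred):.2f}, "
                  f"recall={recall_score(y_te, pred):.2f}")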

  12. Density-Based Penalty Parameter Optimization on C-SVM

    PubMed Central

    Liu, Yun; Lian, Jie; Bartolacci, Michael R.; Zeng, Qing-An

    2014-01-01

    The support vector machine (SVM) is one of the most widely used approaches for data classification and regression. SVM achieves the largest distance between the positive and negative support vectors, which neglects the remote instances away from the SVM interface. To avoid a shift of the SVM interface caused by an outlying instance, C-SVM was introduced to decrease the influence of such outliers. Traditional C-SVM holds a uniform penalty parameter C for both positive and negative instances; however, according to the differing class proportions and data distributions, positive and negative instances should be given different weights for the penalty parameter of the error terms. Therefore, in this paper, we propose density-based penalty parameter optimization of C-SVM. The experimental results indicate that our proposed algorithm has outstanding performance with respect to both precision and recall. PMID:25114978

  13. Requirements based system level risk modeling

    NASA Technical Reports Server (NTRS)

    Meshkat, L.; Cornford, S. L.; Feather, M. S.

    2004-01-01

    The problem that we address in this paper is assessing the expected degree of success of the system or mission based on the degree to which each requirement is satisfied and the relative weight of the requirements.

  14. Optimizing timing performance of silicon photomultiplier-based scintillation detectors

    PubMed Central

    Yeom, Jung Yeol; Vinke, Ruud

    2013-01-01

    Precise timing resolution is crucial for applications requiring photon time-of-flight (ToF) information such as ToF positron emission tomography (PET). Silicon photomultipliers (SiPM) for PET, with their high output capacitance, are known to require custom preamplifiers to optimize timing performance. In this paper, we describe simple alternative front-end electronics based on a commercial low-noise RF preamplifier and methods that have been implemented to achieve excellent timing resolution. Two radiation detectors with L(Y)SO scintillators coupled to Hamamatsu SiPMs (MPPC S10362-33-050C) and front-end electronics based on an RF amplifier (MAR-3SM+), typically used for wireless applications that require minimal additional circuitry, have been fabricated. These detectors were used to detect annihilation photons from a Ge-68 source and the output signals were subsequently digitized by a high speed oscilloscope for offline processing. A coincident resolving time (CRT) of 147 ± 3 ps FWHM and 186 ± 3 ps FWHM with 3 × 3 × 5 mm³ and with 3 × 3 × 20 mm³ LYSO crystal elements were measured, respectively. With smaller 2 × 2 × 3 mm³ LSO crystals, a CRT of 125 ± 2 ps FWHM was achieved with slight improvement to 121 ± 3 ps at a lower temperature (15°C). Finally, with the 20 mm length crystals, a degradation of timing resolution was observed for annihilation photon interactions that occur close to the photosensor compared to shallow depth-of-interaction (DOI). We conclude that commercial RF amplifiers optimized for noise, besides their ease of use, can produce excellent timing resolution comparable to best reported values acquired with custom readout electronics. On the other hand, as timing performance degrades with increasing photon DOI, a head-on detector configuration will produce better CRT than a side-irradiated setup for longer crystals. PMID:23369872

  15. Communication: Optimal parameters for basin-hopping global optimization based on Tsallis statistics

    SciTech Connect

    Shang, C.; Wales, D. J.

    2014-08-21

    A fundamental problem associated with global optimization is the large free energy barrier for the corresponding solid-solid phase transitions for systems with multi-funnel energy landscapes. To address this issue we consider the Tsallis weight instead of the Boltzmann weight to define the acceptance ratio for basin-hopping global optimization. Benchmarks for atomic clusters show that using the optimal Tsallis weight can improve the efficiency by roughly a factor of two. We present a theory that connects the optimal parameters for the Tsallis weighting, and demonstrate that the predictions are verified for each of the test cases.
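
    The abstract specifies that the basin-hopping acceptance ratio is built from a Tsallis weight rather than the Boltzmann weight, but does not give the expression. A hedged sketch using the common generalized q-exponential form of the Tsallis weight (the paper's exact parameterization may differ), applied to a toy one-dimensional double-well landscape:

        import numpy as np

        def tsallis_weight(energy, beta=5.0, q=1.2, e_ref=0.0):
            # Generalized q-exponential weight; reduces to exp(-beta*(E - e_ref)) as q -> 1.
            arg = 1.0 - (1.0 - q) * beta * (energy - e_ref)
            return arg ** (1.0 / (1.0 - q)) if arg > 0 else 0.0

        def accept(e_old, e_new, rng):
            # Metropolis-style acceptance with the Tsallis weight in place of the Boltzmann weight.
            if e_new <= e_old:
                return True
            w_old, w_new = tsallis_weight(e_old), tsallis_weight(e_new)
            return w_old > 0.0 and rng.random() < w_new / w_old

        def energy(x):                       # toy double well; the left minimum is the global one
            return (x ** 2 - 1.0) ** 2 + 0.3 * x

        rng = np.random.default_rng(0)
        x, e = 1.0, energy(1.0)
        best_x, best_e = x, e
        for _ in range(2000):
            x_trial = x + rng.normal(scale=0.5)
            for _ in range(50):              # crude local quench (the "basin" step)
                x_trial -= 0.02 * (4.0 * x_trial * (x_trial ** 2 - 1.0) + 0.3)
            e_trial = energy(x_trial)
            if accept(e, e_trial, rng):
                x, e = x_trial, e_trial
                if e < best_e:
                    best_x, best_e = x, e
        print(f"lowest minimum found: x = {best_x:.3f}, E = {best_e:.3f}")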

  16. Perception of mobile phone and base station risks.

    PubMed

    Siegrist, Michael; Earle, Timothy C; Gutscher, Heinz; Keller, Carmen

    2005-10-01

    Perceptions of risks associated with mobile phones, base stations, and other sources of electromagnetic fields (EMF) were examined. Data from a telephone survey conducted in the German- and French-speaking parts of Switzerland are presented (N = 1,015). Participants assessed both risks and benefits associated with nine different sources of EMF. Trust in the authorities regulating these hazards was assessed as well. In addition, participants answered a set of questions related to attitudes toward EMF and toward mobile phone base stations. According to respondents' assessments, high-voltage transmission lines are the most risky source of EMF. Mobile phones and mobile phone base stations received lower risk ratings. Results showed that trust in authorities was positively associated with perceived benefits and negatively associated with perceived risks. People who use their mobile phones frequently perceived lower risks and higher benefits than people who use their mobile phones infrequently. People who believed they lived close to a base station did not significantly differ in their level of risks associated with mobile phone base stations from people who did not believe they lived close to a base station. Regarding risk regulation, a majority of participants were in favor of fixing limiting values based on the worst-case scenario. Correlations suggest that belief in paranormal phenomena is related to level of perceived risks associated with EMF. Furthermore, people who believed that most chemical substances cause cancer also worried more about EMF than people who did not believe that chemical substances are that harmful. Practical implications of the results are discussed. PMID:16297229

  17. Development and optimization of biofilm based algal cultivation

    NASA Astrophysics Data System (ADS)

    Gross, Martin Anthony

    This dissertation describes research on biofilm-based algal cultivation systems. The system developed in this work is the revolving algal biofilm (RAB) cultivation system. A raceway-retrofit and a trough-based pilot-scale RAB system were developed and investigated. Each of the systems significantly outperformed a control raceway pond in side-by-side tests. Furthermore, the RAB system was found to require significantly less water than the raceway-pond-based cultivation system. Lastly, a techno-economic and life cycle analysis (TEA/LCA) was conducted to evaluate the economics and life cycle impacts of the RAB cultivation system in comparison to a raceway pond. It was found that the RAB system was able to grow algae at a lower cost and was shown to be profitable at a smaller scale than the raceway pond style of algal cultivation. Additionally, the RAB system was projected to have lower GHG emissions and better energy and water use efficiencies in comparison to a raceway pond system. Furthermore, fundamental research was conducted to identify the optimal material for algae to attach to. A total of 28 materials with a smooth surface were tested for initial cell colonization, and it was found that the tetradecane contact angle of the materials correlated well with cell attachment. The effects of surface texture were evaluated using mesh materials (nylon, polypropylene, high density polyethylene, polyester, aluminum, and stainless steel) with openings ranging from 0.05-6.40 mm. It was found that both surface texture and material composition influence algal attachment.

  18. Swarm Optimization-Based Magnetometer Calibration for Personal Handheld Devices

    PubMed Central

    Ali, Abdelrahman; Siddharth, Siddharth; Syed, Zainab; El-Sheimy, Naser

    2012-01-01

    Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a processor that generates position and orientation solutions by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low cost sensors are usually corrupted by several errors, including manufacturing defects and external electro-magnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO)-based calibration algorithm is presented to estimate the values of the bias and scale factor of low cost magnetometers. The main advantage of this technique is the use of the artificial intelligence which does not need any error modeling or awareness of the nonlinearity. Furthermore, the proposed algorithm can help in the development of Pedestrian Navigation Devices (PNDs) when combined with inertial sensors and GPS/Wi-Fi for indoor navigation and Location Based Services (LBS) applications.
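
    The abstract does not state the fitness function used by the PSO. A common choice for bias and scale-factor calibration, and the one assumed in the sketch below, is magnitude consistency: after calibration, every sample should have the magnitude of the local Earth field. Both the objective and the synthetic data are therefore illustrative assumptions, not the paper's implementation:

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic raw magnetometer data: 50 uT field distorted by unknown per-axis bias and scale
        true_scale = np.array([1.10, 0.95, 1.05])
        true_bias = np.array([8.0, -5.0, 3.0])
        dirs = rng.normal(size=(500, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        raw = 50.0 * dirs * true_scale + true_bias + rng.normal(scale=0.2, size=(500, 3))

        def cost(params):
            # Magnitude-consistency cost: calibrated samples should all have norm 50 uT.
            # This objective is an assumption; the paper's exact fitness is not given.
            bias, scale = params[:3], params[3:]
            cal = (raw - bias) / scale
            return np.mean((np.linalg.norm(cal, axis=1) - 50.0) ** 2)

        # Plain global-best PSO over the 6 parameters (bias x3, scale x3)
        n_particles, n_iters = 40, 300
        lo = np.array([-20, -20, -20, 0.5, 0.5, 0.5])
        hi = np.array([ 20,  20,  20, 1.5, 1.5, 1.5])
        pos = rng.uniform(lo, hi, size=(n_particles, 6))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
        gbest = pbest[np.argmin(pbest_val)].copy()

        for _ in range(n_iters):
            r1, r2 = rng.random((n_particles, 6)), rng.random((n_particles, 6))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([cost(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()

        print("estimated bias :", np.round(gbest[:3], 2))
        print("estimated scale:", np.round(gbest[3:], 3))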

  19. Parallel performance optimizations on unstructured mesh-based simulations

    DOE PAGES Beta

    Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; Huck, Kevin; Hollingsworth, Jeffrey; Malony, Allen; Williams, Samuel; Oliker, Leonid

    2015-06-01

    This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.

  20. A controller based on Optimal Type-2 Fuzzy Logic: systematic design, optimization and real-time implementation.

    PubMed

    Fayek, H M; Elamvazuthi, I; Perumal, N; Venkatesh, B

    2014-09-01

    A computationally-efficient systematic procedure to design an Optimal Type-2 Fuzzy Logic Controller (OT2FLC) is proposed. The main scheme is to optimize the gains of the controller using Particle Swarm Optimization (PSO), then optimize only two parameters per type-2 membership function using a Genetic Algorithm (GA). The proposed OT2FLC was implemented in real-time to control the position of a DC servomotor, which is part of a robotic arm. The performance judgments were carried out based on the Integral Absolute Error (IAE), as well as the computational cost. Various type-2 defuzzification methods were investigated in real-time. A comparative analysis with an Optimal Type-1 Fuzzy Logic Controller (OT1FLC) and a PI controller demonstrated OT2FLC's superiority, which is evident in handling uncertainty and imprecision induced in the system by means of noise and disturbances. PMID:24962934

  1. PERSPECTIVE: Technical fixes and climate change: optimizing for risks and consequences

    NASA Astrophysics Data System (ADS)

    Rasch, Philip J.

    2010-09-01

    Scientists and society in general are becoming increasingly concerned about the risks of climate change from the emission of greenhouse gases (IPCC 2007). Yet emissions continue to increase (Raupach et al 2007), and achieving reductions soon enough to avoid large and undesirable impacts requires a near-revolutionary global transformation of energy and transportation systems (Hoffert et al 1998). The size of the transformation and lack of an effective societal response have motivated some to explore other quite controversial strategies to mitigate some of the planetary consequences of these emissions. These strategies have come to be known as geoengineering: 'the deliberate manipulation of the planetary environment to counteract anthropogenic climate change' (Keith 2000). Concern about society's inability to reduce emissions has driven a resurgence in interest in geoengineering, particularly following the call for more research in Crutzen (2006). Two classes of geoengineering solutions have developed: (1) methods to draw CO2 out of the atmosphere and sequester it in a relatively benign form; and (2) methods that change the energy flux entering or leaving the planet without modifying CO2 concentrations by, for example, changing the planetary albedo. Only the latter methods are considered here. Summaries of many of the methods, scientific questions, and issues of testing and implementation are discussed in Launder and Thompson (2009) and Royal Society (2009). The increased attention indicates that geoengineering is not a panacea and all strategies considered will have risks and consequences (e.g. Robock 2008, Trenberth and Dai 2007). Recent studies involving comprehensive Earth system models can provide insight into subtle interactions between components of the climate system. For example Rasch et al (2009) found that geoengineering by changing boundary clouds will not simultaneously 'correct' global averaged surface temperature, precipitation, and sea ice to present

  2. Development and use of risk-based inspection guides

    SciTech Connect

    Taylor, J.H.; Fresco, A.; Higgins, J.; Usher, J.; Long, S.M.

    1989-06-01

    Risk-based system inspection guides, for nuclear power plants which have been subjected to a probabilistic risk assessment (PRA), have been developed to provide guidance to NRC inspectors in prioritizing their inspection activities. Systems are prioritized, and then dominant component failure modes and human errors within those systems are identified for the above-stated purposes. Examples of applications to specific types of NRC inspection activities are also presented. Thus, the report provides guidance for both the development and use of risk-based system inspection guides. Work is proceeding to develop a methodology for risk-based guidance for nuclear power plants not subject to a PRA. 18 refs., 1 fig.

  3. Efficacy of Code Optimization on Cache-based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. It can be argued that although some of the important

  4. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is taken to be the mode of the distribution of historical flight records, and the mode is estimated using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem so that the subproblems become solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase as the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreaded programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model
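
    The decomposition step the abstract relies on can be illustrated with a much smaller stand-in problem. In the sketch below, a handful of routes independently decide how much traffic to release subject to one shared capacity constraint, and a subgradient update adjusts the Lagrange multiplier (the "price" on that capacity); all numbers and the simplified cost structure are invented, and the paper's integer, multi-constraint model is far richer:

        import numpy as np

        rng = np.random.default_rng(0)
        n_routes = 5
        w = rng.uniform(1.0, 3.0, n_routes)   # benefit per unit of released traffic
        a = rng.uniform(0.5, 1.5, n_routes)   # shared capacity used per unit of traffic
        C, x_max = 3.0, 2.0                   # shared capacity and per-route bound

        lam = 0.0                             # Lagrange multiplier (price) on the coupling constraint
        for k in range(500):
            # Each route's subproblem decouples: maximize (w_i - lam*a_i) * x_i over [0, x_max]
            x = np.where(w - lam * a > 0.0, x_max, 0.0)
            # Subgradient ascent on the dual: raise the price if capacity is exceeded
            lam = max(0.0, lam + (1.0 / (k + 1)) * (a @ x - C))

        print("capacity price:", round(lam, 3), "| released traffic:", x,
              "| usage:", round(float(a @ x), 2),
              "(primal recovery typically needs averaging or rounding)")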

  5. An Optimization-Based Approach to Injector Element Design

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar; Turner, Jim (Technical Monitor)

    2000-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for gaseous oxygen/gaseous hydrogen (GO2/GH2) injector elements. A swirl coaxial element and an unlike impinging element (a fuel-oxidizer-fuel triplet) are used to facilitate the study. The elements are optimized in terms of design variables such as fuel pressure drop, ΔP_f, oxidizer pressure drop, ΔP_o, combustor length, L_comb, and full cone swirl angle, theta, (for the swirl element) or impingement half-angle, alpha, (for the impinging element) at a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q_w, injector heat flux, Q_inj, relative combustor weight, W_rel, and relative injector cost, C_rel, are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for both element types. Method i is then used to generate response surfaces for each dependent variable for both types of elements. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail for each element type. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the element design is illustrated. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues

  6. A Logical levothyroxine dose Individualization: Optimization Approach at discharge from Radioiodine therapy ward and during follow-up in patients of Differentiated Thyroid Carcinoma: Balancing the Risk based strategy and the practical issues and challenges: Experience and Views of a large volume referral centre in India

    PubMed Central

    Basu, Sandip; Abhyankar, Amit; Asopa, Ramesh; Chaukar, Devendra; DCruz, Anil K

    2013-01-01

    In this communication, the authors discuss the individualization of thyrotropin-suppressive therapy in differentiated thyroid carcinoma (DTC) patients and share their views on optimizing the levothyroxine (LT) dose prescribed both at discharge from the radioiodine therapy ward and during follow-up. The changing management paradigm at our Institute during the post-thyroidectomy period and during preparation for the radioiodine scan is also briefly highlighted. Five factors can be identified as important determinants for the dose individualization approach: (1) persistence or absence of metastatic disease; (2) the risk characteristics of the patient and the tumor; (3) the patient's clinical profile, symptomatology, and contraindications; (4) the feasibility of ensuring a proper thyroid-stimulating hormone (TSH) suppression level (which depends on the patient's socio-economic and educational background and on the connectivity with, and expertise of, the local physician); and (5) the time elapsed since initial diagnosis. While discussing each individual case scenario, the authors, based upon their experience in one of the busiest thyroid cancer referral centers in the country, discuss certain unaddressed points in the current guideline recommendations, deviations made, and some challenges in putting the guidelines into practice, which can be situation and center specific. In addition, the value of clinical examination, the patient profile, and detailed enquiry about clinical symptomatology by the attending physician at each follow-up visit cannot be overemphasized. According to the authors, this aspect, quite important for dose determination in an individual, is relatively underrepresented in the present guidelines. It would also be worthwhile to follow a conservative approach (till clear data emerge) in patients who have characteristics of "high-risk" disease but are clinically and biochemically disease free, if no medical contraindications exist and patient

  7. An integrated optimization method for river water quality management and risk analysis in a rural system.

    PubMed

    Liu, J; Li, Y P; Huang, G H; Zeng, X T; Nie, S

    2016-01-01

    In this study, an interval-stochastic-based risk analysis (RSRA) method is developed for supporting river water quality management in a rural system under uncertainty (i.e., uncertainties exist in a number of system components as well as their interrelationships). The RSRA method is effective in risk management and policy analysis, particularly when the inputs (such as allowable pollutant discharge and pollutant discharge rate) are expressed as probability distributions and interval values. Moreover, decision-makers' attitudes towards system risk can be reflected using a restricted resource measure by controlling the variability of the recourse cost. The RSRA method is then applied to a real case of water quality management in the Heshui River Basin (a rural area of China), where chemical oxygen demand (COD), total nitrogen (TN), total phosphorus (TP), and soil loss are selected as major indicators to identify the water pollution control strategies. Results reveal that uncertainties and risk attitudes have significant effects on both pollutant discharge and system benefit. A high risk measure level can lead to a reduced system benefit; however, this reduction also corresponds to raised system reliability. Results also disclose that (a) agriculture is the dominant contributor to soil loss, TN, and TP loads, and abatement actions should be mainly carried out for paddy and dry farms; (b) livestock husbandry is the main COD discharger, and abatement measures should be mainly conducted for poultry farms; (c) fishery accounts for a high percentage of TN, TP, and COD discharges but has a low percentage of overall net benefit, and it may be beneficial to cease fishery activities in the basin. The findings can facilitate the local authority in identifying desired pollution control strategies with the tradeoff between socioeconomic development and environmental sustainability. PMID:26310705

  8. Particle swarm optimization algorithm based low cost magnetometer calibration

    NASA Astrophysics Data System (ADS)

    Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.

    2011-12-01

    Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes, and a microprocessor that provide inertial digital data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low cost sensors are corrupted by several errors, including manufacturing defects and external electro-magnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the values of the bias and scale factor of a low cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The estimated bias and scale factor errors from the proposed algorithm improve the heading accuracy, and the results are also statistically significant. The technique can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with the INS and GPS/Wi-Fi, especially in indoor environments.

  9. Task-based optimization of image reconstruction in breast CT

    NASA Astrophysics Data System (ADS)

    Sanchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan

    2014-03-01

    We demonstrate a task-based assessment of image quality in dedicated breast CT in order to optimize the number of projection views acquired. The methodology we employ is based on the Hotelling Observer (HO) and its associated metrics. We consider two tasks: the Rayleigh task of discerning between two resolvable objects and a single larger object, and the signal detection task of classifying an image as belonging to either a signal-present or signal-absent hypothesis. HO SNR values are computed for 50, 100, 200, 500, and 1000 projection view images, with the total imaging radiation dose held constant. We use the conventional fan-beam FBP algorithm and investigate the effect of varying the width of a Hanning window used in the reconstruction, since this affects both the noise properties of the image and the under-sampling artifacts which can arise in the case of sparse-view acquisitions. Our results demonstrate that fewer projection views should be used in order to increase HO performance, which in this case constitutes an upper-bound on human observer performance. However, the impact on HO SNR of using fewer projection views, each with a higher dose, is not as significant as the impact of employing regularization in the FBP reconstruction through a Hanning filter.
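
    For reference, the Hotelling observer figure of merit used here is SNR^2 = dg' K^-1 dg, where dg is the difference of the class mean images and K is the average of the two class covariance matrices. The sketch below estimates it from toy Gaussian-noise samples rather than from actual breast CT reconstructions; the image size, noise model, and signal are all invented for illustration:

        import numpy as np

        def hotelling_snr(absent, present):
            # HO SNR from sample images (rows = images, columns = pixels):
            # SNR^2 = dg^T K^{-1} dg with dg the class mean difference and
            # K the average of the two class covariance matrices.
            dg = present.mean(axis=0) - absent.mean(axis=0)
            K = 0.5 * (np.cov(absent, rowvar=False) + np.cov(present, rowvar=False))
            return float(np.sqrt(dg @ np.linalg.solve(K, dg)))

        # Toy example: 2000 noisy 8x8 "reconstructions", half containing a faint signal
        rng = np.random.default_rng(0)
        n, npix = 2000, 64
        noise = rng.normal(size=(n, npix))
        signal = np.zeros(npix)
        signal[27] = 0.5                       # hypothetical one-pixel lesion
        absent, present = noise[: n // 2], noise[n // 2 :] + signal
        print(f"Hotelling observer SNR: {hotelling_snr(absent, present):.2f}")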

  10. Lymphatic Filariasis Transmission Risk Map of India, Based on a Geo-Environmental Risk Model

    PubMed Central

    Sabesan, Shanmugavelu; Raju, Konuganti Hari Kishan; Srivastava, Pradeep Kumar; Jambulingam, Purushothaman

    2013-01-01

    The strategy adopted by a global program to interrupt transmission of lymphatic filariasis (LF) is mass drug administration (MDA) using chemotherapy. India also followed this strategy by introducing MDA in the historically known endemic areas. All other areas, which remained unsurveyed, were presumed to be nonendemic and left without any intervention. Therefore, identification of LF transmission risk areas in the entire country has become essential so that they can be targeted for intervention. A geo-environmental risk model (GERM) developed earlier was used to create a filariasis transmission risk map for India. In this model, a Standardized Filariasis Transmission Risk Index (SFTRI, based on geo-environmental risk variables) was used as a predictor of transmission risk. The relationship between SFTRI and endemicity (historically known) of an area was quantified by logistic regression analysis. The quantified relationship was validated by assessing the filarial antigenemia status of children living in the unsurveyed areas through a ground truth study. A significant positive relationship was observed between SFTRI and the endemicity of an area. Overall, the model prediction of filarial endemic status of districts was found to be correct in 92.8% of the total observations. Thus, among the 190 districts hitherto unsurveyed, as many as 113 districts were predicted to be at risk, and the remaining at no risk. The GERM developed on geographic information system (GIS) platform is useful for LF spatial delimitation on a macrogeographic/regional scale. Furthermore, the risk map developed will be useful for the national LF elimination program by identifying areas at risk for intervention and for undertaking surveillance in no-risk areas. PMID:23808973
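
    The modeling step described here, quantifying the SFTRI-endemicity relationship by logistic regression and then classifying unsurveyed districts, can be sketched as follows; the index values, the assumed latent relationship, and the districts are all invented for illustration:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Hypothetical training data: SFTRI for districts whose endemic status (1/0) is known
        sftri_known = rng.uniform(0, 10, 300).reshape(-1, 1)
        p_true = 1.0 / (1.0 + np.exp(-(sftri_known.ravel() - 5.0)))   # assumed relationship
        endemic_known = rng.binomial(1, p_true)

        model = LogisticRegression().fit(sftri_known, endemic_known)

        # Predicted transmission risk for hitherto unsurveyed districts
        sftri_unsurveyed = np.array([[2.1], [4.8], [7.3], [9.0]])
        for s, r in zip(sftri_unsurveyed.ravel(), model.predict_proba(sftri_unsurveyed)[:, 1]):
            print(f"SFTRI = {s:.1f}: predicted probability of being at risk = {r:.2f}")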

  11. Optimizing risk mitigation in management of sexual offenders: a structural model.

    PubMed

    Lamade, Raina; Gabriel, Adeena; Prentky, Robert

    2011-01-01

    Sexual violence is an insidious and pervasive problem that insinuates itself into all aspects of contemporary society. It can neither be mitigated nor adequately controlled through current socio-legal practices. A more promising approach must embrace four integrated elements: (1) public policy, (2) primary prevention, (3) statutory management, and (4) secondary intervention. In the present paper we tackle the 3rd and 4th elements by proposing an integrated model for reducing and managing sexual violence among known sex offenders. Relying on the highly effective Risk-Need-Responsivity (RNR) model as the core of our Sex Offender Risk Mitigation and Management Model (SORM(3)), we draw together evidence based practices from clinical interventions and risk assessment strategies. Developed by Andrews & Bonta (2006), RNR has a strong empirical track record of efficacy when applied to diverse samples of offenders, including sex offenders (Hanson, Bourgon, Helmus, & Hodgson, 2009). We offer a detailed structural model that seeks to provide a more seamless integration of risk assessment with management and discretionary decisions, including a primary focus on RNR-based post-release aftercare. We end with the mantra that sex offender treatment alone will never effectively mitigate sexual violence in society, since the problem is not confined to the handful of offenders who spend time in prison and are offered some limited exposure to treatment. Any truly effective model must go well beyond the management of those known to be violent and embrace a comprehensive and integrated approach that begins by recognizing the seeds of sexual violence sown by society. Such a public health paradigm places victims - not offenders - at the center, forcing society to address the full gamut of hazards that fuel sexual violence. PMID:21565406

  12. Optimal control of switched linear systems based on Migrant Particle Swarm Optimization algorithm

    NASA Astrophysics Data System (ADS)

    Xie, Fuqiang; Wang, Yongji; Zheng, Zongzhun; Li, Chuanfeng

    2009-10-01

    The optimal control problem for switched linear systems with internally forced switching has more constraints than with externally forced switching. Heavy computations and slow convergence in solving this problem is a major obstacle. In this paper we describe a new approach for solving this problem, which is called Migrant Particle Swarm Optimization (Migrant PSO). Imitating the behavior of a flock of migrant birds, the Migrant PSO applies naturally to both continuous and discrete spaces, in which definitive optimization algorithm and stochastic search method are combined. The efficacy of the proposed algorithm is illustrated via a numerical example.

  13. Prevalence of Optimal Treatment Regimens in Patients with Apparent Treatment Resistant Hypertension Based on Office BP in a Community-Based Practice Network

    PubMed Central

    Egan, Brent M.; Zhao, Yumin; Li, Jiexiang; Brzezinski, W. Adam; Todoran, Thomas M.; Brook, Robert D.; Calhoun, David A.

    2013-01-01

    Hypertensive patients with clinic blood pressure (BP) uncontrolled on ≥3 antihypertensive medications, i.e., apparent treatment resistant hypertension (aTRH) comprise ~28%–30% of all uncontrolled patients in the U.S. However, the proportion receiving these medications in optimal doses is unknown; aTRH is used, since treatment adherence, BP measurement artifacts, and optimal therapy were not available in electronic record data from our >200 community-based clinics Outpatient Quality Improvement Network (OQUIN). This study sought to define the proportion of uncontrolled hypertensives with aTRH on optimal regimens and clinical factors associated with optimal therapy. During 2007–2010, 468,877 hypertensive patients met inclusion criteria. BP <140/<90 defined control. Multivariable logistic regression was used to assess variables independently associated with ‘optimal therapy’ (prescription of diuretic and ≥2 other BP medications at ≥50% of maximum recommended hypertension doses). Among 468,877 hypertensives, 147,635 (31.5%) were uncontrolled; among uncontrolled hypertensives, 44,684 were prescribed ≥3 BP medications (30.3%) of which 22,189 (15.0%) were prescribed ‘optimal’ therapy. Clinical factors independently associated with optimal BP therapy included black race (OR 1.40 [95% CI 1.32–1.49]), chronic kidney disease (1.31 [1.25–1.38]) diabetes (1.30 [1.24–1.37]), and coronary heart disease risk equivalent status (1.29 [1.14–1.46]). Clinicians more often prescribe optimal therapy for aTRH when cardiovascular risk is greater and treatment goals lower. Approximately one in seven of all uncontrolled hypertensives and one in two with uncontrolled aTRH are prescribed ≥3 BP medications in optimal regimens. Prescribing more optimal pharmacotherapy, for uncontrolled hypertensives including aTRH, confirmed with out-of-office BP, could improve hypertension control. PMID:23918752

  14. Prevalence of optimal treatment regimens in patients with apparent treatment-resistant hypertension based on office blood pressure in a community-based practice network.

    PubMed

    Egan, Brent M; Zhao, Yumin; Li, Jiexiang; Brzezinski, W Adam; Todoran, Thomas M; Brook, Robert D; Calhoun, David A

    2013-10-01

    Hypertensive patients with clinical blood pressure (BP) uncontrolled on ≥3 antihypertensive medications (ie, apparent treatment-resistant hypertension [aTRH]) comprise ≈28% to 30% of all uncontrolled patients in the United States. However, the proportion receiving these medications in optimal doses is unknown; aTRH is used because treatment adherence and measurement artifacts were not available in electronic record data from our >200 community-based clinics Outpatient Quality Improvement Network. This study sought to define the proportion of uncontrolled hypertensives with aTRH on optimal regimens and clinical factors associated with optimal therapy. During 2007-2010, 468 877 hypertensive patients met inclusion criteria. BP <140/<90 mm Hg defined control. Multivariable logistic regression was used to assess variables independently associated with optimal therapy (prescription of diuretic and ≥2 other BP medications at ≥50% of maximum recommended hypertension doses). Among 468 877 hypertensives, 147 635 (31.5%) were uncontrolled; among uncontrolled hypertensives, 44 684 were prescribed ≥3 BP medications (30.3%), of whom 22 189 (15.0%) were prescribed optimal therapy. Clinical factors independently associated with optimal BP therapy included black race (odds ratio, 1.40 [95% confidence interval, 1.32-1.49]), chronic kidney disease (1.31 [1.25-1.38]), diabetes mellitus (1.30 [1.24-1.37]), and coronary heart disease risk equivalent status (1.29 [1.14-1.46]). Clinicians more often prescribe optimal therapy for aTRH when cardiovascular risk is greater and treatment goals lower. Approximately 1 in 7 of all uncontrolled hypertensives and 1 in 2 with uncontrolled aTRH are prescribed ≥3 BP medications in optimal regimens. Prescribing more optimal pharmacotherapy for uncontrolled hypertensives including aTRH, confirmed with out-of-office BP, could improve hypertension control. PMID:23918752

  15. Volcanic risk: mitigation of lava flow invasion hazard through optimized barrier configuration

    NASA Astrophysics Data System (ADS)

    Scifoni, S.; Coltelli, M.; Marsella, M.; Napoleoni, Q.; Del Negro, C.; Proietti, C.; Vicari, A.

    2009-04-01

    In order to mitigate the destructive effects of lava flows along volcanic slopes, the building of artificial barriers is a fundamental action for controlling and slowing down the lava flow advance, as experienced during a few recent eruptions of Etna. The simulated lava path can be used to define an optimized design and location for the works, but timely action also requires that a barrier be constructed quickly. This work therefore investigates different types of engineering works that can be adopted to build a lava-containing barrier and improve the efficiency of the structure. From the analysis of historical cases it is clear that barriers were generally constructed by building up earth, lava blocks, and incoherent, low-density material. This solution implies complex operational constraints and logistical problems that justify the effort of looking for alternative designs. Moreover, to optimize barrier construction, an alternative design based on gabion-made barriers is proposed here. In this way the volume of mobilized material is lower than for an earth barrier, thus reducing the time needed to build the structure. A second crucial aspect to be considered is the geometry of the barrier, which is one of the few parameters that can be modulated, the others being linked to the morphological and topographical characteristics of the ground. Once the walls have been realized, it may be necessary to be able to expand the structure vertically. The use of gabions has many advantages over loose riprap (earthen walls) owing to their modularity and their capability to be stacked in various shapes. Furthermore, the elements which are not inundated by lava can be removed and rapidly used for other barriers. The combination of numerical simulations and gabion barriers will allow quicker mitigation of lava flow risk, which is an important aspect of civil protection intervention in emergency cases.

  16. Optimal illumination for visual enhancement based on color entropy evaluation.

    PubMed

    Shen, Junfei; Chang, Shengqian; Wang, Huihui; Zheng, Zhenrong

    2016-08-22

    Object visualization is influenced by the spectral distribution of the illuminant impinging upon the object. In this paper, we propose a color entropy evaluation method to provide the optimal illumination that best helps surgeons distinguish tissue features. The target-specific optimal illumination was obtained by maximizing the color entropy value of our sample tissue, whose spectral reflectance was measured using multispectral imaging. Sample images captured under optimal light were compared with those under commercial white light emitting diodes (3000K, 4000K and 5500K). Results showed that images under the optimized illuminant had better visual performance, such as exhibiting more subtle details. PMID:27557255

  17. Population-based absolute risk estimation with survey data.

    PubMed

    Kovalchik, Stephanie A; Pfeiffer, Ruth M

    2014-04-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
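
    The hazard structure described here (cause-specific hazard = individualized relative risk × baseline hazard, with a piecewise exponential baseline) implies the usual cause-specific absolute-risk integral AR(t) = integral from 0 to t of h1(u)·S(u) du, with S(u) the overall survival under all competing hazards. A small numerical sketch with invented hazards and relative risks (none of the values come from the paper or from NHANES):

        import numpy as np

        def absolute_risk(breaks, base_h1, base_h2, rr1, rr2, t):
            # Absolute risk of a cause-1 event by time t in the presence of a competing
            # cause 2, with piecewise-constant baseline hazards (piecewise exponential
            # model) multiplied by individual relative risks rr1 and rr2.
            grid = np.linspace(0.0, t, 2001)
            band = np.searchsorted(breaks, grid, side="right")
            h1 = rr1 * np.asarray(base_h1)[band]
            h2 = rr2 * np.asarray(base_h2)[band]
            surv = np.exp(-np.cumsum((h1 + h2) * np.gradient(grid)))   # overall survival S(u)
            return float(np.trapz(h1 * surv, grid))

        # Hypothetical baselines over three age bands; cause 1 = CVD death, cause 2 = cancer death
        breaks = np.array([5.0, 10.0])          # band edges in years
        h_cvd = [0.002, 0.004, 0.008]           # baseline hazards per year, by band
        h_cancer = [0.003, 0.005, 0.007]
        risk = absolute_risk(breaks, h_cvd, h_cancer, rr1=1.5, rr2=1.0, t=10.0)
        print(f"10-year absolute CVD risk for this hypothetical profile: {risk:.2%}")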

  18. Optimizing timing performance of silicon photomultiplier-based scintillation detectors.

    PubMed

    Yeom, Jung Yeol; Vinke, Ruud; Levin, Craig S

    2013-02-21

    Precise timing resolution is crucial for applications requiring photon time-of-flight (ToF) information such as ToF positron emission tomography (PET). Silicon photomultipliers (SiPMs) for PET, with their high output capacitance, are known to require custom preamplifiers to optimize timing performance. In this paper, we describe simple alternative front-end electronics based on a commercial low-noise RF preamplifier and the methods that have been implemented to achieve excellent timing resolution. Two radiation detectors with L(Y)SO scintillators coupled to Hamamatsu SiPMs (MPPC S10362-33-050C) and front-end electronics based on an RF amplifier (MAR-3SM+) typically used for wireless applications, which requires minimal additional circuitry, have been fabricated. These detectors were used to detect annihilation photons from a Ge-68 source, and the output signals were subsequently digitized by a high-speed oscilloscope for offline processing. Coincidence resolving times (CRT) of 147 ± 3 ps FWHM and 186 ± 3 ps FWHM were measured with 3 × 3 × 5 mm³ and 3 × 3 × 20 mm³ LYSO crystal elements, respectively. With smaller 2 × 2 × 3 mm³ LSO crystals, a CRT of 125 ± 2 ps FWHM was achieved, with a slight improvement to 121 ± 3 ps at a lower temperature (15 °C). Finally, with the 20 mm length crystals, a degradation of timing resolution was observed for annihilation photon interactions occurring close to the photosensor compared to shallow depth-of-interaction (DOI). We conclude that commercial RF amplifiers optimized for noise, besides their ease of use, can produce excellent timing resolution comparable to the best reported values acquired with custom readout electronics. On the other hand, as timing performance degrades with increasing photon DOI, a head-on detector configuration will produce better CRT than a side-irradiated setup for longer crystals. PMID:23369872

  19. Dosimetric Effects of Magnetic Resonance Imaging-assisted Radiotherapy Planning: Dose Optimization for Target Volumes at High Risk and Analytic Radiobiological Dose Evaluation.

    PubMed

    Park, Ji-Yeon; Suh, Tae Suk; Lee, Jeong-Woo; Ahn, Kook-Jin; Park, Hae-Jin; Choe, Bo-Young; Hong, Semie

    2015-10-01

    Based on the assumption that apparent diffusion coefficients (ADCs) define high-risk clinical target volume (aCTVHR) in high-grade glioma in a cellularity-dependent manner, the dosimetric effects of aCTVHR-targeted dose optimization were evaluated in two intensity-modulated radiation therapy (IMRT) plans. Diffusion-weighted magnetic resonance (MR) images and ADC maps were analyzed qualitatively and quantitatively to determine aCTVHR in a high-grade glioma with high cellularity. After confirming tumor malignancy using the average and minimum ADCs and ADC ratios, the aCTVHR with double- or triple-restricted water diffusion was defined on computed tomography images through image registration. Doses to the aCTVHR and CTV defined on T1-weighted MR images were optimized using a simultaneous integrated boost technique. The dosimetric benefits for CTVs and organs at risk (OARs) were compared using dose volume histograms and various biophysical indices in an ADC map-based IMRT (IMRTADC) plan and a conventional IMRT (IMRTconv) plan. The IMRTADC plan improved dose conformity up to 15 times, compared to the IMRTconv plan. It reduced the equivalent uniform doses in the visual system and brain stem by more than 10% and 16%, respectively. The ADC-based target differentiation and dose optimization may facilitate conformal dose distribution to the aCTVHR and OAR sparing in an IMRT plan. PMID:26425053
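    Among the biophysical indices mentioned, the equivalent uniform dose is commonly computed with Niemierko's generalized EUD, (sum_i v_i * D_i^a)^(1/a), from a differential dose-volume histogram. A minimal sketch of that standard formula follows; it may differ in detail from the exact formulation used in the paper.

      import numpy as np

      def generalized_eud(doses, volumes, a):
          """Generalized EUD from a differential DVH.

          doses   : (n_bins,) dose per DVH bin [Gy]
          volumes : (n_bins,) volume per bin (normalized internally)
          a       : tissue-specific parameter (a < 0 for targets, a >> 1 for serial OARs)
          """
          v = np.asarray(volumes, dtype=float)
          v = v / v.sum()
          return float(np.sum(v * np.asarray(doses, dtype=float) ** a) ** (1.0 / a))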

  20. A rule-based systems approach to spacecraft communications configuration optimization

    NASA Technical Reports Server (NTRS)

    Rash, James L.; Wong, Yen F.; Cieplak, James J.

    1988-01-01

    An experimental rule-based system for optimizing user spacecraft communications configurations was developed at NASA to support mission planning for spacecraft that obtain telecommunications services through NASA's Tracking and Data Relay Satellite System. Designated Expert for Communications Configuration Optimization (ECCO), and implemented in the OPS5 production system language, the system has shown the validity of a rule-based systems approach to this optimization problem. The development of ECCO and the incremental optimization method on which it is based are discussed. A test case using hypothetical mission data is included to demonstrate the optimization concept.

  1. A Third-Generation Evidence Base for Human Spaceflight Risks

    NASA Technical Reports Server (NTRS)

    Kundrot, Craig E.; Lumpkins, Sarah; Steil, Jennifer; Pellis, Neal; Charles, John

    2014-01-01

    NASA's Human Research Program (HRP) seeks to understand and mitigate risks to crew health and performance in exploration missions. HRP's evidence base consists of an Evidence Report for each HRP risk. There have been three generations of Evidence Reports: (1) review articles (+ good content; - limited authorship, infrequent updates); (2) Wikipedia articles (+ viewed often, very open to contributions; - summary of reviews, very few contributions); (3) HRP-controlled wiki articles (+ incremental additions to review articles, with editorial control).

  2. The future of population-based postmarket drug risk assessment: a regulator's perspective.

    PubMed

    Hammad, T A; Neyarapally, G A; Iyasu, S; Staffa, J A; Dal Pan, G

    2013-09-01

    The US Food and Drug Administration emphasizes the role of regulatory science in the fulfillment of its mission to promote and protect public health and foster innovation. With respect to the evaluation of drug effects in the real world, regulatory science plays an important role in drug risk assessment and management. This article discusses opportunities and challenges with population-based drug risk assessment as well as related regulatory science knowledge gaps in the following areas: (i) population-based data sources and methods to evaluate drug safety issues; (ii) evidence-based thresholds to account for uncertainty in postmarket data; (iii) approaches to optimize the integration and interpretation of evidence from different sources; and (iv) approaches to evaluate the real-world impact of regulatory decisions. Regulators should continue the ongoing dialogue with multiple stakeholders to strengthen regulatory safety science and address these and other critical knowledge gaps. PMID:23739537

  3. Reliability-based robust design optimization of vehicle components, Part II: Case studies

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2015-06-01

    In Part I, reliability-based optimization, reliability-based sensitivity analysis and robust design methods were employed to propose an effective approach for the reliability-based robust design optimization of vehicle components. In this paper, applications of the method to the reliability-based robust optimization of vehicle components are discussed further. Examples of axles, torsion bars, and coil and composite springs are presented as numerical investigations. The results show that the proposed method is efficient for the reliability-based robust design optimization of vehicle components.

  4. Risk-based testing of imported animals: A case study for bovine tuberculosis in The Netherlands.

    PubMed

    de Vos, Clazien J; van der Goot, Jeanet A; van Zijderveld, Fred G; Swanenburg, Manon; Elbers, Armin R W

    2015-09-01

    In intra-EU trade, the health status of animals is warranted by a health certificate issued after clinical inspection in the exporting country. This certificate cannot guarantee absence of infection, especially not for diseases with a long incubation period and no overt clinical signs, such as bovine tuberculosis (bTB). The Netherlands has been officially free of bTB since 1999. However, frequent reintroductions occurred in the past 15 years through importation of infected cattle. Additional testing (AT) of imported cattle could enhance the probability of detecting an imported bTB infection at an early stage. The goal of this study was to evaluate the effectiveness of risk-based AT for bTB in cattle imported into The Netherlands. A generic stochastic import risk model was developed that simulates introduction of infection into an importing country through importation of live animals. The main output parameters are the number of infected animals that are imported (Ninf), the number of infected animals that are detected by testing (Ndet), and the economic losses incurred by importing infected animals (loss). The model was parameterized for bTB. Model calculations were optimized to either maximize Ndet or minimize loss. Model results indicate that the risk of bTB introduction into The Netherlands is very high. For the current situation, in which Dutch health checks on imported cattle are limited to a clinical inspection of a random sample of 5-10% of imported animals, the calculated annual Ninf=99 (median value). Random AT of 8% of all imported cattle results in Ndet=7 (median value), while the median Ndet=75 if the sampling strategy for AT is optimized to maximize Ndet. However, in the latter scenario, loss is more than twice as large as in the current situation, because only calves are tested, for which the cost of detection is higher than the expected gain of preventing a possible outbreak. When optimizing the sampling strategy for AT to minimize loss, only breeding

  5. Formation mechanisms and optimization of trap-based positron beams

    NASA Astrophysics Data System (ADS)

    Natisin, M. R.; Danielson, J. R.; Surko, C. M.

    2016-02-01

    Described here are simulations of pulsed, magnetically guided positron beams formed by ejection from Penning-Malmberg-style traps. In a previous paper [M. R. Natisin et al., Phys. Plasmas 22, 033501 (2015)], simulations were developed and used to describe the operation of an existing trap-based beam system and provided good agreement with experimental measurements. These techniques are used here to study the processes underlying beam formation in more detail and under more general conditions, therefore further optimizing system design. The focus is on low-energy beams (˜eV) with the lowest possible spread in energies (<10 meV), while maintaining microsecond pulse durations. The simulations begin with positrons trapped within a potential well and subsequently ejected by raising the bottom of the trapping well, forcing the particles over an end-gate potential barrier. Under typical conditions, the beam formation process is intrinsically dynamical, with the positron dynamics near the well lip, just before ejection, particularly crucial to setting beam quality. In addition to an investigation of the effects of beam formation on beam quality under typical conditions, two other regimes are discussed; one occurring at low positron temperatures in which significantly lower energy and temporal spreads may be obtained, and a second in cases where the positrons are ejected on time scales significantly faster than the axial bounce time, which results in the ejection process being essentially non-dynamical.

  6. A novel Retinex algorithm based on alternating direction optimization

    NASA Astrophysics Data System (ADS)

    Fu, Xueyang; Lin, Qin; Guo, Wei; Huang, Yue; Zeng, Delu; Ding, Xinghao

    2013-10-01

    The goal of the Retinex theory is to remove the effects of illumination from observed images. To address this typical ill-posed inverse problem, many existing Retinex algorithms obtain an enhanced image by using different assumptions either on the illumination or on the reflectance. One significant limitation of these Retinex algorithms is that if the assumption is false, the result is unsatisfactory. In this paper, we first build a Retinex model that includes two variables: the illumination and the reflectance. We propose an efficient and effective algorithm based on alternating direction optimization to solve this problem, where the FFT (fast Fourier transform) is used to speed up the computation. Compared with most existing Retinex algorithms, the proposed method solves for the illumination image and the reflectance image without converting the images to the logarithmic domain. One advantage is that, unlike other traditional Retinex algorithms, our method can simultaneously estimate the illumination image and the reflectance image, the latter of which is the ideal image without the illumination effect. Since our method directly separates the illumination and the reflectance, and the two variables constrain each other mutually in the computing process, the result is robust to some degree. Another advantage is that our method has a lower computational cost and can be applied to real-time processing.
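    As a concrete illustration of the alternating idea, the sketch below alternates a Fourier-domain smoothing solve for the illumination with a clipped pointwise division for the reflectance, assuming the multiplicative model S = L * R. It is a simplified variant for illustration only, not the authors' exact objective or update rules.

      import numpy as np

      def retinex_alternating(S, alpha=50.0, iters=20, eps=1e-6):
          """Estimate illumination L and reflectance R from an image S with values in (0, 1]."""
          H, W = S.shape
          # eigenvalues of the periodic Laplacian; they diagonalize the L-subproblem
          wy = 2.0 * np.cos(2.0 * np.pi * np.arange(H) / H) - 2.0
          wx = 2.0 * np.cos(2.0 * np.pi * np.arange(W) / W) - 2.0
          denom = 1.0 - alpha * (wy[:, None] + wx[None, :])   # = 1 + alpha * (-laplacian)
          R = np.ones_like(S)
          L = S.copy()
          for _ in range(iters):
              # L-step: argmin ||L - S/R||^2 + alpha * ||grad L||^2, solved with the FFT
              L = np.real(np.fft.ifft2(np.fft.fft2(S / np.maximum(R, eps)) / denom))
              L = np.clip(L, eps, None)
              # R-step: reflectance as a clipped pointwise ratio
              R = np.clip(S / L, 0.0, 1.0)
          return L, R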

  7. Optimizing the Growth of (111) Diamond for Diamond Based Magnetometry

    NASA Astrophysics Data System (ADS)

    Kamp, Eric; Godwin, Patrick; Samarth, Nitin; Snyder, David; de Las Casas, Charles; Awschalom, David D.

    Magnetometers based on nitrogen vacancy (NV) ensembles have recently achieved sub-picotesla sensitivities [Phys. Rev. X 5, 041001 (2015)], putting the technique on par with SQUID and MFM magnetometry. Typically these sensors use (100) oriented diamond with NV centers forming along all four (111) crystal orientations. This allows for vector magnetometry, but is a hindrance to the absolute sensitivity. Diamond grown on (111) oriented substrates through microwave plasma enhanced chemical vapor deposition (MP-CVD) provides a promising route in this context since such films can exhibit preferential orientation greater than 99% [Appl. Phys. Lett. 104, 102407 (2014)]. An important challenge though is to achieve sufficiently high NV center densities required for enhancing the sensitivity of an NV ensemble magnetometer. We report systematic studies of the MP-CVD growth and characterization of (111) oriented diamond, where we vary growth temperature, methane concentration, and nitrogen doping. For each film we study the nitrogen to NV ratio, the NV- to NV0 ratio, and alignment percentage to minimize sources of decoherence and ensure preferential alignment. From these measurements we determine the optimal growth parameters for high sensitivity, NV center ensemble scalar magnetometry. Funded by NSF-DMR.

  8. On the Optimization of GLite-Based Job Submission

    NASA Astrophysics Data System (ADS)

    Misurelli, Giuseppe; Palmieri, Francesco; Pardi, Silvio; Veronesi, Paolo

    2011-12-01

    A Grid is a very dynamic, complex and heterogeneous system, whose reliability can be adversely conditioned by several different factors such as communication and hardware faults, middleware bugs or wrong configurations due to human errors. As the infrastructure scales, spanning a large number of sites, each hosting hundreds or thousands of hosts/resources, the occurrence of runtime faults following job submission becomes a very frequent phenomenon. Therefore, fault avoidance becomes a fundamental aim in modern Grids, since the dependability of individual resources, spread across widely distributed computing infrastructures and often used outside of their native organizational boundaries, cannot be guaranteed in any systematic way. Accordingly, we propose a simple job optimization solution based on a user-driven fault avoidance strategy. Such a strategy starts from the introduction, within the grid information system, of several on-line service-monitoring metrics that can be used as specific hints to the workload management system for driving resource discovery operations according to a fault-free resource-scheduling plan. This solution, whose main goal is to minimize execution time by avoiding execution failures, proved to be very effective in increasing both the user-perceivable quality and the overall grid performance.

  9. Pressure distribution based optimization of phase-coded acoustical vortices

    SciTech Connect

    Zheng, Haixiang; Gao, Lu; Dai, Yafei; Ma, Qingyu; Zhang, Dong

    2014-02-28

    Based on the acoustic radiation of a point source, the physical mechanism of phase-coded acoustical vortices is investigated through derivations of formulae for the acoustic pressure and vibration velocity. Various factors that affect the optimization of acoustical vortices are analyzed. Numerical simulations of the axial, radial, and circular pressure distributions are performed with different source numbers, frequencies, and axial distances. The results prove that the acoustic pressure of acoustical vortices is linearly proportional to the source number, and that circular pressure distributions fluctuate less when more sources are used. As the source frequency increases, the acoustic pressure of the acoustical vortices increases and the vortex radius decreases. Meanwhile, a larger vortex radius with reduced acoustic pressure is obtained at longer axial distances. With the 6-source experimental system, circular and radial pressure distributions at various frequencies and axial distances have been measured, and they show good agreement with the results of the numerical simulations. These favorable acoustic pressure distribution results provide a theoretical basis for further studies of acoustical vortices.

  10. Limitations of Adjoint-Based Optimization for Separated Flows

    NASA Astrophysics Data System (ADS)

    Otero, J. Javier; Sharma, Ati; Sandberg, Richard

    2015-11-01

    Cabin noise is generated by the transmission of turbulent pressure fluctuations through a vibrating panel and can lead to fatigue. In the present study, we model this problem by using DNS to simulate the flow separating off a backward facing step and interacting with a plate downstream of the step. An adjoint formulation of the full compressible Navier-Stokes equations with varying viscosity is used to calculate the optimal control required to minimize the fluid-structure-acoustic interaction with the plate. To achieve noise reduction, a cost function in wavenumber space is chosen to minimize the excitation of the lower structural modes of the structure. To ensure the validity of time-averaged cost functions, it is essential that the time horizon is long enough to be a representative sample of the statistical behaviour of the flow field. The results from the current study show how this scenario is not always feasible for separated flows, because the chaotic behaviour of turbulence surpasses the ability of adjoint-based methods to compute time-dependent sensitivities of the flow.

  11. Optimal grid-based methods for thin film micromagnetics simulations

    NASA Astrophysics Data System (ADS)

    Muratov, C. B.; Osipov, V. V.

    2006-08-01

    Thin film micromagnetics are a broad class of materials with many technological applications, primarily in magnetic memory. The dynamics of the magnetization distribution in these materials is traditionally modeled by the Landau-Lifshitz-Gilbert (LLG) equation. Numerical simulations of the LLG equation are complicated by the need to compute the stray field due to the inhomogeneities in the magnetization which presents the chief bottleneck for the simulation speed. Here, we introduce a new method for computing the stray field in a sample for a reduced model of ultra-thin film micromagnetics. The method uses a recently proposed idea of optimal finite difference grids for approximating Neumann-to-Dirichlet maps and has an advantage of being able to use non-uniform discretization in the film plane, as well as an efficient way of dealing with the boundary conditions at infinity for the stray field. We present several examples of the method's implementation and give a detailed comparison of its performance for studying domain wall structures compared to the conventional FFT-based methods.

  12. Beamlet based direct aperture optimization for MERT using a photon MLC

    SciTech Connect

    Henzen, D. Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Joosten, A.; Lössl, K.; Aebersold, D. M.; Chatelain, C.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.

    2014-12-15

    Purpose: A beamlet based direct aperture optimization (DAO) for modulated electron radiotherapy (MERT) using photon multileaf collimator (pMLC) shaped electron fields is developed and investigated. Methods: The Swiss Monte Carlo Plan (SMCP) allows the calculation of dose distributions for pMLC shaped electron beams. SMCP is interfaced with the Eclipse TPS (Varian Medical Systems, Palo Alto, CA) which can thus be included into the inverse treatment planning process for MERT. This process starts with the import of a CT-scan into Eclipse, the contouring of the target and the organs at risk (OARs), and the choice of the initial electron beam directions. For each electron beam, the number of apertures, their energy, and initial shape are defined. Furthermore, the DAO requires dose–volume constraints for the structures contoured. In order to carry out the DAO efficiently, the initial electron beams are divided into a grid of beamlets. For each of those, the dose distribution is precalculated using a modified electron beam model, resulting in a dose list for each beamlet and energy. Then the DAO is carried out, leading to a set of optimal apertures and corresponding weights. These optimal apertures are now converted into pMLC shaped segments and the dose calculation for each segment is performed. For these dose distributions, a weight optimization process is launched in order to minimize the differences between the dose distribution using the optimal apertures and the pMLC segments. Finally, a deliverable dose distribution for the MERT plan is obtained and loaded back into Eclipse for evaluation. For an idealized water phantom geometry, a MERT treatment plan is created and compared to the plan obtained using a previously developed forward planning strategy. Further, MERT treatment plans for three clinical situations (breast, chest wall, and parotid metastasis of a squamous cell skin carcinoma) are created using the developed inverse planning strategy. The MERT plans are

  13. Comparison of Genetic Algorithm, Particle Swarm Optimization and Biogeography-based Optimization for Feature Selection to Classify Clusters of Microcalcifications

    NASA Astrophysics Data System (ADS)

    Khehra, Baljit Singh; Pharwaha, Amar Partap Singh

    2016-06-01

    Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves classifier performance and decreases the computational burden imposed on the classifier by using many features. Selecting an optimal subset of features from a large number of available features in a given problem domain is a difficult search problem: for n features, the total number of possible feature subsets is 2^n, so the selection of an optimal subset of features belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCC features from all possible subsets of features using a genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCC samples have been selected from mammogram images of the DDSM database. A total of 50 features extracted from the benign and malignant MCC samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier; a support vector machine is used as the classifier. From the experimental results, it is observed that the performance of the PSO-based and BBO-based algorithms in selecting an optimal subset of features for classifying MCCs as benign or malignant is better than that of the GA-based algorithm.
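    As an illustration of how such a wrapper search can be wired to an SVM fitness, the sketch below implements a toy genetic algorithm over binary feature masks; the crossover, mutation and parameter choices are illustrative assumptions rather than the settings used in the paper, and analogous update rules would replace them for PSO or BBO.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def ga_feature_selection(X, y, pop_size=30, generations=40,
                               cx_rate=0.8, mut_rate=0.02, seed=0):
          rng = np.random.default_rng(seed)
          n_feat = X.shape[1]

          def fitness(mask):
              # correct classification rate of an SVM on the selected feature columns
              return cross_val_score(SVC(), X[:, mask], y, cv=5).mean() if mask.any() else 0.0

          pop = rng.random((pop_size, n_feat)) < 0.5
          fit = np.array([fitness(ind) for ind in pop])
          for _ in range(generations):
              children = []
              while len(children) < pop_size:
                  i, j = rng.integers(0, pop_size, 2), rng.integers(0, pop_size, 2)
                  p1, p2 = pop[i[np.argmax(fit[i])]], pop[j[np.argmax(fit[j])]]  # tournaments
                  if rng.random() < cx_rate:                    # single-point crossover
                      cut = rng.integers(1, n_feat)
                      child = np.concatenate([p1[:cut], p2[cut:]])
                  else:
                      child = p1.copy()
                  child ^= rng.random(n_feat) < mut_rate         # bit-flip mutation
                  children.append(child)
              pop = np.array(children)
              fit = np.array([fitness(ind) for ind in pop])
          best = int(np.argmax(fit))
          return pop[best], fit[best]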

  14. Aeroelastic Optimization Study Based on X-56A Model

    NASA Technical Reports Server (NTRS)

    Li, Wesley; Pak, Chan-Gi

    2014-01-01

    A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high fidelity finite element models to characterize the design space was successfully developed and established. Two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center were presented. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements including strength, buckling, and flutter. A hybrid and discretization optimization approach was implemented to improve accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass balancing optimization study. The results provide guidance to modify the fabricated flexible wing design and move the design flutter speeds back into the flight envelope so that the original objective of X-56A flight test can be accomplished.

  15. A simple data base for identification of risk profiles

    SciTech Connect

    Munganahalli, D.

    1996-12-31

    Sedco Forex is a drilling contractor that operates approximately 80 rigs on land and offshore worldwide. The HSE management system developed by Sedco Forex is an effort to prevent accidents and minimize losses. An integral part of the HSE management system is establishing risk profiles and thereby minimizing risk and reducing loss exposures. Risk profiles are established based on accident reports, potential accident reports and other risk identification reports (RIR) like the Du Pont STOP system. A rig could fill in as many as 30 accident reports, 30 potential accident reports and 500 STOP cards each year. Statistics are important for an HSE management system, since they are indicators of success or failure of HSE systems. It is however difficult to establish risk profiles based on statistical information, unless tools are available at the rig site to aid with the analysis. Risk profiles are then used to identify important areas in the operation that may require specific attention to minimize the loss exposure. Programs to address the loss exposure can then be identified and implemented with either a local or corporate approach. In January 1995, Sedco Forex implemented a uniform HSE Database on all the rigs worldwide. In one year companywide, the HSE database would contain information on approximately 500 accident and potential accident reports, and 10,000 STOP cards. This paper demonstrates the salient features of the database and describes how it has helped in establishing key risk profiles. It also shows a recent example of how risk profiles have been established at the corporate level and used to identify the key contributing factors to hands and finger injuries. Based on this information, a campaign was launched to minimize the frequency of occurrence and associated loss attributed to hands and fingers accidents.

  16. On the choice of risk optimal data embedding strategy, safe embedding rate, and passive steganalysis

    NASA Astrophysics Data System (ADS)

    Chandramouli, Rajarathnam

    2005-03-01

    Suppose Alice the information hider wants to send a stego message to Bob in the presence of Wendy the (passive) warden. Wendy employs one of n different passive steganalysis detectors to decide if the data from Alice contains any hidden message before passing it on to Bob. Suppose Alice can choose from a set of information hiding schemes and possesses only an incomplete information about the steganalysis strategy choice of Wendy. That is, suppose Alice only knows an ordering of the probabilities (and not the values themselves), say, p1 >= p2 >= ... >= pn where pj is the probability of Wendy using jth detector. Under this scenario we investigate answers to the following two questions by generalizing a previous result by the author and deriving new ones: (a) how must Alice choose the optimal data hiding strategy subject to risk constraints and (b) what is the maximum safe embedding rate, i.e., maximum message rate that can be embedded without being detected by Wendy? Detailed analysis and numerical results are presented to answer these questions.

  17. Coupling risk-based remediation with innovative technology

    SciTech Connect

    Goodheart, G.F.; Teaf, C.M. |; Manning, M.J.

    1998-05-01

    Tiered risk-based cleanup approaches have been effectively used at petroleum sites, pesticide sites and other commercial/industrial facilities. For example, the Illinois Environmental Protection Agency (IEPA) has promulgated guidance for a Tiered Approach to Corrective action Objectives (TACO) to establish site-specific remediation goals for contaminated soil and groundwater. As in the case of many other state programs, TACO is designed to provide for adequate protection of human health and the environment based on potential risks posed by site conditions. It also incorporates site-related information that may allow more cost-effective remediation. IEPA developed TACO to provide flexibility to site owners/operators when formulating site-specific remediation activities, as well as to hasten property redevelopment to return sites to more productive use. Where appropriate, risk-based cleanup objectives as set by TACO-type programs may be coupled with innovative remediation technologies such as air sparging, bioremediation and soil washing.

  18. Performance of an Adipokine Pathway-Based Multilocus Genetic Risk Score for Prostate Cancer Risk Prediction

    PubMed Central

    Ribeiro, Ricardo J. T.; Monteiro, Cátia P. D.; Azevedo, Andreia S. M.; Cunha, Virgínia F. M.; Ramanakumar, Agnihotram V.; Fraga, Avelino M.; Pina, Francisco M.; Lopes, Carlos M. S.; Medeiros, Rui M.; Franco, Eduardo L.

    2012-01-01

    Few biomarkers are available to predict prostate cancer risk. Single nucleotide polymorphisms (SNPs) tend to have weak individual effects but, in combination, they have stronger predictive value. Adipokine pathways have been implicated in prostate cancer pathogenesis. We used a candidate pathway approach to investigate 29 functional SNPs in key genes from relevant adipokine pathways in a sample of 1006 men eligible for prostate biopsy. We used stepwise multivariate logistic regression and bootstrapping to develop a multilocus genetic risk score by weighting each risk SNP empirically based on its association with disease. Seven common functional polymorphisms were associated with overall and high-grade prostate cancer (Gleason≥7), whereas three variants were associated with high metastatic-risk prostate cancer (PSA≥20 ng/mL and/or Gleason≥8). The addition of genetic variants to age and PSA improved the predictive accuracy for overall and high-grade prostate cancer, as measured by the area under the receiver-operating-characteristic curve (P<0.02), the net reclassification improvement (P<0.001) and the integrated discrimination improvement (P<0.001). These results suggest that functional polymorphisms in adipokine pathways may act individually and cumulatively to affect the risk and severity of prostate cancer, supporting the influence of adipokine pathways in the pathogenesis of prostate cancer. Use of such an adipokine multilocus genetic risk score can enhance the predictive value of PSA and age in estimating absolute risk, which supports further evaluation of its clinical significance. PMID:22792137
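    A common way to construct such a weighted multilocus score is to weight each SNP's risk-allele count by its log-odds ratio from the multivariate logistic model and sum the products per subject; a minimal sketch under that assumption is shown below (the paper's stepwise selection and bootstrap steps are omitted).

      import numpy as np
      import statsmodels.api as sm

      def multilocus_risk_score(genotypes, outcome):
          """genotypes: (n_subjects, n_snps) risk-allele counts; outcome: 0/1 case status."""
          X = sm.add_constant(genotypes)
          fit = sm.Logit(outcome, X).fit(disp=0)
          weights = np.asarray(fit.params)[1:]      # per-SNP log-odds ratios
          return genotypes @ weights                # weighted genetic risk score per subject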

  19. Reducing the Academic Risks of Over-Optimism: The Longitudinal Effects of Attributional Retraining on Cognition and Achievement

    ERIC Educational Resources Information Center

    Haynes, Tara L.; Ruthig, Joelle C.; Perry, Raymond P.; Stupnisky, Robert H.; Hall, Nathan C.

    2006-01-01

    Although optimism is generally regarded as a positive dispositional characteristic, unmitigated optimism can be problematic. The adaptiveness of overly optimistic expectations in novel or unfamiliar settings is questionable because individuals have little relevant experience on which to base such expectations. In this four-phase longitudinal…

  20. Task-based optimization of flip angle for texture analysis in MRI

    NASA Astrophysics Data System (ADS)

    Brand, Jonathan F.; Furenlid, Lars R.; Altbach, Maria I.; Galons, Jean-Phillippe; Bhattacharyya, Achyut; Sharma, Puneet; Bhattacharyya, Tulshi; Bilgin, Ali; Martin, Diego R.

    2016-03-01

    Chronic liver disease is a worldwide health problem, and hepatic fibrosis (HF) is one of the hallmarks of the disease. The current reference standard for diagnosing HF is biopsy followed by pathologist examination; however, this is limited by sampling error and carries a risk of complications. Pathology diagnosis of HF is based on textural change in the liver caused by a lobular collagen network that develops within portal triads. The scale of the collagen lobules is characteristically on the order of 1-5 mm, which approximates the resolution limit of in vivo gadolinium-enhanced magnetic resonance imaging in the delayed phase. We have shown that formalin-fixed ex vivo human liver samples mimic the textural contrast of in vivo Gd-MRI and can be used as MRI phantoms. We have developed a local texture analysis that is applied to phantom images, and the results are used to train model observers. The performance of the observer is assessed with the area under the receiver-operating-characteristic curve (AUROC) as the figure of merit. To optimize the MRI pulse sequence, phantoms are scanned multiple times over a range of flip angles. The flip angle associated with the highest AUROC is chosen as optimal for the task of detecting HF.

  1. Biological-based optimization and volumetric modulated arc therapy delivery for stereotactic body radiation therapy

    SciTech Connect

    Diot, Quentin; Kavanagh, Brian; Timmerman, Robert; Miften, Moyed

    2012-01-15

    Purpose: To describe biological-based optimization and Monte Carlo (MC) dose calculation-based treatment planning for volumetric modulated arc therapy (VMAT) delivery of stereotactic body radiation therapy (SBRT) in lung, liver, and prostate patients. Methods: Optimization strategies and VMAT planning parameters using a biological-based optimization MC planning system were analyzed for 24 SBRT patients. Patients received a median dose of 45 Gy [range, 34-54 Gy] for lung tumors in 1-5 fxs and a median dose of 52 Gy [range, 48-60 Gy] for liver tumors in 3-6 fxs. Prostate patients received a fractional dose of 10 Gy in 5 fxs. Biological-cost functions were used for plan optimization, and its dosimetric quality was evaluated using the conformity index (CI), the conformation number (CN), the ratio of the volume receiving 50% of the prescription dose over the planning target volume (Rx/PTV50). The quality and efficiency of the delivery were assessed according to measured quality assurance (QA) passing rates and delivery times. For each disease site, one patient was replanned using physical cost function and compared to the corresponding biological plan. Results: Median CI, CN, and Rx/PTV50 for all 24 patients were 1.13 (1.02-1.28), 0.79 (0.70-0.88), and 5.3 (3.1-10.8), respectively. The median delivery rate for all patients was 410 MU/min with a maximum possible rate of 480 MU/min (85%). Median QA passing rate was 96.7%, and it did not significantly vary with the tumor site. Conclusions: VMAT delivery of SBRT plans optimized using biological-motivated cost-functions result in highly conformal dose distributions. Plans offer shorter treatment-time benefits and provide efficient dose delivery without compromising the plan conformity for tumors in the prostate, lung, and liver, thereby improving patient comfort and clinical throughput. The short delivery times minimize the risk of patient setup and intrafraction motion errors often associated with long SBRT treatment

  2. Is the 90th Percentile Adequate? The Optimal Waist Circumference Cutoff Points for Predicting Cardiovascular Risks in 124,643 15-Year-Old Taiwanese Adolescents

    PubMed Central

    Ho, ChinYu; Chen, Hsin-Jen; Huang, Nicole; Yeh, Jade Chienyu; deFerranti, Sarah

    2016-01-01

    Adolescent obesity has increased to alarming proportions globally. However, few studies have investigated the optimal waist circumference (WC) of Asian adolescents. This study sought to establish the optimal WC cutoff points that identify a cluster of cardiovascular risk factors (CVRFs) among 15-year-old ethnically Chinese adolescents. This study was a regional population-based study on the CVRFs among adolescents who enrolled in all the senior high schools in Taipei City, Taiwan, between 2011 and 2014. Four cross-sectional health examinations of first-year senior high school (grade 10) students were conducted from September to December of each year. A total of 124,643 adolescents aged 15 (boys: 63,654; girls: 60,989) were recruited. Participants who had at least three of five CVRFs were classified as the high-risk group. We used receiver-operating characteristic curves and the area under the curve (AUC) to determine the optimal WC cutoff points and the accuracy of WC in predicting high cardiovascular risk. WC was a good predictor for high cardiovascular risk for both boys (AUC: 0.845, 95% confidence interval [CI]: 0.833–0.857) and girls (AUC: 0.763, 95% CI: 0.731–0.795). The optimal WC cutoff points were ≥78.9 cm for boys (77th percentile) and ≥70.7 cm for girls (77th percentile). Adolescents with normal weight and an abnormal WC were more likely to be in the high cardiovascular risk group (odds ratio: 3.70, 95% CI: 2.65–5.17) compared to their peers with normal weight and normal WC. The optimal WC cutoff point of 15-year-old Taiwanese adolescents for identifying CVRFs should be the 77th percentile; the 90th percentile of the WC might be inadequate. The high WC criteria can help health professionals identify higher proportion of the adolescents with cardiovascular risks and refer them for further evaluations and interventions. Adolescents’ height, weight and WC should be measured as a standard practice in routine health checkups. PMID:27389572

  3. Is the 90th Percentile Adequate? The Optimal Waist Circumference Cutoff Points for Predicting Cardiovascular Risks in 124,643 15-Year-Old Taiwanese Adolescents.

    PubMed

    Lee, Jason Jiunshiou; Ho, ChinYu; Chen, Hsin-Jen; Huang, Nicole; Yeh, Jade Chienyu; deFerranti, Sarah

    2016-01-01

    Adolescent obesity has increased to alarming proportions globally. However, few studies have investigated the optimal waist circumference (WC) of Asian adolescents. This study sought to establish the optimal WC cutoff points that identify a cluster of cardiovascular risk factors (CVRFs) among 15-year-old ethnically Chinese adolescents. This study was a regional population-based study on the CVRFs among adolescents who enrolled in all the senior high schools in Taipei City, Taiwan, between 2011 and 2014. Four cross-sectional health examinations of first-year senior high school (grade 10) students were conducted from September to December of each year. A total of 124,643 adolescents aged 15 (boys: 63,654; girls: 60,989) were recruited. Participants who had at least three of five CVRFs were classified as the high-risk group. We used receiver-operating characteristic curves and the area under the curve (AUC) to determine the optimal WC cutoff points and the accuracy of WC in predicting high cardiovascular risk. WC was a good predictor for high cardiovascular risk for both boys (AUC: 0.845, 95% confidence interval [CI]: 0.833-0.857) and girls (AUC: 0.763, 95% CI: 0.731-0.795). The optimal WC cutoff points were ≥78.9 cm for boys (77th percentile) and ≥70.7 cm for girls (77th percentile). Adolescents with normal weight and an abnormal WC were more likely to be in the high cardiovascular risk group (odds ratio: 3.70, 95% CI: 2.65-5.17) compared to their peers with normal weight and normal WC. The optimal WC cutoff point of 15-year-old Taiwanese adolescents for identifying CVRFs should be the 77th percentile; the 90th percentile of the WC might be inadequate. The high WC criteria can help health professionals identify higher proportion of the adolescents with cardiovascular risks and refer them for further evaluations and interventions. Adolescents' height, weight and WC should be measured as a standard practice in routine health checkups. PMID:27389572

  4. Risk-based audit selection of dairy farms.

    PubMed

    van Asseldonk, M A P M; Velthuis, A G J

    2014-02-01

    Dairy farms are audited in the Netherlands on numerous process standards. Each farm is audited once every 2 years. Increasing demands for cost-effectiveness in farm audits can be met by introducing risk-based principles. This implies targeting subpopulations with a higher risk of poor process standards. To select farms for an audit that present higher risks, a statistical analysis was conducted to test the relationship between the outcome of farm audits and bulk milk laboratory results before the audit. The analysis comprised 28,358 farm audits and all conducted laboratory tests of bulk milk samples 12 mo before the audit. The overall outcome of each farm audit was classified as approved or rejected. Laboratory results included somatic cell count (SCC), total bacterial count (TBC), antimicrobial drug residues (ADR), level of butyric acid spores (BAB), freezing point depression (FPD), level of free fatty acids (FFA), and cleanliness of the milk (CLN). The bulk milk laboratory results were significantly related to audit outcomes. Rejected audits are likely to occur on dairy farms with higher mean levels of SCC, TBC, ADR, and BAB. Moreover, in a multivariable model, maxima for TBC, SCC, and FPD as well as standard deviations for TBC and FPD are risk factors for negative audit outcomes. The efficiency curve of a risk-based selection approach, on the basis of the derived regression results, dominated the current random selection approach. To capture 25, 50, or 75% of the population with poor process standards (i.e., audit outcome of rejected), respectively, only 8, 20, or 47% of the population had to be sampled based on a risk-based selection approach. Milk quality information can thus be used to preselect high-risk farms to be audited more frequently. PMID:24290823
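    The efficiency comparison described here amounts to ranking farms by their predicted probability of a rejected audit and asking what fraction of the population must be sampled to capture a given share of the rejected farms. A minimal sketch of that capture curve, assuming predicted probabilities from a model on the bulk-milk statistics (e.g. SCC, TBC, ADR, BAB, FPD):

      import numpy as np

      def capture_curve(risk_scores, rejected):
          """risk_scores: predicted probability of rejection; rejected: 1 if the audit was rejected."""
          scores = np.asarray(risk_scores, dtype=float)
          rej = np.asarray(rejected, dtype=float)
          order = np.argsort(-scores)                              # audit highest-risk farms first
          captured = np.cumsum(rej[order]) / max(rej.sum(), 1.0)   # share of rejected farms found
          sampled = np.arange(1, len(order) + 1) / len(order)      # share of population audited
          return sampled, captured

      # smallest sampled fraction needed to capture 50% of rejected farms:
      # sampled, captured = capture_curve(p_hat, y); sampled[np.searchsorted(captured, 0.5)]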

  5. Risk-based principles for defining and managing water security

    PubMed Central

    Hall, Jim; Borgomeo, Edoardo

    2013-01-01

    The concept of water security implies concern about potentially harmful states of coupled human and natural water systems. Those harmful states may be associated with water scarcity (for humans and/or the environment), floods or harmful water quality. The theories and practices of risk analysis and risk management have been developed and elaborated to deal with the uncertain occurrence of harmful events. Yet despite their widespread application in public policy, theories and practices of risk management have well-known limitations, particularly in the context of severe uncertainties and contested values. Here, we seek to explore the boundaries of applicability of risk-based principles as a means of formalizing discussion of water security. Not only do risk concepts have normative appeal, but they also provide an explicit means of addressing the variability that is intrinsic to hydrological, ecological and socio-economic systems. We illustrate the nature of these interconnections with a simulation study, which demonstrates how water resources planning could take more explicit account of epistemic uncertainties, tolerability of risk and the trade-offs in risk among different actors. PMID:24080616

  6. Risk-based principles for defining and managing water security.

    PubMed

    Hall, Jim; Borgomeo, Edoardo

    2013-11-13

    The concept of water security implies concern about potentially harmful states of coupled human and natural water systems. Those harmful states may be associated with water scarcity (for humans and/or the environment), floods or harmful water quality. The theories and practices of risk analysis and risk management have been developed and elaborated to deal with the uncertain occurrence of harmful events. Yet despite their widespread application in public policy, theories and practices of risk management have well-known limitations, particularly in the context of severe uncertainties and contested values. Here, we seek to explore the boundaries of applicability of risk-based principles as a means of formalizing discussion of water security. Not only do risk concepts have normative appeal, but they also provide an explicit means of addressing the variability that is intrinsic to hydrological, ecological and socio-economic systems. We illustrate the nature of these interconnections with a simulation study, which demonstrates how water resources planning could take more explicit account of epistemic uncertainties, tolerability of risk and the trade-offs in risk among different actors. PMID:24080616

  7. An Etching Yield Parameters Optimization Method Based on Ordinal Optimization and Tabu Search Hybrid Algorithm

    NASA Astrophysics Data System (ADS)

    Ruan, Cong; Sun, Xiao-Min; Song, Yi-Xu

    In this paper, we propose a method to optimize etching yield parameters. By defining a fitness function between the actual etching profile and the simulated profile, the problem of solving for the etching yield parameters is transformed into an optimization problem. The problem is nonlinear and high dimensional, and each simulation is computationally expensive, so a better solution must be searched for in a multidimensional space. An ordinal optimization and tabu search hybrid algorithm is introduced to solve this complex problem. This method ensures that a good enough solution is obtained in an acceptable time. The experimental results illustrate that the simulated profile obtained by this method is very similar to the actual etching profile in surface topography, and they show that the proposed method is feasible and valid.

  8. Comparative assessment of absolute cardiovascular disease risk characterization from non-laboratory-based risk assessment in South African populations

    PubMed Central

    2013-01-01

    Background All rigorous primary cardiovascular disease (CVD) prevention guidelines recommend absolute CVD risk scores to identify high- and low-risk patients, but laboratory testing can be impractical in low- and middle-income countries. The purpose of this study was to compare the ranking performance of a simple, non-laboratory-based risk score to laboratory-based scores in various South African populations. Methods We calculated and compared 10-year CVD (or coronary heart disease (CHD)) risk for 14,772 adults from thirteen cross-sectional South African populations (data collected from 1987 to 2009). Risk characterization performance for the non-laboratory-based score was assessed by comparing rankings of risk with six laboratory-based scores (three versions of Framingham risk, SCORE for high- and low-risk countries, and CUORE) using Spearman rank correlation and percent of population equivalently characterized as ‘high’ or ‘low’ risk. Total 10-year non-laboratory-based risk of CVD death was also calculated for a representative cross-section from the 1998 South African Demographic Health Survey (DHS, n = 9,379) to estimate the national burden of CVD mortality risk. Results Spearman correlation coefficients for the non-laboratory-based score with the laboratory-based scores ranged from 0.88 to 0.986. Using conventional thresholds for CVD risk (10% to 20% 10-year CVD risk), 90% to 92% of men and 94% to 97% of women were equivalently characterized as ‘high’ or ‘low’ risk using the non-laboratory-based and Framingham (2008) CVD risk score. These results were robust across the six risk scores evaluated and the thirteen cross-sectional datasets, with few exceptions (lower agreement between the non-laboratory-based and Framingham (1991) CHD risk scores). Approximately 18% of adults in the DHS population were characterized as ‘high CVD risk’ (10-year CVD death risk >20%) using the non-laboratory-based score. Conclusions We found a high level of

  9. Globally Optimal Base Station Clustering in Interference Alignment-Based Multicell Networks

    NASA Astrophysics Data System (ADS)

    Brandt, Rasmus; Mochaourab, Rami; Bengtsson, Mats

    2016-04-01

    Coordinated precoding based on interference alignment is a promising technique for improving the throughputs in future wireless multicell networks. In small networks, all base stations can typically jointly coordinate their precoding. In large networks however, base station clustering is necessary due to the otherwise overwhelmingly high channel state information (CSI) acquisition overhead. In this work, we provide a branch and bound algorithm for finding the globally optimal base station clustering. The algorithm is mainly intended for benchmarking existing suboptimal clustering schemes. We propose a general model for the user throughputs, which only depends on the long-term CSI statistics. The model assumes intracluster interference alignment and is able to account for the CSI acquisition overhead. By enumerating a search tree using a best-first search and pruning sub-trees in which the optimal solution provably cannot be, the proposed method converges to the optimal solution. The pruning is done using specifically derived bounds, which exploit some assumed structure in the throughput model. It is empirically shown that the proposed method has an average complexity which is orders of magnitude lower than that of exhaustive search.

  10. Community-based randomised controlled trial evaluating falls and osteoporosis risk management strategies

    PubMed Central

    Ciaschini, PM; Straus, SE; Dolovich, LR; Goeree, RA; Leung, KM; Woods, CR; Zimmerman, GM; Majumdar, SR; Spadafora, S; Fera, LA; Lee, HN

    2008-01-01

    Background Osteoporosis-related fractures are a significant public health concern. Interventions that increase detection and treatment of osteoporosis, as well as prevention of fractures and falls, are substantially underutilized. This paper outlines the protocol for a pragmatic randomised trial of a multifaceted community-based care program aimed at optimizing the evidence-based management of falls and fractures in patients at risk. Design 6-month randomised controlled study. Methods This population-based study was completed in the Algoma District of Ontario, Canada a geographically vast area with Sault Ste Marie (population 78 000) as its main city. Eligible patients were allocated to an immediate intervention protocol (IP) group, or a delayed intervention protocol (DP) group. The DP group received usual care for 6 months and then was crossed over to receive the interventions. Components of the intervention were directed at the physicians and their patients and included patient-specific recommendations for osteoporosis therapy as outlined by the clinical practice guidelines developed by Osteoporosis Canada, and falls risk assessment and treatment. Two primary outcomes were measured including implementation of appropriate osteoporosis and falls risk management. Secondary outcomes included quality of life and the number of falls, fractures, and hospital admissions over a twelve-month period. The patient is the unit of allocation and analysis. Analyses will be performed on an intention to treat basis. Discussion This paper outlines the protocol for a pragmatic randomised trial of a multi-faceted, community-based intervention to optimize the implementation of evidence based management for patients at risk for falls and osteoporosis. Trial Registration This trial has been registered with clinicaltrials.gov (ID: NCT00465387) PMID:18983670

  11. Prototype Biology-Based Radiation Risk Module Project

    NASA Technical Reports Server (NTRS)

    Terrier, Douglas; Clayton, Ronald G.; Patel, Zarana; Hu, Shaowen; Huff, Janice

    2015-01-01

    Biological effects of space radiation and risk mitigation are strategic knowledge gaps for the Evolvable Mars Campaign. The current epidemiology-based NASA Space Cancer Risk (NSCR) model contains large uncertainties (HAT #6.5a) due to lack of information on the radiobiology of galactic cosmic rays (GCR) and lack of human data. The use of experimental models that most accurately replicate the response of human tissues is critical for precision in risk projections. Our proposed study will compare DNA damage, histological, and cell kinetic parameters after irradiation in normal 2D human cells versus 3D tissue models, and it will use a multi-scale computational model (CHASTE) to investigate various biological processes that may contribute to carcinogenesis, including radiation-induced cellular signaling pathways. This cross-disciplinary work, with biological validation of an evolvable mathematical computational model, will help reduce uncertainties within NSCR and aid risk mitigation for radiation-induced carcinogenesis.

  12. Recent experiences using finite-element-based structural optimization

    NASA Technical Reports Server (NTRS)

    Paul, B. K.; Mcconnell, J. C.; Love, Mike H.

    1989-01-01

    Structural optimization has been available to the structural analysis community as a tool for many years. The popular use of displacement method finite-element techniques to analyze linearly elastic structures has resulted in an ability to calculate the weight and constraint gradients inexpensively for numerical optimization of structures. Here, recent experiences in the investigation and use of structural optimization are discussed. In particular, experience with the commercially available ADS/NASOPT code is addressed. An overview of the ADS/NASOPT procedure and how it was implemented is given. Two example problems are also discussed.

  13. Long-term Failure Prediction based on an ARP Model of Global Risk Network

    NASA Astrophysics Data System (ADS)

    Lin, Xin; Moussawi, Alaa; Szymanski, Boleslaw; Korniss, Gyorgy

    Risks that threaten modern societies form an intricately interconnected network. Hence, it is important to understand how risk materializations in distinct domains influence each other. In the paper, we study the global risks network defined by World Economic Forum experts in the form of a Stochastic Block Model. We model risks as Alternating Renewal Processes (ARPs) with variable intensities driven by hidden values of exogenous and endogenous failure probabilities. Based on the expert assessments and the historical status of each risk, we use maximum likelihood estimation to find the optimal model parameters and demonstrate that the model considering network effects significantly outperforms the others. In the talk, we discuss how the model can be used to provide quantitative means for measuring interdependencies and the materialization of risks in the network. We also present recent results of long-term predictions in the form of predicted distributions of materializations over various time periods. Finally, we show how the simulation of ARPs enables us to probe the limits of the predictability of the system parameters from historical data and the ability to recover hidden variables. Supported in part by DTRA, ARL NS-CTA.
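    As a toy illustration of the alternating renewal idea (network coupling and the fitting of the hidden parameters are omitted), each risk can be simulated as a two-state process that materializes with some per-step activation probability and recovers with some per-step recovery probability; both probabilities are assumed inputs here.

      import numpy as np

      def simulate_arp(p_activate, p_recover, horizon, seed=0):
          """Discrete-time alternating renewal process for one risk (0 = dormant, 1 = active)."""
          rng = np.random.default_rng(seed)
          state = np.zeros(horizon, dtype=int)
          for t in range(1, horizon):
              if state[t - 1] == 0:
                  state[t] = int(rng.random() < p_activate)    # risk materializes
              else:
                  state[t] = int(rng.random() >= p_recover)    # stays active unless it recovers
          return state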

  14. The spatial optimism model research for the regional land use based on the ecological constraint

    NASA Astrophysics Data System (ADS)

    XU, K.; Lu, J.; Chi, Y.

    2013-12-01

    The study focuses on the Yunnan-Guizhou Plateau (i.e., Yunnan and Guizhou provinces) in China. Since the Yunnan-Guizhou region consists of closed basins, land resources suitable for development are in short supply, and the ecological problems in the area are quite complicated. In such circumstances, a spatial optimization model is needed to obtain the developable basin area and its distribution. In this research, Digital Elevation Model (DEM) and land use data are used to derive the boundary rules of the basin distribution. Furthermore, natural risks, ecological risks and human-made ecological risks are analyzed in an integrated way. Finally, the spatial overlay analysis method is used to model the basin area and distribution that can be developed for industry and urbanization. The study process can be divided into six steps. First, the basins and their distribution are identified: the DEM data are used to extract geomorphological characteristics, and plaque regions with gradients under eight degrees are selected. Among these regions, the total area of the plaques larger than 8 km2 is 54,000 km2, 10% of the total area. These regions are selected as potential sites for industry and urbanization, and the analyses in the later five steps are restricted to them. Secondly, the natural risks are analyzed: the conditions of earthquakes, debris flows, rainstorms and floods are combined to classify the natural risks. Thirdly, the ecological risks are analyzed, covering ecological sensitivity and the importance of ecosystem service functions. According to the regional ecological features, the sensitivity, comprising soil erosion, acid rain, stony desertification and survival-condition factors, is derived and classified according to the median value to obtain the ecological sensitivity partition. The importance of ecosystem service functions is classified and partitioned considering biodiversity protection and water conservation factors. The fourth

  15. Study of operational risk-based configuration control

    SciTech Connect

    Vesely, W E; Samanta, P K; Kim, I S

    1991-08-01

    This report studies aspects of a risk-based configuration control system to detect and control plant configurations from a risk perspective. Configuration control, as the term is used here, is the management of component configurations to achieve specific objectives. One important objective is to control risk and safety. Another is to operate efficiently and make effective use of available resources. PSA-based evaluations are performed to study the contributions of configurations to core-melt frequency and core-melt probability for two plants. Some equipment configurations can cause large increases in core-melt frequency, and a number of such configurations are not currently controlled by technical specifications. However, the expected frequency of occurrence of the impacting configurations is small, and the core-melt probability contributions are also generally small. The insights from this evaluation are used to develop the framework for an effective risk-based configuration control system. The focal points of such a system and the requirements for tool development to implement the system are defined. The requirements of the risk models needed for the system, and the uses of plant-specific data, are also discussed. 18 refs., 25 figs., 10 tabs.

  16. Optimal design and selection of magneto-rheological brake types based on braking torque and mass

    NASA Astrophysics Data System (ADS)

    Nguyen, Q. H.; Lang, V. T.; Choi, S. B.

    2015-06-01

    In developing magnetorheological brakes (MRBs), it is well known that the braking torque and the mass of the MRBs are important factors that should be considered in the product’s design. This research focuses on the optimal design of different types of MRBs, from which we identify an optimal selection of MRB types, considering braking torque and mass. In the optimization, common types of MRBs such as disc-type, drum-type, hybrid-type, and T-shaped MRBs are considered. The optimization problem is to find an optimal MRB structure that can produce the required braking torque while minimizing its mass. After a brief description of the configuration of the MRBs, the MRBs’ braking torque is derived based on the Herschel-Bulkley rheological model of the magnetorheological fluid. Then, the optimal designs of the MRBs are analyzed. The optimization objective is to minimize the mass of the brake while the braking torque is constrained to be greater than a required value. In addition, the power consumption of the MRBs is also considered as a reference parameter in the optimization. A finite element analysis integrated with an optimization tool is used to obtain optimal solutions for the MRBs. Optimal solutions of MRBs with different required braking torque values are obtained based on the proposed optimization procedure. From the results, we discuss the optimal selection of MRB types, considering braking torque and mass.
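
    The abstract's mass-versus-torque trade-off can be illustrated with a much simpler stand-in than the paper's Herschel-Bulkley/finite-element treatment: a Bingham-plastic torque formula for a single-disc brake and a crude solid-cylinder mass proxy, optimized with SciPy's SLSQP. All material data, bounds, and the torque requirement below are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed material / operating data (illustrative only).
tau_y = 40e3      # MR fluid yield stress at full field, Pa
mu    = 0.2       # MR fluid plastic viscosity, Pa.s
omega = 100.0     # disc speed, rad/s
rho   = 7800.0    # steel density, kg/m^3
T_req = 10.0      # required braking torque, N.m

def torque(x):
    """Bingham-plastic torque of a single-disc MRB with inner/outer radii and gap."""
    r_i, r_o, h = x
    field = (2 * np.pi * tau_y / 3) * (r_o**3 - r_i**3)
    viscous = (np.pi * mu * omega / (2 * h)) * (r_o**4 - r_i**4)
    return 2 * (field + viscous)          # two active fluid faces

def mass(x):
    """Crude mass proxy: a solid steel cylinder of the outer radius."""
    r_i, r_o, h = x
    thickness = 0.01                      # assumed disc + housing thickness, m
    return rho * np.pi * r_o**2 * thickness

res = minimize(mass, x0=[0.02, 0.06, 0.001],
               bounds=[(0.01, 0.05), (0.03, 0.12), (0.0005, 0.002)],
               constraints=[{"type": "ineq", "fun": lambda x: torque(x) - T_req}],
               method="SLSQP")
print("optimal (r_i, r_o, h):", res.x, " mass:", mass(res.x), " torque:", torque(res.x))
```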

  17. Optimal attack strategy of complex networks based on tabu search

    NASA Astrophysics Data System (ADS)

    Deng, Ye; Wu, Jun; Tan, Yue-jin

    2016-01-01

    The problem of network disintegration has broad applications, such as network confrontation and the disintegration of harmful networks, and has recently received growing attention. This paper presents an optimized attack strategy model for complex networks and introduces tabu search, a heuristic optimization algorithm rarely applied to the study of network robustness, into the network disintegration problem to identify the optimal attack strategy. The efficiency of the proposed solution was verified by comparing it with other attack strategies on various model networks and a real-world network. Numerical experiments suggest that our solution can improve the effect of network disintegration and that the "best" choice of nodes for failure attacks can be identified through global search. Our understanding of the optimal attack strategy may also shed light on a new property of the nodes within network disintegration and deserves additional study.
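
    A minimal sketch of the idea, assuming the attack objective is to minimize the largest connected component after removing k nodes (the paper's exact objective and move rules are not given in the abstract); the neighbourhood, tabu rule, and test graph are illustrative choices.

```python
import random
import networkx as nx

def disintegration_cost(G, removed):
    """Size of the largest connected component after removing `removed` nodes."""
    H = G.copy()
    H.remove_nodes_from(removed)
    return max((len(c) for c in nx.connected_components(H)), default=0)

def tabu_attack(G, k, iters=200, tabu_len=20, seed=0):
    rng = random.Random(seed)
    nodes = list(G.nodes)
    current = set(rng.sample(nodes, k))
    best, best_cost = set(current), disintegration_cost(G, current)
    tabu = []
    for _ in range(iters):
        # Neighbourhood: swap one removed node for one kept node.
        out_node = rng.choice(sorted(current))
        candidates = [v for v in nodes if v not in current and v not in tabu]
        moves = []
        for in_node in rng.sample(candidates, min(30, len(candidates))):
            trial = (current - {out_node}) | {in_node}
            moves.append((disintegration_cost(G, trial), in_node, trial))
        cost, in_node, trial = min(moves)
        current = trial
        tabu.append(out_node)            # forbid re-adding recently swapped-out nodes
        tabu = tabu[-tabu_len:]
        if cost < best_cost:
            best, best_cost = set(trial), cost
    return best, best_cost

G = nx.barabasi_albert_graph(200, 2, seed=1)
removed, lcc = tabu_attack(G, k=10)
print("removed:", sorted(removed), " largest component after attack:", lcc)
```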

  18. Optimal generation scheduling based on AHP/ANP.

    PubMed

    Momoh, J A; Zhu, Jizhong

    2003-01-01

    This paper proposes an application of the analytic hierarchy process (AHP) and analytic network process (ANP) for enhancing the selection of generating power units for appropriate price allocation in a competitive power environment. The scheme addresses adequate ranking, prioritizing, and scheduling of units before optimizing the pricing of generation units to meet a given demand. In the deregulated environment, the classical optimization techniques will be insufficient for the above-mentioned purpose. Hence, by incorporating the interaction of factors such as load demand, generating cost curve, bid/sale price, unit up/down cost, and the relative importance of different generation units, the scheme can be implemented to address the technical and nontechnical constraints in unit commitment problems. This information is easily augmented with the optimization scheme for an effective optimal decision. The scheme proposed is tested using the IEEE 39-bus test system. PMID:18238201
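
    The AHP ranking step can be illustrated in a few lines: priorities are the normalized principal eigenvector of a pairwise comparison matrix. The matrix below is hypothetical, and the ANP and unit-commitment layers of the paper are not shown.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three generating units
# (Saaty 1-9 scale; entry [i, j] = importance of unit i relative to unit j).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# AHP priorities: principal right eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Consistency check (random index RI = 0.58 for a 3x3 matrix).
lam_max = np.max(np.real(eigvals))
CI = (lam_max - len(A)) / (len(A) - 1)
print("priorities:", w.round(3), " consistency ratio:", round(CI / 0.58, 3))
```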

  19. Model-Based Optimization for Flapping Foil Actuation

    NASA Astrophysics Data System (ADS)

    Izraelevitz, Jacob; Triantafyllou, Michael

    2014-11-01

    Flapping foil actuation in nature, such as wings and flippers, often consists of highly complex joint kinematics, which presents a prohibitively large parameter space for designing bioinspired mechanisms. Designers therefore often build a simplified model to limit the parameter space so an optimum motion trajectory can be found experimentally, or attempt to replicate exactly the joint geometry and kinematics of a suitable organism whose behavior is assumed to be optimal. We present a compromise: using a simple local fluids model to guide the design of optimized trajectories through a succession of experimental trials, even when the parameter space is too large to search effectively. As an example, we illustrate an optimization routine capable of designing asymmetric flapping trajectories for a large aspect-ratio pitching and heaving foil, with the added degree of freedom of allowing the foil to move parallel to the flow. We then present PIV flow visualizations of the optimized trajectories.

  20. Electronic Nose Based on an Optimized Competition Neural Network

    PubMed Central

    Men, Hong; Liu, Haiyan; Pan, Yunpeng; Wang, Lei; Zhang, Haiping

    2011-01-01

    Traditional competitive neural networks (CNNs) used in electronic noses (E-noses) have several disadvantages: the number of classes must be determined in advance, the learning rate is hard to fix, and so on. To address these, an optimized CNN method is presented. The optimized CNN was established on the basis of the optimum number of sample classes, determined from changes in the Davies-Bouldin (DB) index, and it could add, split, or delete neurons in order to adjust the number of neurons automatically. Moreover, the learning rate changes according to the number of times each sample has been trained. The traditional CNN and the optimized CNN were applied to five kinds of vinegar with an E-nose. The results showed that the optimized network structure could adjust the number of clusters dynamically and produced good classifications. PMID:22163887
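
    A minimal sketch of the class-number selection idea, using k-means plus the Davies-Bouldin index on synthetic "sensor" data in place of the paper's competitive network and vinegar measurements.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score

# Stand-in for E-nose responses: 5 sample classes in a 6-sensor feature space.
X, _ = make_blobs(n_samples=250, centers=5, n_features=6, random_state=0)

# Pick the class number that minimizes the Davies-Bouldin index.
scores = {k: davies_bouldin_score(X, KMeans(n_clusters=k, n_init=10,
                                            random_state=0).fit_predict(X))
          for k in range(2, 9)}
best_k = min(scores, key=scores.get)
print("DB index by k:", {k: round(v, 3) for k, v in scores.items()})
print("selected class number:", best_k)
```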

  1. PERSPECTIVE: Technical fixes and climate change: optimizing for risks and consequences

    NASA Astrophysics Data System (ADS)

    Rasch, Philip J.

    2010-09-01

    Scientists and society in general are becoming increasingly concerned about the risks of climate change from the emission of greenhouse gases (IPCC 2007). Yet emissions continue to increase (Raupach et al 2007), and achieving reductions soon enough to avoid large and undesirable impacts requires a near-revolutionary global transformation of energy and transportation systems (Hoffert et al 1998). The size of the transformation and lack of an effective societal response have motivated some to explore other quite controversial strategies to mitigate some of the planetary consequences of these emissions. These strategies have come to be known as geoengineering: 'the deliberate manipulation of the planetary environment to counteract anthropogenic climate change' (Keith 2000). Concern about society's inability to reduce emissions has driven a resurgence in interest in geoengineering, particularly following the call for more research in Crutzen (2006). Two classes of geoengineering solutions have developed: (1) methods to draw CO2 out of the atmosphere and sequester it in a relatively benign form; and (2) methods that change the energy flux entering or leaving the planet without modifying CO2 concentrations by, for example, changing the planetary albedo. Only the latter methods are considered here. Summaries of many of the methods, scientific questions, and issues of testing and implementation are discussed in Launder and Thompson (2009) and Royal Society (2009). Despite the increased attention, geoengineering is not a panacea, and all strategies considered will have risks and consequences (e.g. Robock 2008, Trenberth and Dai 2007). Recent studies involving comprehensive Earth system models can provide insight into subtle interactions between components of the climate system. For example Rasch et al (2009) found that geoengineering by changing boundary-layer clouds will not simultaneously 'correct' global averaged surface temperature, precipitation, and sea ice to present

  2. Approach to proliferation risk assessment based on multiple objective analysis framework

    SciTech Connect

    Andrianov, A.; Kuptsov, I.

    2013-07-01

    The approach to the assessment of proliferation risk using the methods of multi-criteria decision making and multi-objective optimization is presented. The approach allows taking into account the specific features of the national nuclear infrastructure and possible proliferation strategies (motivations, intentions, and capabilities). Three examples of applying the approach are shown. First, the approach has been used to evaluate the attractiveness of HEU (highly enriched uranium) production scenarios at a clandestine enrichment facility using centrifuge enrichment technology. Secondly, the approach has been applied to assess the attractiveness of scenarios for undeclared production of plutonium or HEU by theft of materials circulating in nuclear fuel cycle facilities and thermal reactors. Thirdly, the approach has been used to perform a comparative analysis of the structures of developing nuclear power systems based on different types of nuclear fuel cycles, the analysis being based on indicators of proliferation risk.

  3. Optimal planning of LEO active debris removal based on hybrid optimal control theory

    NASA Astrophysics Data System (ADS)

    Yu, Jing; Chen, Xiao-qian; Chen, Li-hu

    2015-06-01

    The mission planning of the Low Earth Orbit (LEO) active debris removal problem is studied in this paper. Specifically, the Servicing Spacecraft (SSc) and several debris objects reside on near-circular, near-coplanar LEOs. The SSc repeatedly rendezvouses with the debris and de-orbits them until all debris are removed. Considering the long-duration effect of the J2 perturbation, a linear dynamics model is used for each rendezvous. The purpose of this paper is to find the optimal service sequence and rendezvous path with the minimum total rendezvous cost (Δv) for the whole mission, while satisfying several complex constraints (communication time window, terminal state, and time distribution constraints). Considering this mission as a hybrid optimal control problem, a mathematical model is proposed, as well as a solution method. The proposed approach is demonstrated on a typical active debris removal problem. Numerical experiments show that (1) the model and solution method proposed in this paper can effectively address the planning problem of LEO debris removal; (2) the communication time window constraint and the J2 perturbation have considerable influence on the optimization results; and (3) under the same configuration, some suboptimal sequences are practically equivalent to the optimal one since their difference in Δv cost is very small.
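
    The combinatorial (sequencing) layer of the problem can be sketched with a brute-force search over visit orders; the Δv matrix below is random, whereas the paper couples the sequence with J2-perturbed rendezvous trajectories and time-window constraints.

```python
import itertools
import numpy as np

# Hypothetical rendezvous costs (km/s): dv[i][j] is the cost of transferring
# from object i to object j; index 0 is the servicing spacecraft's initial orbit.
rng = np.random.default_rng(2)
n_debris = 5
dv = rng.uniform(0.05, 0.4, size=(n_debris + 1, n_debris + 1))

def total_cost(sequence):
    """Sum of transfer costs for visiting the debris in the given order."""
    cost, current = 0.0, 0
    for nxt in sequence:
        cost += dv[current][nxt]
        current = nxt
    return cost

best = min(itertools.permutations(range(1, n_debris + 1)), key=total_cost)
print("best sequence:", best, " total delta-v:", round(total_cost(best), 3))
```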

  4. A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model.

    PubMed

    Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao

    2014-09-01

    Ant colony optimization (ACO) algorithms often fall into local optima and have low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new strategy takes advantage of the critical paths preserved while evolving the adaptive networks of the Physarum-inspired mathematical model (PMM). The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote the exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than the traditional ACO algorithms and are adaptable to solving the TSP with single or multiple objectives. We further analyse the influence of parameters on the performance of the PMACO algorithms and, based on these analyses, work out the best values of these parameters for the TSP. PMID:24613939
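
    For orientation, a minimal standard ACO for a small random TSP is sketched below; the PMACO-specific step (reinforcing pheromone on critical paths identified by the Physarum model) is only indicated by a comment, and all parameters are generic choices rather than the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 12
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(n)

alpha, beta, rho, Q, n_ants, n_iter = 1.0, 3.0, 0.1, 1.0, 20, 100
tau = np.ones((n, n))       # pheromone matrix
eta = 1.0 / dist            # heuristic visibility
best_len, best_tour = np.inf, None

def tour_length(tour):
    return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

for _ in range(n_iter):
    tours = []
    for _ant in range(n_ants):
        tour = [int(rng.integers(n))]
        while len(tour) < n:
            cur = tour[-1]
            mask = np.ones(n, bool)
            mask[tour] = False
            p = (tau[cur] ** alpha) * (eta[cur] ** beta) * mask
            tour.append(int(rng.choice(n, p=p / p.sum())))
        tours.append(tour)
    # Standard ACO update: evaporate, then deposit pheromone along each tour.
    tau *= (1 - rho)
    for tour in tours:
        L = tour_length(tour)
        if L < best_len:
            best_len, best_tour = L, tour
        for i in range(n):
            a, b = tour[i], tour[(i + 1) % n]
            tau[a, b] += Q / L
            tau[b, a] += Q / L
    # A PMACO-style strategy would additionally reinforce tau on "critical"
    # edges identified by a Physarum conductivity model (not implemented here).

print("best tour length:", round(best_len, 3))
```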

  5. Relationship of optimism and suicidal ideation in three groups of patients at varying levels of suicide risk.

    PubMed

    Huffman, Jeff C; Boehm, Julia K; Beach, Scott R; Beale, Eleanor E; DuBois, Christina M; Healy, Brian C

    2016-06-01

    Optimism has been associated with reduced suicidal ideation, but there have been few studies in patients at high suicide risk. We analyzed data from three study populations (total N = 319) with elevated risk of suicide: (1) patients with a recent acute cardiovascular event, (2) patients hospitalized for heart disease who had depression or an anxiety disorder, and (3) patients psychiatrically hospitalized for suicidal ideation or following a suicide attempt. For each study we analyzed the association between optimism (measured by the Life-Orientation Test-Revised) and suicidal ideation, and then completed an exploratory random effects meta-analysis of the findings to synthesize these data. The meta-analysis of the three studies showed that higher levels of self-reported optimism were associated with a lower likelihood of suicidal ideation (odds ratio [OR] = .89, 95% confidence interval [CI] = .85-.95, z = 3.94, p < .001), independent of age, gender, and depressive symptoms. This association held when using the subscales of the Life Orientation Test-Revised scale that measured higher optimism (OR = .84, 95% CI = .76-.92, z = 3.57, p < .001) and lower pessimism (OR = .83, 95% CI = .75-.92, z = 3.61, p < .001). These results also held when suicidal ideation was analyzed as an ordinal variable. Our findings suggest that optimism may be associated with a lower risk of suicidal ideation, above and beyond the effects of depressive symptoms, for a wide range of patients with clinical conditions that place them at elevated risk for suicide. PMID:26994340

  6. Optimizing nanomedicine pharmacokinetics using physiologically based pharmacokinetics modelling

    PubMed Central

    Moss, Darren Michael; Siccardi, Marco

    2014-01-01

    The delivery of therapeutic agents is characterized by numerous challenges including poor absorption, low penetration in target tissues and non-specific dissemination in organs, leading to toxicity or poor drug exposure. Several nanomedicine strategies have emerged as an advanced approach to enhance drug delivery and improve the treatment of several diseases. Numerous processes mediate the pharmacokinetics of nanoformulations, with the absorption, distribution, metabolism and elimination (ADME) being poorly understood and often differing substantially from traditional formulations. Understanding how nanoformulation composition and physicochemical properties influence drug distribution in the human body is of central importance when developing future treatment strategies. A helpful pharmacological tool to simulate the distribution of nanoformulations is represented by physiologically based pharmacokinetics (PBPK) modelling, which integrates system data describing a population of interest with drug/nanoparticle in vitro data through a mathematical description of ADME. The application of PBPK models for nanomedicine is in its infancy and characterized by several challenges. The integration of property–distribution relationships in PBPK models may benefit nanomedicine research, giving opportunities for innovative development of nanotechnologies. PBPK modelling has the potential to improve our understanding of the mechanisms underpinning nanoformulation disposition and allow for more rapid and accurate determination of their kinetics. This review provides an overview of the current knowledge of nanomedicine distribution and the use of PBPK modelling in the characterization of nanoformulations with optimal pharmacokinetics. Linked Articles This article is part of a themed section on Nanomedicine. To view the other articles in this section visit http://dx.doi.org/10.1111/bph.2014.171.issue-17 PMID:24467481

  7. An Optimized Informatics Pipeline for Mass Spectrometry-Based Peptidomics

    NASA Astrophysics Data System (ADS)

    Wu, Chaochao; Monroe, Matthew E.; Xu, Zhe; Slysz, Gordon W.; Payne, Samuel H.; Rodland, Karin D.; Liu, Tao; Smith, Richard D.

    2015-12-01

    The comprehensive MS analysis of the peptidome, the intracellular and intercellular products of protein degradation, has the potential to provide novel insights on endogenous proteolytic processing and its utility in disease diagnosis and prognosis. Along with the advances in MS instrumentation and related platforms, a plethora of proteomics data analysis tools have been applied for direct use in peptidomics; however, an evaluation of the currently available informatics pipelines for peptidomics data analysis has yet to be reported. In this study, we began by evaluating the results of several popular MS/MS database search engines, including MS-GF+, SEQUEST, and MS-Align+, for peptidomics data analysis, followed by identification and label-free quantification using the well-established accurate mass and time (AMT) tag and newly developed informed quantification (IQ) approaches, both based on direct LC-MS analysis. Our results demonstrated that MS-GF+ outperformed both SEQUEST and MS-Align+ in identifying peptidome peptides. Using a database established from MS-GF+ peptide identifications, both the AMT tag and IQ approaches provided significantly deeper peptidome coverage and less missing data for each individual data set than the MS/MS methods, while achieving robust label-free quantification. Besides having an excellent correlation with the AMT tag quantification results, IQ also provided slightly higher peptidome coverage. Taken together, we propose an optimized informatics pipeline combining MS-GF+ for initial database searching with IQ (or AMT tag) approaches for identification and label-free quantification for high-throughput, comprehensive, and quantitative peptidomics analysis.

  8. Reliability-based design optimization of multiphysics, aerospace systems

    NASA Astrophysics Data System (ADS)

    Allen, Matthew R.

    Aerospace systems are inherently plagued by uncertainties in their design, fabrication, and operation. Safety factors and expensive testing at the prototype level traditionally account for these uncertainties. Reliability-based design optimization (RBDO) can drastically decrease life-cycle development costs by accounting for the stochastic nature of the system response in the design process. The reduction in cost is amplified for conceptually new designs, for which no accepted safety factors currently exist. Aerospace systems often operate in environments dominated by multiphysics phenomena, such as the fluid-structure interaction of aeroelastic wings or the electrostatic-mechanical interaction of sensors and actuators. The analysis of such phenomena is generally complex and computationally expensive, and therefore is usually simplified or approximated in the design process. However, this leads to significant epistemic uncertainties in modeling, which may dominate the uncertainties for which the reliability analysis was intended. Therefore, the goal of this thesis is to present a RBDO framework that utilizes high-fidelity simulation techniques to minimize the modeling error for multiphysics phenomena. A key component of the framework is an extended reduced order modeling (EROM) technique that can analyze various states in the design or uncertainty parameter space at a reduced computational cost, while retaining characteristics of high-fidelity methods. The computational framework is verified and applied to the RBDO of aeroelastic systems and electrostatically driven sensors and actuators, utilizing steady-state analysis and design criteria. The framework is also applied to the design of electrostatic devices with transient criteria, which requires the use of the EROM technique to overcome the computational burden of multiple transient analyses.

  9. An Optimized Informatics Pipeline for Mass Spectrometry-Based Peptidomics

    SciTech Connect

    Wu, Chaochao; Monroe, Matthew E.; Xu, Zhe; Slysz, Gordon W.; Payne, Samuel H.; Rodland, Karin D.; Liu, Tao; Smith, Richard D.

    2015-12-26

    Comprehensive MS analysis of the peptidome, the intracellular and intercellular products of protein degradation, has the potential to provide novel insights on endogenous proteolytic processing and its utility in disease diagnosis and prognosis. Along with the advances in MS instrumentation, a plethora of proteomics data analysis tools have been applied for direct use in peptidomics; however, an evaluation of the currently available informatics pipelines for peptidomics data analysis has yet to be reported. In this study, we began by evaluating the results of several popular MS/MS database search engines, including MS-GF+, SEQUEST, and MS-Align+, for peptidomics data analysis, followed by identification and label-free quantification using the well-established accurate mass and time (AMT) tag and newly developed informed quantification (IQ) approaches, both based on direct LC-MS analysis. Our results demonstrated that MS-GF+ outperformed both SEQUEST and MS-Align+ in identifying peptidome peptides. Using a database established from the MS-GF+ peptide identifications, both the AMT tag and IQ approaches provided significantly deeper peptidome coverage and fewer missing values for each individual data set than the MS/MS methods, while achieving robust label-free quantification. Besides having an excellent correlation with the AMT tag quantification results, IQ also provided slightly higher peptidome coverage than the AMT tag approach. Taken together, we propose an optimal informatics pipeline combining MS-GF+ for initial database searching with IQ (or AMT tag) approaches for identification and label-free quantification for high-throughput, comprehensive, and quantitative peptidomics analysis.

  10. OMIGA: Optimized Maker-Based Insect Genome Annotation.

    PubMed

    Liu, Jinding; Xiao, Huamei; Huang, Shuiqing; Li, Fei

    2014-08-01

    Insects are one of the largest classes of animals on Earth and constitute more than half of all living species. The i5k initiative has begun sequencing of more than 5,000 insect genomes, which should greatly help in exploring insect resources and pest control. Insect genome annotation remains challenging because many insects have high levels of heterozygosity. To improve the quality of insect genome annotation, we developed a pipeline, named Optimized Maker-Based Insect Genome Annotation (OMIGA), to predict protein-coding genes from insect genomes. We first mapped RNA-Seq reads to genomic scaffolds to determine transcribed regions using Bowtie, and the putative transcripts were assembled using Cufflinks. We then selected highly reliable transcripts with intact coding sequences to train de novo gene prediction software, including Augustus. The re-trained software was used to predict genes from insect genomes, and Exonerate was used to refine gene structure and to determine near-exact exon/intron boundaries in the genome. Finally, we used the software Maker to integrate data from RNA-Seq, de novo gene prediction, and protein alignment to produce an official gene set. The OMIGA pipeline was used to annotate the draft genome of an important insect pest, Chilo suppressalis, yielding 12,548 genes. Different strategies were compared, which demonstrated that OMIGA had the best performance. In summary, we present a comprehensive pipeline for identifying genes in insect genomes that can be widely used to improve the annotation quality in insects. OMIGA is provided at http://ento.njau.edu.cn/omiga.html . PMID:24609470

  11. Orbit design and optimization based on global telecommunication performance metrics

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Lee, Charles H.; Kerridge, Stuart; Cheung, Kar-Ming; Edwards, Charles D.

    2006-01-01

    The orbit selection of telecommunications orbiters is one of the critical design processes and should be guided by global telecom performance metrics and mission-specific constraints. In order to aid the orbit selection, we have coupled the Telecom Orbit Analysis and Simulation Tool (TOAST) with genetic optimization algorithms. As a demonstration, we have applied the developed tool to select an optimal orbit for general Mars telecommunications orbiters with the constraint of being a frozen orbit. While a typical optimization goal is to minimize telecommunications downtime, several relevant performance metrics are examined: 1) area-weighted average gap time, 2) global maximum of local maximum gap time, 3) global maximum of local minimum gap time. Optimal solutions are found with each of the metrics. Common and different features among the optimal solutions as well as the advantage and disadvantage of each metric are presented. The optimal solutions are compared with several candidate orbits that were considered during the development of Mars Telecommunications Orbiter.

  12. Aeroelastic Optimization Study Based on the X-56A Model

    NASA Technical Reports Server (NTRS)

    Li, Wesley W.; Pak, Chan-Gi

    2014-01-01

    One way to increase the aircraft fuel efficiency is to reduce structural weight while maintaining adequate structural airworthiness, both statically and aeroelastically. A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high fidelity finite element models to characterize the design space was successfully developed and established. This paper presents two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements including strength, buckling, and flutter. Such an approach exploits the anisotropic capabilities of the fiber composite materials chosen for this analytical exercise with ply stacking sequence. A hybrid and discretization optimization approach improves accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass balancing optimization study for the fabricated flexible wing of the X-56A model since a desired flutter speed band is required for the active flutter suppression demonstration during flight testing. The results of the second study provide guidance to modify the wing design and move the design flutter speeds back into the flight envelope so that the original objective of X-56A flight test can be accomplished successfully. The second case also demonstrates that the object-oriented MDAO tool can handle multiple analytical configurations in a single optimization run.

  13. Optimal seat suspension design based on minimum "simulated subjective response".

    PubMed

    Wan, Y; Schimmels, J M

    1997-11-01

    This work addresses a method for improving vertical whole body vibration isolation through optimal seat suspension design. The primary thrusts of this investigation are: (1) the development of a simple model that captures the essential dynamics of a seated human exposed to vertical vibration, (2) the selection and evaluation of several standards for assessing human sensitivity to vertical vibration, and (3) the determination of the seat suspension parameters that minimize these standards to yield optimal vibration isolation. Results show that the optimal seat and cushion damping coefficients depend very much on the selection of the vibration sensitivity standard and on the lower bound of the stiffnesses used in the constrained optimization procedure. In all cases, however, the optimal seat damping obtained here is significantly larger (by more than a factor of 10) than that obtained using existing seat suspension design methods or from previous optimal suspension studies. This research also indicates that the existing means of assessing vibration in suspension design (ISO 7096) requires modification. PMID:9407279

  14. Genetic algorithm based optimization of pulse profile for MOPA based high power fiber lasers

    NASA Astrophysics Data System (ADS)

    Zhang, Jiawei; Tang, Ming; Shi, Jun; Fu, Songnian; Li, Lihua; Liu, Ying; Cheng, Xueping; Liu, Jian; Shum, Ping

    2015-03-01

    Although the Master Oscillator Power-Amplifier (MOPA) based fiber laser has received much attention for laser marking processes due to its large tunability of pulse duration (from 10ns to 1ms), repetition rate (100Hz to 500kHz), high peak power and extraordinary heat dissipating capability, the output pulse deformation due to the saturation effect of the fiber amplifier is detrimental for many applications. We proposed and demonstrated that, by utilizing a Genetic algorithm (GA) based optimization technique, the input pulse profile from the master oscillator (current-driven laser diode) could be conveniently optimized to achieve a targeted output pulse shape according to real parameter constraints. In this work, an Yb-doped high power fiber amplifier is considered and a 200ns square-shaped pulse profile is the optimization target. Since an input pulse with a longer leading edge and shorter trailing edge can compensate the saturation effect, linear, quadratic and cubic polynomial functions are used to describe the input pulse with a limited number of unknowns (<5). Coefficients of the polynomial functions are the optimization objects. With reasonable cost and hardware limitations, the cubic input pulse with 4 coefficients is found to be the best, as the output amplified pulse can achieve excellent flatness within the square shape. Considering the bandwidth constraint of practical electronics, we examined the high-frequency cut-off effect of input pulses and found that the optimized cubic input pulses with 300MHz bandwidth are still quite acceptable to satisfy the requirement for the amplified output pulse, and it is feasible to establish such a pulse generator in real applications.
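
    As a stand-in for the authors' GA and Yb-amplifier model (neither is specified in the abstract), the sketch below tunes four cubic-profile coefficients with SciPy's differential evolution (an evolutionary optimizer, not the authors' GA) against a toy saturable-gain model, just to show the compensation loop of shaping the input so the saturated output comes out flat; every constant is invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

# 200 ns pulse on a normalized time axis; toy saturable-gain amplifier model
# (NOT the Yb amplifier model used in the paper; all numbers are invented).
t = np.linspace(0.0, 1.0, 200)
dt = t[1] - t[0]
G0, E_sat = 20.0, 3.0              # small-signal gain and saturation energy (arbitrary units)

def amplify(p_in):
    """Amplify the input profile with a gain that drops as energy is extracted."""
    p_out = np.empty_like(p_in)
    extracted = 0.0
    for i, p in enumerate(p_in):
        gain = 1.0 + (G0 - 1.0) * np.exp(-extracted / E_sat)
        p_out[i] = gain * p
        extracted += (p_out[i] - p) * dt
    return p_out

def ripple(coeffs):
    """Relative flatness error of the amplified pulse for a cubic input profile."""
    p_in = np.clip(np.polyval(coeffs, t), 0.0, None)
    if p_in.sum() < 1e-9:
        return 1e6                 # reject profiles that vanish everywhere
    p_in /= p_in.sum() * dt        # fix the input pulse energy
    p_out = amplify(p_in)
    return float(np.std(p_out) / np.mean(p_out))

res = differential_evolution(ripple, bounds=[(-5.0, 5.0)] * 4, seed=0, maxiter=100)
print("optimized cubic coefficients:", res.x.round(3), " output ripple:", round(res.fun, 4))
```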

  15. 12 CFR 390.466 - Risk-based capital credit risk-weight categories.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Corporation Improvement Act of 1991 (12 U.S.C. 4401-4407), or Regulation EE (12 CFR part 231). (3) If the... Administration in 13 CFR 121 pursuant to 15 U.S.C. 632. (ii) Capital requirement. Notwithstanding any other... 12 Banks and Banking 5 2012-01-01 2012-01-01 false Risk-based capital credit...

  16. 12 CFR 390.466 - Risk-based capital credit risk-weight categories.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Corporation Improvement Act of 1991 (12 U.S.C. 4401-4407), or Regulation EE (12 CFR part 231). (3) If the... Administration in 13 CFR 121 pursuant to 15 U.S.C. 632. (ii) Capital requirement. Notwithstanding any other... 12 Banks and Banking 5 2013-01-01 2013-01-01 false Risk-based capital credit...

  17. 12 CFR Appendix C to Part 704 - Risk-Based Capital Credit Risk-Weight Categories

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Commission (SEC) and that complies with the SEC's net capital regulations (17 CFR 240.15c3(1)); and (2) A...), or Regulation EE (12 CFR part 231). (C) If the securities firm uses the claim to satisfy its... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Risk-Based Capital Credit...

  18. Risk-based indicators of Canadians’ exposures to environmental carcinogens

    PubMed Central

    2013-01-01

    Background Tools for estimating population exposures to environmental carcinogens are required to support evidence-based policies to reduce chronic exposures and associated cancers. Our objective was to develop indicators of population exposure to selected environmental carcinogens that can be easily updated over time, and allow comparisons and prioritization between different carcinogens and exposure pathways. Methods We employed a risk assessment-based approach to produce screening-level estimates of lifetime excess cancer risk for selected substances listed as known carcinogens by the International Agency for Research on Cancer. Estimates of lifetime average daily intake were calculated using population characteristics combined with concentrations (circa 2006) in outdoor air, indoor air, dust, drinking water, and food and beverages from existing monitoring databases or comprehensive literature reviews. Intake estimates were then multiplied by cancer potency factors from Health Canada, the United States Environmental Protection Agency, and the California Office of Environmental Health Hazard Assessment to estimate lifetime excess cancer risks associated with each substance and exposure pathway. Lifetime excess cancer risks in excess of 1 per million people are identified as potential priorities for further attention. Results Based on data representing average conditions circa 2006, a total of 18 carcinogen-exposure pathways had potential lifetime excess cancer risks greater than 1 per million, based on varying data quality. Carcinogens with moderate to high data quality and lifetime excess cancer risk greater than 1 per million included benzene, 1,3-butadiene and radon in outdoor air; benzene and radon in indoor air; and arsenic and hexavalent chromium in drinking water. Important data gaps were identified for asbestos, hexavalent chromium and diesel exhaust in outdoor and indoor air, while little data were available to assess risk for substances in dust, food
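
    The intake-times-potency screening calculation described above reduces to a one-line product; the sketch below shows it for a single hypothetical inhalation pathway with invented concentration and potency values (not the study's data).

```python
# Screening-level lifetime excess cancer risk (LECR) for one inhalation pathway,
# following the generic intake x potency approach described above (numbers illustrative).
conc_benzene_outdoor = 1.5          # ug/m3, hypothetical long-term average
inhalation_rate = 16.0              # m3/day
body_weight = 70.0                  # kg
exposure_fraction = 1.0             # fraction of lifetime exposed

# Lifetime average daily intake (mg/kg-day).
ladd = conc_benzene_outdoor * 1e-3 * inhalation_rate * exposure_fraction / body_weight

slope_factor = 0.055                # (mg/kg-day)^-1, an assumed inhalation cancer potency
lecr_per_million = ladd * slope_factor * 1e6
print(f"LADD = {ladd:.2e} mg/kg-day, LECR = {lecr_per_million:.1f} per million")
```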

  19. Risk-based evaluation of Allowed Outage Times (AOTs) considering risk of shutdown

    SciTech Connect

    Mankamo, T.; Kim, I.S.; Samanta, P.K.

    1992-12-31

    When safety systems fail during power operation, Technical Specifications (TS) usually limit the repair within Allowed Outage Time (AOT). If the repair cannot be completed within the AOT, or no AOT is allowed, the plant is required to be shut down for the repair. However, if the capability to remove decay heat is degraded, shutting down the plant with the need to operate the affected decay-heat removal systems may impose a substantial risk compared to continued power operation over a usual repair time. Thus, defining a proper AOT in such situations can be considered as a risk-comparison between the repair in full power state with a temporarily increased level of risk, and the alternative of shutting down the plant for the repair in zero power state with a specific associated risk. The methodology of the risk-comparison approach, with a due consideration of the shutdown risk, has been further developed and applied to the AOT considerations of residual heat removal and standby service water systems of a boiling water reactor (BWR) plant. Based on the completed work, several improvements to the TS requirements for the systems studied can be suggested.
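
    The risk comparison at the heart of the AOT evaluation can be written as simple arithmetic: the incremental core-damage risk of staying at power over the repair time versus the risk of a shutdown with degraded decay-heat removal. The numbers below are invented purely to show the form of the comparison.

```python
# Toy risk comparison behind an AOT decision (all numbers invented for illustration):
# continued power operation with the component out of service vs. shutting down.
baseline_cdf  = 1e-5       # core-damage frequency per year, all equipment available
degraded_cdf  = 8e-5       # core-damage frequency per year with the component failed
repair_time   = 72 / 8760  # expected repair duration, years (72 h)
shutdown_risk = 4e-7       # core-damage probability of one shutdown/restart with degraded DHR

risk_continue = (degraded_cdf - baseline_cdf) * repair_time   # incremental risk of staying at power
print(f"stay at power: {risk_continue:.2e}  vs  shut down: {shutdown_risk:.2e}")
# Here staying at power over the repair (~5.8e-7) is comparable to the shutdown risk,
# which is exactly the trade-off a risk-based AOT evaluation quantifies.
```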

  20. Risk-based evaluation of Allowed Outage Times (AOTs) considering risk of shutdown

    SciTech Connect

    Mankamo, T. ); Kim, I.S.; Samanta, P.K. )

    1992-01-01

    When safety systems fail during power operation, Technical Specifications (TS) usually limit the repair within Allowed Outage Time (AOT). If the repair cannot be completed within the AOT, or no AOT is allowed, the plant is required to be shut down for the repair. However, if the capability to remove decay heat is degraded, shutting down the plant with the need to operate the affected decay-heat removal systems may impose a substantial risk compared to continued power operation over a usual repair time. Thus, defining a proper AOT in such situations can be considered as a risk-comparison between the repair in full power state with a temporarily increased level of risk, and the alternative of shutting down the plant for the repair in zero power state with a specific associated risk. The methodology of the risk-comparison approach, with a due consideration of the shutdown risk, has been further developed and applied to the AOT considerations of residual heat removal and standby service water systems of a boiling water reactor (BWR) plant. Based on the completed work, several improvements to the TS requirements for the systems studied can be suggested.

  1. Time-based collision risk modeling for air traffic management

    NASA Astrophysics Data System (ADS)

    Bell, Alan E.

    Since the emergence of commercial aviation in the early part of last century, economic forces have driven a steadily increasing demand for air transportation. Increasing density of aircraft operating in a finite volume of airspace is accompanied by a corresponding increase in the risk of collision, and in response to a growing number of incidents and accidents involving collisions between aircraft, governments worldwide have developed air traffic control systems and procedures to mitigate this risk. The objective of any collision risk management system is to project conflicts and provide operators with sufficient opportunity to recognize potential collisions and take necessary actions to avoid them. It is therefore the assertion of this research that the currency of collision risk management is time. Future Air Traffic Management Systems are being designed around the foundational principle of four dimensional trajectory based operations, a method that replaces legacy first-come, first-served sequencing priorities with time-based reservations throughout the airspace system. This research will demonstrate that if aircraft are to be sequenced in four dimensions, they must also be separated in four dimensions. In order to separate aircraft in four dimensions, time must emerge as the primary tool by which air traffic is managed. A functional relationship exists between the time-based performance of aircraft, the interval between aircraft scheduled to cross some three dimensional point in space, and the risk of collision. This research models that relationship and presents two key findings. First, a method is developed by which the ability of an aircraft to meet a required time of arrival may be expressed as a robust standard for both industry and operations. Second, a method by which airspace system capacity may be increased while maintaining an acceptable level of collision risk is presented and demonstrated for the purpose of formulating recommendations for procedures

  2. Optimality criteria-based topology optimization of a bi-material model for acoustic-structural coupled systems

    NASA Astrophysics Data System (ADS)

    Shang, Linyuan; Zhao, Guozhong

    2016-06-01

    This article investigates topology optimization of a bi-material model for acoustic-structural coupled systems. The design variables are volume fractions of inclusion material in a bi-material model constructed by the microstructure-based design domain method (MDDM). The design objective is the minimization of sound pressure level (SPL) in an interior acoustic medium. Sensitivities of SPL with respect to topological design variables are derived concretely by the adjoint method. A relaxed form of optimality criteria (OC) is developed for solving the acoustic-structural coupled optimization problem to find the optimum bi-material distribution. Based on OC and the adjoint method, a topology optimization method to deal with large calculations in acoustic-structural coupled problems is proposed. Numerical examples are given to illustrate the applications of topology optimization for a bi-material plate under a low single-frequency excitation and an aerospace structure under a low frequency-band excitation, and to prove the efficiency of the adjoint method and the relaxed form of OC.

  3. A School-Based Suicide Risk Assessment Protocol

    ERIC Educational Resources Information Center

    Boccio, Dana E.

    2015-01-01

    Suicide remains the third leading cause of death among young people in the United States. Considering that youth who contemplate suicide generally exhibit warning signs before engaging in lethal self-harm, school-based mental health professionals can play a vital role in identifying students who are at risk for suicidal behavior. Nevertheless, the…

  4. Research needs for risk-informed, performance-based regulation

    SciTech Connect

    Cloninger, T.H.

    1997-01-01

    This presentation was made by an executive in the utility which operates the South Texas Project reactors, and summarizes their perspective on probabilistic safety analysis, risk-based operation, and risk-based regulation. They view it as a tool to help them better apply their resources to maintain the level of safety necessary to protect the public health and safety. South Texas served as one of the pilot plants for the application of risk-based regulation to the maintenance rule. The author feels that the process presents opportunities as well as challenges. Among the opportunities is the involvement of more people in the process, and the sense of investment they take in the decisions, in addition to the insight they can offer. In the area of challenges there is the need for better understanding of how to apply what already is known on problems, rather than essentially reinventing the wheel to address problems. Research is needed to better understand when some events are not truly of a significant safety concern. The demarcation between deterministic decisions and the appropriate application of risk-based decisions must be better defined, for the sake of the operator as well as the public observing plant operation.

  5. How Should Risk-Based Regulation Reflect Current Public Opinion?

    PubMed

    Pollock, Christopher John

    2016-08-01

    Risk-based regulation of novel agricultural products, with public choice expressed via traceability and labelling, is a more effective approach than using regulatory processes to reflect public concerns, which may not always be supported by evidence. PMID:27266813

  6. Mindfulness Based Programs Implemented with At-Risk Adolescents

    PubMed Central

    Rawlett, Kristen; Scrandis, Debra

    2016-01-01

    Objective: This review examines studies of mindfulness-based programs used with adolescents at risk for poor future outcomes, such as not graduating from high school and living in poverty. Method: The keywords mindfulness, at-risk, and adolescents were used to search CINAHL (10 items: 2 book reviews, 3 dissertations, and 5 research articles), Medline EBSCO (15 research articles), and PubMed (10 research articles). Only primary research articles published between 2009-2015 in English on mindfulness and at-risk adolescents were included, to capture the most current evidence. Results: Few studies (n = 11) were found that investigate mindfulness in at-risk adolescents. These studies used various mindfulness programs (n = 7), making it difficult to generalize findings for practice. Only three studies were randomized controlled trials, focusing mostly on male students with low socioeconomic status and existing mental health diagnoses. Conclusion: There is a relationship between health behaviors and academic achievement. Future research on mindfulness-based interventions needs to examine their effects on academic achievement in at-risk youth, in order to decrease problematic behaviors and improve their ability to become successful adults. PMID:27347259

  7. Agreement in cardiovascular risk rating based on anthropometric parameters

    PubMed Central

    Dantas, Endilly Maria da Silva; Pinto, Cristiane Jordânia; Freitas, Rodrigo Pegado de Abreu; de Medeiros, Anna Cecília Queiroz

    2015-01-01

    Objective To investigate the agreement in evaluation of risk of developing cardiovascular diseases based on anthropometric parameters in young adults. Methods The study included 406 students; weight, height, and waist and neck circumferences were measured, and the waist-to-height ratio and the conicity index were calculated. The kappa coefficient was used to assess agreement in risk classification for cardiovascular diseases. The positive and negative specific agreement values were calculated as well. The Pearson chi-square (χ2) test was used to assess associations between categorical variables (p<0.05). Results The majority of the parameters assessed (44%) showed slight (k=0.21 to 0.40) and/or poor agreement (k<0.20), with low values of negative specific agreement. The best agreement was observed between waist circumference and waist-to-height ratio both for the general population (k=0.88) and between sexes (k=0.93 to 0.86). There was a significant association (p<0.001) between the risk of cardiovascular diseases and females when using waist circumference and conicity index, and with males when using neck circumference. This resulted in a wide variation in the prevalence of cardiovascular disease risk (5.5%-36.5%), depending on the parameter and the sex that was assessed. Conclusion The results indicate variability in agreement in assessing risk for cardiovascular diseases, based on anthropometric parameters, and which also seems to be influenced by sex. Further studies in the Brazilian population are required to better understand this issue. PMID:26466060

  8. Finding Risk Groups by Optimizing Artificial Neural Networks on the Area under the Survival Curve Using Genetic Algorithms

    PubMed Central

    Kalderstam, Jonas; Edén, Patrik; Ohlsson, Mattias

    2015-01-01

    We investigate a new method to place patients into risk groups in censored survival data. Properties such as median survival time and end survival rate are implicitly improved by optimizing the area under the survival curve. Artificial neural networks (ANN) are trained to either maximize or minimize this area using a genetic algorithm, and combined into an ensemble to predict one of low, intermediate, or high risk groups. Estimated patient risk can influence treatment choices, and is important for study stratification. A common approach is to sort the patients according to a prognostic index and then group them along the quartile limits. The Cox proportional hazards model (Cox) is one example of this approach. Another method of doing risk grouping is recursive partitioning (Rpart), which constructs a decision tree where each branch point maximizes the statistical separation between the groups. ANN, Cox, and Rpart are compared on five publicly available data sets with varying properties. Cross-validation, as well as separate test sets, are used to validate the models. Results on the test sets show comparable performance, except for the smallest data set where Rpart’s predicted risk groups turn out to be inverted, an example of crossing survival curves. Cross-validation shows that all three models exhibit crossing of some survival curves on this small data set but that the ANN model manages the best separation of groups in terms of median survival time before such crossings. The conclusion is that optimizing the area under the survival curve is a viable approach to identify risk groups. Training ANNs to optimize this area combines two key strengths from both prognostic indices and Rpart. First, a desired minimum group size can be specified, as for a prognostic index. Second, non-linear effects among the covariates can be utilized, as Rpart is also able to do. PMID:26352405

  9. A rule-based systems approach to spacecraft communications configuration optimization

    NASA Technical Reports Server (NTRS)

    Rash, James L.; Wong, Yen F.; Cieplak, James J.

    1988-01-01

    An experimental rule-based system for optimizing user spacecraft communications configurations was developed at NASA to support mission planning for spacecraft that obtain telecommunications services through NASA's Tracking and Data Relay Satellite System. Designated Expert for Communications Configuration Optimization (ECCO), and implemented in the OPS5 production system language, the system has shown the validity of a rule-based systems approach to this optimization problem. The development of ECCO and the incremental optimization method on which it is based are discussed. A test case using hypothetical mission data is included to demonstrate the optimization concept.

  10. A rule-based systems approach to spacecraft communications configuration optimization

    NASA Astrophysics Data System (ADS)

    Rash, James L.; Wong, Yen F.; Cieplak, James J.

    An experimental rule-based system for optimizing user spacecraft communications configurations was developed at NASA to support mission planning for spacecraft that obtain telecommunications services through NASA's Tracking and Data Relay Satellite System. Designated Expert for Communications Configuration Optimization (ECCO), and implemented in the OPS5 production system language, the system has shown the validity of a rule-based systems approach to this optimization problem. The development of ECCO and the incremental optimization method on which it is based are discussed. A test case using hypothetical mission data is included to demonstrate the optimization concept.

  11. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods; i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
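
    The MPP determination itself is a small constrained optimization: find the point on the limit state closest to the origin in standard normal space. A toy linear limit state is used below; the paper's RIA/PMA sensitivity derivations are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy limit state in standard normal space: failure when g(u) <= 0.
def g(u):
    return 3.0 - u[0] - u[1]

# RIA-style MPP search: the MPP is the point on g(u) = 0 closest to the origin.
res = minimize(lambda u: np.linalg.norm(u), x0=np.array([1.0, 1.0]),
               constraints=[{"type": "eq", "fun": g}], method="SLSQP")

beta = np.linalg.norm(res.x)            # reliability (safety) index
print("MPP:", res.x.round(3), " beta:", round(beta, 3),
      " first-order failure probability:", f"{norm.cdf(-beta):.3e}")
```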

  12. A comprehensive propagation prediction model comprising microfacet based scattering and probability based coverage optimization algorithm.

    PubMed

    Kausar, A S M Zahid; Reza, Ahmed Wasif; Wo, Lau Chun; Ramiah, Harikrishnan

    2014-01-01

    Although ray tracing based propagation prediction models are popular for indoor radio wave propagation characterization, most of them do not provide an integrated approach for achieving the goal of optimum coverage, which is a key part of designing wireless networks. In this paper, an accelerated technique of three-dimensional ray tracing is presented, where rough surface scattering is included to make the ray tracing technique more accurate. Here, the rough surface scattering is represented by microfacets, for which it becomes possible to compute the scattering field in all possible directions. New optimization techniques, like dual quadrant skipping (DQS) and closest object finder (COF), are implemented for fast characterization of wireless communications and making the ray tracing technique more efficient. In conjunction with the ray tracing technique, a probability based coverage optimization algorithm is combined with the ray tracing technique to make a compact solution for indoor propagation prediction. The proposed technique decreases the ray tracing time by omitting the unnecessary objects for ray tracing using the DQS technique and by decreasing the ray-object intersection time using the COF technique. On the other hand, the coverage optimization algorithm is based on probability theory, which finds out the minimum number of transmitters and their corresponding positions in order to achieve optimal indoor wireless coverage. Both the space and time complexities of the proposed algorithm improve on those of existing algorithms. For the verification of the proposed ray tracing technique and coverage algorithm, detailed simulation results for different scattering factors, different antenna types, and different operating frequencies are presented. Furthermore, the proposed technique is verified by the experimental results. PMID:25202733

  13. Equilibrium strategy-based optimization method for the coal-water conflict: A perspective from China.

    PubMed

    Xu, Jiuping; Lv, Chengwei; Zhang, Mengxiang; Yao, Liming; Zeng, Ziqiang

    2015-09-01

    Environmental water problems have become increasingly severe, with the coal-water conflict becoming one of the most difficult issues in large scale coal mining regions. In this paper, a bi-level optimization model based on the Stackelberg-Nash equilibrium strategy with fuzzy coefficients is developed to deal with environmental water problems in large scale coal fields, in which both groundwater quality and quantity are considered. Using the proposed model, and fully considering the relationship between the authority and the collieries and also the equilibrium between economic development and environmental protection, an environmental protection based mining quotas competition mechanism is established. To deal with the inherent uncertainties, the model is defuzzified using a possibility measure, and a solution approach based on the Karush-Kuhn-Tucker condition is designed to search for the solutions. A case study is presented to demonstrate the practicality and efficiency of the model, and different constraint violation risk levels and related results are also obtained. The results showed that under the environmental protection based mining quotas competition mechanism, collieries attempt to conduct environmentally friendly exploitation to seek greater mining quotas. This demonstrates the practicality and efficiency of the proposed model in reducing the coal-water conflict. Finally, a comprehensive discussion is provided and some propositions are given as a foundation for the proposed management recommendations. PMID:26144559

  14. A Study on the Optimal Generation Mix Based on Portfolio Theory with Considering the Basic Condition for Power Supply

    NASA Astrophysics Data System (ADS)

    Kato, Moritoshi; Zhou, Yicheng

    This paper presents a novel method to analyze the optimal generation mix based on portfolio theory while considering the basic condition for power supply, namely that electricity generation must follow the load curve. The portfolio optimization is integrated with the calculation of the capacity factor of each generation type in order to satisfy this basic condition. Each generation type is treated as an asset, and the risks of the generation asset in both its operation period and its construction period are considered. Environmental measures are evaluated through a restriction on CO2 emissions, expressed as a CO2 price. Numerical examples show the optimal generation mix according to risks such as the deviation of the capacity factor of nuclear power or the restriction of CO2 emissions, the possibility of introducing clean coal technology (IGCC, CCS) or renewable energy, and so on. The results of this work can be applied to set the target generation mix for the future according to the prospective risks of each generation type and the restrictions on CO2 emissions.
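
    The portfolio layer of the method can be sketched as a risk-penalized cost minimization over generation shares; the load-curve/capacity-factor coupling that is the paper's main contribution is omitted, and all costs, CO2 intensities, and covariances below are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative generation technologies: nuclear, coal (with CO2 price), LNG, renewables.
cost      = np.array([10.0, 9.0, 12.0, 14.0])    # expected generation cost (made up)
co2       = np.array([0.0, 0.85, 0.40, 0.0])     # t-CO2/MWh (made up)
co2_price = 5.0                                  # cost penalty per unit CO2 intensity (made up)
cov       = np.diag([1.5, 0.8, 2.5, 1.0]) ** 2   # cost-risk covariance (made up, uncorrelated)

def total_cost(w):
    return w @ (cost + co2_price * co2)

def risk(w):
    return float(w @ cov @ w)                    # portfolio variance of generation cost

lam = 0.5                                        # risk-aversion weight
res = minimize(lambda w: total_cost(w) + lam * risk(w),
               x0=np.full(4, 0.25),
               bounds=[(0.0, 1.0)] * 4,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
               method="SLSQP")
print("optimal generation shares:", res.x.round(3))
```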

  15. Risk-based monitored natural attenuation--a case study.

    PubMed

    Khan, F I; Husain, T

    2001-08-17

    The term "monitored natural attenuation" (MNA) refers to a reliance on natural attenuation (NA) processes for remediation through the careful monitoring of the behavior of a contaminant source in time and space domains. In recent years, policymakers are shifting to a risk-based approach where site characteristics are measured against the potential risk to human health and the environment, and site management strategies are prioritized to be commensurate with that risk. Risk-based corrective action (RBCA), a concept developed by the American Society for Testing Materials (ASTM), was the first indication of how this approach could be used in the development of remediation strategies. This paper, which links ASTM's RBCA approach with MNA, develops a systematic working methodology for a risk-based site evaluation and remediation through NA. The methodology is comprised of seven steps, with the first five steps intended to evaluate site characteristics and the feasibility of NA. If NA is effective, then the last two steps will guide the development of a long-term monitoring plan and approval for a site closure. This methodology is used to evaluate a site contaminated with oil from a pipeline spill. The case study concluded that the site has the requisite characteristics for NA, but it would take more than 80 years for attenuation of xylene and ethylbenzene, as these chemicals appear in the pure phase. If fast remediation is sought, then efforts should be made to remove the contaminant from the soil. Initially, the site posed a serious risk to both on-site and off-site receptors, but it becomes acceptable after 20 years, as the plume is diluted and drifts from its source of origin. PMID:11489527

  16. Projected Flood Risks in China based on CMIP5

    NASA Astrophysics Data System (ADS)

    Xu, Ying

    2016-04-01

    Based on the simulations from 22 CMIP5 models and in combination with data on population, GDP, arable land, and terrain elevation, the spatial distributions of flood risk levels are calculated and analyzed under RCP8.5 for the baseline period (1986-2005), the near-term future period (2016-2035), the mid-term future period (2046-2065), and the long-term future period (2080-2099). (1) Areas with higher flood hazard risk levels in the future are concentrated in southeastern China, and the areas with risk level III continue to expand. The major changes in flood hazard risks will occur in the mid- and long-term future. (2) In the future, the areas of high vulnerability to flood hazards will be located in China's eastern region. In the middle and late 21st century, the extent of the high-vulnerability area will expand eastward and its intensity will gradually increase. The highest vulnerability values are found in Beijing, Tianjin, Hebei, Henan, Anhui, Shandong, Shanghai, Jiangsu, and in parts of the Pearl River Delta. Furthermore, the major cities in northeast China, as well as Wuhan, Changsha, and Nanchang, are highly vulnerable. (3) The regions with high flood risk levels will be located in eastern China, in the middle and lower reaches of the Yangtze River and stretching northward to Beijing and Tianjin. High-risk flood areas also occur in major cities in northeast China, in some parts of Shaanxi and Shanxi, and in some coastal areas in southeast China. (4) Compared to the baseline period, the high flood risks will increase on a regional level towards the end of the 21st century, although the areas of flood hazards show little variation. In this paper, the projected future flood risks for different periods were analyzed under the RCP8.5 emission scenario. Comparing the results with the simulations under the RCP2.6 and RCP4.5 scenarios, both scenarios show no differences in the spatial distribution, but in the intensity of flood

  17. Familial risk of cerebral palsy: population based cohort study

    PubMed Central

    Wilcox, Allen J; Lie, Rolv T; Moster, Dag

    2014-01-01

    Objective To investigate risks of recurrence of cerebral palsy in family members with various degrees of relatedness to elucidate patterns of hereditability. Design Population based cohort study. Setting Data from the Medical Birth Registry of Norway, linked to the Norwegian social insurance scheme to identify cases of cerebral palsy and to databases of Statistics Norway to identify relatives. Participants 2 036 741 Norwegians born during 1967-2002, 3649 of whom had a diagnosis of cerebral palsy; 22 558 pairs of twins, 1 851 144 pairs of first degree relatives, 1 699 856 pairs of second degree relatives, and 5 165 968 pairs of third degree relatives were identified. Main outcome measure Cerebral palsy. Results If one twin had cerebral palsy, the relative risk of recurrence of cerebral palsy was 15.6 (95% confidence interval 9.8 to 25) in the other twin. In families with an affected singleton child, risk was increased 9.2 (6.4 to 13)-fold in a subsequent full sibling and 3.0 (1.1 to 8.6)-fold in a half sibling. Affected parents were also at increased risk of having an affected child (6.5 (1.6 to 26)-fold). No evidence was found of differential transmission through mothers or fathers, although the study had limited power to detect such differences. For people with an affected first cousin, only weak evidence existed for an increased risk (1.5 (0.9 to 2.7)-fold). Risks in siblings or cousins were independent of sex of the index case. After exclusion of preterm births (an important risk factor for cerebral palsy), familial risks remained and were often stronger. Conclusions People born into families in which someone already has cerebral palsy are themselves at elevated risk, depending on their degree of relatedness. Elevated risk may extend even to third degree relatives (first cousins). The patterns of risk suggest multifactorial inheritance, in which multiple genes interact with each other and with environmental factors. These data offer additional

  18. 12 CFR 1022.73 - Content, form, and timing of risk-based pricing notices.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 8 2014-01-01 2014-01-01 false Content, form, and timing of risk-based pricing... REPORTING (REGULATION V) Duties of Users Regarding Risk-Based Pricing § 1022.73 Content, form, and timing of risk-based pricing notices. (a) Content of the notice—(1) In general. The risk-based pricing...

  19. 12 CFR 652.90 - How to report your risk-based capital determination.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 7 2013-01-01 2013-01-01 false How to report your risk-based capital... report your risk-based capital determination. (a) Your risk-based capital report must contain at least... in written instructions to you. (b) You must submit each risk-based capital report in such format...

  20. 12 CFR 652.90 - How to report your risk-based capital determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 6 2011-01-01 2011-01-01 false How to report your risk-based capital... report your risk-based capital determination. (a) Your risk-based capital report must contain at least... in written instructions to you. (b) You must submit each risk-based capital report in such format...

  1. 12 CFR 652.90 - How to report your risk-based capital determination.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 7 2012-01-01 2012-01-01 false How to report your risk-based capital... report your risk-based capital determination. (a) Your risk-based capital report must contain at least... in written instructions to you. (b) You must submit each risk-based capital report in such format...

  2. 12 CFR 652.90 - How to report your risk-based capital determination.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 7 2014-01-01 2014-01-01 false How to report your risk-based capital... report your risk-based capital determination. (a) Your risk-based capital report must contain at least... in written instructions to you. (b) You must submit each risk-based capital report in such format...

  3. 12 CFR 652.65 - Risk-based capital stress test.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 7 2014-01-01 2014-01-01 false Risk-based capital stress test. 652.65 Section... CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.65 Risk-based capital stress test. You will perform the risk-based capital stress test as described in summary form below and...

  4. 12 CFR 652.65 - Risk-based capital stress test.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 7 2013-01-01 2013-01-01 false Risk-based capital stress test. 652.65 Section... CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.65 Risk-based capital stress test. You will perform the risk-based capital stress test as described in summary form below and...

  5. 12 CFR 652.65 - Risk-based capital stress test.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 7 2012-01-01 2012-01-01 false Risk-based capital stress test. 652.65 Section... CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.65 Risk-based capital stress test. You will perform the risk-based capital stress test as described in summary form below and...

  6. 12 CFR 652.65 - Risk-based capital stress test.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Risk-based capital stress test. 652.65 Section... CORPORATION FUNDING AND FISCAL AFFAIRS Risk-Based Capital Requirements § 652.65 Risk-based capital stress test. You will perform the risk-based capital stress test as described in summary form below and...

  7. Development of a GCR Event-based Risk Model

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Ponomarev, Artem L.; Plante, Ianik; Carra, Claudio; Kim, Myung-Hee

    2009-01-01

    A goal at NASA is to develop event-based systems biology models of space radiation risks that will replace the current dose-based empirical models. Complex and varied biochemical signaling processes transmit the initial DNA and oxidative damage from space radiation into cellular and tissue responses. Mis-repaired damage or aberrant signals can lead to genomic instability, persistent oxidative stress, or inflammation, which are causative of cancer and CNS risks. Protective signaling through adaptive responses or cell repopulation is also possible. We are developing a computational simulation approach to galactic cosmic ray (GCR) effects that is based on biological events rather than average quantities such as dose, fluence, or dose equivalent. The goal of the GCR Event-based Risk Model (GERMcode) is to provide a simulation tool to describe and integrate physical and biological events into stochastic models of space radiation risks. We used the quantum multiple scattering model of heavy ion fragmentation (QMSFRG) and well-known energy loss processes to develop a stochastic Monte Carlo model of GCR transport in spacecraft shielding and tissue. We validated the accuracy of the model by comparison to physical data from the NASA Space Radiation Laboratory (NSRL). Our simulation approach allows us to time-tag each GCR proton or heavy ion interaction in tissue, including correlated secondary ions, often of high multiplicity. Conventional space radiation risk assessment employs average quantities and assumes linearity and additivity of responses over the complete range of GCR charges and energies. To investigate possible deviations from these assumptions, we studied several biological response pathway models of varying induction and relaxation times, including the ATM, TGF-Smad, and WNT signaling pathways. We then considered small volumes of interacting cells and the time-dependent biophysical events that the GCR would produce within these tissue volumes to estimate how

  8. Transport path optimization algorithm based on fuzzy integrated weights

    NASA Astrophysics Data System (ADS)

    Hou, Yuan-Da; Xu, Xiao-Hao

    2014-11-01

    Natural disasters cause significant damage to roads, making route selection a complicated logistical problem. To overcome this complexity, we present a method that uses trapezoidal fuzzy numbers to select the optimal transport path. Using the given trapezoidal fuzzy edge coefficients, we calculate a fuzzy integrated matrix and combine the fuzzy multi-weights into fuzzy integrated weights. The optimal path is determined by maintaining two sets of vertices and transferring undiscovered vertices into the discovered set. Our experimental results show that the model is highly accurate and requires only a small amount of measurement data to confirm the optimal path. The model provides an effective, feasible, and convenient method to obtain weights for different road sections, and can be applied to road planning in intelligent transportation systems.
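
    A minimal sketch of a Dijkstra-style search over trapezoidal fuzzy edge weights, ranked here by their centroid (average of the four parameters); the toy graph and the centroid ranking are assumptions for illustration, not the paper's exact fuzzy integrated-weight construction.

        import heapq

        def centroid(t):
            """Defuzzify a trapezoidal fuzzy number (a, b, c, d) by its average."""
            return sum(t) / 4.0

        def add(t, u):
            return tuple(x + y for x, y in zip(t, u))

        def fuzzy_dijkstra(graph, src, dst):
            """graph: {node: [(neighbor, trapezoidal_weight), ...]}.
            Shortest path when accumulated fuzzy weights are compared by centroid."""
            zero = (0.0, 0.0, 0.0, 0.0)
            best, prev = {src: zero}, {}
            heap = [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == dst:
                    break
                if d > centroid(best[u]):             # stale heap entry
                    continue
                for v, w in graph.get(u, []):
                    cand = add(best[u], w)
                    if v not in best or centroid(cand) < centroid(best[v]):
                        best[v], prev[v] = cand, u
                        heapq.heappush(heap, (centroid(cand), v))
            path, node = [], dst
            while node != src:
                path.append(node)
                node = prev[node]
            return [src] + path[::-1], best[dst]

        g = {"A": [("B", (1, 2, 3, 4)), ("C", (2, 3, 4, 5))],
             "B": [("D", (3, 3, 4, 4))],
             "C": [("D", (1, 1, 2, 2))]}
        print(fuzzy_dijkstra(g, "A", "D"))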

  9. Optimized Hypergraph Clustering-based Network Security Log Mining*

    NASA Astrophysics Data System (ADS)

    Che, Jianhua; Lin, Weimin; Yu, Yong; Yao, Wei

    With the growth and popularization of networks, network security experts are facing ever larger network security logs. Network security logs are a valuable and important record of various network behaviors, and are characterized by large scale and high dimensionality. How to analyze these logs to enhance network security has therefore become a focus for many researchers. In this paper, we first design a frequent-attack-sequence-based hypergraph clustering algorithm to mine network security logs, and then improve this algorithm with a synthetic measure of hyperedge weight and two optimization functions for the clustering result. The experimental results show that the synthetic measure and optimization functions significantly improve the coverage and precision of the clustering result. The optimized hypergraph clustering algorithm provides a data analysis method for intrusion detection and active early warning in networks.

  10. Optical transfer function optimization based on linear expansions

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2015-09-01

    The Optical Transfer Function (OTF) and its modulus, the Modulation Transfer Function (MTF), are metrics of optical system performance. However, in system optimization, calculation times for the OTF are often substantially longer than for more traditional optimization targets such as wavefront error or transverse ray error. The OTF is typically calculated either as the autocorrelation of the complex pupil function or as the Fourier transform of the Point Spread Function. We recently demonstrated that the on-axis OTF can be represented as a linear combination of analytical functions in which the weighting terms are directly related to the wavefront error coefficients and the apodization of the complex pupil function. Here, we extend this technique to the off-axis case. The expansion technique offers the potential for accelerating OTF optimization in lens design, as well as insight into the interaction of aberrations with components of the OTF.
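
    For orientation, the conventional OTF calculation mentioned above (autocorrelation of the complex pupil function, evaluated here via FFTs of the intensity PSF) can be sketched as follows; the grid size, aperture, and defocus amount are arbitrary example values, and the paper's linear-expansion acceleration is not shown.

        import numpy as np

        def otf_from_pupil(pupil):
            """OTF as the normalized autocorrelation of the complex pupil function,
            computed via FFTs of the intensity PSF; the MTF is its modulus."""
            psf = np.abs(np.fft.fft2(pupil)) ** 2     # intensity PSF (up to scaling)
            otf = np.fft.fft2(psf)
            return otf / otf[0, 0]                    # normalize zero frequency to 1

        # circular aperture with a small amount of defocus (example values only)
        n = 256
        y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
        rho2 = (x ** 2 + y ** 2) / 0.25               # aperture radius 0.5 on this grid
        pupil = (rho2 <= 1.0) * np.exp(2j * np.pi * 0.25 * rho2)   # 0.25 waves of defocus
        mtf = np.abs(otf_from_pupil(pupil))
        print(mtf[0, :8].round(3))                    # a few low-frequency MTF samples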

  11. Novel multireceiver communication systems configurations based on optimal estimation theory

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra

    1992-01-01

    A novel multireceiver configuration for carrier arraying and/or signal arraying is presented. The proposed configuration is obtained by formulating the carrier and/or signal arraying problem as an optimal estimation problem, and it consists of two stages. The first stage optimally estimates the various phase processes received at different receivers with coupled phase-locked loops, wherein the individual loops acquire and track their respective receivers' phase processes but are aided by each other in an optimal manner via LF error signals. The proposed configuration results in the minimization of the effective radio loss at the combiner output, and thus maximization of the ratio of energy per bit to noise power spectral density is achieved. A novel adaptive algorithm for estimating the signal model parameters when these are not known a priori is also presented.

  12. Optimal Design of Pipeline Based on the Shortest Path

    NASA Astrophysics Data System (ADS)

    Chu, Fei-xue; Chen, Shi-yi

    Design and operation of long-distance pipelines are complex engineering tasks. Even a small improvement in the design of a pipeline system can lead to substantial savings in capital. In this paper, graph theory was used to analyze the problem of optimal pipeline design. The candidate pump station locations were taken as the vertices, and the total cost of the pipeline system between two vertices corresponded to the edge weight. An algorithm that recursively calls the Dijkstra algorithm was designed and analyzed to obtain the N shortest paths. The optimal process program and the quasi-optimal process programs are obtained at the same time, which can be used in decision-making. The algorithm was tested on a real example. The result showed that it can meet the needs of real applications.
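
    A compact stand-in for the N-shortest-paths idea (best-first enumeration of loopless paths, rather than the paper's recursive-Dijkstra procedure); the candidate pump-station graph and costs are hypothetical.

        import heapq

        def n_shortest_paths(graph, src, dst, n):
            """Enumerate up to n cheapest loopless paths by best-first search over
            partial paths; graph maps a vertex to a list of (neighbor, cost) pairs."""
            heap = [(0.0, [src])]
            found = []
            while heap and len(found) < n:
                cost, path = heapq.heappop(heap)
                node = path[-1]
                if node == dst:
                    found.append((cost, path))
                    continue
                for nxt, c in graph.get(node, []):
                    if nxt not in path:               # keep paths loopless
                        heapq.heappush(heap, (cost + c, path + [nxt]))
            return found

        # hypothetical candidate pump-station graph; each edge weight is the total
        # pipeline cost between the two candidate sites
        g = {"S": [("A", 50.0), ("B", 65.0)],
             "A": [("C", 40.0), ("B", 10.0)],
             "B": [("C", 30.0)],
             "C": [("T", 20.0)]}
        for cost, path in n_shortest_paths(g, "S", "T", 3):
            print(round(cost, 1), "->".join(path))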

  13. Risk management and maintenance optimization of nuclear reactor cooling piping system

    NASA Astrophysics Data System (ADS)

    Augé, L.; Capra, B.; Lasne, M.; Bernard, O.; Bénéfice, P.; Comby, R.

    2006-11-01

    Seaside nuclear power plants have to face the ageing of nuclear reactor cooling piping systems. In order to minimize the duration of the production unit shutdown, maintenance operations have to be planned well in advance. In a context where infrastructure owners tend to extend the life span of their assets while having to keep the safety level maximal, Oxand brings its expertise and know-how in the management of infrastructure life cycles. A dedicated methodology relies on several modules that all contribute to fixing optimum network replacement dates: expertise on ageing mechanisms (corrosion, cement degradation...) and the associated kinetics, expertise on the impacts of ageing on the functional integrity of piping systems, predictive simulation based on experience feedback, and development of monitoring techniques focused on actual threats. More precisely, Oxand has designed a patented monitoring technique based on optical fiber sensors, which aims at controlling the deterioration level of piping systems. This preventive maintenance enables the owner to determine criteria for network replacement based on degradation impacts. This approach helps the owner justify the maintenance strategy and demonstrate that the safety level is under control. More generally, all monitoring techniques used by the owners are developed and coupled to predictive simulation tools, notably through processes based on Bayesian approaches. Methodologies to evaluate and optimize operation budgets, depending on predictions of future functional deterioration and available maintenance solutions, are also developed and applied. Finally, all information related to infrastructure ageing and available maintenance options is brought together to reach the right solution for safe and high-performing infrastructure management.

  14. Lessons in risk- versus resilience-based design and management.

    PubMed

    Park, Jeryang; Seager, Thomas P; Rao, P Suresh C

    2011-07-01

    The implications of recent catastrophic disasters, including the Fukushima Daiichi nuclear power plant accident, reach well beyond the immediate, direct environmental and human health risks. In a complex coupled system, disruptions from natural disasters and man-made accidents can quickly propagate through a complex chain of networks to cause unpredictable failures in other economic or social networks and other parts of the world. Recent disasters have revealed the inadequacy of a classical risk management approach. This study calls for a new resilience-based design and management paradigm that draws upon the ecological analogues of diversity and adaptation in response to low-probability and high-consequence disruptions. PMID:21608108

  15. Overlay improvement by exposure map based mask registration optimization

    NASA Astrophysics Data System (ADS)

    Shi, Irene; Guo, Eric; Chen, Ming; Lu, Max; Li, Gordon; Li, Rivan; Tian, Eric

    2015-03-01

    Along with the increasing miniaturization of semiconductor electronic devices, the design rules of advanced semiconductor devices are shrinking dramatically. [1] One of the main challenges of the lithography step is layer-to-layer overlay control. Furthermore, DPT (Double Patterning Technology) has been adopted for advanced technology nodes such as 28nm and 14nm, and the corresponding overlay budget becomes even tighter. [2][3] After in-die mask registration (pattern placement) measurement was introduced, model analysis with a KLA SOV (sources of variation) tool showed that the registration difference between masks is a significant error source of wafer layer-to-layer overlay at the 28nm process. [4][5] Optimizing mask registration would therefore substantially improve wafer overlay performance. It has been reported that a laser-based registration control (RegC) process can be applied after pattern generation or after pellicle mounting to allow fine tuning of the mask registration. [6] In this paper we propose a novel method of mask registration correction that can be applied before mask writing, based on the mask exposure map and considering the mask chip layout, writing sequence, and pattern density distribution. Our experimental data show that if the pattern density on the mask is kept low, the in-die mask registration residual error (3 sigma) stays under 5nm regardless of the blank type and the writer POSCOR (position correction) file applied; this indicates that random error induced by material or equipment occupies a relatively fixed share of the mask registration error budget. In real production, comparing the mask registration differences across critical production layers reveals that the registration residual error of line-space layers with higher pattern density is always much larger than that of contact-hole layers with lower pattern density. Additionally, the mask registration difference between layers with similar pattern density

  16. Random search optimization based on genetic algorithm and discriminant function

    NASA Technical Reports Server (NTRS)

    Kiciman, M. O.; Akgul, M.; Erarslanoglu, G.

    1990-01-01

    The general problem of optimization with arbitrary merit and constraint functions, which could be convex, concave, monotonic, or non-monotonic, is treated using stochastic methods. To improve the efficiency of the random search methods, a genetic algorithm for the search phase and a discriminant function for the constraint-control phase were utilized. The validity of the technique is demonstrated by comparing the results to published test problem results. Numerical experimentation indicated that, for cases where a quick near-optimum solution is desired, a general, user-friendly optimization code can be developed without serious penalties in either total computer time or accuracy.
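
    A minimal real-coded genetic-algorithm sketch for constrained minimization; a quadratic penalty stands in for the paper's discriminant-function constraint-control phase, and the operators, population size, and test problem are assumptions for illustration.

        import numpy as np

        def genetic_search(merit, constraints, bounds, pop=60, gens=150, seed=0):
            """Minimal real-coded GA with tournament selection, blend crossover, and
            Gaussian mutation; constraints g(x) <= 0 are handled by a quadratic penalty
            (a stand-in for a discriminant-function constraint-control phase)."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T

            def fitness(p):
                penalty = sum(max(0.0, g(p)) ** 2 for g in constraints)
                return merit(p) + 1e3 * penalty

            x = rng.uniform(lo, hi, size=(pop, len(bounds)))
            for _ in range(gens):
                f = np.array([fitness(p) for p in x])
                elite = x[np.argmin(f)].copy()
                i, j = rng.integers(0, pop, (2, pop))                  # binary tournament
                parents = np.where((f[i] < f[j])[:, None], x[i], x[j])
                alpha = rng.random((pop, 1))                           # blend crossover
                mates = parents[rng.permutation(pop)]
                x = alpha * parents + (1 - alpha) * mates
                x += rng.normal(0.0, 0.05 * (hi - lo), x.shape)        # Gaussian mutation
                x = np.clip(x, lo, hi)
                x[0] = elite                                           # elitism
            f = np.array([fitness(p) for p in x])
            return x[np.argmin(f)]

        # toy constrained problem: minimize (x-1)^2 + (y-2)^2 subject to x + y <= 2
        sol = genetic_search(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2,
                             [lambda p: p[0] + p[1] - 2.0],
                             bounds=[(-5, 5), (-5, 5)])
        print(sol.round(3))   # should land near (0.5, 1.5)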

  17. Chaos optimization algorithms based on chaotic maps with different probability distribution and search speed for global optimization

    NASA Astrophysics Data System (ADS)

    Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei

    2014-04-01

    Chaos optimization algorithms (COAs) usually utilize a chaotic map such as the Logistic map to generate pseudo-random numbers that are mapped to the design variables for global optimization. Many existing studies have indicated that COAs can escape from local minima more easily than classical stochastic optimization algorithms. This paper reveals the inherent mechanism behind the high efficiency and superior performance of COAs from a new perspective: the probability distribution property and search speed of the chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are represented by the probability density function (PDF) and the Lyapunov exponent, respectively. Meanwhile, the computational performance of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDFs and Lyapunov exponents is compared, where BFGS is a quasi-Newton method used for local optimization. Moreover, several multimodal benchmark examples illustrate that the probability distribution property and search speed of chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of COAs. To achieve high efficiency with a COA, it is recommended to adopt a chaotic map that generates chaotic sequences with a uniform or nearly uniform probability distribution and a large Lyapunov exponent.
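
    A small sketch of the ingredients discussed above: generating a chaotic sequence with the Logistic map, estimating its Lyapunov exponent from the map's derivative, and mapping the samples onto design variables; the box bounds and sample counts are arbitrary examples.

        import numpy as np

        def logistic_sequence(n, x0=0.3, mu=4.0):
            """Chaotic sequence from the Logistic map x_{k+1} = mu * x_k * (1 - x_k)."""
            x = np.empty(n)
            x[0] = x0
            for k in range(n - 1):
                x[k + 1] = mu * x[k] * (1.0 - x[k])
            return x

        def lyapunov_logistic(x, mu=4.0):
            """Lyapunov exponent estimate as the mean of ln|f'(x_k)| = ln|mu * (1 - 2 x_k)|."""
            return float(np.mean(np.log(np.abs(mu * (1.0 - 2.0 * x)) + 1e-300)))

        seq = logistic_sequence(20000)
        print(lyapunov_logistic(seq))                 # approaches ln(2) ~ 0.693 for mu = 4

        # mapping chaotic samples onto candidate design variables in a box [lo, hi]
        lo, hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
        candidates = lo + seq[:100, None] * (hi - lo) # 100 candidate design points
        print(candidates[:3].round(3))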

  18. Water risk assessment for river basins in China based on WWF water risk assessment tools

    NASA Astrophysics Data System (ADS)

    Wei, N.; Qiu, Y.; Gan, H.; Niu, C.; Liu, J.; Gan, Y.; Zhou, N.

    2014-09-01

    Water resource problems, among the most important environmental and socio-economic issues, have been a common concern worldwide in recent years. Water resource risks are attracting more and more attention from the international community and national governments. Given the current situation of water resources and the water environment, and the characteristics of water resources management and information statistics in China, this paper establishes an index system for water risk assessment in river basins of China based on the index system of water risk assessment proposed by the World Wide Fund for Nature (WWF) and the German Investment and Development Co., Ltd (DEG). The new system is more suitable for Chinese national conditions while remaining consistent with the international assessment index. A variety of factors are considered to determine the critical classification values for each index, and the indexes are graded on five-level, five-score scales; the weights and calculation methods of some indexes are adjusted, with the remaining indexes adopting the WWF method. The Weighted Comprehensive Index Summation Process is adopted to calculate the integrated assessment score of a river basin. The method is applied to the Haihe River basin in China. The assessment shows that the method can accurately reflect the water risk level of different river basins. Finally, the paper discusses the remaining problems in water risk assessment and points out the research required, providing a reference for further study in this field.
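
    A minimal sketch of the Weighted Comprehensive Index Summation idea, assuming hypothetical index names, 1-5 grades, and weights (not the WWF/DEG indexes or the Haihe River basin values):

        def weighted_index(scores, weights):
            """Weighted Comprehensive Index Summation: each index is graded on a
            1-5 scale and combined with its weight into one basin-level risk score."""
            assert abs(sum(weights.values()) - 1.0) < 1e-9
            return sum(scores[name] * weights[name] for name in scores)

        # hypothetical grades and weights for a single basin (names and numbers are
        # illustrative only)
        scores = {"physical_scarcity": 4, "pollution_load": 5, "regulatory": 3, "reputational": 2}
        weights = {"physical_scarcity": 0.4, "pollution_load": 0.3, "regulatory": 0.2, "reputational": 0.1}
        print(weighted_index(scores, weights))        # 4*0.4 + 5*0.3 + 3*0.2 + 2*0.1 = 3.9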

  19. [Dyslipidaemia and vascular risk. A new evidence based review].

    PubMed

    Pallarés-Carratalá, V; Pascual-Fuster, V; Godoy-Rocatí, D

    2015-01-01

    Dyslipidaemia is one of the major risk factors for ischaemic heart disease, the leading cause of death worldwide. Early detection and therapeutic intervention are key elements in the adequate prevention of cardiovascular disease. It is essential to know the therapeutic arsenal available in order to use it appropriately in each of the clinical situations that may present in our patients. In the past 3 years, there has been a proliferation of multiple guidelines for the clinical management of patients with dyslipidaemia, with apparently contradictory messages regarding the achievement of control objectives, which confuse clinicians. This review aims to provide an updated overview of the situation regarding dyslipidaemia, based on the positions of both the European and American guidelines, covering different risk situations and ending with the concept of atherogenic dyslipidaemia as a recognized cardiovascular risk factor. PMID:25559484

  20. Efficiency Mode of Energy Management based on Optimal Flight Path

    NASA Astrophysics Data System (ADS)

    Yang, Ling-xiao

    2016-07-01

    A new method of searching for the optimal flight path with respect to a target function is put forward and applied to the energy-management section of a reentry flight vehicle; with this design, an optimal flight path is settled in which the vehicle's energy is managed so as to decline rapidly. This research on energy management is meaningful for engineering practice and can also improve the applicability and flexibility of the vehicle. The angle of attack and the bank angle are used as control variables to regulate energy and range during unpowered reentry flight. First, the angle-of-attack profile for the minimum lift-to-drag ratio is determined from the relation between range and lift-to-drag ratio. Second, the safe boundary of the flight corridor is built from the flight constraints. Third, the drag-versus-energy (D-e) profile is optimized within the corridor for energy dissipation, using the rule by which the D-e profile influences range. Finally, this design method is compared with the traditional pseudo-spectral method. Moreover, energy management is achieved in coordination with lateral motion, and the optimized D-e profile is tracked to demonstrate the practicability of planning a flight path with energy management.